More stories

  • Q&A: Climate Grand Challenges finalists on new pathways to decarbonizing industry

    Note: This is the third article in a four-part interview series highlighting the work of the 27 MIT Climate Grand Challenges finalist teams, which received a total of $2.7 million in startup funding to advance their projects. In April, the Institute will name a subset of the finalists as multiyear flagship projects.

    The industrial sector is the backbone of today’s global economy, yet its activities are among the most energy-intensive and the toughest to decarbonize. Efforts to reach net-zero targets and avert runaway climate change will not succeed without new solutions for replacing sources of carbon emissions with low-carbon alternatives and developing scalable nonemitting applications of hydrocarbons.

    In conversations prepared for MIT News, faculty from three of the teams with projects in the competition’s “Decarbonizing complex industries and processes” category discuss strategies for achieving impact in hard-to-abate sectors, from long-distance transportation and building construction to textile manufacturing and chemical refining. The other Climate Grand Challenges research themes include using data and science to forecast climate-related risk, building equity and fairness into climate solutions, and removing, managing, and storing greenhouse gases. The following responses have been edited for length and clarity.

    Moving toward an all-carbon material approach to building

    Faced with the prospect of building stock doubling globally by 2050, there is a great need for sustainable alternatives to conventional mineral- and metal-based construction materials. Mark Goulthorpe, associate professor in the Department of Architecture, explains the methods behind Carbon >Building, an initiative to develop energy-efficient building materials by reorienting hydrocarbons from current use as fuels to environmentally benign products, creating an entirely new genre of lightweight, all-carbon buildings that could actually drive decarbonization.

    Q: What are all-carbon buildings and how can they help mitigate climate change?

    A: Instead of burning hydrocarbons as fuel, which releases carbon dioxide and other greenhouse gases that contribute to atmospheric pollution, we seek to pioneer a process that uses carbon materially to build at macro scale. New forms of carbon — carbon nanotube, carbon foam, etc. — offer salient properties for building that might effectively displace the current material paradigm. Only hydrocarbons offer sufficient scale to beat out the billion-ton mineral and metal markets, and their perilous impact. Carbon nanotube from methane pyrolysis is of special interest, as it offers hydrogen as a byproduct.

    Q: How will society benefit from the widespread use of all-carbon buildings?

    A: We anticipate reducing costs and timelines in carbon composite buildings, while increasing quality, longevity, and performance, and diminishing environmental impact. Affordability of buildings is a growing problem in all global markets as the cost of labor and logistics in multimaterial assemblies creates a burden that is very detrimental to economic growth and results in overcrowding and urban blight.

    Alleviating these challenges would have huge societal benefits, especially for those in lower income brackets who cannot afford housing, but the biggest benefit would be in drastically reducing the environmental footprint of typical buildings, which account for nearly 40 percent of global energy consumption.

    An all-carbon building sector will not only reduce hydrocarbon extraction, but can produce higher value materials for building. We are looking to rethink the building industry by greatly streamlining global production and learning from the low-labor methods pioneered by composite manufacturing such as wind turbine blades, which are quick and cheap to produce. This technology can improve the sustainability and affordability of buildings — and holds the promise of faster, cheaper, greener, and more resilient modes of dwelling.

    Emissions reduction through innovation in the textile industry

    Collectively, the textile industry is responsible for over 4 billion metric tons of carbon dioxide equivalent per year, or 5 to 10 percent of global greenhouse gas emissions — more than aviation and maritime shipping combined. And the problem is only getting worse with the industry’s rapid growth. Under the current trajectory, consumption is projected to increase 30 percent by 2030, reaching 102 million tons. A diverse group of faculty and researchers led by Gregory Rutledge, the Lammot du Pont Professor in the Department of Chemical Engineering, and Yuly Fuentes-Medel, project manager for fiber technologies and research advisor to the MIT Innovation Initiative, is developing groundbreaking innovations to reshape how textiles are selected, sourced, designed, manufactured, and used, and to create the structural changes required for sustained reductions in emissions by this industry.

    Q: Why has the textile industry been difficult to decarbonize?

    A: The industry currently operates under a linear model that relies heavily on virgin feedstock, at roughly 97 percent, yet recycles or downcycles less than 15 percent. Furthermore, recent trends in “fast fashion” have led to massive underutilization of apparel, such that products are discarded on average after only seven to 10 uses. In an industry with high volume and low margins, replacement technologies must achieve emissions reduction at scale while maintaining performance and economic efficiency.

    There are also technical barriers to adopting circular business models, from the challenge of dealing with products comprising fiber blends and chemical additives to the low maturity of recycling technologies. The environmental impacts of textiles and apparel have been estimated using life cycle analysis, and industry-standard indexes are under development to assess sustainability throughout the life cycle of a product, but information and tools are needed to model how new solutions will alter those impacts and include the consumer as an active player to keep our planet safe. This project seeks to deliver both the new solutions and the tools to evaluate their potential for impact.

    Q: Describe the five components of your program. What is the anticipated timeline for implementing these solutions?

    A: Our plan comprises five programmatic sections, which include (1) enabling a paradigm shift to sustainable materials using nontraditional, carbon-negative polymers derived from biomass and additives that facilitate recycling; (2) rethinking manufacturing with processes to structure fibers and fabrics for performance, waste reduction, and increased material efficiency; (3) designing textiles for value by developing products that are customized, adaptable, and multifunctional, and that interact with their environment to reduce energy consumption; (4) exploring consumer behavior change through human interventions that reduce emissions by encouraging the adoption of new technologies, increased utilization of products, and circularity; and (5) establishing carbon transparency with systems-level analyses that measure the impact of these strategies and guide decision making.

    We have proposed a five-year timeline with annual targets for each project. Conservatively, we estimate our program could reduce greenhouse gas emissions in the industry by 25 percent by 2030, with further significant reductions to follow.

    Tough-to-decarbonize transportation

    Airplanes, transoceanic ships, and freight trucks are critical to transporting people and delivering goods, and the cornerstone of global commerce, manufacturing, and tourism. But these vehicles also emit 3.7 billion tons of carbon dioxide annually and, left unchecked, they could take up a quarter of the remaining carbon budget by 2050. William Green, the Hoyt C. Hottel Professor in the Department of Chemical Engineering, co-leads a multidisciplinary team with Steven Barrett, professor of aeronautics and astronautics and director of the MIT Laboratory for Aviation and the Environment, that is working to identify and advance economically viable technologies and policies for decarbonizing heavy-duty trucking, shipping, and aviation. The Tough to Decarbonize Transportation research program aims to design and optimize fuel chemistry and production, vehicles, operations, and policies to chart the course to net-zero emissions by midcentury.

    Q: What are the highest priority focus areas of your research program?

    A: Hydrocarbon fuels made from biomass are the least expensive option, but it seems impractical, and probably damaging to the environment, to harvest the huge amount of biomass that would be needed to meet the massive and growing energy demands from these sectors using today’s biomass-to-fuel technology. We are exploring strategies to increase the amount of useful fuel made per ton of biomass harvested, other methods to make low-climate-impact hydrocarbon fuels, such as from carbon dioxide, and ways to make fuels that do not contain carbon at all, such as with hydrogen, ammonia, and other hydrogen carriers.

    These latter zero-carbon options free us from the need for biomass or to capture gigatons of carbon dioxide, so they could be a very good long-term solution, but they would require changing the vehicles significantly, and the construction of new refueling infrastructure, with high capital costs.

    Q: What are the scientific, technological, and regulatory barriers to scaling and implementing potential solutions?

    A: Reimagining an aviation, trucking, and shipping sector that connects the world and increases equity without creating more environmental damage is challenging because these vehicles must operate disconnected from the electrical grid and have energy requirements that cannot be met by batteries alone. Some of the concepts do not even exist in prototype yet, and none of the appealing options have been implemented at anywhere near the scale required.

    In most cases, we do not know the best way to make the fuel, and for new fuels the vehicles and refueling systems all need to be developed. Also, new fuels, or large-scale use of biomass, will introduce new environmental problems that need to be carefully considered, to ensure that decarbonization solutions do not introduce big new problems.

    Perhaps most difficult are the policy, economic, and equity issues. A new long-haul transportation system will be expensive, and everyone will be affected by the increased cost of shipping freight. To have the desired climate impact, the transport system must change in almost every country. During the transition period, we will need both the existing vehicle and fuel system to keep running smoothly, even as a new low-greenhouse-gas system is introduced. We will also examine what policies could make that work and how we can get countries around the world to agree to implement them.

  • Finding her way to fusion

    “I catch myself startling people in public.”

    Zoe Fisher’s animated hands carry part of the conversation as she describes how her naturally loud and expressive laughter turned heads in the streets of Yerevan. There during MIT’s Independent Activities Period (IAP), she was helping teach nuclear science at the American University of Armenia, before returning to MIT to pursue fusion research at the Plasma Science and Fusion Center (PSFC).

    Startling people may simply be in Fisher’s DNA. She admits that when she first arrived at MIT, knowing nothing about nuclear science and engineering (NSE), she chose to join that department’s Freshman Pre-Orientation Program (FPOP) “for the shock value.” It was a choice unexpected by family, friends, and mostly herself. Now in her senior year, a 2021 recipient of NSE’s Irving Kaplan Award for academic achievements by a junior and entering a fifth-year master of science program in nuclear fusion, Fisher credits that original spontaneous impulse for introducing her to a subject she found so compelling that, after exploring multiple possibilities, she had to return to it.

    Fisher’s venture to Armenia, under the guidance of NSE associate professor Areg Danagoulian, is not the only time she has taught overseas with MISTI’s Global Teaching Labs, though it is the first time she has taught nuclear science, not to mention thermodynamics and materials science. During IAP 2020 she was a student teacher at a German high school, teaching life sciences, mathematics, and even English to grades five through 12. And after her first year she explored the transportation industry with a mechanical engineering internship in Tuscany, Italy.

    By the time she was ready to declare her NSE major she had sampled the alternatives both overseas and at home, taking advantage of MIT’s Undergraduate Research Opportunities Program (UROP). Drawn to fusion’s potential as an endless source of carbon-free energy on earth, she decided to try research at the PSFC, to see if the study was a good fit. 

    Much fusion research at MIT has favored heating hydrogen fuel inside a donut-shaped device called a tokamak, creating plasma that is hot and dense enough for fusion to occur. Because plasma will follow magnetic field lines, these devices are wrapped with magnets to keep the hot fuel from damaging the chamber walls.

    Fisher was assigned to SPARC, the PSFC’s new tokamak collaboration with MIT startup Commonwealth Fusion Systems (CFS), which uses a game-changing high-temperature superconducting (HTS) tape to create fusion magnets that minimize tokamak size and maximize performance. Working on a database reference book for SPARC materials, she was finding purpose even in the most repetitive tasks. “Which is how I knew I wanted to stay in fusion,” she laughs.

    Fisher’s latest UROP assignment takes her — literally — deeper into SPARC research. She works in a basement laboratory in building NW13 nicknamed “The Vault,” on a proton accelerator whose name conjures an underworld: DANTE. Supervised by PSFC Director Dennis Whyte and postdoc David Fischer, she is exploring the effects of radiation damage on the thin HTS tape that is key to SPARC’s design, and ultimately to the success of ARC, a prototype working fusion power plant.

    Because repetitive bombardment with neutrons produced during the fusion process can diminish the superconducting properties of the HTS, it is crucial to test the tape repeatedly. Fisher assists in assembling and testing the experimental setups for irradiating the HTS samples. She recalls that her first project was installing a “shutter” that would allow researchers to control exactly how much radiation reached the tape without having to turn off the entire experiment.

    “You could just push the button — block the radiation — then unblock it. It sounds super simple, but it took many trials. Because first I needed the right size solenoid, and then I couldn’t find a piece of metal that was small enough, and then we needed cryogenic glue…. To this day the actual final piece is made partially of paper towels.”

    She shrugs and laughs. “It worked, and it was the cheapest option.”

    Fisher is always ready to find the fun in fusion. Referring to DANTE as “a really cool dude,” she admits, “He’s perhaps a bit fickle. I may or may not have broken him once.” During a recent IAP seminar, she joined other PSFC UROP students to discuss her research, and expanded on how a mishap can become a gateway to understanding.

    “The grad student I work with and I got to repair almost the entire internal circuit when we blew the fuse — which originally was a really bad thing. But it ended up being great because we figured out exactly how it works.”

    Fisher’s upbeat spirit makes her ideal not only for the challenges of fusion research, but for serving the MIT community. As a student representative for NSE’s Diversity, Equity and Inclusion Committee, she meets monthly with the goal of growing and supporting diversity within the department.

    “This opportunity is impactful because I get my voice, and the voices of my peers, taken seriously,” she says. “Currently, we are spending most of our efforts trying to identify and eliminate hurdles based on race, ethnicity, gender, and income that prevent people from pursuing — and applying to — NSE.”

    To break from the lab and committees, she explores the Charles River as part of MIT’s varsity sailing team, refusing to miss a sunset. She also volunteers as an FPOP mentor, seeking to provide incoming first-years with the kind of experience that will make them want to return to the topic, as she did.

    She looks forward to continuing her studies on the HTS tapes she has been irradiating, proposing to send a current pulse above the critical current through the tape, to possibly anneal any defects from radiation, which would make repairs on future fusion power plants much easier.

    Fisher credits her current path to her UROP mentors and their infectious enthusiasm for the carbon-free potential of fusion energy.

    “UROPing around the PSFC showed me what I wanted to do with my life,” she says. “Who doesn’t want to save the world?”

  • Q&A: Climate Grand Challenges finalists on accelerating reductions in global greenhouse gas emissions

    This is the second article in a four-part interview series highlighting the work of the 27 MIT Climate Grand Challenges finalists, which received a total of $2.7 million in startup funding to advance their projects. In April, the Institute will name a subset of the finalists as multiyear flagship projects.

    Last month, the Intergovernmental Panel on Climate Change (IPCC), an expert body of the United Nations representing 195 governments, released its latest scientific report on the growing threats posed by climate change, and called for drastic reductions in greenhouse gas emissions to avert the most catastrophic outcomes for humanity and natural ecosystems.

    Bringing the global economy to net-zero carbon dioxide emissions by midcentury is complex and demands new ideas and novel approaches. The first-ever MIT Climate Grand Challenges competition focuses on four problem areas including removing greenhouse gases from the atmosphere and identifying effective, economic solutions for managing and storing these gases. The other Climate Grand Challenges research themes address using data and science to forecast climate-related risk, decarbonizing complex industries and processes, and building equity and fairness into climate solutions.

    In the following conversations prepared for MIT News, faculty from three of the teams working to solve “Removing, managing, and storing greenhouse gases” explain how they are drawing upon geological, biological, chemical, and oceanic processes to develop game-changing techniques for carbon removal, management, and storage. Their responses have been edited for length and clarity.

    Directed evolution of biological carbon fixation

    Agricultural demand is estimated to increase by 50 percent in the coming decades, while climate change is simultaneously projected to drastically reduce crop yield and predictability, requiring a dramatic acceleration of land clearing. Without immediate intervention, this will have dire impacts on wild habitat, rob the livelihoods of hundreds of millions of subsistence farmers, and create hundreds of gigatons of new emissions. Matthew Shoulders, associate professor in the Department of Chemistry, talks about the working group he is leading in partnership with Ed Boyden, the Y. Eva Tan Professor of Neurotechnology and Howard Hughes Medical Institute investigator at the McGovern Institute for Brain Research, that aims to massively reduce carbon emissions from agriculture by relieving core biochemical bottlenecks in the photosynthetic process using the most sophisticated synthetic biology available to science.

    Q: Describe the two pathways you have identified for improving agricultural productivity and climate resiliency.

    A: First, cyanobacteria grow millions of times faster than plants and dozens of times faster than microalgae. Engineering these cyanobacteria as a source of key food products using synthetic biology will enable food production using less land, in a fundamentally more climate-resilient manner. Second, carbon fixation, or the process by which carbon dioxide is incorporated into organic compounds, is the rate-limiting step of photosynthesis and becomes even less efficient under rising temperatures. Enhancements to Rubisco, the enzyme mediating this central process, will both improve crop yields and provide climate resilience to crops needed by 2050. Our team, led by Robbie Wilson and Max Schubert, has created new directed evolution methods tailored for both strategies, and we have already uncovered promising early results. Applying directed evolution to photosynthesis, carbon fixation, and food production has the potential to usher in a second green revolution.

    Q: What partners will you need to accelerate the development of your solutions?

    A: We have already partnered with leading agriculture institutes with deep experience in plant transformation and field trial capacity, enabling the integration of our improved carbon-dioxide-fixing enzymes into a wide range of crop plants. At the deployment stage, we will be positioned to partner with multiple industry groups to achieve improved agriculture at scale. Partnerships with major seed companies around the world will be key to leverage distribution channels in manufacturing supply chains and networks of farmers, agronomists, and licensed retailers. Support from local governments will also be critical where subsidies for seeds are necessary for farmers to earn a living, such as smallholder and subsistence farming communities. Additionally, our research provides an accessible platform that is capable of enabling and enhancing carbon dioxide sequestration in diverse organisms, extending our sphere of partnership to a wide range of companies interested in industrial microbial applications, including algal and cyanobacterial, and in carbon capture and storage.

    Strategies to reduce atmospheric methane

    One of the most potent greenhouse gases, methane is emitted by a range of human activities and natural processes that include agriculture and waste management, fossil fuel production, and changing land use practices — with no single dominant source. Together with a diverse group of faculty and researchers from the schools of Humanities, Arts, and Social Sciences; Architecture and Planning; Engineering; and Science; plus the MIT Schwarzman College of Computing, Desiree Plata, associate professor in the Department of Civil and Environmental Engineering, is spearheading the MIT Methane Network, an integrated approach to formulating scalable new technologies, business models, and policy solutions for driving down levels of atmospheric methane.

    Q: What is the problem you are trying to solve and why is it a “grand challenge”?

    A: Removing methane from the atmosphere, or stopping it from getting there in the first place, could change the rates of global warming in our lifetimes, saving as much as half a degree of warming by 2050. Methane sources are distributed in space and time and tend to be very dilute, making the removal of methane a challenge that pushes the boundaries of contemporary science and engineering capabilities. Because the primary sources of atmospheric methane are linked to our economy and culture — from clearing wetlands for cultivation to natural gas extraction and dairy and meat production — the social and economic implications of a fundamentally changed methane management system are far-reaching. Nevertheless, these problems are tractable and could significantly reduce the effects of climate change in the near term.

    Q: What is known about the rapid rise in atmospheric methane and what questions remain unanswered?

    A: Tracking atmospheric methane is a challenge in and of itself, but it has become clear that emissions are large, accelerated by human activity, and cause damage right away. While some progress has been made in satellite-based measurements of methane emissions, there is a need to translate that data into actionable solutions. Several key questions remain around improving sensor accuracy and sensor network design to optimize placement, improve response time, and stop leaks with autonomous controls on the ground. Additional questions involve deploying low-level methane oxidation systems and novel catalytic materials at coal mines, dairy barns, and other enriched sources; evaluating the policy strategies and the socioeconomic impacts of new technologies with an eye toward decarbonization pathways; and scaling technology with viable business models that stimulate the economy while reducing greenhouse gas emissions.

    Deploying versatile carbon capture technologies and storage at scale

    There is growing consensus that simply capturing current carbon dioxide emissions is no longer sufficient — it is equally important to target distributed sources such as the oceans and air where carbon dioxide has accumulated from past emissions. Betar Gallant, the American Bureau of Shipping Career Development Associate Professor of Mechanical Engineering, discusses her work with Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in the Department of Earth, Atmospheric and Planetary Sciences, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering and director of the School of Chemical Engineering Practice, to dramatically advance the portfolio of technologies available for carbon capture and permanent storage at scale. (A team led by Assistant Professor Matěj Peč of EAPS is also addressing carbon capture and storage.)

    Q: Carbon capture and storage processes have been around for several decades. What advances are you seeking to make through this project?

    A: Today’s capture paradigms are costly, inefficient, and complex. We seek to address this challenge by developing a new generation of capture technologies that operate using renewable energy inputs, are sufficiently versatile to accommodate emerging industrial demands, are adaptive and responsive to varied societal needs, and can be readily deployed to a wider landscape.

    New approaches will require the redesign of the entire capture process, necessitating basic science and engineering efforts that are broadly interdisciplinary in nature. At the same time, incumbent technologies have been optimized largely for integration with coal- or natural gas-burning power plants. Future applications must shift away from legacy emitters in the power sector towards hard-to-mitigate sectors such as cement, iron and steel, chemical, and hydrogen production. It will become equally important to develop and optimize systems targeted for much lower concentrations of carbon dioxide, such as in oceans or air. Our effort will expand basic science studies as well as studies of the human impacts of storage, including how public engagement and education can alter attitudes toward greater acceptance of geologic storage of carbon dioxide.

    Q: What are the expected impacts of your proposed solution, both positive and negative?

    A: Renewable energy cannot be deployed rapidly enough everywhere, nor can it supplant all emissions sources, nor can it account for past emissions. Carbon capture and storage (CCS) provides a demonstrated method to address emissions that will undoubtedly occur before the transition to low-carbon energy is completed. CCS can succeed even if other strategies fail. It also allows for developing nations, which may need to adopt renewables over longer timescales, to see equitable economic development while avoiding the most harmful climate impacts. And, CCS enables the future viability of many core industries and transportation modes, many of which do not have clear alternatives before 2050, let alone 2040 or 2030.

    The perceived risks of potential leakage and earthquakes associated with geologic storage can be minimized by choosing suitable geologic formations for storage. Despite CCS providing a well-understood pathway for removing enough of the carbon dioxide already emitted into the atmosphere, some environmentalists vigorously oppose it, fearing that CCS rewards oil companies and disincentivizes the transition away from fossil fuels. We believe that it is more important to keep in mind the necessity of meeting key climate targets for the sake of the planet, and welcome those who can help.

  • How to clean solar panels without water

    Solar power is expected to reach 10 percent of global power generation by the year 2030, and much of that is likely to be located in desert areas, where sunlight is abundant. But the accumulation of dust on solar panels or mirrors is already a significant issue — it can reduce the output of photovoltaic panels by as much as 30 percent in just one month — so regular cleaning is essential for such installations.

    But cleaning solar panels currently is estimated to use about 10 billion gallons of water per year — enough to supply drinking water for up to 2 million people. Attempts at waterless cleaning are labor intensive and tend to cause irreversible scratching of the surfaces, which also reduces efficiency. Now, a team of researchers at MIT has devised a way of automatically cleaning solar panels, or the mirrors of solar thermal plants, in a waterless, no-contact system that could significantly reduce the dust problem, they say.

    The new system uses electrostatic repulsion to cause dust particles to detach and virtually leap off the panel’s surface, without the need for water or brushes. To activate the system, a simple electrode passes just above the solar panel’s surface, imparting an electrical charge to the dust particles, which are then repelled by a charge applied to the panel itself. The system can be operated automatically using a simple electric motor and guide rails along the side of the panel. The research is described today in the journal Science Advances, in a paper by MIT graduate student Sreedath Panat and professor of mechanical engineering Kripa Varanasi.

    Despite concerted efforts worldwide to develop ever more efficient solar panels, Varanasi says, “a mundane problem like dust can actually put a serious dent in the whole thing.” Lab tests conducted by Panat and Varanasi showed that the dropoff of energy output from the panels happens steeply at the very beginning of the process of dust accumulation and can easily reach 30 percent reduction after just one month without cleaning. Even a 1 percent reduction in power, for a 150-megawatt solar installation, they calculated, could result in a $200,000 loss in annual revenue. The researchers say that globally, a 3 to 4 percent reduction in power output from solar plants would amount to a loss of between $3.3 billion and $5.5 billion.
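
    For a sense of where a figure like the $200,000 annual loss can come from, here is a minimal back-of-the-envelope sketch in Python. The 150-megawatt plant size and 1 percent output reduction come from the article; the capacity factor and electricity price are assumptions chosen only for illustration.

    ```python
    # Back-of-the-envelope estimate of revenue lost to dust soiling (illustrative only).
    # Plant size and soiling loss come from the article; capacity factor and
    # electricity price are assumed values, not numbers from the study.

    PLANT_CAPACITY_MW = 150      # from the article
    SOILING_LOSS = 0.01          # 1 percent output reduction
    CAPACITY_FACTOR = 0.25       # assumed, typical for a sunny site
    PRICE_PER_MWH = 60.0         # assumed wholesale electricity price, USD
    HOURS_PER_YEAR = 8760

    lost_mwh = PLANT_CAPACITY_MW * SOILING_LOSS * CAPACITY_FACTOR * HOURS_PER_YEAR
    lost_revenue = lost_mwh * PRICE_PER_MWH

    print(f"Energy lost per year:  {lost_mwh:,.0f} MWh")
    print(f"Revenue lost per year: ${lost_revenue:,.0f}")
    # With these assumptions the loss lands near $200,000 per year, in line with
    # the figure the researchers quote.
    ```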

    “There is so much work going on in solar materials,” Varanasi says. “They’re pushing the boundaries, trying to gain a few percent here and there in improving the efficiency, and here you have something that can obliterate all of that right away.”

    Many of the largest solar power installations in the world, including ones in China, India, the U.A.E., and the U.S., are located in desert regions. The water used for cleaning these solar panels using pressurized water jets has to be trucked in from a distance, and it has to be very pure to avoid leaving behind deposits on the surfaces. Dry scrubbing is sometimes used but is less effective at cleaning the surfaces and can cause permanent scratching that also reduces light transmission.

    Water cleaning makes up about 10 percent of the operating costs of solar installations. The new system could potentially reduce these costs while improving the overall power output by allowing for more frequent automated cleanings, the researchers say.

    “The water footprint of the solar industry is mind boggling,” Varanasi says, and it will be increasing as these installations continue to expand worldwide. “So, the industry has to be very careful and thoughtful about how to make this a sustainable solution.”

    Other groups have tried to develop electrostatic-based solutions, but these have relied on a layer called an electrodynamic screen, using interdigitated electrodes. These screens can have defects that allow moisture in and cause them to fail, Varanasi says. While they might be useful in a place like Mars, he says, where moisture is not an issue, even in desert environments on Earth this can be a serious problem.

    The new system they developed requires only an electrode, which can be a simple metal bar, to pass over the panel, producing an electric field that imparts a charge to the dust particles as it goes. An opposite charge, applied to a transparent conductive layer just a few nanometers thick deposited on the glass covering of the solar panel, then repels the particles. By calculating the right voltage to apply, the researchers found a voltage range sufficient to overcome the pull of gravity and adhesion forces and cause the dust to lift away.
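
    As a rough illustration of the kind of force balance described above (a sketch, not the researchers’ actual model), the threshold voltage can be estimated by requiring the electrostatic force on a charged grain to exceed its weight plus an adhesion force. Every numerical value below is an assumption chosen for demonstration.

    ```python
    import math

    # Toy force balance for a single charged dust grain on a panel (illustrative sketch).
    # All parameter values are assumptions, not data from the study.

    r = 10e-6           # particle radius, m (assumed ~10-micron grain)
    rho = 2650.0        # particle density, kg/m^3 (assumed, quartz-like dust)
    q = 1.0e-15         # charge acquired by the particle, C (assumed)
    gap = 0.01          # electrode-to-panel spacing, m (assumed)
    f_adhesion = 1e-9   # effective adhesion force, N (assumed; depends strongly on humidity)
    g = 9.81            # gravitational acceleration, m/s^2

    f_gravity = (4.0 / 3.0) * math.pi * r**3 * rho * g   # weight of the grain

    # The grain lifts off when q * E exceeds gravity plus adhesion, with E ~ V / gap,
    # so the threshold voltage is:
    v_threshold = (f_gravity + f_adhesion) * gap / q

    print(f"Weight of grain:   {f_gravity:.2e} N")
    print(f"Assumed adhesion:  {f_adhesion:.2e} N")
    print(f"Threshold voltage: {v_threshold / 1e3:.1f} kV (for these assumed values)")
    ```

    In practice the adhesion term depends strongly on humidity, which is why the experiments described below examined humidity explicitly.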

    Using specially prepared laboratory samples of dust with a range of particle sizes, experiments proved that the process works effectively on a laboratory-scale test installation, Panat says. The tests showed that humidity in the air provided a thin coating of water on the particles, which turned out to be crucial to making the effect work. “We performed experiments at varying humidities from 5 percent to 95 percent,” Panat says. “As long as the ambient humidity is greater than 30 percent, you can remove almost all of the particles from the surface, but as humidity decreases, it becomes harder.”

    Varanasi says that “the good news is that when you get to 30 percent humidity, most deserts actually fall in this regime.” And even those that are typically drier than that tend to have higher humidity in the early morning hours, leading to dew formation, so the cleaning could be timed accordingly.

    “Moreover, unlike some of the prior work on electrodynamic screens, which actually do not work at high or even moderate humidity, our system can work at humidity even as high as 95 percent, indefinitely,” Panat says.

    In practice, at scale, each solar panel could be fitted with railings on each side, with an electrode spanning across the panel. A small electric motor, perhaps using a tiny portion of the output from the panel itself, would drive a belt system to move the electrode from one end of the panel to the other, causing all the dust to fall away. The whole process could be automated or controlled remotely. Alternatively, thin strips of conductive transparent material could be permanently arranged above the panel, eliminating the need for moving parts.

    By eliminating the dependency on trucked-in water, by eliminating the buildup of dust that can contain corrosive compounds, and by lowering the overall operational costs, such systems have the potential to significantly improve the overall efficiency and reliability of solar installations, Varanasi says.

    The research was supported by Italian energy firm Eni S.p.A. through the MIT Energy Initiative.

  • Toward batteries that pack twice as much energy per pound

    In the endless quest to pack more energy into batteries without increasing their weight or volume, one especially promising technology is the solid-state battery. In these batteries, the usual liquid electrolyte that carries charges back and forth between the electrodes is replaced with a solid electrolyte layer. Such batteries could potentially not only deliver twice as much energy for their size but also virtually eliminate the fire hazard associated with today’s lithium-ion batteries.

    But one thing has held back solid-state batteries: Instabilities at the boundary between the solid electrolyte layer and the two electrodes on either side can dramatically shorten the lifetime of such batteries. Some studies have used special coatings to improve the bonding between the layers, but this adds the expense of extra coating steps in the fabrication process. Now, a team of researchers at MIT and Brookhaven National Laboratory has come up with a way of achieving results that equal or surpass the durability of the coated surfaces, but with no need for any coatings.

    The new method simply requires eliminating any carbon dioxide present during a critical manufacturing step, called sintering, where the battery materials are heated to create bonding between the cathode and electrolyte layers, which are made of ceramic compounds. Even though the amount of carbon dioxide present is vanishingly small in air, measured in parts per million, its effects turn out to be dramatic and detrimental. Carrying out the sintering step in pure oxygen creates bonds that match the performance of the best coated surfaces, without that extra cost of the coating, the researchers say.

    The findings are reported in the journal Advanced Energy Materials, in a paper by MIT doctoral student Younggyu Kim, professor of nuclear science and engineering and of materials science and engineering Bilge Yildiz, and Iradikanari Waluyo and Adrian Hunt at Brookhaven National Laboratory.

    “Solid-state batteries have been desirable for different reasons for a long time,” Yildiz says. “The key motivating points for solid batteries are they are safer and have higher energy density,” but they have been held back from large scale commercialization by two factors, she says: the lower conductivity of the solid electrolyte, and the interface instability issues.

    The conductivity issue has been effectively tackled, and reasonably high-conductivity materials have already been demonstrated, according to Yildiz. But overcoming the instabilities that arise at the interface has been far more challenging. These instabilities can occur during both the manufacturing and the electrochemical operation of such batteries, but for now the researchers have focused on the manufacturing, and specifically the sintering process.

    Sintering is needed because if the ceramic layers are simply pressed onto each other, the contact between them is far from ideal, there are far too many gaps, and the electrical resistance across the interface is high. Sintering, which is usually done at temperatures of 1,000 degrees Celsius or above for ceramic materials, causes atoms from each material to migrate into the other to form bonds. The team’s experiments showed that at temperatures anywhere above a few hundred degrees, detrimental reactions take place that increase the resistance at the interface — but only if carbon dioxide is present, even in tiny amounts. They demonstrated that avoiding carbon dioxide, and in particular maintaining a pure oxygen atmosphere during sintering, could create very good bonding at temperatures up to 700 degrees, with none of the detrimental compounds formed.

    The performance of the cathode-electrolyte interface made using this method, Yildiz says, was “comparable to the best interface resistances we have seen in the literature,” but those were all achieved using the extra step of applying coatings. “We are finding that you can avoid that additional fabrication step, which is typically expensive.”

    The potential gains in energy density that solid-state batteries provide come from the fact that they enable the use of pure lithium metal as one of the electrodes, which is much lighter than the currently used electrodes made of lithium-infused graphite.

    The team is now studying the next part of the performance of such batteries, which is how these bonds hold up over the long run during battery cycling. Meanwhile, the new findings could potentially be applied rapidly to battery production, she says. “What we are proposing is a relatively simple process in the fabrication of the cells. It doesn’t add much energy penalty to the fabrication. So, we believe that it can be adopted relatively easily into the fabrication process,” and the added costs, they have calculated, should be negligible.

    Large companies such as Toyota are already at work commercializing early versions of solid-state lithium-ion batteries, and these new findings could quickly help such companies improve the economics and durability of the technology.

    The research was supported by the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies. The team used facilities supported by the National Science Foundation and facilities at Brookhaven National Laboratory supported by the Department of Energy.

  • Q&A: Climate Grand Challenges finalists on building equity and fairness into climate solutions

    Note: This is the first in a four-part interview series that will highlight the work of the Climate Grand Challenges finalists, ahead of the April announcement of several multiyear, flagship projects.

    The finalists in MIT’s first-ever Climate Grand Challenges competition each received $100,000 to develop bold, interdisciplinary research and innovation plans designed to attack some of the world’s most difficult and unresolved climate problems. The 27 teams are addressing four Grand Challenge problem areas: building equity and fairness into climate solutions; decarbonizing complex industries and processes; removing, managing, and storing greenhouse gases; and using data and science for improved climate risk forecasting.  

    In a conversation prepared for MIT News, faculty from three of the teams in the competition’s “Building equity and fairness into climate solutions” category share their thoughts on the need for inclusive solutions that prioritize disadvantaged and vulnerable populations, and discuss how they are working to accelerate their research to achieve the greatest impact. The following responses have been edited for length and clarity.

    The Equitable Resilience Framework

    Any effort to solve the most complex global climate problems must recognize the unequal burdens borne by different groups, communities, and societies — and should be equitable as well as effective. Janelle Knox-Hayes, associate professor in the Department of Urban Studies and Planning, leads a team that is developing processes and practices for equitable resilience, starting with a local pilot project in Boston over the next five years and extending to other cities and regions of the country. The Equitable Resilience Framework (ERF) is designed to create long-term economic, social, and environmental transformations by increasing the capacity of interconnected systems and communities to respond to a broad range of climate-related events. 

    Q: What is the problem you are trying to solve?

    A: Inequity is one of the severe impacts of climate change and resonates in both mitigation and adaptation efforts. It is important for climate strategies to address challenges of inequity and, if possible, to design strategies that enhance justice, equity, and inclusion, while also enhancing the efficacy of mitigation and adaptation efforts. Our framework offers a blueprint for how communities, cities, and regions can begin to undertake this work.

    Q: What are the most significant barriers that have impacted progress to date?

    A: There is considerable inertia in policymaking. Climate change requires a rethinking, not only of directives but also of the pathways and techniques of policymaking. This is an obstacle and part of the reason our project was designed to scale up from local pilot projects. Another consideration is that the private sector can be more adaptive and nimble in its adoption of creative techniques. Working with the MIT Climate and Sustainability Consortium, we may find ways to modify the ERF to help companies address similar internal adaptation and resilience challenges.

    Protecting and enhancing natural carbon sinks

    Deforestation and forest degradation of strategic ecosystems in the Amazon, Central Africa, and Southeast Asia continue to reduce capacity to capture and store carbon through natural systems and threaten even the most aggressive decarbonization plans. John Fernandez, professor in the Department of Architecture and director of the Environmental Solutions Initiative, reflects on his work with Daniela Rus, professor of electrical engineering and computer science and director of the Computer Science and Artificial Intelligence Laboratory, and Joann de Zegher, assistant professor of Operations Management at MIT Sloan, to protect tropical forests by deploying a three-part solution that integrates targeted technology breakthroughs, deep community engagement, and innovative bioeconomic opportunities. 

    Q: Why is the problem you seek to address a “grand challenge”?

    A: We are trying to bring the latest technology to monitoring, assessing, and protecting tropical forests, as well as other carbon-rich and highly biodiverse ecosystems. This is a grand challenge because natural sinks around the world are threatening to release enormous quantities of stored carbon that could lead to runaway global warming. When combined with deep community engagement, particularly with indigenous and afro-descendant communities, this integrated approach promises to deliver substantially enhanced efficacy in conservation coupled to robust and sustainable local development.

    Q: What is known about this problem and what questions remain unanswered?

    A: Satellites, drones, and other technologies are acquiring more data about natural carbon sinks than ever before. The problem is well-described in certain locations such as the eastern Amazon, which has shifted from a net carbon sink to a net carbon emitter. It is also well-known that indigenous peoples are the most effective stewards of the ecosystems that store the greatest amounts of carbon. One of the key questions that remains to be answered is identifying the bioeconomy opportunities inherent in the natural wealth of tropical forests and other critical ecosystems, opportunities that are important to sustained protection and conservation.

    Reducing group-based disparities in climate adaptation

    Race, ethnicity, caste, religion, and nationality are often linked to vulnerability to the adverse effects of climate change, and, if left unchecked, threaten to exacerbate long-standing inequities. A team led by Evan Lieberman, professor of political science and director of the MIT Global Diversity Lab and MIT International Science and Technology Initiatives, Danielle Wood, assistant professor in the Program in Media Arts and Sciences and the Department of Aeronautics and Astronautics, and Siqi Zheng, professor of urban and real estate sustainability in the Center for Real Estate and the Department of Urban Studies and Planning, is seeking to reduce ethnic and racial group-based disparities in the capacity of urban communities to adapt to the changing climate. Working with partners in nine coastal cities, they will measure the distribution of climate-related burdens and resiliency through satellites, a custom mobile app, and natural language processing of social media, to help design and test communication campaigns that provide accurate information about risks and remediation to impacted groups.

    Q: How has this problem evolved?

    A: Group-based disparities continue to intensify within and across countries, owing in part to some randomness in the location of adverse climate events, as well as deep legacies of unequal human development. In turn, economically and politically privileged groups routinely hoard resources for adaptation. In a few cases — notably the United States, Brazil, and with respect to climate-related migrancy, in South Asia — there has been a great deal of research documenting the extent of such disparities. However, we lack common metrics, and for the most part, such disparities are only understood where key actors have politicized the underlying problems. In much of the world, relatively vulnerable and excluded groups may not even be fully aware of the nature of the challenges they face or the resources they require.

    Q: Who will benefit most from your research? 

    A: The greatest beneficiaries will be members of those vulnerable groups who lack the resources and infrastructure to withstand adverse climate shocks. We believe that it will be important to develop solutions such that relatively privileged groups do not perceive them as punitive or zero-sum, but rather as long-term solutions for collective benefit that are both sound and just.

  • Can the world meet global climate targets without coordinated global action?

    Like many of its predecessors, the 2021 United Nations Climate Change Conference (COP26) in Glasgow, Scotland, concluded with bold promises on international climate action aimed at keeping global warming well below 2 degrees Celsius, but few concrete plans to ensure that those promises will be kept. While it’s not too late for the Paris Agreement’s nearly 200 signatory nations to take concerted action to cap global warming at 2 C — if not 1.5 C — there is simply no guarantee that they will do so. If they fail, how much warming is the Earth likely to see in the 21st century and beyond?

    A new study by researchers at the MIT Joint Program on the Science and Policy of Global Change and the Shell Scenarios Team projects that without a globally coordinated mitigation effort to reduce greenhouse gas emissions, the planet’s average surface temperature will reach 2.8 C, much higher than the “well below 2 C” level to which the Paris Agreement aspires, but a lot lower than what many widely used “business-as-usual” scenarios project.  

    Recognizing the limitations of such scenarios, which generally assume that historical trends in energy technology choices and climate policy inaction will persist for decades to come, the researchers have designed a “Growing Pressures” scenario that accounts for mounting social, technological, business, and political pressures that are driving a transition away from fossil-fuel use and toward a low-carbon future. Such pressures have already begun to expand low-carbon technology and policy options, which, in turn, have escalated demand to utilize those options — a trend that’s expected to self-reinforce. Under this scenario, an array of future actions and policies cause renewable energy and energy storage costs to decline; fossil fuels to be phased out; electrification to proliferate; and emissions from agriculture and industry to be sharply reduced.

    Incorporating these growing pressures in the MIT Joint Program’s integrated model of Earth and human systems, the study’s co-authors project future energy use, greenhouse gas emissions, and global average surface temperatures in a world that fails to implement coordinated, global climate mitigation policies, and instead pursues piecemeal actions at mostly local and national levels.

    “Few, if any, previous studies explore scenarios of how piecemeal climate policies might plausibly unfold into the future and impact global temperature,” says MIT Joint Program research scientist Jennifer Morris, the study’s lead author. “We offer such a scenario, considering a future in which the increasingly visible impacts of climate change drive growing pressure from voters, shareholders, consumers, and investors, which in turn drives piecemeal action by governments and businesses that steer investments away from fossil fuels and toward low-carbon alternatives.”

    In the study’s central case (representing the mid-range climate response to greenhouse gas emissions), fossil fuels persist in the global energy mix through 2060 and then slowly decline toward zero by 2130; global carbon dioxide emissions reach near-zero levels by 2130 (total greenhouse gas emissions decline to near-zero by 2150); and global surface temperatures stabilize at 2.8 C by 2150, 2.5 C lower than a widely used “business-as-usual” projection. The results appear in the journal Environmental Economics and Policy Studies.

    Such a transition could bring the global energy system to near-zero emissions, but more aggressive climate action would be needed to keep global temperatures well below 2 C in alignment with the Paris Agreement.

    “While we fully support the need to decarbonize as fast as possible, it is critical to assess realistic alternative scenarios of world development,” says Joint Program Deputy Director Sergey Paltsev, a co-author of the study. “We investigate plausible actions that could bring society closer to the long-term goals of the Paris Agreement. To actually meet those goals will require an accelerated transition away from fossil energy through a combination of R&D, technology deployment, infrastructure development, policy incentives, and business practices.”

    The study was funded by government, foundation, and industrial sponsors of the MIT Joint Program, including Shell International Ltd.

  • Tuning in to invisible waves on the JET tokamak

    Research scientist Alex Tinguely is readjusting to Cambridge and Boston.

    As a postdoc with the Plasma Science and Fusion Center (PSFC), the MIT graduate spent the last two years in Oxford, England, a city he recalls can be traversed entirely “in the time it takes to walk from MIT to Harvard.” With its ancient stone walls, cathedrals, cobblestone streets, and winding paths, that small city was his home base for a big project: JET, a tokamak that is currently the largest operating magnetic fusion energy experiment in the world.

    Located at the Culham Center for Fusion Energy (CCFE), part of the U.K. Atomic Energy Authority, this key research center of the European Fusion Program has recently announced historic success. Using a 50-50 deuterium-tritium fuel mixture for the first time since 1997, JET established a fusion power record of 10 megawatts output over five seconds. It produced 59 megajoules of fusion energy, more than doubling the 22 megajoule record it set in 1997. As a member of the JET Team, Tinguely has overseen the measurement and instrumentation systems (diagnostics) contributed by the MIT group.

    A lucky chance

    The postdoctoral opportunity arose just as Tinguely was graduating with a PhD in physics from MIT. Managed by Professor Miklos Porkolab as the principal investigator for over 20 years, this postdoctoral program has prepared multiple young researchers for careers in fusion facilities around the world. The collaborative research provided Tinguely the chance to work on a fusion device that would be adding tritium to the usual deuterium fuel.

    Fusion, the process that fuels the sun and other stars, could provide a long-term source of carbon-free power on Earth, if it can be harnessed. For decades researchers have tried to create an artificial star in a doughnut-shaped bottle, or “tokamak,” using magnetic fields to keep the turbulent plasma fuel confined and away from the walls of its container long enough for fusion to occur.

    In his graduate student days at MIT, Tinguely worked on the PSFC’s Alcator C-Mod tokamak, now decommissioned, which, like most magnetic fusion devices, used deuterium to create the plasmas for experiments. JET, since beginning operation in 1983, has done the same, later joining a small number of facilities that added tritium, a radioactive isotope of hydrogen. While this addition increases the amount of fusion, it also creates much more radiation and activation.

    Tinguely considers himself fortunate to have been placed at JET.

    “There aren’t that many operating tokamaks in the U.S. right now,” says Tinguely, “not to mention one that would be running deuterium-tritium (DT), which hasn’t been run for over 20 years, and which would be making some really important measurements. I got a very lucky spot where I was an MIT postdoc, but I lived in Oxford, working on a very international project.”

    Strumming magnetic field lines

    The measurements that interest Tinguely are of low-frequency electromagnetic waves in tokamak plasmas. Tinguely uses an antenna diagnostic developed by MIT, EPFL Swiss Plasma Center, and CCFE to probe the so-called Alfvén eigenmodes when they are stable, before the energetic alpha particles produced by DT fusion plasmas can drive them toward instability.

    What makes MIT’s “Alfvén Eigenmode Active Diagnostic” essential is that without it researchers cannot see, or measure, stable eigenmodes. Unstable modes show up clearly as magnetic fluctuations in the data, but stable waves are invisible without prompting from the antenna. These measurements help researchers understand the physics of Alfvén waves and their potential for degrading fusion performance, providing insights that will be increasingly important for future DT fusion devices.

    Tinguely likens the diagnostic to fingers on guitar strings.

    “The magnetic field lines in the tokamak are like guitar strings. If you have nothing to give energy to the strings — or give energy to the waves of the magnetic field lines — they just sit there, they don’t do anything. The energetic plasma particles can essentially ‘play the guitar strings,’ strum the magnetic field lines of the plasma, and that’s when you can see the waves in your plasma. But if the energetic particle drive of the waves is not strong enough you won’t see them, so you need to come along and ‘pluck the strings’ with our antenna. And that’s how you learn some information about the waves.”

    Much of Tinguely’s experience on JET took place during the Covid-19 pandemic, when off-site operation and analysis were the norm. However, because the MIT diagnostic needed to be physically turned on and off, someone from Tinguely’s team needed to be on site twice a day, a routine that became even less convenient when tritium was introduced.

    “When you have deuterium and tritium, you produce a lot of neutrons. So, some of the buildings became off-limits during operation, which meant they had to be turned on really early in the morning, like 6:30 a.m., and then turned off very late at night, around 10:30 p.m.”

    Looking to the future

    Now a research scientist at the PSFC, Tinguely continues to work at JET remotely. He sometimes wishes he could again ride that train from Oxford to Culham — which he fondly remembers for its clean, comfortable efficiency — to see work colleagues and to visit local friends. The life he created for himself in England included practice and performance with the 125-year-old Oxford Bach Choir, as well as weekly dinner service at The Gatehouse, a facility that offers free support for the local homeless and low-income communities.

    “Being back is exciting too,” he says. “It’s fun to see how things have changed, how people and projects have grown, what new opportunities have arrived.”

    He refers specifically to a project that is beginning to take up more of his time: SPARC, the tokamak the PSFC supports in collaboration with Commonwealth Fusion Systems. Designed to use deuterium-tritium to make net fusion gains, SPARC will be able to use the latest research on JET to advantage. Tinguely is already exploring how his expertise with Alfvén eigenmodes can support the experiment.

    “I actually had an opportunity to do my PhD — or DPhil as they would call it — at Oxford University, but I went to MIT for grad school instead,” Tinguely reveals. “So, this is almost like closure, in a sense. I got to have my Oxford experience in the end, just in a different way, and have the MIT experience too.”

    He adds, “And I see myself being here at MIT for some time.”