More stories

  • A new heat engine with no moving parts is as efficient as a steam turbine

    Engineers at MIT and the National Renewable Energy Laboratory (NREL) have designed a heat engine with no moving parts. Their new demonstrations show that it converts heat to electricity with over 40 percent efficiency — a performance better than that of traditional steam turbines.

    The heat engine is a thermophotovoltaic (TPV) cell, similar to a solar panel’s photovoltaic cells, that passively captures high-energy photons from a white-hot heat source and converts them into electricity. The team’s design can generate electricity from a heat source of between 1,900 and 2,400 degrees Celsius, or up to about 4,300 degrees Fahrenheit.

    The researchers plan to incorporate the TPV cell into a grid-scale thermal battery. The system would absorb excess energy from renewable sources such as the sun and store that energy in heavily insulated banks of hot graphite. When the energy is needed, such as on overcast days, TPV cells would convert the heat into electricity, and dispatch the energy to a power grid.

    With the new TPV cell, the team has now successfully demonstrated the main parts of the system in separate, small-scale experiments. They are working to integrate the parts to demonstrate a fully operational system. From there, they hope to scale up the system to replace fossil-fuel-driven power plants and enable a fully decarbonized power grid, supplied entirely by renewable energy.

    “Thermophotovoltaic cells were the last key step toward demonstrating that thermal batteries are a viable concept,” says Asegun Henry, the Robert N. Noyce Career Development Professor in MIT’s Department of Mechanical Engineering. “This is an absolutely critical step on the path to proliferate renewable energy and get to a fully decarbonized grid.”

    Henry and his collaborators have published their results today in the journal Nature. Co-authors at MIT include Alina LaPotin, Kevin Schulte, Kyle Buznitsky, Colin Kelsall, Andrew Rohskopf, and Evelyn Wang, the Ford Professor of Engineering and head of the Department of Mechanical Engineering, along with collaborators at NREL in Golden, Colorado.

    Jumping the gap

    More than 90 percent of the world’s electricity comes from sources of heat such as coal, natural gas, nuclear energy, and concentrated solar energy. For a century, steam turbines have been the industrial standard for converting such heat sources into electricity.

    On average, steam turbines reliably convert about 35 percent of a heat source into electricity, with about 60 percent representing the highest efficiency of any heat engine to date. But the machinery depends on moving parts that are temperature-limited. Heat sources higher than 2,000 degrees Celsius, such as Henry’s proposed thermal battery system, would be too hot for turbines.

    In recent years, scientists have looked into solid-state alternatives — heat engines with no moving parts that could potentially work efficiently at higher temperatures.

    “One of the advantages of solid-state energy converters is that they can operate at higher temperatures with lower maintenance costs because they have no moving parts,” Henry says. “They just sit there and reliably generate electricity.”

    Thermophotovoltaic cells offered one exploratory route toward solid-state heat engines. Much like solar cells, TPV cells could be made from semiconducting materials with a particular bandgap — the gap between a material’s valence band and its conduction band. If a photon with a high enough energy is absorbed by the material, it can kick an electron across the bandgap, where the electron can then conduct, and thereby generate electricity — doing so without moving rotors or blades.
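
    To get a feel for why a hotter source rewards a higher-bandgap cell, the short sketch below (our own back-of-the-envelope illustration, not the team’s analysis; the 1.4 eV bandgap is a stand-in value) integrates the Planck spectrum to estimate what fraction of an emitter’s radiated power arrives as photons energetic enough to cross the bandgap:

    ```python
    import numpy as np

    # Fraction of a blackbody emitter's radiated power carried by photons
    # above a given bandgap, via a simple Riemann sum over the Planck spectrum.
    k_B, eV = 1.381e-23, 1.602e-19  # Boltzmann constant (J/K), joules per eV

    def power_fraction_above(bandgap_eV, T_kelvin):
        E = np.linspace(0.01, 10.0, 20000) * eV  # photon energies, J
        # Spectral power per unit photon energy ~ E^3 / (exp(E/kT) - 1)
        spectrum = E**3 / np.expm1(E / (k_B * T_kelvin))
        return spectrum[E > bandgap_eV * eV].sum() / spectrum.sum()

    gap = 1.4  # assumed junction bandgap, eV (illustrative)
    for T_c in (1200, 2400):
        frac = power_fraction_above(gap, T_c + 273.15)
        print(f"{T_c} C source: {frac:.1%} of emitted power is above {gap} eV")
    ```

    At 1,200 degrees Celsius only a fraction of a percent of the emitted power clears a 1.4 eV gap; at 2,400 degrees the share rises to roughly 13 percent, which is why the white-hot heat source and the high-bandgap junctions go together.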

    To date, most TPV cells have only reached efficiencies of around 20 percent, with the record at 32 percent, as they have been made of relatively low-bandgap materials that convert lower-temperature, low-energy photons, and therefore convert energy less efficiently.

    Catching light

    In their new TPV design, Henry and his colleagues looked to capture higher-energy photons from a higher-temperature heat source, thereby converting energy more efficiently. The team’s new cell does so with higher-bandgap materials and multiple junctions, or material layers, compared with existing TPV designs.

    The cell is fabricated from three main regions: a high-bandgap alloy, which sits over a slightly lower-bandgap alloy, underneath which is a mirror-like layer of gold. The first layer captures a heat source’s highest-energy photons and converts them into electricity, while lower-energy photons that pass through the first layer are captured by the second and converted to add to the generated voltage. Any photons that pass through this second layer are then reflected by the mirror, back to the heat source, rather than being absorbed as wasted heat.

    The team tested the cell’s efficiency by placing it over a heat flux sensor — a device that directly measures the heat absorbed from the cell. They exposed the cell to a high-temperature lamp and concentrated the light onto the cell. They then varied the bulb’s intensity, or temperature, and observed how the cell’s power efficiency — the amount of power it produced, compared with the heat it absorbed — changed with temperature. Over a range of 1,900 to 2,400 degrees Celsius, the new TPV cell maintained an efficiency of around 40 percent.
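
    In that measurement, the efficiency figure of merit is simply the ratio of electrical power produced to heat absorbed. A minimal sketch (the numbers are placeholders, not measurements from the paper):

    ```python
    # Power efficiency as measured against the heat flux sensor:
    # electrical power produced divided by heat absorbed by the cell.
    def tpv_efficiency(electrical_power_w: float, heat_absorbed_w: float) -> float:
        return electrical_power_w / heat_absorbed_w

    # Hypothetical reading: 4 W of electricity for 10 W of absorbed heat.
    print(f"{tpv_efficiency(4.0, 10.0):.0%}")  # -> 40%
    ```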

    “We can get a high efficiency over a broad range of temperatures relevant for thermal batteries,” Henry says.

    The cell in the experiments is about a square centimeter. For a grid-scale thermal battery system, Henry envisions the TPV cells would have to scale up to about 10,000 square feet (about a quarter of a football field), and would operate in climate-controlled warehouses to draw power from huge banks of stored solar energy. He points out that an infrastructure exists for making large-scale photovoltaic cells, which could also be adapted to manufacture TPVs.

    “There’s definitely a huge net positive here in terms of sustainability,” Henry says. “The technology is safe, environmentally benign in its life cycle, and can have a tremendous impact on abating carbon dioxide emissions from electricity production.”

    This research was supported, in part, by the U.S. Department of Energy.

  • Engineers enlist AI to help scale up advanced solar cell manufacturing

    Perovskites are a family of materials that are currently the leading contender to potentially replace today’s silicon-based solar photovoltaics. They hold the promise of panels that are far thinner and lighter, that could be made with ultra-high throughput at room temperature instead of at hundreds of degrees, and that are cheaper and easier to transport and install. But bringing these materials from controlled laboratory experiments into a product that can be manufactured competitively has been a long struggle.

    Manufacturing perovskite-based solar cells involves optimizing a dozen or more variables at once, even within one particular manufacturing approach among many possibilities. But a new system based on a novel approach to machine learning could speed up the development of optimized production methods and help make the next generation of solar power a reality.

    The system, developed by researchers at MIT and Stanford University over the last few years, makes it possible to integrate data from prior experiments, and information based on personal observations by experienced workers, into the machine learning process. This makes the outcomes more accurate and has already led to the manufacturing of perovskite cells with an energy conversion efficiency of 18.5 percent, a competitive level for today’s market.

    The research is reported today in the journal Joule, in a paper by MIT professor of mechanical engineering Tonio Buonassisi, Stanford professor of materials science and engineering Reinhold Dauskardt, recent MIT research assistant Zhe Liu, Stanford doctoral graduate Nicholas Rolston, and three others.

    Perovskites are a group of layered crystalline compounds defined by the configuration of the atoms in their crystal lattice. There are thousands of such possible compounds and many different ways of making them. While most lab-scale development of perovskite materials uses a spin-coating technique, that’s not practical for larger-scale manufacturing, so companies and labs around the world have been searching for ways of translating these lab materials into a practical, manufacturable product.

    “There’s always a big challenge when you’re trying to take a lab-scale process and then transfer it to something like a startup or a manufacturing line,” says Rolston, who is now an assistant professor at Arizona State University. The team looked at a process that they felt had the greatest potential, a method called rapid spray plasma processing, or RSPP.

    The manufacturing process would involve a moving roll-to-roll surface, or series of sheets, on which the precursor solutions for the perovskite compound would be sprayed or ink-jetted as the sheet rolled by. The material would then move on to a curing stage, providing a rapid and continuous output “with throughputs that are higher than for any other photovoltaic technology,” Rolston says.

    “The real breakthrough with this platform is that it would allow us to scale in a way that no other material has allowed us to do,” he adds. “Even materials like silicon require a much longer timeframe because of the processing that’s done. Whereas you can think of [this approach as more] like spray painting.”

    Within that process, at least a dozen variables may affect the outcome, some of them more controllable than others. These include the composition of the starting materials, the temperature, the humidity, the speed of the processing path, the distance of the nozzle used to spray the material onto a substrate, and the methods of curing the material. Many of these factors can interact with each other, and if the process is in open air, then humidity, for example, may be uncontrolled. Evaluating all possible combinations of these variables through experimentation is impossible, so machine learning was needed to help guide the experimental process.

    But while most machine-learning systems use raw data such as measurements of the electrical and other properties of test samples, they don’t typically incorporate human experience such as qualitative observations made by the experimenters of the visual and other properties of the test samples, or information from other experiments reported by other researchers. So, the team found a way to incorporate such outside information into the machine learning model, using a probability factor based on a mathematical technique called Bayesian Optimization.
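
    In spirit, the approach resembles the toy loop below: fit a probabilistic surrogate to all the data gathered so far, then pick the next experiment where the expected improvement is highest. This is a minimal sketch of Bayesian optimization, not the team’s released code (their actual package is on GitHub); the objective function, parameter ranges, and seed data here are invented:

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(0)

    def run_experiment(x):
        """Stand-in for a spray-coating run that returns measured efficiency."""
        temp, dist = x  # normalized process knobs, e.g. temperature, nozzle distance
        return -(temp - 0.6) ** 2 - (dist - 0.3) ** 2

    # Seed the surrogate with prior observations, standing in for the earlier
    # experiments and operator knowledge the MIT/Stanford system folds in.
    X = rng.uniform(0, 1, size=(5, 2))
    y = np.array([run_experiment(x) for x in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

    for _ in range(10):
        gp.fit(X, y)
        candidates = rng.uniform(0, 1, size=(256, 2))
        mu, sigma = gp.predict(candidates, return_std=True)
        z = (mu - y.max()) / np.maximum(sigma, 1e-9)
        ei = (mu - y.max()) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
        x_next = candidates[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, run_experiment(x_next))

    print("best settings found:", X[np.argmax(y)], "score:", round(y.max(), 4))
    ```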

    Using the system, he says, “having a model that comes from experimental data, we can find out trends that we weren’t able to see before.” For example, they initially had trouble adjusting for uncontrolled variations in humidity in their ambient setting. But the model showed them “that we could overcome our humidity challenges by changing the temperature, for instance, and by changing some of the other knobs.”

    The system now allows experimenters to much more rapidly guide their process in order to optimize it for a given set of conditions or required outcomes. In their experiments, the team focused on optimizing the power output, but the system could also be used to simultaneously incorporate other criteria, such as cost and durability — something members of the team are continuing to work on, Buonassisi says.

    The researchers were encouraged by the Department of Energy, which sponsored the work, to commercialize the technology, and they’re currently focusing on tech transfer to existing perovskite manufacturers. “We are reaching out to companies now,” Buonassisi says, and the code they developed has been made freely available through an open-source server. “It’s now on GitHub, anyone can download it, anyone can run it,” he says. “We’re happy to help companies get started in using our code.”

    Already, several companies are gearing up to produce perovskite-based solar panels, even though they are still working out the details of how to produce them, says Liu, who is now at the Northwestern Polytechnical University in Xi’an, China. He says companies there are not yet doing large-scale manufacturing, but instead starting with smaller, high-value applications such as building-integrated solar tiles where appearance is important. Three of these companies “are on track or are being pushed by investors to manufacture 1 meter by 2-meter rectangular modules [comparable to today’s most common solar panels], within two years,” he says.

    “The problem is, they don’t have a consensus on what manufacturing technology to use,” Liu says. The RSPP method, developed at Stanford, “still has a good chance” to be competitive, he says. And the machine learning system the team developed could prove to be important in guiding the optimization of whatever process ends up being used.

    “The primary goal was to accelerate the process, so it required less time, less experiments, and less human hours to develop something that is usable right away, for free, for industry,” he says.

    “Existing work on machine-learning-driven perovskite PV fabrication largely focuses on spin-coating, a lab-scale technique,” says Ted Sargent, University Professor at the University of Toronto, who was not associated with this work, which he says demonstrates “a workflow that is readily adapted to the deposition techniques that dominate the thin-film industry. Only a handful of groups have the simultaneous expertise in engineering and computation to drive such advances.” Sargent adds that this approach “could be an exciting advance for the manufacture of a broader family of materials” including LEDs, other PV technologies, and graphene, “in short, any industry that uses some form of vapor or vacuum deposition.” 

    The team also included Austin Flick and Thomas Colburn at Stanford and Zekun Ren at the Singapore-MIT Alliance for Science and Technology (SMART). In addition to the Department of Energy, the work was supported by a fellowship from the MIT Energy Initiative, the Graduate Research Fellowship Program from the National Science Foundation, and the SMART program.

  • New England renewables + Canadian hydropower

    The urgent need to cut carbon emissions has prompted a growing number of U.S. states to commit to achieving 100 percent clean electricity by 2040 or 2050. But figuring out how to meet those commitments and still have a reliable and affordable power system is a challenge. Wind and solar installations will form the backbone of a carbon-free power system, but what technologies can meet electricity demand when those intermittent renewable sources are not adequate?

    In general, the options being discussed include nuclear power, natural gas with carbon capture and storage (CCS), and energy storage technologies such as new and improved batteries and chemical storage in the form of hydrogen. But in the northeastern United States, there is one more possibility being proposed: electricity imported from hydropower plants in the neighboring Canadian province of Quebec.

    The proposition makes sense. Those plants can produce as much electricity as about 40 large nuclear power plants, and some power generated in Quebec already comes to the Northeast. So, there could be abundant additional supply to fill any shortfall when New England’s intermittent renewables underproduce. However, U.S. wind and solar investors view Canadian hydropower as a competitor and argue that reliance on foreign supply discourages further U.S. investment.

    Two years ago, three researchers affiliated with the MIT Center for Energy and Environmental Policy Research (CEEPR) — Emil Dimanchev SM ’18, now a PhD candidate at the Norwegian University of Science and Technology; Joshua Hodge, CEEPR’s executive director; and John Parsons, a senior lecturer in the MIT Sloan School of Management — began wondering whether viewing Canadian hydro as another source of electricity might be too narrow. “Hydropower is a more-than-hundred-year-old technology, and plants are already built up north,” says Dimanchev. “We might not need to build something new. We might just need to use those plants differently or to a greater extent.”

    So the researchers decided to examine the potential role and economic value of Quebec’s hydropower resource in a future low-carbon system in New England. Their goal was to help inform policymakers, utility decision-makers, and others about how best to incorporate Canadian hydropower into their plans and to determine how much time and money New England should spend to integrate more hydropower into its system. What they found out was surprising, even to them.

    The analytical methods

    To explore possible roles for Canadian hydropower to play in New England’s power system, the MIT researchers first needed to predict how the regional power system might look in 2050 — both the resources in place and how they would be operated, given any policy constraints. To perform that analysis, they used GenX, a modeling tool originally developed by Jesse Jenkins SM ’14, PhD ’18 and Nestor Sepulveda SM ’16, PhD ’20 while they were researchers at the MIT Energy Initiative (MITEI).

    The GenX model is designed to support decision-making related to power system investment and real-time operation and to examine the impacts of possible policy initiatives on those decisions. Given information on current and future technologies — different kinds of power plants, energy storage technologies, and so on — GenX calculates the combination of equipment and operating conditions that can meet a defined future demand at the lowest cost. The GenX modeling tool can also incorporate specified policy constraints, such as limits on carbon emissions.

    For their study, Dimanchev, Hodge, and Parsons set parameters in the GenX model using data and assumptions derived from a variety of sources to build a representation of the interconnected power systems in New England, New York, and Quebec. (They included New York to account for that state’s existing demand on the Canadian hydro resources.) For data on the available hydropower, they turned to Hydro-Québec, the public utility that owns and operates most of the hydropower plants in Quebec.

    It’s standard in such analyses to include real-world engineering constraints on equipment, such as how quickly certain power plants can be ramped up and down. With help from Hydro-Québec, the researchers also put hour-to-hour operating constraints on the hydropower resource.

    Most of Hydro-Québec’s plants are “reservoir hydropower” systems. In them, when power isn’t needed, the flow on a river is restrained by a dam downstream of a reservoir, and the reservoir fills up. When power is needed, the dam is opened, and the water in the reservoir runs through downstream pipes, turning turbines and generating electricity. Proper management of such a system requires adhering to certain operating constraints. For example, to prevent flooding, reservoirs must not be allowed to overfill — especially prior to spring snowmelt. And generation can’t be increased too quickly because a sudden flood of water could erode the river edges or disrupt fishing or water quality.

    Based on projections from the National Renewable Energy Laboratory and elsewhere, the researchers specified electricity demand for every hour of the year 2050, and the model calculated the cost-optimal mix of technologies and system operating regime that would satisfy that hourly demand, including the dispatch of the Hydro-Québec hydropower system. In addition, the model determined how electricity would be traded among New England, New York, and Quebec.
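
    The flavor of that optimization can be shown with a one-hour toy problem: choose the cheapest mix of generation and cross-border flow that meets demand in two connected regions. This is our own illustrative sketch, not GenX itself (GenX co-optimizes investment and operation over whole years); all capacities, costs, and demands below are invented:

    ```python
    from scipy.optimize import linprog

    # Variables: [wind_NE, gas_NE, hydro_QC, flow_QC_to_NE] in MW.
    cost = [0.0, 50.0, 5.0, 0.0]  # operating cost, $/MWh

    A_eq = [
        [1, 1, 0, 1],    # New England balance: wind + gas + imports = demand
        [0, 0, 1, -1],   # Quebec balance: hydro - exports = demand
    ]
    b_eq = [100.0, 50.0]  # hourly demand, MW

    bounds = [
        (0, 80),     # New England wind capacity
        (0, None),   # gas: unlimited capacity but expensive
        (0, 120),    # Quebec hydro capacity
        (-30, 30),   # transmission line; negative flow = New England exports
    ]

    res = linprog(c=cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    wind, gas, hydro, flow = res.x
    print(f"wind={wind:.0f} MW, gas={gas:.0f} MW, hydro={hydro:.0f} MW, "
          f"flow={flow:.0f} MW, cost=${res.fun:.0f}/h")
    ```

    Even this toy reproduces the study’s qualitative logic: free wind runs first, cheap hydro fills the gap over the line, and gas is squeezed out. Widening the (-30, 30) transmission bound is the toy analogue of the expanded-capacity scenarios discussed below, and letting the flow go negative is what permits two-way trading.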

    Effects of decarbonization limits on technology mix and electricity trading

    To examine the impact of the emissions-reduction mandates in the New England states, the researchers ran the model assuming reductions in carbon emissions between 80 percent and 100 percent relative to 1990 levels. The results of those runs show that, as emissions limits get more stringent, New England uses more wind and solar and extends the lifetime of its existing nuclear plants. To balance the intermittency of the renewables, the region uses natural gas plants, demand-side management, battery storage (modeled as lithium-ion batteries), and trading with Quebec’s hydropower-based system. Meanwhile, the optimal mix in Quebec is mostly composed of existing hydro generation. Some solar is added, but new reservoirs are built only if renewable costs are assumed to be very high.

    The most significant — and perhaps surprising — outcome is that in all the scenarios, the hydropower-based system of Quebec is not only an exporter but also an importer of electricity, with the direction of flow on the Quebec-New England transmission lines changing over time.

    Historically, energy has always flowed from Quebec to New England. The model results for 2018 show electricity flowing from north to south, with the quantity capped by the current transmission capacity limit of 2,225 megawatts (MW).

    An analysis for 2050, assuming that New England decarbonizes 90 percent and the capacity of the transmission lines remains the same, finds electricity flows going both ways. Flows from north to south still dominate. But for nearly 3,500 of the 8,760 hours of the year, electricity flows in the opposite direction — from New England to Quebec. And for more than 2,200 of those hours, the flow going north is at the maximum the transmission lines can carry.

    The direction of flow is motivated by economics. When renewable generation is abundant in New England, prices are low, and it’s cheaper for Quebec to import electricity from New England and conserve water in its reservoirs. Conversely, when New England’s renewables are scarce and prices are high, New England imports hydro-generated electricity from Quebec.

    So rather than delivering electricity, Canadian hydro provides a means of storing the electricity generated by the intermittent renewables in New England.

    “We see this in our modeling because when we tell the model to meet electricity demand using these resources, the model decides that it is cost-optimal to use the reservoirs to store energy rather than anything else,” says Dimanchev. “We should be sending the energy back and forth, so the reservoirs in Quebec are in essence a battery that we use to store some of the electricity produced by our intermittent renewables and discharge it when we need it.”

    Given that outcome, the researchers decided to explore the impact of expanding the transmission capacity between New England and Quebec. Building transmission lines is always contentious, but what would be the impact if it could be done?

    Their model results show that when transmission capacity is increased from 2,225 MW to 6,225 MW, flows in both directions are greater, and in both cases the flow is at the new maximum for more than 1,000 hours.

    Results of the analysis thus confirm that the economic response to expanded transmission capacity is more two-way trading. To continue the battery analogy, more transmission capacity to and from Quebec effectively increases the rate at which the battery can be charged and discharged.

    Effects of two-way trading on the energy mix

    What impact would the advent of two-way trading have on the mix of energy-generating sources in New England and Quebec in 2050?

    Assuming current transmission capacity, in New England, the change from one-way to two-way trading increases both wind and solar power generation and to a lesser extent nuclear; it also decreases the use of natural gas with CCS. The hydro reservoirs in Canada can provide long-duration storage — over weeks, months, and even seasons — so there is less need for natural gas with CCS to cover any gaps in supply. The level of imports is slightly lower, but now there are also exports. Meanwhile, in Quebec, two-way trading reduces solar power generation, and the use of wind disappears. Exports are roughly the same, but now there are imports as well. Thus, two-way trading reallocates renewables from Quebec to New England, where it’s more economical to install and operate solar and wind systems.

    Another analysis examined the impact on the energy mix of assuming two-way trading plus expanded transmission capacity. For New England, greater transmission capacity allows wind, solar, and nuclear to expand further; natural gas with CCS all but disappears; and both imports and exports increase significantly. In Quebec, solar decreases still further, and both exports and imports of electricity increase.

    Those results assume that the New England power system decarbonizes by 99 percent in 2050 relative to 1990 levels. But at 90 percent and even 80 percent decarbonization levels, the model concludes that natural gas capacity decreases with the addition of new transmission relative to the current transmission scenario. Existing plants are retired, and new plants are not built as they are no longer economically justified. Since natural gas plants are the only source of carbon emissions in the 2050 energy system, the researchers conclude that the greater access to hydro reservoirs made possible by expanded transmission would accelerate the decarbonization of the electricity system.

    Effects of transmission changes on costs

    The researchers also explored how two-way trading with expanded transmission capacity would affect costs in New England and Quebec, assuming 99 percent decarbonization in New England. New England’s savings on fixed costs (investments in new equipment) are largely due to a decreased need to invest in more natural gas with CCS, and its savings on variable costs (operating costs) are due to a reduced need to run those plants. Quebec’s savings on fixed costs come from a reduced need to invest in solar generation. The increase in cost — borne by New England — reflects the construction and operation of the increased transmission capacity. The net benefit for the region is substantial.

    Thus, the analysis shows that everyone wins as transmission capacity increases — and the benefit grows as the decarbonization target tightens. At 99 percent decarbonization, the overall New England-Quebec region pays about $21 per megawatt-hour (MWh) of electricity with today’s transmission capacity but only $18/MWh with expanded transmission. Assuming 100 percent reduction in carbon emissions, the region pays $29/MWh with current transmission capacity and only $22/MWh with expanded transmission.

    Addressing misconceptions

    These results shed light on several misconceptions that policymakers, supporters of renewable energy, and others tend to have.

    The first misconception is that New England renewables and Canadian hydropower are competitors. The modeling results instead show that they’re complementary. When the power systems in New England and Quebec work together as an integrated system, the Canadian reservoirs are used part of the time to store the renewable electricity. And with more access to hydropower storage in Quebec, there’s generally more renewable investment in New England.

    The second misconception arises when policymakers refer to Canadian hydro as a “baseload resource,” which implies a dependable source of electricity — particularly one that supplies power all the time. “Our study shows that by viewing Canadian hydropower as a baseload source of electricity — or indeed a source of electricity at all — you’re not taking full advantage of what that resource can provide,” says Dimanchev. “What we show is that Quebec’s reservoir hydro can provide storage, specifically for wind and solar. It’s a solution to the intermittency problem that we foresee in carbon-free power systems for 2050.”

    While the MIT analysis focuses on New England and Quebec, the researchers believe that their results may have wider implications. As power systems in many regions expand production of renewables, the value of storage grows. Some hydropower systems have storage capacity that has not yet been fully utilized and could be a good complement to renewable generation. Taking advantage of that capacity can lower the cost of deep decarbonization and help move some regions toward a decarbonized supply of electricity.

    This research was funded by the MIT Center for Energy and Environmental Policy Research, which is supported in part by a consortium of industry and government associates.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • MIT Energy Conference focuses on climate’s toughest challenges

    This year’s MIT Energy Conference, the largest student-led event of its kind, included keynote talks and panels that tackled some of the thorniest remaining challenges in the global effort to cut back on climate-altering emissions. These include the production of construction materials such as steel and cement, and the role of transportation including aviation and shipping. While the challenges are formidable, approaches incorporating methods such as fusion, heat pumps, energy efficiency, and the use of hydrogen hold promise, participants said.

    The two-day conference, held on March 31 and April 1 for more than 900 participants, included keynote lectures, 14 panel discussions, a fireside chat, networking events, and more. The event this year included the final round of the annual MIT Climate and Energy Prize, whose winning team receives $100,000 and other support. The prize, awarded since 2007, has led to the creation of more than 220 companies and $1.1 billion in investments.

    This year’s winner is a project that hopes to provide an innovative, efficient waterless washing machine aimed at the vast majority of the world’s people, who still do laundry by hand.

    “A truly consequential moment in history”

    In his opening keynote address, Fatih Birol, executive director of the International Energy Agency, noted that this year’s conference was taking place during the unprovoked invasion of Ukraine by Russia, a leading gas and oil exporter. As a result, “global oil markets are going through a major turmoil,” he said.

    He said that Russian oil exports are expected to drop by 3 million barrels a day, and that international efforts to release reserves and promote increased production elsewhere will help, but will not suffice. “We have to look to other measures” to make up the shortfall, he said, noting that his agency has produced a 10-point plan of measures to help reduce global demand for oil.

    Europe gets 45 percent of its natural gas from Russia, and the agency also has developed a 10-point plan to help alleviate expected shortages there, including measures to improve energy efficiency in homes and industries, promote renewable heating sources, and postpone retirement of some nuclear plants. But he emphasized that “our goals to reach our climate targets should not be yet another victim of Mr. Putin and his allies.”  Unfortunately, Birol said, “I see that addressing climate change is sliding down in the policy agenda of many governments.”

    But he sees reasons for optimism as well about the feasibility of the global target, agreed to by countries representing 80 percent of the global economy, of reaching net zero carbon dioxide emissions by 2050. The IEA has developed a roadmap for the entire energy sector to get there, which is now used by many governments as a benchmark, according to Birol.

    In addition, the trend is already clear, he said. “More than 90 percent of all power plants installed in the world [last year] were renewable energy,” mainly solar and wind. And 10 percent of cars sold worldwide last year, and 20 percent in Europe, were electric cars. “Please remember that in 2019 it was only 2 percent!” he said. He also predicted that “nuclear is going to make a comeback in many countries,” both in terms of large plants and newer small modular reactors.

    Birol said that “I hope that the current crisis gives governments the impetus to address the energy security concerns, to reach our climate goals, and … [to] choose the right direction at this very important turning point.”

    The conference’s second day began with keynote talks by Gina McCarthy, national climate advisor at the White House Office of Domestic Climate Policy, and Maria Zuber, MIT’s vice president for research. In her address, Zuber said, “This conference comes at a truly consequential moment in history — a moment that puts into stark relief the enormous risks created by our current fossil-fuel based energy system — risks we cannot continue to accept.”

    She added that “time is not on our side.” To meet global commitments for limiting climate impacts, the world needs to reduce emissions by about half by 2030, and get to net zero by 2050. “In other words, we need to transform our entire global energy system in a few decades,” she said. She cited MIT’s “Fast Forward” climate action plan, issued last year, as presenting the two tracks that the world needs to pursue simultaneously: going as far as possible, as fast as possible, with the tools that exist now, while also innovating and investing in new ideas, technologies, practices, and institutions that may be needed to reach the net-zero goal.

    On the first track, she said, citing an IEA report, “from here until 2040, we can get most of the emissions reductions we need with technologies that are currently available or on the verge of becoming commercially available.” These include electrifying and boosting efficiency in buildings, industry, and transportation; increasing the portion of electricity coming from emissions-free sources; and investing in new infrastructure such as electric vehicle charging stations.

    But more than that is needed, she pointed out. For example, the amount of methane that leaks away into the atmosphere from fossil fuel operations is equivalent to all the natural gas used in Europe’s power sector, Zuber said. Recovering and selling that methane can dramatically reduce global methane emissions, often at little or no cost.

    For the longer run, “we need track-two solutions to decarbonize tough industries like aviation, shipping, chemicals, concrete, and steel,” and to remove carbon dioxide from the atmosphere. She described some of the promising technologies that are in the pipeline. Fusion, for example, has moved from being a scientific challenge to an engineering problem whose solution seems well underway, she said.

    Another important area is food-related systems, which currently account for a third of all global emissions. For example, fertilizer production uses a very energy-intensive process, but work on plants engineered to fix nitrogen directly could make a significant dent.

    These and several other advanced research areas may not all pan out, but some undoubtedly will, and will help curb climate change as well as create new jobs and reduce pollution.

    Though the problems we face are complex, they are not insurmountable, Zuber said. “We don’t need a miracle. What we need is to move along the two tracks I’ve outlined with determination, ingenuity, and fierce urgency.”

    The promise and challenges of hydrogen

    Other conference speakers took on some of the less-discussed but crucial areas that also need to be addressed in order to limit global warming to 1.5 degrees Celsius. Heavy transportation, and aviation in particular, has been considered especially challenging. In his keynote address, Glenn Llewellyn, vice president for zero-emission aircraft at Airbus, outlined several approaches his company is working on to develop competitive midrange alternative airliners by 2035 that use either batteries or fuel cells powered by hydrogen. The early-stage designs demonstrate that, contrary to some projections, there is a realistic pathway to weaning that industry from its present reliance on fossil fuel, chiefly kerosene.

    Hydrogen has real potential as an aviation fuel, he said, either used directly, in fuel cells for power or burned for propulsion, or indirectly as a feedstock for synthetic fuels. Both direct approaches are being studied by the company, he said, including a hybrid model that uses both hydrogen fuel cells and hydrogen-fueled jet engines. The company projects a range of 2,000 nautical miles for a jet carrying 200 to 300 passengers, he said — all with no direct emissions and no contrails.

    But this vision will not be practical, Llewellyn said, unless economies of scale help to significantly lower the cost of hydrogen production. “Hydrogen is at the hub of aviation decarbonization,” he said. But that kind of price reduction seems quite feasible, he said, given that other major industries are also seriously looking at the use of hydrogen for their own decarbonization plans, including the production of steel and cement.

    Such uses were the subject of a panel discussion entitled “Deploying the Hydrogen Economy.” Hydrogen production technology exists, but not nearly at the scale that’s needed, which is about 500 million tons a year, pointed out moderator Dharik Mallapragada of the MIT Energy Initiative.

    Yet in some applications, the use of hydrogen both reduces emissions and is economically competitive. Preeti Pande of Plug Power said that her company, which produces hydrogen fuel cells, has found a significant market in an unexpected place: forklifts, used in warehouses and factories worldwide. It turns out that replacing current battery-operated versions with fuel cell versions is a win-win for the companies that use them, saving money while helping to meet decarbonization goals.

    Lindsay Ashby of Avangrid Renewables said that the company has installed fuel-cell buses in Barcelona that run entirely on hydrogen generated by solar panels. The company is also building a 100-megawatt solar facility to produce hydrogen for the production of fertilizer, another major industry in need of decarbonization because of its large emissions footprint. And Brett Perleman of the Center for Houston’s Future said of his city that “we’re already a hydrogen hub today, just not green hydrogen” since the gas is currently mostly produced as a byproduct of fossil fuels. But that is changing rapidly, he said, and Houston, along with several other cities, aims to be a center of activity for hydrogen produced from renewable, non-carbon-emitting sources. They aim to be producing 1,000 tons a day by 2028, “and I think we’ll end up exceeding that,” he said.

    For industries that can switch to renewably generated electricity, that is typically the best choice, Perleman said. “But for those that can’t, hydrogen is a great option,” and that includes aviation, shipping, and rail. “The big oil companies all have plans in place” to develop clean hydrogen production, he said. “It’s not just a dream, but a reality.”

    For shipping, which tends to rely on bunker fuel, a particularly high-emissions fossil fuel, another potential option could be a new generation of small nuclear plants, said Jeff Navin of Terrapower, a company currently developing such units. “Finding replacements for coal, oil, or natural gas for industrial purposes is very hard,” he said, but often what these processes require is consistent high heat, which nuclear can deliver, as long as costs and regulatory issues can be resolved.  

    MIT professor of nuclear engineering Jacopo Buongiorno pointed out that the primary reasons for delays and cost overruns in nuclear plants have had to do with issues at the construction site, many of which could be alleviated by having smaller, factory-built modular plants, or by building multiple units at a time of a standardized design. If the government would take on the nuclear waste disposal, as some other countries have done, then nuclear power could play an important part in the decarbonization of many industries, he said.

    Student-led startups

    The two-day conference concluded with the final round of the annual MIT Climate and Energy Prize, consisting of the five finalist teams presenting brief pitches for their startup company ideas, followed by questions from the panel of judges. This year’s finalists included a team called Muket, dedicated to finding ways of reducing methane emissions from cattle and dairy farms. Feed additives or other measures could cut the emissions by 50 percent, the team estimates.

    A team called Ivu Biologics described a system for incorporating nitrogen-fixing microbes into the coatings of seeds, thereby reducing the need for added fertilizers, whose production is a major greenhouse gas source. The company is making use of seed-coating technology developed at MIT over the last few years. Another team, called Mesophase, also based on MIT-developed technology, aims to replace the condensers used in power plants and other industrial systems with much more efficient versions, thus increasing the energy output from a given amount of fuel or other heat source.

    A team called TerraTrade aims to facilitate the adoption of power purchase agreements by companies, institutions, and governments, by acting as a kind of broker to create and administer such agreements, making it easier for even smaller entities to take part in these plans, which help to enable rapid development of renewable, fossil-fuel-free energy production.

    The grand prize of $100,000 was awarded to a team called Ultropia, which is developing a combined clothes washer and drier that uses ultrasound instead of water for its cleaning. The system does use a small amount of water, but this can be recycled, making the machines usable even in areas where water availability is limited. The devices could have a great impact on the estimated 6 billion people in the world today who are still limited to washing clothes by hand, the team says, and because the machines would be so efficient, they would require very little energy to run — a significant improvement over conventional washers and driers.

  • How to clean solar panels without water

    Solar power is expected to reach 10 percent of global power generation by the year 2030, and much of that is likely to be located in desert areas, where sunlight is abundant. But the accumulation of dust on solar panels or mirrors is already a significant issue — it can reduce the output of photovoltaic panels by as much as 30 percent in just one month — so regular cleaning is essential for such installations.

    But cleaning solar panels currently is estimated to use about 10 billion gallons of water per year — enough to supply drinking water for up to 2 million people. Attempts at waterless cleaning are labor intensive and tend to cause irreversible scratching of the surfaces, which also reduces efficiency. Now, a team of researchers at MIT has devised a way of automatically cleaning solar panels, or the mirrors of solar thermal plants, in a waterless, no-contact system that could significantly reduce the dust problem, they say.

    The new system uses electrostatic repulsion to cause dust particles to detach and virtually leap off the panel’s surface, without the need for water or brushes. To activate the system, a simple electrode passes just above the solar panel’s surface, imparting an electrical charge to the dust particles, which are then repelled by a charge applied to the panel itself. The system can be operated automatically using a simple electric motor and guide rails along the side of the panel. The research is described today in the journal Science Advances, in a paper by MIT graduate student Sreedath Panat and professor of mechanical engineering Kripa Varanasi.

    Despite concerted efforts worldwide to develop ever more efficient solar panels, Varanasi says, “a mundane problem like dust can actually put a serious dent in the whole thing.” Lab tests conducted by Panat and Varanasi showed that the dropoff of energy output from the panels happens steeply at the very beginning of the process of dust accumulation and can easily reach 30 percent reduction after just one month without cleaning. Even a 1 percent reduction in power, for a 150-megawatt solar installation, they calculated, could result in a $200,000 loss in annual revenue. The researchers say that globally, a 3 to 4 percent reduction in power output from solar plants would amount to a loss of between $3.3 billion and $5.5 billion.
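
    The revenue figure is straightforward to reconstruct under typical assumptions. In the sketch below, the capacity factor and electricity price are our own assumed values, not numbers from the paper:

    ```python
    # Annual revenue lost to a 1 percent soiling loss at a 150 MW plant.
    plant_mw = 150
    capacity_factor = 0.25   # assumed, typical for utility-scale solar
    price_per_mwh = 60.0     # assumed average revenue, $/MWh
    soiling_loss = 0.01      # 1 percent output reduction

    annual_mwh = plant_mw * capacity_factor * 8760  # hours per year
    lost_revenue = annual_mwh * soiling_loss * price_per_mwh
    print(f"~${lost_revenue:,.0f} lost per year")   # ~$197,000, near the cited $200k
    ```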

    “There is so much work going on in solar materials,” Varanasi says. “They’re pushing the boundaries, trying to gain a few percent here and there in improving the efficiency, and here you have something that can obliterate all of that right away.”

    Many of the largest solar power installations in the world, including ones in China, India, the U.A.E., and the U.S., are located in desert regions. The water used for cleaning these solar panels using pressurized water jets has to be trucked in from a distance, and it has to be very pure to avoid leaving behind deposits on the surfaces. Dry scrubbing is sometimes used but is less effective at cleaning the surfaces and can cause permanent scratching that also reduces light transmission.

    Water cleaning makes up about 10 percent of the operating costs of solar installations. The new system could potentially reduce these costs while improving the overall power output by allowing for more frequent automated cleanings, the researchers say.

    “The water footprint of the solar industry is mind boggling,” Varanasi says, and it will be increasing as these installations continue to expand worldwide. “So, the industry has to be very careful and thoughtful about how to make this a sustainable solution.”

    Other groups have tried to develop solutions based on electrostatics, but these have relied on a layer called an electrodynamic screen, using interdigitated electrodes. These screens can have defects that allow moisture in and cause them to fail, Varanasi says. While they might be useful on a place like Mars, he says, where moisture is not an issue, even in desert environments on Earth this can be a serious problem.

    The new system they developed requires only an electrode, which can be a simple metal bar, to pass over the panel, producing an electric field that imparts a charge to the dust particles as it goes. An opposite charge, applied to a transparent conductive layer just a few nanometers thick deposited on the glass covering of the solar panel, then repels the particles. By calculating the right voltage to apply, the researchers were able to find a voltage range sufficient to overcome the pull of gravity and adhesion forces and cause the dust to lift away.
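
    The underlying force balance can be sketched in a few lines. Everything below is our own illustration, with assumed particle size, density, and field strength rather than the paper’s values; the charge expression is the classic result for a conducting sphere resting on an electrode in a uniform field:

    ```python
    import math

    eps0 = 8.854e-12   # vacuum permittivity, F/m
    r = 10e-6          # dust particle radius, m (assumed)
    rho = 1500.0       # particle density, kg/m^3 (assumed)
    E = 2e6            # applied field, V/m (assumed; ~10 kV across 5 mm)

    # Charge acquired by a conducting (here, water-coated) sphere sitting on an
    # electrode in field E -- the classic Felici result.
    q = (2.0 / 3.0) * math.pi**3 * eps0 * r**2 * E

    F_electric = q * E                                    # ignoring image-force corrections
    F_gravity = rho * (4.0 / 3.0) * math.pi * r**3 * 9.81

    print(f"electrostatic force: {F_electric:.1e} N")     # ~7e-8 N
    print(f"gravity:             {F_gravity:.1e} N")      # ~6e-11 N
    ```

    With these numbers the electrostatic force exceeds gravity by roughly a factor of a thousand; in practice the threshold voltage is set by adhesion, which is exactly the term the humidity-dependent water layer modulates.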

    Using specially prepared laboratory samples of dust with a range of particle sizes, experiments proved that the process works effectively on a laboratory-scale test installation, Panat says. The tests showed that humidity in the air provided a thin coating of water on the particles, which turned out to be crucial to making the effect work. “We performed experiments at varying humidities from 5 percent to 95 percent,” Panat says. “As long as the ambient humidity is greater than 30 percent, you can remove almost all of the particles from the surface, but as humidity decreases, it becomes harder.”

    Varanasi says that “the good news is that when you get to 30 percent humidity, most deserts actually fall in this regime.” And even those that are typically drier than that tend to have higher humidity in the early morning hours, leading to dew formation, so the cleaning could be timed accordingly.

    “Moreover, unlike some of the prior work on electrodynamic screens, which actually do not work at high or even moderate humidity, our system can work at humidity even as high as 95 percent, indefinitely,” Panat says.

    In practice, at scale, each solar panel could be fitted with railings on each side, with an electrode spanning across the panel. A small electric motor, perhaps using a tiny portion of the output from the panel itself, would drive a belt system to move the electrode from one end of the panel to the other, causing all the dust to fall away. The whole process could be automated or controlled remotely. Alternatively, thin strips of conductive transparent material could be permanently arranged above the panel, eliminating the need for moving parts.

    By eliminating the dependency on trucked-in water, by eliminating the buildup of dust that can contain corrosive compounds, and by lowering the overall operational costs, such systems have the potential to significantly improve the overall efficiency and reliability of solar installations, Varanasi says.

    The research was supported by Italian energy firm Eni S.p.A. through the MIT Energy Initiative.

  • New power sources

    In the mid-1990s, a few energy activists in Massachusetts had a vision: What if citizens had choice about the energy they consumed? Instead of being force-fed electricity sources selected by a utility company, what if cities, towns, and groups of individuals could purchase power that was cleaner and cheaper?

    The small group of activists — including a journalist, the head of a small nonprofit, a local county official, and a legislative aide — drafted model legislation along these lines that reached the state Senate in 1995. The measure stalled out. In 1997, they tried again. Massachusetts legislators were busy passing a bill to reform the state power industry in other ways, and this time the activists got their low-profile policy idea included in it — as a provision so marginal it only got a brief mention in The Boston Globe’s coverage of the bill.

    Today, this idea, often known as Community Choice Aggregation (CCA), is used by roughly 36 million people in the U.S., or 11 percent of the population. Local residents, as a bloc, purchase energy with certain specifications attached, and over 1,800 communities have adopted CCA in six states, with others testing CCA pilot programs. From such modest beginnings, CCA has become a big deal.

    “It started small, then had a profound impact,” says David Hsu, an associate professor at MIT who studies energy policy issues. Indeed, the trajectory of CCA is so striking that Hsu has researched its origins, combing through a variety of archival sources and interviewing the principals. He has now written a journal article examining the lessons and implications of this episode.

    Hsu’s paper, “Straight out of Cape Cod: The origin of community choice aggregation and its spread to other states,” appears in advance online form in the journal Energy Research & Social Science, and in the April print edition of the publication.

    “I wanted to show people that a small idea could take off into something big,” Hsu says. “For me that’s a really hopeful democratic story, where people could do something without feeling they had to take on a whole giant system that wouldn’t immediately respond to only one person.”

    Local control

    Aggregating consumers to purchase energy was not a novelty in the 1990s. Companies within many industries have long joined forces to gain purchasing power for energy. And Rhode Island tried a form of CCA slightly earlier than Massachusetts did.

    However, it is the Massachusetts model that has been adopted widely: Cities or towns can require power purchases from, say, renewable sources, while individual citizens can opt out of those agreements. More state funding (for things like efficiency improvements) is redirected to cities and towns as well.

    In both ways, CCA policies provide more local control over energy delivery. They have been adopted in California, Illinois, New Jersey, New York, and Ohio. Meanwhile, Maryland, New Hampshire, and Virginia have recently passed similar legislation (also known as municipal or government aggregation, or community choice energy).

    For cities and towns, Hsu says, “Maybe you don’t own outright the whole energy system, but let’s take away one particular function of the utility, which is procurement.”

    That vision motivated a handful of Massachusetts activists and policy experts in the 1990s, including journalist Scott Ridley, who co-wrote a 1986 book, “Power Struggle,” with the University of Massachusetts historian Richard Rudolph and had spent years thinking about ways to reconfigure the energy system; Matt Patrick, chair of a local nonprofit focused on energy efficiency; Rob O’Leary, a local official in Barnstable County, on Cape Cod; and Paul Fenn, a staff aide to the state senator who chaired the legislature’s energy committee.

    “It started with these political activists,” Hsu says.

    Hsu’s research emphasizes several lessons to be learned from the fact that the legislation first failed in 1995, before unexpectedly passing in 1997. Ridley remained an author and public figure; Patrick and O’Leary would each eventually be elected to the state legislature, but only after 2000; and Fenn had left his staff position by 1995 and worked with the group long-distance from California (where he became a long-term advocate on the issue). Thus, at the time CCA passed in 1997, none of its main advocates held an insider position in state politics. How did it succeed?

    Lessons of the legislation

    In the first place, Hsu believes, a legislative process resembles what the political theorist John Kingdon has called a “multiple streams framework,” in which “many elements of the policymaking process are separate, meandering, and uncertain.” Legislation isn’t entirely controlled by big donors or other interest groups, and “policy entrepreneurs” can find success in unpredictable windows of opportunity.

    “It’s the most true-to-life theory,” says Hsu.  

    Second, Hsu emphasizes, finding allies is crucial. In the case of CCA, that came about in a few ways. Many towns in Massachusetts have a town-level legislature known as Town Meeting; the activists got those bodies in about 20 towns to pass nonbinding resolutions in favor of community choice. O’Leary helped create a regional county commission in Barnstable County, while Patrick crafted an energy plan for it. High electricity rates were affecting all of Cape Cod at the time, so community choice also served as an economic benefit for Cape Cod’s working-class service-industry employees. The activists also found that adding an opt-out clause to the 1997 version appealed to legislators, who would support CCA if their constituents were not all bound to it.

    “You really have to stick with it, and you have to look for coalition partners,” Hsu says. “It’s fun to hear them [the activists] talk about going to Town Meetings, and how they tried to build grassroots support. If you look for allies, you can get things done. [I hope] the people can see [themselves] in other people’s activism even if they’re not exactly the same as you are.”

    By 1997, the CCA legislation had more geographic support, was understood as both an economic and environmental benefit for voters, and would not force membership upon anyone. Through media interviews and conferences, the activists had also found traction in the principle of citizen choice.

    “It’s interesting to me how the rhetoric of [citizen] choice and the rhetoric of democracy proves to be effective,” Hsu says. “Legislators feel like they have to give everyone some choice. And it expresses a collective desire for a choice that the utilities take away by being monopolies.”

    He adds: “We need to set out principles that shape systems, rather than just taking the system as a given and trying to justify principles that are 150 years old.”

    One last element in CCA passage was good timing. The governor and legislature in Massachusetts were already seeking a “grand bargain” to restructure electricity delivery and loosen the grip of utilities; the CCA fit in as part of this larger reform movement. Still, CCA adoption has been gradual; about one-third of Massachusetts towns with CCA have only adopted it within the last five years.

    CCA’s growth does not mean it’s invulnerable to repeal or utility-funded opposition efforts — “In California there’s been pretty intense pushback,” Hsu notes. Still, Hsu concludes, the fact that a handful of activists could start a national energy-policy movement is a useful reminder that everyone’s actions can make a difference.

    “It wasn’t like they went charging through a barricade, they just found a way around it,” Hsu says. “I want my students to know you can organize and rethink the future. It takes some commitment and work over a long time.”


    Solar-powered system offers a route to inexpensive desalination

    An estimated two-thirds of humanity is affected by shortages of water, and many such areas in the developing world also face a lack of dependable electricity. Widespread research efforts have thus focused on ways to desalinate seawater or brackish water using just solar heat. Many such efforts, however, have run into problems with fouling of equipment caused by salt buildup, which often adds complexity and expense.

    Now, a team of researchers at MIT and in China has come up with a solution to the problem of salt accumulation — and in the process developed a desalination system that is both more efficient and less expensive than previous solar desalination methods. The process could also be used to treat contaminated wastewater or to generate steam for sterilizing medical instruments, all without requiring any power source other than sunlight itself.

    The findings are described today in the journal Nature Communications, in a paper by MIT graduate student Lenan Zhang, postdoc Xiangyu Li, professor of mechanical engineering Evelyn Wang, and four others.

    “There have been a lot of demonstrations of really high-performing, salt-rejecting, solar-based evaporation designs of various devices,” Wang says. “The challenge has been the salt fouling issue, that people haven’t really addressed. So, we see these very attractive performance numbers, but they’re often limited because of longevity. Over time, things will foul.”

    Many attempts at solar desalination systems rely on some kind of wick to draw the saline water through the device, but these wicks are vulnerable to salt accumulation and relatively difficult to clean. The team focused on developing a wick-free system instead. The result is a layered system, with dark material at the top to absorb the sun’s heat, then a thin layer of water above a perforated layer of material, sitting atop a deep reservoir of salty water such as a tank or pond. After careful calculations and experiments, the researchers determined the optimal size for the holes drilled through the perforated material, which in their tests was made of polyurethane. At 2.5 millimeters across, these holes can be made easily using commonly available waterjets.

    The holes are large enough to allow for a natural convective circulation between the warmer upper layer of water and the colder reservoir below. That circulation naturally draws the salt from the thin layer above down into the much larger body of water below, where it becomes well-diluted and no longer a problem. “It allows us to achieve high performance and yet also prevent this salt accumulation,” says Wang, who is the Ford Professor of Engineering and head of the Department of Mechanical Engineering.

    Li says that the advantages of this system are “both the high performance and the reliable operation, especially under extreme conditions, where we can actually work with near-saturation saline water. And that means it’s also very useful for wastewater treatment.”

    He adds that much work on such solar-powered desalination has focused on novel materials. “But in our case, we use really low-cost, almost household materials.” The key was analyzing and understanding the convective flow that drives this entirely passive system, he says. “People say you always need new materials, expensive ones, or complicated structures or wicking structures to do that. And this is, I believe, the first one that does this without wicking structures.”

    This new approach “provides a promising and efficient path for desalination of high salinity solutions, and could be a game changer in solar water desalination,” says Hadi Ghasemi, a professor of chemical and biomolecular engineering at the University of Houston, who was not associated with this work. “Further work is required for assessment of this concept in large settings and in long runs,” he adds.

    Just as hot air rises and cold air falls, Zhang explains, natural convection drives the desalination process in this device. In the confined water layer near the top, “the evaporation happens at the very top interface. Because of the salt, the density of water at the very top interface is higher, and the bottom water has lower density. So, this is an original driving force for this natural convection because the higher density at the top drives the salty liquid to go down.” The water evaporated from the top of the system can then be collected on a condensing surface, providing pure fresh water.
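    As a rough, back-of-envelope illustration of why the salty layer sinks (the property values below are typical textbook figures, not numbers from the paper), one can estimate a solutal Rayleigh number for a 2.5-millimeter hole. The density contrast between near-saturation brine and fresh water overwhelms viscous and diffusive damping by several orders of magnitude, so convective exchange is expected to set in:

    ```python
    # Back-of-envelope estimate of salt-driven ("solutal") convection through
    # one 2.5 mm hole. Illustrative only: all property values are typical
    # textbook figures, not numbers taken from the paper.

    g = 9.81            # gravitational acceleration, m/s^2
    rho_fresh = 998.0   # density of fresh water near room temperature, kg/m^3
    rho_brine = 1180.0  # density of near-saturation NaCl brine, kg/m^3
    nu = 1.0e-6         # kinematic viscosity of water, m^2/s
    D_salt = 1.5e-9     # diffusivity of dissolved NaCl in water, m^2/s
    L = 2.5e-3          # hole diameter, used as the length scale, m

    delta_rho = rho_brine - rho_fresh

    # Solutal Rayleigh number: buoyancy forcing from the density contrast
    # versus viscous and diffusive damping over the length scale L.
    Ra = g * delta_rho * L**3 / (rho_fresh * nu * D_salt)

    print(f"density contrast: {delta_rho:.0f} kg/m^3")
    print(f"solutal Rayleigh number: {Ra:.1e}")
    # Ra is on the order of 1e7, far above the O(1e3) threshold typically
    # quoted for the onset of convection, so the dense salty layer readily
    # sinks through the holes and mixes into the reservoir below.
    ```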

    The rejection of salt to the water below could also cause heat to be lost in the process, so preventing that required careful engineering, including making the perforated layer out of highly insulating material to keep the heat concentrated above. The solar heating at the top is accomplished through a simple layer of black paint.

    [Animated figure: fluid flow visualized with food dye. Left: slow transport of colored de-ionized water from the top layer down into the bulk water. Right: fast transport of colored saline water from the top layer down into the bulk water, driven by the natural convection effect.]

    So far, the team has proven the concept using small benchtop devices, so the next step will be to scale up to devices that could have practical applications. Based on their calculations, a system with just 1 square meter (about a square yard) of collecting area should be sufficient to provide a family’s daily needs for drinking water, they say. Zhang says they calculated that the necessary materials for a 1-square-meter device would cost only about $4.
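    That claim is easy to sanity-check with rough numbers. In the sketch below, the insolation, efficiency, and per-person drinking-water figures are illustrative assumptions, not values from the paper:

    ```python
    # Rough sanity check of the "1 square meter supplies a family" claim.
    # All inputs are illustrative assumptions, not figures from the paper.

    area = 1.0            # collector area, m^2
    insolation = 5.0      # assumed solar energy received, kWh per m^2 per day
    efficiency = 0.8      # assumed fraction of solar heat that drives evaporation
    latent_heat = 2.45e6  # latent heat of vaporization of water, J/kg

    energy_per_day = insolation * area * 3.6e6                 # J/day (1 kWh = 3.6e6 J)
    water_per_day = efficiency * energy_per_day / latent_heat  # kg/day, i.e. liters

    drinking_need = 2.5   # assumed drinking water per person, liters/day
    people_served = water_per_day / drinking_need

    print(f"fresh water produced: {water_per_day:.1f} liters/day")
    print(f"people supplied: {people_served:.1f}")
    # Roughly 6 liters/day: the right order of magnitude for a family's
    # daily drinking water, consistent with the researchers' claim.
    ```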

    Their test apparatus operated for a week with no signs of any salt accumulation, Li says. And the device is remarkably stable. “Even if we apply some extreme perturbation, like waves on the seawater or the lake,” where such a device could be installed as a floating platform, “it can return to its original equilibrium position very fast,” he says.

    Translating this lab-scale proof of concept into workable commercial devices, and improving the overall water production rate, should be possible within a few years, Zhang says. The first applications are likely to be providing safe water in remote off-grid locations, or for disaster relief after hurricanes, earthquakes, or other disruptions of normal water supplies.

    Zhang adds that “if we can concentrate the sunlight a little bit, we could use this passive device to generate high-temperature steam to do medical sterilization” for off-grid rural areas.

    “I think a real opportunity is the developing world,” Wang says. “I think that is where there’s most probable impact near-term, because of the simplicity of the design.” But, she adds, “if we really want to get it out there, we also need to work with the end users, to really be able to adopt the way we design it so that they’re willing to use it.”

    “This is a new strategy toward solving the salt accumulation problem in solar evaporation,” says Peng Wang, a professor at King Abdullah University of Science and Technology in Saudi Arabia, who was not associated with this research. “This elegant design will inspire new innovations in the design of advanced solar evaporators. The strategy is very promising due to its high energy efficiency, operation durability, and low cost, which contributes to low-cost and passive water desalination to produce fresh water from various source water with high salinity, e.g., seawater, brine, or brackish groundwater.”

    The team also included Yang Zhong, Arny Leroy, and Lin Zhao at MIT, and Zhenyuan Xu at Shanghai Jiao Tong University in China. The work was supported by the Singapore-MIT Alliance for Research and Technology, the U.S.-Egypt Science and Technology Joint Fund, and used facilities supported by the National Science Foundation.


    Reducing methane emissions at landfills

    The second-largest driver of global warming is methane, a greenhouse gas 28 times more potent than carbon dioxide over a 100-year period. Landfills are a major source of methane, which is created when organic material decomposes underground.

    Now a startup that began at MIT is aiming to significantly reduce methane emissions from landfills with a system that requires no extra land, roads, or electric lines to work. The company, Loci Controls, has developed a solar-powered system that optimizes the collection of methane from landfills so more of it can be converted into natural gas.

    At the center of Loci’s (pronounced “low-sigh”) system is a lunchbox-sized device that attaches to methane collection wells, which vacuum the methane up to the surface for processing. The optimal vacuum force changes with factors like atmospheric pressure and temperature. Loci’s system monitors those factors and adjusts the vacuum force at each well far more frequently than is possible with field technicians making manual adjustments.

    “We expect to reduce methane emissions more than any other company in the world over the next five years,” Loci Controls CEO Peter Quigley ’85 says. The company was founded by Melinda Hale Sims SM ’09, PhD ’12 and Andrew Campanella ’05, SM ’13.

    The reason for Quigley’s optimism is the high concentration of landfill methane emissions. Most landfill emissions in the U.S. come from about 1,000 large dumps. Increasing collection of methane at those sites could make a significant dent in the country’s overall emissions.

    In one landfill where Loci’s system was installed, for instance, the company says the additional methane captured and sold was equivalent to avoiding 180,000 metric tons of carbon dioxide emissions per year. That’s about the same as removing 40,000 cars from the road for a year.
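    The cars comparison checks out against standard conversion factors. In the quick arithmetic below, the per-car figure is the EPA’s widely cited estimate of about 4.6 metric tons of CO2-equivalent per typical passenger vehicle per year, not a number from the article:

    ```python
    # Quick arithmetic check of the "40,000 cars" comparison.

    co2e_per_year = 180_000  # metric tons CO2-equivalent per year (from the article)
    gwp_methane = 28         # 100-year global warming potential of methane
    car_co2e = 4.6           # EPA estimate: tons CO2e per typical passenger car per year

    methane_tons = co2e_per_year / gwp_methane
    cars_equivalent = co2e_per_year / car_co2e

    print(f"methane captured: about {methane_tons:,.0f} metric tons/year")
    print(f"equivalent cars removed: about {cars_equivalent:,.0f}")
    # About 6,400 tons of methane, or roughly 39,000 cars, in line with
    # the article's figure of about 40,000.
    ```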

    Loci’s system is currently installed on wells in 15 different landfills. Quigley says only about 70 of the 1,000 big landfills in the U.S. sell gas profitably. Most of the others burn the gas. But Loci’s team believes increasing public and regulatory pressure will help expand its potential customer base.

    Uncovering a major problem

    The idea for Loci came from a revelation by Sims’ father, serial entrepreneur Michael Hale SM ’85, PhD ’89. The elder Hale was working in wastewater management when he was contacted by a landfill in New York that wanted help using its excess methane gas.

    “He realized if he could help that particular landfill with the problem, it would apply to almost any landfill,” Sims says.

    At the time, Sims was pursuing her PhD in mechanical engineering at MIT and minoring in entrepreneurship.

    Her father didn’t have time to work on the project, but Sims began exploring technology solutions to improve methane capture at landfills in her business classes. The work was unrelated to her PhD, but her advisor, David Hardt, the Ralph E. and Eloise F. Cross Professor in Manufacturing at MIT, was understanding. (Hardt had also served as PhD advisor for Sims’ father, who was, after all, the person to blame for Sims’ new side project.)

    Sims partnered with Andrew Campanella, then a master’s student focused on electrical engineering, and the two went through the delta v summer accelerator program hosted by the Martin Trust Center for MIT Entrepreneurship.

    Quigley was retired but serving on multiple visiting committees at MIT when he began mentoring Loci’s founders. He’d spent his career commercializing reinforced plastic through two companies, one in the high-performance sporting goods industry and the other in oil field services.

    “What captured my imagination was the emissions-reduction opportunity,” Quigley says.

    Methane is generated in landfills when organic waste decomposes. Some landfill operators capture the methane by drilling hundreds of collection wells. The vacuum pressure of those wells needs to be adjusted to maximize the amount of methane collected, but Quigley says technicians can only make those adjustments manually about once a month.

    Loci’s devices monitor gas composition, temperature, and environmental factors like barometric pressure to optimize vacuum power every hour. The data the controllers collect is aggregated in an analytics platform for technicians to monitor remotely. That data can also be used to pinpoint well failure events, such as flooding during rain, and otherwise improve operations to increase the amount of methane captured.

    “We can adjust the valves automatically, but we also have data that allows on-site operators to identify and remedy problems much more quickly,” Quigley explains.
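    Loci has not published the details of its control algorithm, but the general shape of automated wellhead tuning can be sketched as a simple feedback rule. Everything in the sketch below, from the data fields to the thresholds and adjustment step, is a hypothetical illustration rather than Loci’s actual system:

    ```python
    from dataclasses import dataclass

    # Hypothetical sketch of automated wellhead tuning. The class, thresholds,
    # and adjustment rule below are illustrative assumptions; Loci's actual
    # control algorithm is proprietary and not described in this article.

    @dataclass
    class WellReading:
        methane_pct: float  # methane fraction of collected gas, percent
        oxygen_pct: float   # oxygen fraction, percent (a sign of air intrusion)
        vacuum_kpa: float   # current applied suction at the wellhead, kPa

    def adjust_vacuum(reading: WellReading, step_kpa: float = 0.2) -> float:
        """Return a new vacuum setpoint based on the latest hourly reading.

        Rule of thumb: oxygen in the gas means the well is pulling air down
        through the landfill cover, so back off; a methane-rich stream with
        no air intrusion suggests the well can pull harder.
        """
        if reading.oxygen_pct > 1.0:
            # Air intrusion: too much suction, so reduce the vacuum.
            return max(reading.vacuum_kpa - step_kpa, 0.0)
        if reading.methane_pct > 50.0:
            # Methane-rich gas and no air intrusion: collect more.
            return reading.vacuum_kpa + step_kpa
        return reading.vacuum_kpa  # otherwise hold steady

    # Example: one hourly adjustment for a single well.
    reading = WellReading(methane_pct=55.0, oxygen_pct=0.3, vacuum_kpa=3.0)
    print(adjust_vacuum(reading))  # 3.2: increase suction slightly
    ```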

    Furthering a high-impact mission

    Methane capture at landfills is becoming more urgent as improvements in detection technologies are revealing discrepancies between methane emission estimates and reality in the industry. A new airborne methane sensor deployed by NASA, for instance, found that California landfills have been leaking methane at rates as much as six times greater than estimates from the U.S. Environmental Protection Agency. The difference has major implications for the Earth’s atmosphere.

    A reckoning will have to occur to motivate more waste management companies to start collecting methane and to optimize methane capture. That could come in the form of new collection standards or an increased emphasis on methane collection from investors. (Funds controlled by billionaires Bill Gates and Larry Fink are major investors in waste management companies.)

    For now, Loci’s team, including co-founder and current senior advisor Sims, believes it’s on the road to making a meaningful impact under current market conditions.

    “When I was in grad school, the majority of the focus on emissions was on CO2,” Sims says. “I think methane is a really high-impact place to be focused, and I think it’s been underestimated how valuable it could be to apply technology to the industry.”