More stories

  • Study: Fusion energy could play a major role in the global response to climate change

    For many decades, fusion has been touted as the ultimate source of abundant, clean electricity. Now, as the world faces the need to reduce carbon emissions to prevent catastrophic climate change, making commercial fusion power a reality takes on new importance. In a power system dominated by low-carbon variable renewable energy sources (VREs) such as solar and wind, “firm” electricity sources are needed to kick in whenever demand exceeds supply — for example, when the sun isn’t shining or the wind isn’t blowing and energy storage systems aren’t up to the task. What is the potential role and value of fusion power plants (FPPs) in such a future electric power system — a system that is not only free of carbon emissions but also capable of meeting the dramatically increased global electricity demand expected in the coming decades?

    Working together for a year and a half, investigators at the MIT Energy Initiative (MITEI) and the MIT Plasma Science and Fusion Center (PSFC) set out to answer that question. They found that — depending on its future cost and performance — fusion has the potential to be critically important to decarbonization. Under some conditions, the availability of FPPs could reduce the global cost of decarbonizing by trillions of dollars. More than 25 experts examined the factors that will affect the deployment of FPPs, including costs, climate policy, and operating characteristics. They present their findings in a new report funded through MITEI and entitled “The Role of Fusion Energy in a Decarbonized Electricity System.”

    “Right now, there is great interest in fusion energy in many quarters — from the private sector to government to the general public,” says the study’s principal investigator (PI) Robert C. Armstrong, MITEI’s former director and the Chevron Professor of Chemical Engineering, Emeritus. “In undertaking this study, our goal was to provide a balanced, fact-based, analysis-driven guide to help us all understand the prospects for fusion going forward.” Accordingly, the study takes a multidisciplinary approach that combines economic modeling, electric grid modeling, techno-economic analysis, and more to examine the factors likely to shape the future deployment and utilization of fusion energy. The investigators from MITEI provided the energy systems modeling capability, while the PSFC participants provided the fusion expertise.

    Fusion technologies may be a decade away from commercial deployment, so the detailed technology and costs of future commercial FPPs are not known at this point. As a result, the MIT research team focused on determining what cost levels fusion plants must reach by 2050 to achieve strong market penetration and make a significant contribution to the decarbonization of global electricity supply in the latter half of the century.

    The value of having FPPs available on an electric grid will depend on what other options are available, so to perform their analyses, the researchers needed estimates of the future cost and performance of those options, including conventional fossil fuel generators, nuclear fission power plants, VRE generators, and energy storage technologies, as well as electricity demand for specific regions of the world. To find the most reliable data, they searched the published literature as well as the results of previous MITEI and PSFC analyses.

    Overall, the analyses showed that — while the technology demands of harnessing fusion energy are formidable — so are the potential economic and environmental payoffs of adding this firm, low-carbon technology to the world’s portfolio of energy options.

    Perhaps the most remarkable finding is the “societal value” of having commercial FPPs available. “Limiting warming to 1.5 degrees C requires that the world invest in wind, solar, storage, grid infrastructure, and everything else needed to decarbonize the electric power system,” explains Randall Field, executive director of the fusion study and MITEI’s director of research. “The cost of that task can be far lower when FPPs are available as a source of clean, firm electricity.” And the benefit varies depending on the cost of the FPPs. For example, assuming that the cost of building an FPP is $8,000 per kilowatt (kW) in 2050 and falls to $4,300/kW in 2100, the global cost of decarbonizing electric power drops by $3.6 trillion. If the cost of an FPP is $5,600/kW in 2050 and falls to $3,000/kW in 2100, the savings from having the fusion plants available would be $8.7 trillion. (Those calculations are based on differences in global gross domestic product and assume a discount rate of 6 percent. The undiscounted value is about 20 times larger.)

    The goal of other analyses was to determine the scale of deployment worldwide at selected FPP costs. Again, the results are striking. For a deep decarbonization scenario, the total global share of electricity generation from fusion in 2100 ranges from less than 10 percent if the cost of fusion is high to more than 50 percent if the cost of fusion is low.

    Other analyses showed that the scale and timing of fusion deployment vary in different parts of the world. Early deployment of fusion can be expected in wealthy nations such as European countries and the United States that have the most aggressive decarbonization policies. But certain other locations — for example, India and the continent of Africa — will see substantial growth in fusion deployment in the second half of the century due to a large increase in demand for electricity during that time. “In the U.S. and Europe, the amount of demand growth will be low, so it’ll be a matter of switching away from dirty fuels to fusion,” explains Sergey Paltsev, deputy director of the MIT Center for Sustainability Science and Strategy and a senior research scientist at MITEI. “But in India and Africa, for example, the tremendous growth in overall electricity demand will be met with significant amounts of fusion along with other low-carbon generation resources in the later part of the century.”

    A set of analyses focusing on nine subregions of the United States showed that the availability and cost of other low-carbon technologies, as well as how tightly carbon emissions are constrained, have a major impact on how FPPs would be deployed and used. In a decarbonized world, FPPs will have the highest penetration in locations with poor diversity, capacity, and quality of renewable resources, and limits on carbon emissions will have a big impact. For example, the Atlantic and Southeast subregions have low renewable resources. In those subregions, wind can produce only a small fraction of the electricity needed, even with maximum onshore wind buildout. Thus, fusion is needed there even when carbon constraints are relatively lenient, and any available FPPs would be running much of the time. In contrast, the Central subregion of the United States has excellent renewable resources, especially wind. There, fusion competes only when limits on carbon emissions are very strict, and FPPs will typically be operated only when the renewables can’t meet demand.

    An analysis of the power system that serves the New England states provided remarkably detailed results. Using a modeling tool developed at MITEI, the fusion team explored the impact of different assumptions about not just cost and emissions limits but even such details as potential land-use constraints affecting the use of specific VREs. This approach enabled them to calculate the FPP cost at which fusion units begin to be installed. They were also able to investigate how that “threshold” cost changed with changes in the cap on carbon emissions. The method can even show at what price FPPs begin to replace other specific generating sources. In one set of runs, they determined the cost at which FPPs would begin to displace floating-platform offshore wind and rooftop solar.

    “This study is an important contribution to fusion commercialization because it provides economic targets for the use of fusion in the electricity markets,” notes Dennis G. Whyte, co-PI of the fusion study, former director of the PSFC, and the Hitachi America Professor of Engineering in the Department of Nuclear Science and Engineering. “It better quantifies the technical design challenges for fusion developers with respect to pricing, availability, and flexibility to meet changing demand in the future.”

    The researchers stress that while fission power plants are included in the analyses, they did not perform a “head-to-head” comparison between fission and fusion, because there are too many unknowns. Fusion and nuclear fission are both firm, low-carbon electricity-generating technologies; but unlike fission, fusion doesn’t use fissile materials as fuels, and it doesn’t generate long-lived nuclear fuel waste that must be managed. As a result, the regulatory requirements for FPPs will be very different from the regulations for today’s fission power plants — but precisely how they will differ is unclear. Likewise, the future public perception and social acceptance of each of these technologies cannot be projected, but could have a major influence on which generation technologies are used to meet future demand.

    The results of the study convey several messages about the future of fusion. For example, it’s clear that regulation can be a potentially large cost driver. This should motivate fusion companies to minimize their regulatory and environmental footprint with respect to fuels and activated materials. It should also encourage governments to adopt appropriate and effective regulatory policies to maximize their ability to use fusion energy in achieving their decarbonization goals. And for companies developing fusion technologies, the study’s message is clearly stated in the report: “If the cost and performance targets identified in this report can be achieved, our analysis shows that fusion energy can play a major role in meeting future electricity needs and achieving global net-zero carbon goals.”
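    The savings figures above depend on discounting. As a minimal sketch of that arithmetic (assuming the report's 6 percent discount rate, but a purely hypothetical year-by-year savings profile and base year), the snippet below shows how benefits that accrue late in the century shrink when discounted back to the present, which is why the undiscounted total can be many times the discounted one.

```python
# Minimal sketch: how a constant discount rate shrinks century-scale savings.
# The year-by-year savings profile below is hypothetical, for illustration only;
# it is not taken from the MIT fusion study.

DISCOUNT_RATE = 0.06   # as stated in the report
BASE_YEAR = 2025       # assumed base year for discounting

def present_value(savings_by_year: dict, rate: float, base_year: int) -> float:
    """Discount each year's savings back to the base year and sum them."""
    return sum(
        amount / (1 + rate) ** (year - base_year)
        for year, amount in savings_by_year.items()
    )

# Hypothetical profile: savings ramp up linearly from 2050 through 2100 (arbitrary units).
savings = {year: 0.1 * (year - 2049) for year in range(2050, 2101)}

undiscounted = sum(savings.values())
discounted = present_value(savings, DISCOUNT_RATE, BASE_YEAR)
print(f"undiscounted total: {undiscounted:.1f}")
print(f"discounted total:   {discounted:.1f}")
print(f"ratio: {undiscounted / discounted:.1f}x")
```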

  • Study finds mercury pollution from human activities is declining

    MIT researchers have some good environmental news: Mercury emissions from human activity have been declining over the past two decades, despite global emissions inventories that indicate otherwise.

    In a new study, the researchers analyzed measurements from all available monitoring stations in the Northern Hemisphere and found that atmospheric concentrations of mercury declined by about 10 percent between 2005 and 2020.

    They used two separate modeling methods to determine what is driving that trend. Both techniques pointed to a decline in mercury emissions from human activity as the most likely cause.

    Global inventories, on the other hand, have reported opposite trends. These inventories estimate atmospheric emissions using models that incorporate average emission rates of polluting activities and the scale of these activities worldwide.

    “Our work shows that it is very important to learn from actual, on-the-ground data to try and improve our models and these emissions estimates. This is very relevant for policy because, if we are not able to accurately estimate past mercury emissions, how are we going to predict how mercury pollution will evolve in the future?” says Ari Feinberg, a former postdoc in the Institute for Data, Systems, and Society (IDSS) and lead author of the study.

    The new results could help inform scientists who are embarking on a collaborative, global effort to evaluate pollution models and develop a more in-depth understanding of what drives global atmospheric concentrations of mercury.

    However, due to a lack of data from global monitoring stations and limitations in the scientific understanding of mercury pollution, the researchers couldn’t pinpoint a definitive reason for the mismatch between the inventories and the recorded measurements.

    “It seems like mercury emissions are moving in the right direction, and could continue to do so, which is heartening to see. But this was as far as we could get with mercury. We need to keep measuring and advancing the science,” adds co-author Noelle Selin, an MIT professor in the IDSS and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    Feinberg and Selin, his MIT postdoctoral advisor, are joined on the paper by an international team of researchers that contributed atmospheric mercury measurement data and statistical methods to the study. The research appears this week in the Proceedings of the National Academy of Sciences.

    Mercury mismatch

    The Minamata Convention is a global treaty that aims to cut human-caused emissions of mercury, a potent neurotoxin that enters the atmosphere from sources like coal-fired power plants and small-scale gold mining.

    The treaty, which was signed in 2013 and went into force in 2017, is evaluated every five years. The first meeting of its conference of parties coincided with disheartening news reports that said global inventories of mercury emissions, compiled in part from information from national inventories, had increased despite international efforts to reduce them.

    This was puzzling news for environmental scientists like Selin. Data from monitoring stations showed atmospheric mercury concentrations declining during the same period.

    Bottom-up inventories combine emission factors, such as the amount of mercury that enters the atmosphere when coal mined in a certain region is burned, with estimates of pollution-causing activities, like how much of that coal is burned in power plants.

    “The big question we wanted to answer was: What is actually happening to mercury in the atmosphere and what does that say about anthropogenic emissions over time?” Selin says.

    Modeling mercury emissions is especially tricky. First, mercury is the only metal that is in liquid form at room temperature, so it has unique properties. Moreover, mercury that has been removed from the atmosphere by sinks like the ocean or land can be re-emitted later, making it hard to identify primary emission sources.

    At the same time, mercury is more difficult to study in laboratory settings than many other air pollutants, especially due to its toxicity, so scientists have limited understanding of all chemical reactions mercury can undergo. There is also a much smaller network of mercury monitoring stations, compared to other polluting gases like methane and nitrous oxide.

    “One of the challenges of our study was to come up with statistical methods that can address those data gaps, because available measurements come from different time periods and different measurement networks,” Feinberg says.

    Multifaceted models

    The researchers compiled data from 51 stations in the Northern Hemisphere. They used statistical techniques to aggregate data from nearby stations, which helped them overcome data gaps and evaluate regional trends.

    By combining data from 11 regions, their analysis indicated that Northern Hemisphere atmospheric mercury concentrations declined by about 10 percent between 2005 and 2020.

    Then the researchers used two modeling methods — biogeochemical box modeling and chemical transport modeling — to explore possible causes of that decline. Box modeling was used to run hundreds of thousands of simulations to evaluate a wide array of emission scenarios. Chemical transport modeling is more computationally expensive but enables researchers to assess the impacts of meteorology and spatial variations on trends in selected scenarios.

    For instance, they tested one hypothesis that there may be an additional environmental sink that is removing more mercury from the atmosphere than previously thought. The models would indicate the feasibility of an unknown sink of that magnitude.

    “As we went through each hypothesis systematically, we were pretty surprised that we could really point to declines in anthropogenic emissions as being the most likely cause,” Selin says.

    Their work underscores the importance of long-term mercury monitoring stations, Feinberg adds. Many stations the researchers evaluated are no longer operational because of a lack of funding.

    While their analysis couldn’t zero in on exactly why the emissions inventories didn’t match up with actual data, they have a few hypotheses.

    One possibility is that global inventories are missing key information from certain countries. For instance, the researchers resolved some discrepancies when they used a more detailed regional inventory from China. But there was still a gap between observations and estimates.

    They also suspect the discrepancy might be the result of changes in two large sources of mercury that are particularly uncertain: emissions from small-scale gold mining and mercury-containing products.

    Small-scale gold mining involves using mercury to extract gold from soil and is often performed in remote parts of developing countries, making it hard to estimate. Yet small-scale gold mining contributes about 40 percent of human-made emissions.

    In addition, it’s difficult to determine how long it takes the pollutant to be released into the atmosphere from discarded products like thermometers or scientific equipment.

    “We’re not there yet where we can really pinpoint which source is responsible for this discrepancy,” Feinberg says.

    In the future, researchers from multiple countries, including MIT, will collaborate to study and improve the models they use to estimate and evaluate emissions. This research will be influential in helping that project move the needle on monitoring mercury, he says.

    This research was funded by the Swiss National Science Foundation, the U.S. National Science Foundation, and the U.S. Environmental Protection Agency.
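    To give a flavor of the biogeochemical box modeling mentioned above, here is a toy two-box (atmosphere and surface reservoir) mercury budget. All rate constants, reservoir sizes, and emission scenarios are hypothetical round numbers chosen for illustration only; they are not values from the study.

```python
# Toy two-box (atmosphere / surface reservoir) mercury budget, illustrating the
# kind of biogeochemical box modeling described above. All parameters are
# hypothetical round numbers, not values from the study.

import numpy as np

YEARS = np.arange(2005, 2021)
DT = 1.0          # time step, years
K_DEP = 0.8       # deposition rate, atmosphere -> surface (1/yr), assumed
K_REEMIT = 0.01   # re-emission rate, surface -> atmosphere (1/yr), assumed

def run(emissions):
    """Integrate atmospheric and surface mercury masses for a given emission series."""
    atm, surf = 4.0, 100.0  # hypothetical initial masses (arbitrary units)
    atm_history = []
    for e in emissions:
        deposition = K_DEP * atm
        reemission = K_REEMIT * surf
        atm += DT * (e + reemission - deposition)
        surf += DT * (deposition - reemission)
        atm_history.append(atm)
    return np.array(atm_history)

flat = np.full(len(YEARS), 2.0)                # constant anthropogenic emissions
declining = np.linspace(2.0, 1.6, len(YEARS))  # ~20 percent decline over the period

for label, scenario in [("flat", flat), ("declining", declining)]:
    atm = run(scenario)
    change = 100 * (atm[-1] / atm[0] - 1)
    print(f"{label:9s} emissions -> atmospheric change over period: {change:+.1f}%")
```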

  • Affordable high-tech windows for comfort and energy savings

    Imagine if the windows of your home didn’t transmit heat. They’d keep the heat indoors in winter and outdoors on a hot summer’s day. Your heating and cooling bills would go down; your energy consumption and carbon emissions would drop; and you’d still be comfortable all year round.

    AeroShield, a startup spun out of MIT, is poised to start manufacturing such windows. Building operations make up 36 percent of global carbon dioxide emissions, and today’s windows are a major contributor to energy inefficiency in buildings. To improve building efficiency, AeroShield has developed a window technology that promises to reduce heat loss by up to 65 percent, significantly reducing energy use and carbon emissions in buildings, and the company just announced the opening of a new facility to manufacture its breakthrough energy-efficient windows.

    “Our mission is to decarbonize the built environment,” says Elise Strobach SM ’17, PhD ’20, co-founder and CEO of AeroShield. “The availability of affordable, thermally insulating windows will help us achieve that goal while also reducing homeowners’ heating and cooling bills.” According to the U.S. Department of Energy, for most homeowners, 30 percent of that bill results from window inefficiencies.

    Technology development at MIT

    Research on AeroShield’s window technology began a decade ago in the MIT lab of Evelyn Wang, Ford Professor of Engineering, now on leave to serve as director of the Advanced Research Projects Agency-Energy (ARPA-E). In late 2014, the MIT team received funding from ARPA-E, and other sponsors followed, including the MIT Energy Initiative through the MIT Tata Center for Technology and Design in 2016.

    The work focused on aerogels, remarkable materials that are ultra-porous, lighter than a marshmallow, strong enough to support a brick, and an unparalleled barrier to heat flow. Aerogels were invented in the 1930s and used by NASA and others as thermal insulation. The team at MIT saw the potential for incorporating aerogel sheets into windows to keep heat from escaping or entering buildings. But there was one problem: Nobody had been able to make aerogels transparent.

    An aerogel is made of transparent, loosely connected nanoscale silica particles and is 95 percent air. But an aerogel sheet isn’t transparent because light traveling through it gets scattered by the silica particles.

    After five years of theoretical and experimental work, the MIT team determined that the key to transparency was having the silica particles both small and uniform in size. This allows light to pass directly through, so the aerogel becomes transparent. Indeed, as long as the particle size is small and uniform, increasing the thickness of an aerogel sheet to achieve greater thermal insulation won’t make it less clear.

    Teams in the MIT lab looked at various applications for their super-insulating, transparent aerogels. Some focused on improving solar thermal collectors by making the systems more efficient and less expensive. But to Strobach, increasing the thermal efficiency of windows looked especially promising and potentially significant as a means of reducing climate change.

    The researchers determined that aerogel sheets could be inserted into the gap in double-pane windows, making them more than twice as insulating. The windows could then be manufactured on existing production lines with minor changes, and the resulting windows would be affordable and as wide-ranging in style as the window options available today. Best of all, once purchased and installed, the windows would reduce electricity bills, energy use, and carbon emissions.

    The impact on energy use in buildings could be considerable. “If we only consider winter, windows in the United States lose enough energy to power over 50 million homes,” says Strobach. “That wasted energy generates about 350 million tons of carbon dioxide — more than is emitted by 76 million cars.” Super-insulating windows could help home and building owners reduce carbon dioxide emissions by gigatons while saving billions in heating and cooling costs.

    The AeroShield story

    In 2019, Strobach and her MIT colleagues — Aaron Baskerville-Bridges MBA ’20, SM ’20 and Kyle Wilke PhD ’19 — co-founded AeroShield to further develop and commercialize their aerogel-based technology for windows and other applications. In the subsequent five years, their hard work has attracted attention, recently leading to two major accomplishments.

    In spring 2024, the company announced the opening of its new pilot manufacturing facility in Waltham, Massachusetts, where the team will be producing, testing, and certifying their first full-size windows and patio doors for initial product launch. The 12,000-square-foot facility will significantly expand the company’s capabilities, with cutting-edge aerogel R&D labs, manufacturing equipment, assembly lines, and testing equipment. Says Strobach, “Our pilot facility will supply window and door manufacturers as we launch our first products and will also serve as our R&D headquarters as we develop the next generation of energy-efficient products using transparent aerogels.”

    Also in spring 2024, AeroShield received a $14.5 million award from ARPA-E’s “Seeding Critical Advances for Leading Energy technologies with Untapped Potential” (SCALEUP) program, which provides new funding to previous ARPA-E awardees that have “demonstrated a viable path to market.” That funding will enable the company to expand its production capacity to tens of thousands, or even hundreds of thousands, of units per year.

    Strobach also cites two less-obvious benefits of the SCALEUP award.

    First, the funding is enabling the company to move more quickly on the scale-up phase of its technology development. “We know from our fundamental studies and lab experiments that we can make large-area aerogel sheets that could go in an entry or patio door,” says Strobach. “The SCALEUP award allows us to go straight for that vision. We don’t have to do all the incremental sizes of aerogels to prove that we can make a big one. The award provides capital for us to buy the big equipment to make the big aerogel.”

    Second, the SCALEUP award confirms the viability of the company to other potential investors and collaborators. Indeed, AeroShield recently announced $5 million of additional funding from existing investors Massachusetts Clean Energy Center and MassVentures, as well as new investor MassMutual Ventures. Strobach notes that the company now has investor, engineering, and customer partners.

    She stresses the importance of partners in achieving AeroShield’s mission. “We know that what we’ve got from a fundamental perspective can change the industry,” she says. “Now we want to go out and do it. With the right partners and at the right pace, we may actually be able to increase the energy efficiency of our buildings early enough to help make a real dent in climate change.”
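    As a rough illustration of what a 65 percent reduction in heat loss means for a single window, the sketch below applies the simple steady-state relation Q = U x A x dT. The U-values and conditions are hypothetical placeholders, not AeroShield specifications; the article states only that the technology promises up to a 65 percent reduction in heat loss.

```python
# Minimal sketch of steady-state window heat loss, Q = U * A * dT, comparing a
# conventional double-pane window with an aerogel-filled one. The U-values are
# hypothetical placeholders, not AeroShield specifications.

AREA_M2 = 1.5                       # window area, m^2 (assumed)
DELTA_T_K = 20.0                    # indoor-outdoor temperature difference, K (assumed)

U_DOUBLE_PANE = 2.8                 # W/(m^2*K), typical double-pane value (assumed)
U_AEROGEL = U_DOUBLE_PANE * 0.35    # 65 percent lower heat loss at the same area and dT

def heat_loss_watts(u_value: float) -> float:
    """Steady-state conductive heat loss through the window."""
    return u_value * AREA_M2 * DELTA_T_K

q_base = heat_loss_watts(U_DOUBLE_PANE)
q_aero = heat_loss_watts(U_AEROGEL)
print(f"double pane: {q_base:.0f} W, aerogel window: {q_aero:.0f} W "
      f"({100 * (1 - q_aero / q_base):.0f}% reduction)")
```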

  • MIT students combat climate anxiety through extracurricular teams

    Climate anxiety affects nearly half of young people aged 16-25. Students like second-year Rachel Mohammed find hope and inspiration through involvement in innovative climate solutions, working alongside peers who share their determination. “I’ve met so many people at MIT who are dedicated to finding climate solutions in ways that I had never imagined, dreamed of, or heard of. That is what keeps me going, and I’m doing my part,” she says.

    Hydrogen-fueled engines

    Hydrogen offers the potential for zero or near-zero emissions, with the ability to reduce greenhouse gases and pollution by 29 percent. However, the hydrogen industry faces many challenges related to storage solutions and costs.

    Mohammed leads the hydrogen team on MIT’s Electric Vehicle Team (EVT), which is dedicated to harnessing hydrogen power to build a cleaner, more sustainable future. EVT is one of several student-led build teams at the Edgerton Center focused on innovative climate solutions. Since its founding in 1992, the Edgerton Center has been a hub for MIT students to bring their ideas to life.

    Hydrogen is mostly used in large vehicles like trucks and planes because it requires a lot of storage space. EVT is building their second iteration of a motorcycle based on what Mohammed calls a “goofy hypothesis” that you can use hydrogen to power a small vehicle. The team employs a hydrogen fuel cell system, which generates electricity by combining hydrogen with oxygen. However, the technology faces challenges, particularly in storage, which EVT is tackling with innovative designs for smaller vehicles.

    Presenting at the 2024 World Hydrogen Summit reaffirmed Mohammed’s confidence in this project. “I often encounter skepticism, with people saying it’s not practical. Seeing others actively working on similar initiatives made me realize that we can do it too,” Mohammed says.

    The team’s first successful track test last October allowed them to evaluate the real-world performance of their hydrogen-powered motorcycle, marking a crucial step in proving the feasibility and efficiency of their design.

    MIT’s Sustainable Engine Team (SET), founded by junior Charles Yong, uses the combustion method to generate energy with hydrogen. This is a promising technology route for high-power-density applications, like aviation, but Yong believes it hasn’t received enough attention. Yong explains, “In the hydrogen power industry, startups choose fuel cell routes instead of combustion because gas turbine industry giants are 50 years ahead. However, these giants are moving very slowly toward hydrogen due to its not-yet-fully-developed infrastructure. Working under the Edgerton Center allows us to take risks and explore advanced tech directions to demonstrate that hydrogen combustion can be readily available.”

    Both EVT and SET are publishing their research and providing detailed instructions for anyone interested in replicating their results.

    Running on sunshine

    The Solar Electric Vehicle Team powers a car built from scratch with 100 percent solar energy.

    The team’s single-occupancy car Nimbus won the American Solar Challenge two years in a row. This year, the team pushed boundaries further with Gemini, a multiple-occupancy vehicle that challenges conventional perceptions of solar-powered cars.

    Senior Andre Greene explains, “The challenge comes from minimizing how much energy you waste because you work with such little energy. It’s like the equivalent power of a toaster.”

    Gemini looks more like a regular car and less like a “spaceship,” as NBC’s 1st Look affectionately called Nimbus. “It more resembles what a fully solar-powered car could look like versus the single-seaters. You don’t see a lot of single-seater cars on the market, so it’s opening people’s minds,” says rising junior Tessa Uviedo, team captain.

    All-electric since 2013

    The MIT Motorsports team switched to an all-electric powertrain in 2013. Captain Eric Zhou takes inspiration from China, the world’s largest market for electric vehicles. “In China, there is a large government push towards electric, but there are also five or six big companies almost as large as Tesla size, building out these electric vehicles. The competition drives the majority of vehicles in China to become electric.”

    The team is also switching to four-wheel drive and regenerative braking next year, which reduces the amount of energy needed to run. “This is more efficient and better for power consumption because the torque from the motors is applied straight to the tires. It’s more efficient than having a rear motor that must transfer torque to both rear tires. Also, you’re taking advantage of all four tires in terms of producing grip, while you can only rely on the back tires in a rear-wheel-drive car,” Zhou says.

    Zhou adds that Motorsports wants to help prepare students for the electric vehicle industry. “A large majority of upperclassmen on the team have worked, or are working, at Tesla or Rivian.”

    Former Motorsports powertrain lead Levi Gershon ’23, SM ’24 recently founded CRABI Robotics — a fully autonomous marine robotic system designed to conduct in-transit cleaning of marine vessels by removing biofouling, increasing vessels’ fuel efficiency.

    An Indigenous approach to sustainable rockets

    First Nations Launch, the all-Indigenous student rocket team, recently won the Grand Prize in the 2024 NASA First Nations Launch High-Power Rocket Competition. Using Indigenous methodologies, this team considers the environment in the materials and methods they employ.

    “The environmental impact is always something that we consider when we’re making design decisions and operational decisions. We’ve thought about things like biodegradable composites and parachutes,” says rising junior Hailey Polson, team captain. “Aerospace has been a very wasteful industry in the past. There are huge leaps and bounds being made with forward progress in regard to reusable rockets, which is definitely lowering the environmental impact.”

    Collecting climate change data with autonomous boats

    Arcturus, the recent first-place winner in design at the 16th Annual RoboBoat Competition, is developing autonomous surface vehicles that can greatly aid in marine research. “The ocean is one of our greatest resources to combat climate change; thus, the accessibility of data will help scientists understand climate patterns and predict future trends. This can help people learn how to prepare for potential disasters and how to reduce each of our carbon footprints,” says Arcturus captain and rising junior Amy Shi.

    “We are hoping to expand our outreach efforts to incorporate more sustainability-related programs. This can include more interactions with local students to introduce them to how engineering can make a positive impact in the climate space or other similar programs,” Shi says.

    Shi emphasizes that hope is a crucial force in the battle against climate change. “There are great steps being taken every day to combat this seemingly impending doom we call the climate crisis. It’s important to not give up hope, because this hope is what’s driving the leaps and bounds of innovation happening in the climate community. The mainstream media mostly reports on the negatives, but the truth is there is a lot of positive climate news every day. Being more intentional about where you seek your climate news can really help subside this feeling of doom about our planet.”

  • Tracking emissions to help companies reduce their environmental footprint

    Amidst a global wave of corporate pledges to decarbonize or reach net-zero emissions, a system for verifying actual greenhouse gas reductions has never been more important. Context Labs, founded by former MIT Sloan Fellow and serial entrepreneur Dan Harple SM ’13, is rising to meet that challenge with an analytics platform that brings more transparency to emissions data.

    The company’s platform adds context to data from sources like equipment sensors and satellites, provides third-party verification, and records all that information on a blockchain. Context Labs also provides an interactive view of emissions across every aspect of a company’s operations, allowing leaders to pinpoint the dirtiest parts of their business.

    “There’s an old adage: Unless you measure something, you can’t change it,” says Harple, who is the firm’s CEO. “I think of what we’re doing as an AI-driven digital lens into what’s happening across organizations. Our goal is to help the planet get better, faster.”

    Context Labs is already working with some of the largest energy companies in the world — including EQT, Williams Companies, and Coterra Energy — to verify emissions reductions. A partnership with Microsoft, announced at last year’s COP28 United Nations climate summit, allows any organization on Microsoft’s Azure cloud to integrate their sensor data into Context Labs’ platform to get a granular view of their environmental impact.

    Harple says the progress enables more informed sustainability initiatives at scale. He also sees the work as a way to combat overly vague statements about sustainable practices that don’t lead to actual emissions reductions, or what’s known as “greenwashing.”

    “Just producing data isn’t good enough, and our customers realize that, because they know even if they have good intentions to reduce emissions, no one is going to believe them,” Harple says. “One way to think about our platform is as antigreenwashing insurance, because if you get attacked for your emissions, we unbundle the data like it’s in shrink-wrap and roll it back through time on the blockchain. You can click on it and see exactly where and how it was measured, monitored, timestamped, its serial number, everything. It’s really the gold standard of proof.”

    An unconventional master’s

    Harple came to MIT as a serial founder whose companies had pioneered several foundational internet technologies, including real-time video streaming technology still used in applications like Zoom and Netflix, as well as some of the core technology for the popular Chinese microblogging website Weibo.

    Harple’s introduction to MIT started with a paper he wrote for his venture capital contacts in the U.S. to make the case for investment in the Netherlands, where he was living with his family. The paper caught the attention of MIT Professor Stuart Madnick, the John Norris Maguire Professor of Information Technology at the MIT Sloan School of Management, who suggested Harple come to MIT as a Sloan Fellow to further develop his ideas about what makes a strong innovation ecosystem.

    Having successfully founded and exited multiple companies, Harple was not a typical MIT student when he began the Sloan Fellows program in 2011. At one point, he held a summit at MIT for a group of leading Dutch entrepreneurs and government officials that included tours of major labs and a meeting with former MIT President L. Rafael Reif.

    “Everyone was super enamored with MIT, and that kicked off what became a course that I started at MIT called REAL, Regional Entrepreneurial Acceleration Lab,” Harple says. REAL was eventually absorbed by what is now REAP — the Regional Entrepreneurship Acceleration Program, which has worked with communities around the world.

    Harple describes REAL as a framework vehicle to put his theories on supporting innovation into action. Over his time at MIT, which also included collaborating with the Media Lab, he systematized those theories into what he calls pentalytics, which is a way to measure and predict the resilience of innovation ecosystems.

    “My sense was MIT should be analytical and data-driven,” Harple says. “The thesis I wrote was a framework for AI-driven network graph analytics. So, you can model things using analytics, and you can use AI to do predictive analytics to see where the innovation ecosystem is going to thrive.”

    Once Harple’s pentalytics theory was established, he wanted to put it to the test with a company. His initial idea for Context Labs was to build a verification platform to combat fake news, deepfakes, and other misinformation on the internet. Around 2018, Harple met climate investor Jeremy Grantham, who he says helped him realize the most important data are about the planet. Harple began to believe that U.S. Environmental Protection Agency (EPA) emissions estimates for things like driving a car or operating an oil rig were just that — estimates — and left room for improvement.

    “Our approach was very MIT-ish,” Harple says. “We said, ‘Let’s measure it and let’s monitor it, and then let’s contextualize that data so you can never go back and say they faked it.’ I think there’s a lot of fakery that’s happened, and that’s why the voluntary carbon markets cratered in the last year. Our view is they cratered because the data wasn’t empirical enough.”

    Context Labs’ solution starts with a technology platform it calls Immutably that continuously combines disparate data streams, encrypts that information, and records it on a blockchain. Immutably also verifies the information with one or more third parties. (Context Labs has partnered with the global accounting firm KPMG.)

    On top of Immutably, Context Labs has built applications, including a product called Decarbonization-as-a-Service (DaaS), which uses Immutably’s data to give companies a digital twin of their entire operations. Customers can use DaaS to explore the emissions of their assets and create a certificate of verified CO2-equivalent emissions, which can be used in carbon credit markets.

    Putting emissions data into context

    Context Labs is working with oil and gas companies, utilities, data centers, and large industrial operators, some using the platform to analyze more than 3 billion data points each day. For instance, EQT, the largest natural gas producer in the U.S., uses Context Labs to verify its lower-emission products and create carbon credits. Other customers include the nonprofits Rocky Mountain Institute and the Environmental Defense Fund.

    “I often get asked how big the total addressable market is,” Harple says. “My view is it’s the largest market in history. Why? Because every country needs a decarbonization plan, along with instrumentation and a digital platform to execute, as does every company.”

    With its headquarters in Kendall Square in Cambridge, Massachusetts, Context Labs is also serving as a test of Harple’s pentalytics theory for innovation ecosystems. It also has operations in Houston and Amsterdam.

    “This company is a living lab for pentalytics,” Harple says. “I believe Kendall Square 1.0 was factory buildings, Kendall Square 2.0 is biotech, and Kendall Square 3.0 will be climate tech.”
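    To make the tamper-evidence idea concrete, here is a generic append-only hash chain for timestamped sensor readings. This is an illustrative sketch only; it is not the Immutably platform or Context Labs’ actual data model, and the record fields are invented.

```python
# Generic sketch of an append-only hash chain for timestamped sensor readings,
# showing how tamper-evident records work in principle. This is not Context Labs'
# Immutably platform; record fields and structure are invented for illustration.

import hashlib
import json
import time

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True).encode() + prev_hash.encode()
    return hashlib.sha256(payload).hexdigest()

class HashChain:
    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else "genesis"
        h = record_hash(record, prev)
        self.entries.append((record, h))
        return h

    def verify(self) -> bool:
        """Recompute every hash; any altered record breaks the chain."""
        prev = "genesis"
        for record, h in self.entries:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True

chain = HashChain()
chain.append({"sensor": "meter-001", "ch4_ppm": 2.1, "ts": time.time()})
chain.append({"sensor": "meter-001", "ch4_ppm": 2.3, "ts": time.time()})
print("chain valid:", chain.verify())

# Tampering with an earlier reading invalidates verification.
chain.entries[0][0]["ch4_ppm"] = 0.0
print("chain valid after tampering:", chain.verify())
```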

  • Scientists find a human “fingerprint” in the upper troposphere’s increasing ozone

    Ozone can be an agent of good or harm, depending on where you find it in the atmosphere. Way up in the stratosphere, the colorless gas shields the Earth from the sun’s harsh ultraviolet rays. But closer to the ground, ozone is a harmful air pollutant that can trigger chronic health problems including chest pain, difficulty breathing, and impaired lung function.

    And somewhere in between, in the upper troposphere — the layer of the atmosphere just below the stratosphere, where most aircraft cruise — ozone contributes to warming the planet as a potent greenhouse gas.

    There are signs that ozone is continuing to rise in the upper troposphere despite efforts to reduce its sources at the surface in many nations. Now, MIT scientists confirm that much of ozone’s increase in the upper troposphere is likely due to humans.

    In a paper appearing today in the journal Environmental Science & Technology, the team reports that they detected a clear signal of human influence on upper tropospheric ozone trends in a 17-year satellite record starting in 2005.

    “We confirm that there’s a clear and increasing trend in upper tropospheric ozone in the northern midlatitudes due to human beings rather than climate noise,” says study lead author Xinyuan Yu, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    “Now we can do more detective work and try to understand what specific human activities are leading to this ozone trend,” adds co-author Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor in Earth, Atmospheric and Planetary Sciences.

    The study’s MIT authors include Sebastian Eastham and Qindan Zhu, along with Benjamin Santer at the University of California at Los Angeles, Gustavo Correa of Columbia University, Jean-François Lamarque at the National Center for Atmospheric Research, and Jerald Zimeke at NASA Goddard Space Flight Center.

    Ozone’s tangled web

    Understanding ozone’s causes and influences is a challenging exercise. Ozone is not emitted directly, but instead is a product of “precursors” — starting ingredients, such as nitrogen oxides and volatile organic compounds (VOCs), that react in the presence of sunlight to form ozone. These precursors are generated from vehicle exhaust, power plants, chemical solvents, industrial processes, aircraft emissions, and other human activities.

    Whether and how long ozone lingers in the atmosphere depends on a tangle of variables, including the type and extent of human activities in a given area, as well as natural climate variability. For instance, a strong El Niño year could nudge the atmosphere’s circulation in a way that affects ozone’s concentrations, regardless of how much ozone humans are contributing to the atmosphere that year.

    Disentangling the human- versus climate-driven causes of ozone trends, particularly in the upper troposphere, is especially tricky. Complicating matters is the fact that in the lower troposphere — the lowest layer of the atmosphere, closest to ground level — ozone has stopped rising, and has even fallen in some regions at northern midlatitudes in the last few decades. This decrease in lower tropospheric ozone is mainly a result of efforts in North America and Europe to reduce industrial sources of air pollution.

    “Near the surface, ozone has been observed to decrease in some regions, and its variations are more closely linked to human emissions,” Yu notes. “In the upper troposphere, the ozone trends are less well-monitored but seem to decouple with those near the surface, and ozone is more easily influenced by climate variability. So, we don’t know whether and how much of that increase in observed ozone in the upper troposphere is attributed to humans.”

    A human signal amid climate noise

    Yu and Fiore wondered whether a human “fingerprint” in ozone levels, caused directly by human activities, could be strong enough to be detectable in satellite observations in the upper troposphere. To see such a signal, the researchers would first have to know what to look for.

    For this, they looked to simulations of the Earth’s climate and atmospheric chemistry. Following approaches developed in climate science, they reasoned that if they could simulate a number of possible climate variations in recent decades, all with identical human-derived sources of ozone precursor emissions, but each starting with a slightly different climate condition, then any differences among these scenarios should be due to climate noise. By inference, any common signal that emerged when averaging over the simulated scenarios should be due to human-driven causes. Such a signal, then, would be a “fingerprint” revealing human-caused ozone, which the team could look for in actual satellite observations.

    With this strategy in mind, the team ran simulations using a state-of-the-art chemistry climate model. They ran multiple climate scenarios, each starting from the year 1950 and running through 2014.

    From their simulations, the team saw a clear and common signal across scenarios, which they identified as a human fingerprint. They then looked to tropospheric ozone products derived from multiple instruments aboard NASA’s Aura satellite.

    “Quite honestly, I thought the satellite data were just going to be too noisy,” Fiore admits. “I didn’t expect that the pattern would be robust enough.”

    But the satellite observations they used gave them a good enough shot. The team looked through the upper tropospheric ozone data derived from the satellite products, from the years 2005 to 2021, and found that, indeed, they could see the signal of human-caused ozone that their simulations predicted. The signal is especially pronounced over Asia, where industrial activity has risen significantly in recent decades and where abundant sunlight and frequent weather events loft pollution, including ozone and its precursors, to the upper troposphere.

    Yu and Fiore are now looking to identify the specific human activities that are leading to ozone’s increase in the upper troposphere.

    “Where is this increasing trend coming from? Is it the near-surface emissions from combusting fossil fuels in vehicle engines and power plants? Is it the aircraft that are flying in the upper troposphere? Is it the influence of wildland fires? Or some combination of all of the above?” Fiore says. “Being able to separate human-caused impacts from natural climate variations can help to inform strategies to address climate change and air pollution.”

    This research was funded, in part, by NASA.
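    The ensemble logic described above (many simulations sharing the same human emissions but starting from slightly different climate states, so that internal variability averages out while the forced signal remains) can be sketched with synthetic numbers. Everything below is invented for illustration; it is not output from the study's chemistry-climate model or the Aura satellite record.

```python
# Synthetic illustration of the ensemble-averaging "fingerprint" idea:
# members share the same human-driven trend but differ in internal variability,
# so averaging across members suppresses the noise and leaves the forced signal.
# All numbers are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

years = np.arange(2005, 2022)
n_members = 20

forced_trend = 0.2 * (years - years[0])                       # common human-driven signal (arbitrary units)
noise = rng.normal(0.0, 1.5, size=(n_members, years.size))    # member-specific "climate noise"

ensemble = forced_trend + noise          # each row is one simulated realization
ensemble_mean = ensemble.mean(axis=0)    # noise largely cancels; forced signal remains

# Least-squares trend (per decade) for a single member vs. the ensemble mean
single_slope = np.polyfit(years, ensemble[0], 1)[0] * 10
mean_slope = np.polyfit(years, ensemble_mean, 1)[0] * 10
print(f"trend of one member:    {single_slope:+.2f} per decade")
print(f"trend of ensemble mean: {mean_slope:+.2f} per decade (true forced: +2.00)")
```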

  • China-based emissions of three potent climate-warming greenhouse gases spiked in past decade

    When it comes to heating up the planet, not all greenhouse gases are created equal. They vary widely in their global warming potential (GWP), a measure of how much infrared thermal radiation a greenhouse gas would absorb over a given time frame once it enters the atmosphere. For example, measured over a 100-year period, the GWP of methane is about 28 times that of carbon dioxide (CO2), and the GWPs of a class of greenhouse gases known as perfluorocarbons (PFCs) are thousands of times that of CO2. The lifespans in the atmosphere of different greenhouse gases also vary widely: methane persists in the atmosphere for around 10 years, CO2 for over 100 years, and PFCs for up to tens of thousands of years.

    Given the high GWPs and lifespans of PFCs, their emissions could pose a major roadblock to achieving the aspirational goal of the Paris Agreement on climate change — to limit the increase in global average surface temperature to 1.5 degrees Celsius above preindustrial levels. Now, two new studies based on atmospheric observations inside China and high-resolution atmospheric models show a rapid rise in Chinese emissions over the last decade (2011 to 2020 or 2021) of three PFCs: tetrafluoromethane (PFC-14) and hexafluoroethane (PFC-116) (results in PNAS), and perfluorocyclobutane (PFC-318) (results in Environmental Science & Technology).

    Both studies find that Chinese emissions have played a dominant role in driving up global emission levels for all three PFCs.

    The PNAS study identifies substantial PFC-14 and PFC-116 emission sources in the less-populated western regions of China from 2011 to 2021, likely due to the large aluminum industry in these regions. The semiconductor industry also contributes to some of the emissions detected in the more economically developed eastern regions. These emissions are byproducts from aluminum smelting, or occur during the use of the two PFCs in the production of semiconductors and flat panel displays. During the observation period, emissions of both gases in China rose by 78 percent, accounting for most of the increase in global emissions of these gases.

    The ES&T study finds that a 70 percent increase in Chinese PFC-318 emissions during 2011-20 (contributing more than half of the global emissions increase of this gas) originated primarily in eastern China. The regions with high emissions of PFC-318 in China overlap with geographical areas densely populated with factories that produce polytetrafluoroethylene (PTFE, commonly used for nonstick cookware coatings), implying that PTFE factories are major sources of PFC-318 emissions in China. In these factories, PFC-318 is formed as a byproduct.

    “Using atmospheric observations from multiple monitoring sites, we not only determined the magnitudes of PFC emissions, but also pinpointed the possible locations of their sources,” says Minde An, a postdoc at the MIT Center for Global Change Science (CGCS) and corresponding author of both studies. “Identifying the actual source industries contributing to these PFC emissions, and understanding the reasons for these largely byproduct emissions, can provide guidance for developing region- or industry-specific mitigation strategies.”

    “These three PFCs are largely produced as unwanted byproducts during the manufacture of otherwise widely used industrial products,” says MIT professor of atmospheric sciences Ronald Prinn, director of both the MIT Joint Program on the Science and Policy of Global Change and CGCS, and a co-author of both studies. “Phasing out emissions of PFCs as early as possible is highly beneficial for achieving global climate mitigation targets and is likely achievable by recycling programs and targeted technological improvements in these industries.”

    Findings in both studies were obtained, in part, from atmospheric observations collected from nine stations within a Chinese network, including one station from the Advanced Global Atmospheric Gases Experiment (AGAGE) network. For comparison, global total emissions were determined from five globally distributed, relatively unpolluted “background” AGAGE stations, as reported in the latest United Nations Environment Programme and World Meteorological Organization Ozone Assessment report.
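    For readers unfamiliar with how GWP is applied, the short sketch below converts emissions of several gases into a common CO2-equivalent total by weighting each gas by its 100-year GWP. Methane's value of 28 comes from the text above; the PFC value is a placeholder standing in for "thousands of times that of CO2," not an assessed figure.

```python
# Minimal CO2-equivalent calculation using 100-year global warming potentials (GWPs).
# Methane's GWP of 28 is from the article; the PFC value below is a placeholder
# order of magnitude, not an officially assessed figure.

GWP_100 = {
    "CO2": 1,
    "CH4": 28,        # from the article
    "PFC-14": 7000,   # placeholder, "thousands of times that of CO2"
}

def co2_equivalent(emissions_tonnes: dict) -> float:
    """Sum emissions of each gas weighted by its 100-year GWP."""
    return sum(GWP_100[gas] * tonnes for gas, tonnes in emissions_tonnes.items())

# One tonne of a PFC can outweigh thousands of tonnes of CO2:
example = {"CO2": 1000.0, "CH4": 10.0, "PFC-14": 1.0}
print(f"{co2_equivalent(example):,.0f} tonnes CO2-equivalent")
```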

  • Q&A: What past environmental success can teach us about solving the climate crisis

    Susan Solomon, MIT professor of Earth, atmospheric, and planetary sciences (EAPS) and of chemistry, played a critical role in understanding how a class of chemicals known as chlorofluorocarbons were creating a hole in the ozone layer. Her research was foundational to the creation of the Montreal Protocol, an international agreement established in the 1980s that phased out products releasing chlorofluorocarbons. Since then, scientists have documented signs that the ozone hole is recovering thanks to these measures.

    Having witnessed this historical process first-hand, Solomon, the Lee and Geraldine Martin Professor of Environmental Studies, is aware of how people can come together to make successful environmental policy happen. Using her story, as well as other examples of success — including combating smog, getting rid of DDT, and more — Solomon draws parallels from then to now as the climate crisis comes into focus in her new book, “Solvable: How We Healed the Earth and How We Can Do It Again.”

    Solomon took a moment to talk about why she picked the stories in her book, the students who inspired her, and why we need hope and optimism now more than ever.

    Q: You have first-hand experience seeing how we’ve altered the Earth, as well as the process of creating international environmental policy. What prompted you to write a book about your experiences?

    A: Lots of things, but one of the main ones is the things that I see in teaching. I have taught a class called Science, Politics and Environmental Policy for many years here at MIT. Because my emphasis is always on how we’ve actually fixed problems, students come away from that class feeling hopeful, like they really want to stay engaged with the problem.

    It strikes me that students today have grown up in a very contentious and difficult era in which they feel like nothing ever gets done. But stuff does get done, even now. Looking at how we did things so far really helps you to see how we can do things in the future.

    Q: In the book, you use five different stories as examples of successful environmental policy, and then end talking about how we can apply these lessons to climate change. Why did you pick these five stories?

    A: I picked some of them because I’m closer to those problems in my own professional experience, like ozone depletion and smog. I did other issues partly because I wanted to show that even in the 21st century, we’ve actually got some stuff done — that’s the story of the Kigali Amendment to the Montreal Protocol, which is a binding international agreement on some greenhouse gases.

    Another chapter is on DDT. One of the reasons I included that is because it had an enormous effect on the birth of the environmental movement in the United States. Plus, that story allows you to see how important the environmental groups can be.

    Lead in gasoline and paint is the other one. I find it a very moving story because the idea that we were poisoning millions of children and not even realizing it is so very, very sad. But it’s so uplifting that we did figure out the problem, and it happened partly because of the civil rights movement, which made us aware that the problem was striking minority communities much more than non-minority communities.

    Q: What surprised you the most during your research for the book?

    A: One of the things that I didn’t realize, and should have, was the outsized role played by a single senator, Ed Muskie of Maine. He made pollution control his big issue and devoted incredible energy to it. He clearly had the passion and wanted to do it for many years, but until other factors helped him, he couldn’t. That’s where I began to understand the role of public opinion and the way in which policy is only possible when public opinion demands change.

    Another thing about Muskie was the way in which his engagement with these issues demanded that science be strong. When I read what he put into congressional testimony, I realized how highly he valued the science. Science alone is never enough, but it’s always necessary. Over the years, science got a lot stronger, and we developed ways of evaluating what the scientific wisdom across many different studies and many different views actually is. That’s what scientific assessment is all about, and it’s crucial to environmental progress.

    Q: Throughout the book you argue that for environmental action to succeed, three conditions must be met, which you call the three Ps: a threat must be personal, perceptible, and practical. Where did this idea come from?

    A: My observations. You have to perceive the threat: In the case of the ozone hole, you could perceive it because those false-color images of the ozone loss were so easy to understand, and it was personal because few things are scarier than cancer, and a reduced ozone layer leads to too much sun, increasing skin cancers. Science plays a role in communicating what can be readily understood by the public, and that’s important to them perceiving it as a serious problem.

    Nowadays, we certainly perceive the reality of climate change. We also see that it’s personal. People are dying because of heat waves in much larger numbers than they used to; there are horrible problems in the Boston area, for example, with flooding and sea level rise. People perceive the reality of the problem and they feel personally threatened.

    The third P is practical: People have to believe that there are practical solutions. It’s interesting to watch how the battle for hearts and minds has shifted. There was a time when the skeptics would just attack the whole idea that the climate was changing. Eventually, they decided ‘we better accept that because people perceive it, so let’s tell them that it’s not caused by human activity.’ But it’s clear enough now that human activity does play a role. So they’ve moved on to attacking that third P, that somehow it’s not practical to have any kind of solutions. This is progress! So what about that third P?

    What I tried to do in the book is to point out some of the ways in which the problem has also become eminently practical to deal with in the last 10 years, and will continue to move in that direction. We’re right on the cusp of success, and we just have to keep going. People should not give in to eco despair; that’s the worst thing you could do, because then nothing will happen. If we continue to move at the rate we have, we will certainly get to where we need to be.

    Q: That ties in very nicely with my next question. The book is very optimistic; what gives you hope?

    A: I’m optimistic because I’ve seen so many examples of where we have succeeded, and because I see so many signs of movement right now that are going to push us in the same direction.

    If we had kept conducting business as usual as we had been in the year 2000, we’d be looking at 4 degrees of future warming. Right now, I think we’re looking at 3 degrees. I think we can get to 2 degrees. We have to really work on it, and we have to get going seriously in the next decade, but globally right now over 30 percent of our energy is from renewables. That’s fantastic! Let’s just keep going.

    Q: Throughout the book, you show that environmental problems won’t be solved by individual actions alone, but require policy and technology to drive change. What individual actions can people take to help push for those bigger changes?

    A: A big one is choosing to eat more sustainably, choosing alternative transportation methods like public transportation, or reducing the number of trips that you make. Older people usually have retirement investments; you can shift them over to social choice funds and away from index funds that end up funding companies that you might not be interested in. You can use your money to put pressure: Amazon has been under a huge amount of pressure to cut down on their plastic packaging, mainly coming from consumers. They’ve just announced they’re not going to use those plastic pillows anymore. I think you can see lots of ways in which people really do matter, and we can matter more.

    Q: What do you hope people take away from the book?

    A: Hope for their future, and resolve to do the best they can by getting engaged with it.