More stories

  • in

    Consortium led by MIT, Harvard University, and Mass General Brigham spurs development of 408 MW of renewable energy

    MIT is co-leading an effort to enable the development of two new large-scale renewable energy projects in regions with carbon-intensive electrical grids: Big Elm Solar in Bell County, Texas, came online this year, and the Bowman Wind Project in Bowman County, North Dakota, is expected to be operational in 2026. Together, they will add a combined 408 megawatts (MW) of new renewable energy capacity to the power grid. This work is a critical part of MIT’s strategy to achieve its goal of net-zero carbon emissions by 2026.The Consortium for Climate Solutions, which includes MIT and 10 other Massachusetts organizations, seeks to eliminate close to 1 million metric tons of greenhouse gases each year — more than five times the annual direct emissions from MIT’s campus — by committing to purchase an estimated 1.3-million-megawatt hours of new solar and wind electricity generation annually.“MIT has mobilized on multiple fronts to expedite solutions to climate change,” says Glen Shor, executive vice president and treasurer. “Catalyzing these large-scale renewable projects is an important part of our comprehensive efforts to reduce carbon emissions from generating energy. We are pleased to work in partnership with other local enterprises and organizations to amplify the impact we could achieve individually.”The two new projects complement MIT’s existing 25-year power purchase agreement established with Summit Farms in 2016, which enabled the construction of a roughly 650-acre, 60 MW solar farm on farmland in North Carolina, leading to the early retirement of a coal-fired plant nearby. Its success has inspired other institutions to implement similar aggregation models.A collective approach to enable global impactMIT, Harvard University, and Mass General Brigham formed the consortium in 2020 to provide a structure to accelerate global emissions reductions through the development of large-scale renewable energy projects — accelerating and expanding the impact of each institution’s greenhouse gas reduction initiatives. As the project’s anchors, they collectively procured the largest volume of energy through the aggregation.  The consortium engaged with PowerOptions, a nonprofit energy-buying consortium, which offered its members the opportunity to participate in the projects. The City of Cambridge, Beth Israel Lahey, Boston Children’s Hospital, Dana-Farber Cancer Institute, Tufts University, the Mass Convention Center Authority, the Museum of Fine Arts, and GBH later joined the consortium through PowerOptions.  The consortium vetted over 125 potential projects against its rigorous project evaluation criteria. With faculty and MIT stakeholder input on a short list of the highest-ranking projects, it ultimately chose Bowman Wind and Big Elm Solar. Collectively, these two projects will achieve large greenhouse gas emissions reductions in two of the most carbon-intensive electrical grid regions in the United States and create clean energy generation sources to reduce negative health impacts.“Enabling these projects in regions where the grids are most carbon-intensive allows them to have the greatest impact. 
We anticipate these projects will prevent two times more emissions per unit of generated electricity than would a similar-scale project in New England,” explains Vice President for Campus Services and Stewardship Joe Higgins.By all consortium institutions making significant 15-to-20-year financial commitments to buy electricity, the developer was able to obtain critical external project financing to build the projects. Owned and operated by Apex Clean Energy, the projects will add new renewable electricity to the grid equivalent to powering 130,000 households annually, displacing over 950,000 metric tons of greenhouse gas emissions each year from highly carbon-intensive power plants in the region. Complementary decarbonization work underway In addition to investing in offsite renewable energy projects, many consortium members have developed strategies to reduce and eliminate their own direct emissions. At MIT, accomplishing this requires transformative change in how energy is generated, distributed, and used on campus. Efforts underway include the installation of solar panels on campus rooftops that will increase renewable energy generation four-fold by 2026; continuing to transition our heat distribution infrastructure from steam-based to hot water-based; utilizing design and construction that minimizes emissions and increases energy efficiency; employing AI-enabled sensors to optimize temperature set points and reduce energy use in buildings; and converting MIT’s vehicle fleet to all-electric vehicles while adding more electric car charging stations.The Institute has also upgraded the Central Utilities Plant, which uses advanced co-generation technology to produce power that is up to 20 percent less carbon-intensive than that from the regional power grid. MIT is charting the course toward a next-generation district energy system, with a comprehensive planning initiative to revolutionize its campus energy infrastructure. The effort is exploring leading-edge technology, including industrial-scale heat pumps, geothermal exchange, micro-reactors, bio-based fuels, and green hydrogen derived from renewable sources as solutions to achieve full decarbonization of campus operations by 2050.“At MIT, we are focused on decarbonizing our own campus as well as the role we can play in solving climate at the largest of scales, including supporting a cleaner grid in line with the call to triple renewables globally by 2030. By enabling these large-scale renewable projects, we can have an immediate and significant impact of reducing emissions through the urgently needed decarbonization of regional power grids,” says Julie Newman, MIT’s director of sustainability.   More

  • in

    Reality check on technologies to remove carbon dioxide from the air

    In 2015, 195 nations plus the European Union signed the Paris Agreement and pledged to undertake plans designed to limit the global temperature increase to 1.5 degrees Celsius. Yet in 2023, the world exceeded that target for most, if not all of, the year — calling into question the long-term feasibility of achieving that target.To do so, the world must reduce the levels of greenhouse gases in the atmosphere, and strategies for achieving levels that will “stabilize the climate” have been both proposed and adopted. Many of those strategies combine dramatic cuts in carbon dioxide (CO2) emissions with the use of direct air capture (DAC), a technology that removes CO2 from the ambient air. As a reality check, a team of researchers in the MIT Energy Initiative (MITEI) examined those strategies, and what they found was alarming: The strategies rely on overly optimistic — indeed, unrealistic — assumptions about how much CO2 could be removed by DAC. As a result, the strategies won’t perform as predicted. Nevertheless, the MITEI team recommends that work to develop the DAC technology continue so that it’s ready to help with the energy transition — even if it’s not the silver bullet that solves the world’s decarbonization challenge.DAC: The promise and the realityIncluding DAC in plans to stabilize the climate makes sense. Much work is now under way to develop DAC systems, and the technology looks promising. While companies may never run their own DAC systems, they can already buy “carbon credits” based on DAC. Today, a multibillion-dollar market exists on which entities or individuals that face high costs or excessive disruptions to reduce their own carbon emissions can pay others to take emissions-reducing actions on their behalf. Those actions can involve undertaking new renewable energy projects or “carbon-removal” initiatives such as DAC or afforestation/reforestation (planting trees in areas that have never been forested or that were forested in the past). DAC-based credits are especially appealing for several reasons, explains Howard Herzog, a senior research engineer at MITEI. With DAC, measuring and verifying the amount of carbon removed is straightforward; the removal is immediate, unlike with planting forests, which may take decades to have an impact; and when DAC is coupled with CO2 storage in geologic formations, the CO2 is kept out of the atmosphere essentially permanently — in contrast to, for example, sequestering it in trees, which may one day burn and release the stored CO2.Will current plans that rely on DAC be effective in stabilizing the climate in the coming years? To find out, Herzog and his colleagues Jennifer Morris and Angelo Gurgel, both MITEI principal research scientists, and Sergey Paltsev, a MITEI senior research scientist — all affiliated with the MIT Center for Sustainability Science and Strategy (CS3) — took a close look at the modeling studies on which those plans are based.Their investigation identified three unavoidable engineering challenges that together lead to a fourth challenge — high costs for removing a single ton of CO2 from the atmosphere. The details of their findings are reported in a paper published in the journal One Earth on Sept. 20.Challenge 1: Scaling upWhen it comes to removing CO2 from the air, nature presents “a major, non-negotiable challenge,” notes the MITEI team: The concentration of CO2 in the air is extremely low — just 420 parts per million, or roughly 0.04 percent. 
In contrast, the CO2 concentration in flue gases emitted by power plants and industrial processes ranges from 3 percent to 20 percent. Companies now use various carbon capture and sequestration (CCS) technologies to capture CO2 from their flue gases, but capturing CO2 from the air is much more difficult. To explain, the researchers offer the following analogy: “The difference is akin to needing to find 10 red marbles in a jar of 25,000 marbles of which 24,990 are blue [the task representing DAC] versus needing to find about 10 red marbles in a jar of 100 marbles of which 90 are blue [the task for CCS].”Given that low concentration, removing a single metric ton (tonne) of CO2 from air requires processing about 1.8 million cubic meters of air, which is roughly equivalent to the volume of 720 Olympic-sized swimming pools. And all that air must be moved across a CO2-capturing sorbent — a feat requiring large equipment. For example, one recently proposed design for capturing 1 million tonnes of CO2 per year would require an “air contactor” equivalent in size to a structure about three stories high and three miles long.Recent modeling studies project DAC deployment on the scale of 5 to 40 gigatonnes of CO2 removed per year. (A gigatonne equals 1 billion metric tonnes.) But in their paper, the researchers conclude that the likelihood of deploying DAC at the gigatonne scale is “highly uncertain.”Challenge 2: Energy requirementGiven the low concentration of CO2 in the air and the need to move large quantities of air to capture it, it’s no surprise that even the best DAC processes proposed today would consume large amounts of energy — energy that’s generally supplied by a combination of electricity and heat. Including the energy needed to compress the captured CO2 for transportation and storage, most proposed processes require an equivalent of at least 1.2 megawatt-hours of electricity for each tonne of CO2 removed.The source of that electricity is critical. For example, using coal-based electricity to drive an all-electric DAC process would generate 1.2 tonnes of CO2 for each tonne of CO2 captured. The result would be a net increase in emissions, defeating the whole purpose of the DAC. So clearly, the energy requirement must be satisfied using either low-carbon electricity or electricity generated using fossil fuels with CCS. All-electric DAC deployed at large scale — say, 10 gigatonnes of CO2 removed annually — would require 12,000 terawatt-hours of electricity, which is more than 40 percent of total global electricity generation today.Electricity consumption is expected to grow due to increasing overall electrification of the world economy, so low-carbon electricity will be in high demand for many competing uses — for example, in power generation, transportation, industry, and building operations. Using clean electricity for DAC instead of for reducing CO2 emissions in other critical areas raises concerns about the best uses of clean electricity.Many studies assume that a DAC unit could also get energy from “waste heat” generated by some industrial process or facility nearby. In the MITEI researchers’ opinion, “that may be more wishful thinking than reality.” The heat source would need to be within a few miles of the DAC plant for transporting the heat to be economical; given its high capital cost, the DAC plant would need to run nonstop, requiring constant heat delivery; and heat at the temperature required by the DAC plant would have competing uses, for example, for heating buildings. 
Finally, if DAC is deployed at the gigatonne per year scale, waste heat will likely be able to provide only a small fraction of the needed energy.Challenge 3: SitingSome analysts have asserted that, because air is everywhere, DAC units can be located anywhere. But in reality, siting a DAC plant involves many complex issues. As noted above, DAC plants require significant amounts of energy, so having access to enough low-carbon energy is critical. Likewise, having nearby options for storing the removed CO2 is also critical. If storage sites or pipelines to such sites don’t exist, major new infrastructure will need to be built, and building new infrastructure of any kind is expensive and complicated, involving issues related to permitting, environmental justice, and public acceptability — issues that are, in the words of the researchers, “commonly underestimated in the real world and neglected in models.”Two more siting needs must be considered. First, meteorological conditions must be acceptable. By definition, any DAC unit will be exposed to the elements, and factors like temperature and humidity will affect process performance and process availability. And second, a DAC plant will require some dedicated land — though how much is unclear, as the optimal spacing of units is as yet unresolved. Like wind turbines, DAC units need to be properly spaced to ensure maximum performance such that one unit is not sucking in CO2-depleted air from another unit.Challenge 4: CostConsidering the first three challenges, the final challenge is clear: the cost per tonne of CO2 removed is inevitably high. Recent modeling studies assume DAC costs as low as $100 to $200 per ton of CO2 removed. But the researchers found evidence suggesting far higher costs.To start, they cite typical costs for power plants and industrial sites that now use CCS to remove CO2 from their flue gases. The cost of CCS in such applications is estimated to be in the range of $50 to $150 per ton of CO2 removed. As explained above, the far lower concentration of CO2 in the air will lead to substantially higher costs.As explained under Challenge 1, the DAC units needed to capture the required amount of air are massive. The capital cost of building them will be high, given labor, materials, permitting costs, and so on. Some estimates in the literature exceed $5,000 per tonne captured per year.Then there are the ongoing costs of energy. As noted under Challenge 2, removing 1 tonne of CO2 requires the equivalent of 1.2 megawatt-hours of electricity. If that electricity costs $0.10 per kilowatt-hour, the cost of just the electricity needed to remove 1 tonne of CO2 is $120. The researchers point out that assuming such a low price is “questionable,” given the expected increase in electricity demand, future competition for clean energy, and higher costs on a system dominated by renewable — but intermittent — energy sources.Then there’s the cost of storage, which is ignored in many DAC cost estimates.Clearly, many considerations show that prices of $100 to $200 per tonne are unrealistic, and assuming such low prices will distort assessments of strategies, leading them to underperform going forward.The bottom lineIn their paper, the MITEI team calls DAC a “very seductive concept.” Using DAC to suck CO2 out of the air and generate high-quality carbon-removal credits can offset reduction requirements for industries that have hard-to-abate emissions. 
By doing so, DAC would minimize disruptions to key parts of the world’s economy, including air travel, certain carbon-intensive industries, and agriculture. However, the world would need to generate billions of tonnes of CO2 credits at an affordable price. That prospect doesn’t look likely. The largest DAC plant in operation today removes just 4,000 tonnes of CO2 per year, and the price to buy the company’s carbon-removal credits on the market today is $1,500 per tonne.The researchers recognize that there is room for energy efficiency improvements in the future, but DAC units will always be subject to higher work requirements than CCS applied to power plant or industrial flue gases, and there is not a clear pathway to reducing work requirements much below the levels of current DAC technologies.Nevertheless, the researchers recommend that work to develop DAC continue “because it may be needed for meeting net-zero emissions goals, especially given the current pace of emissions.” But their paper concludes with this warning: “Given the high stakes of climate change, it is foolhardy to rely on DAC to be the hero that comes to our rescue.” More

  • in

    Turning automotive engines into modular chemical plants to make green fuels

    Reducing methane emissions is a top priority in the fight against climate change because of its propensity to trap heat in the atmosphere: Methane’s warming effects are 84 times more potent than CO2 over a 20-year timescale.And yet, as the main component of natural gas, methane is also a valuable fuel and a precursor to several important chemicals. The main barrier to using methane emissions to create carbon-negative materials is that human sources of methane gas — landfills, farms, and oil and gas wells — are relatively small and spread out across large areas, while traditional chemical processing facilities are huge and centralized. That makes it prohibitively expensive to capture, transport, and convert methane gas into anything useful. As a result, most companies burn or “flare” their methane at the site where it’s emitted, seeing it as a sunk cost and an environmental liability.The MIT spinout Emvolon is taking a new approach to processing methane by repurposing automotive engines to serve as modular, cost-effective chemical plants. The company’s systems can take methane gas and produce liquid fuels like methanol and ammonia on-site; these fuels can then be used or transported in standard truck containers.”We see this as a new way of chemical manufacturing,” Emvolon co-founder and CEO Emmanuel Kasseris SM ’07, PhD ’11 says. “We’re starting with methane because methane is an abundant emission that we can use as a resource. With methane, we can solve two problems at the same time: About 15 percent of global greenhouse gas emissions come from hard-to-abate sectors that need green fuel, like shipping, aviation, heavy heavy-duty trucks, and rail. Then another 15 percent of emissions come from distributed methane emissions like landfills and oil wells.”By using mass-produced engines and eliminating the need to invest in infrastructure like pipelines, the company says it’s making methane conversion economically attractive enough to be adopted at scale. The system can also take green hydrogen produced by intermittent renewables and turn it into ammonia, another fuel that can also be used to decarbonize fertilizers.“In the future, we’re going to need green fuels because you can’t electrify a large ship or plane — you have to use a high-energy-density, low-carbon-footprint, low-cost liquid fuel,” Kasseris says. “The energy resources to produce those green fuels are either distributed, as is the case with methane, or variable, like wind. So, you cannot have a massive plant [producing green fuels] that has its own zip code. You either have to be distributed or variable, and both of those approaches lend themselves to this modular design.”From a “crazy idea” to a companyKasseris first came to MIT to study mechanical engineering as a graduate student in 2004, when he worked in the Sloan Automotive Lab on a report on the future of transportation. For his PhD, he developed a novel technology for improving internal combustion engine fuel efficiency for a consortium of automotive and energy companies, which he then went to work for after graduation.Around 2014, he was approached by Leslie Bromberg ’73, PhD ’77, a serial inventor with more than 100 patents, who has been a principal research engineer in MIT’s Plasma Science and Fusion Center for nearly 50 years.“Leslie had this crazy idea of repurposing an internal combustion engine as a reactor,” Kasseris recalls. 
“I had looked at that while working in industry, and I liked it, but my company at the time thought the work needed more validation.”Bromberg had done that validation through a U.S. Department of Energy-funded project in which he used a diesel engine to “reform” methane — a high-pressure chemical reaction in which methane is combined with steam and oxygen to produce hydrogen. The work impressed Kasseris enough to bring him back to MIT as a research scientist in 2016.“We worked on that idea in addition to some other projects, and eventually it had reached the point where we decided to license the work from MIT and go full throttle,” Kasseris recalls. “It’s very easy to work with MIT’s Technology Licensing Office when you are an MIT inventor. You can get a low-cost licensing option, and you can do a lot with that, which is important for a new company. Then, once you are ready, you can finalize the license, so MIT was instrumental.”Emvolon continued working with MIT’s research community, sponsoring projects with Professor Emeritus John Heywood and participating in the MIT Venture Mentoring Service and the MIT Industrial Liaison Program.An engine-powered chemical plantAt the core of Emvolon’s system is an off-the-shelf automotive engine that runs “fuel rich” — with a higher ratio of fuel to air than what is needed for complete combustion.“That’s easy to say, but it takes a lot of [intellectual property], and that’s what was developed at MIT,” Kasseris says. “Instead of burning the methane in the gas to carbon dioxide and water, you partially burn it, or partially oxidize it, to carbon monoxide and hydrogen, which are the building blocks to synthesize a variety of chemicals.”The hydrogen and carbon monoxide are intermediate products used to synthesize different chemicals through further reactions. Those processing steps take place right next to the engine, which makes its own power. Each of Emvolon’s standalone systems fits within a 40-foot shipping container and can produce about 8 tons of methanol per day from 300,000 standard cubic feet of methane gas.The company is starting with green methanol because it’s an ideal fuel for hard-to-abate sectors such as shipping and heavy-duty transport, as well as an excellent feedstock for other high-value chemicals, such as sustainable aviation fuel. Many shipping vessels have already converted to run on green methanol in an effort to meet decarbonization goals.This summer, the company also received a grant from the Department of Energy to adapt its process to produce clean liquid fuels from power sources like solar and wind.“We’d like to expand to other chemicals like ammonia, but also other feedstocks, such as biomass and hydrogen from renewable electricity, and we already have promising results in that direction” Kasseris says. “We think we have a good solution for the energy transition and, in the later stages of the transition, for e-manufacturing.”A scalable approachEmvolon has already built a system capable of producing up to six barrels of green methanol a day in its 5,000 square-foot headquarters in Woburn, Massachusetts.“For chemical technologies, people talk about scale up risk, but with an engine, if it works in a single cylinder, we know it will work in a multicylinder engine,” Kasseris says. 
“It’s just engineering.”Last month, Emvolon announced an agreement with Montauk Renewables to build a commercial-scale demonstration unit next to a Texas landfill that will initially produce up to 15,000 gallons of green methanol a year and later scale up to 2.5 million gallons. That project could be expanded tenfold by scaling across Montauk’s other sites.“Our whole process was designed to be a very realistic approach to the energy transition,” Kasseris says. “Our solution is designed to produce green fuels and chemicals at prices that the markets are willing to pay today, without the need for subsidies. Using the engines as chemical plants, we can get the capital expenditure per unit output close to that of a large plant, but at a modular scale that enables us to be next to low-cost feedstock. Furthermore, our modular systems require small investments — of $1 to 10 million — that are quickly deployed, one at a time, within weeks, as opposed to massive chemical plants that require multiyear capital construction projects and cost hundreds of millions.” More

  • in

    MIT engineers make converting CO2 into useful products more practical

    As the world struggles to reduce greenhouse gas emissions, researchers are seeking practical, economical ways to capture carbon dioxide and convert it into useful products, such as transportation fuels, chemical feedstocks, or even building materials. But so far, such attempts have struggled to reach economic viability.New research by engineers at MIT could lead to rapid improvements in a variety of electrochemical systems that are under development to convert carbon dioxide into a valuable commodity. The team developed a new design for the electrodes used in these systems, which increases the efficiency of the conversion process.The findings are reported today in the journal Nature Communications, in a paper by MIT doctoral student Simon Rufer, professor of mechanical engineering Kripa Varanasi, and three others.“The CO2 problem is a big challenge for our times, and we are using all kinds of levers to solve and address this problem,” Varanasi says. It will be essential to find practical ways of removing the gas, he says, either from sources such as power plant emissions, or straight out of the air or the oceans. But then, once the CO2 has been removed, it has to go somewhere.A wide variety of systems have been developed for converting that captured gas into a useful chemical product, Varanasi says. “It’s not that we can’t do it — we can do it. But the question is how can we make this efficient? How can we make this cost-effective?”In the new study, the team focused on the electrochemical conversion of CO2 to ethylene, a widely used chemical that can be made into a variety of plastics as well as fuels, and which today is made from petroleum. But the approach they developed could also be applied to producing other high-value chemical products as well, including methane, methanol, carbon monoxide, and others, the researchers say.Currently, ethylene sells for about $1,000 per ton, so the goal is to be able to meet or beat that price. The electrochemical process that converts CO2 into ethylene involves a water-based solution and a catalyst material, which come into contact along with an electric current in a device called a gas diffusion electrode.There are two competing characteristics of the gas diffusion electrode materials that affect their performance: They must be good electrical conductors so that the current that drives the process doesn’t get wasted through resistance heating, but they must also be “hydrophobic,” or water repelling, so the water-based electrolyte solution doesn’t leak through and interfere with the reactions taking place at the electrode surface.Unfortunately, it’s a tradeoff. Improving the conductivity reduces the hydrophobicity, and vice versa. Varanasi and his team set out to see if they could find a way around that conflict, and after many months of trying, they did just that.The solution, devised by Rufer and Varanasi, is elegant in its simplicity. They used a plastic material, PTFE (essentially Teflon), that has been known to have good hydrophobic properties. However, PTFE’s lack of conductivity means that electrons must travel through a very thin catalyst layer, leading to significant voltage drop with distance. 
To overcome this limitation, the researchers wove a series of conductive copper wires through the very thin sheet of the PTFE.“This work really addressed this challenge, as we can now get both conductivity and hydrophobicity,” Varanasi says.Research on potential carbon conversion systems tends to be done on very small, lab-scale samples, typically less than 1-inch (2.5-centimeter) squares. To demonstrate the potential for scaling up, Varanasi’s team produced a sheet 10 times larger in area and demonstrated its effective performance.To get to that point, they had to do some basic tests that had apparently never been done before, running tests under identical conditions but using electrodes of different sizes to analyze the relationship between conductivity and electrode size. They found that conductivity dropped off dramatically with size, which would mean much more energy, and thus cost, would be needed to drive the reaction.“That’s exactly what we would expect, but it was something that nobody had really dedicatedly investigated before,” Rufer says. In addition, the larger sizes produced more unwanted chemical byproducts besides the intended ethylene.Real-world industrial applications would require electrodes that are perhaps 100 times larger than the lab versions, so adding the conductive wires will be necessary for making such systems practical, the researchers say. They also developed a model which captures the spatial variability in voltage and product distribution on electrodes due to ohmic losses. The model along with the experimental data they collected enabled them to calculate the optimal spacing for conductive wires to counteract the drop off in conductivity.In effect, by weaving the wire through the material, the material is divided into smaller subsections determined by the spacing of the wires. “We split it into a bunch of little subsegments, each of which is effectively a smaller electrode,” Rufer says. “And as we’ve seen, small electrodes can work really well.”Because the copper wire is so much more conductive than the PTFE material, it acts as a kind of superhighway for electrons passing through, bridging the areas where they are confined to the substrate and face greater resistance.To demonstrate that their system is robust, the researchers ran a test electrode for 75 hours continuously, with little change in performance. Overall, Rufer says, their system “is the first PTFE-based electrode which has gone beyond the lab scale on the order of 5 centimeters or smaller. It’s the first work that has progressed into a much larger scale and has done so without sacrificing efficiency.”The weaving process for incorporating the wire can be easily integrated into existing manufacturing processes, even in a large-scale roll-to-roll process, he adds.“Our approach is very powerful because it doesn’t have anything to do with the actual catalyst being used,” Rufer says. “You can sew this micrometric copper wire into any gas diffusion electrode you want, independent of catalyst morphology or chemistry. So, this approach can be used to scale anybody’s electrode.”“Given that we will need to process gigatons of CO2 annually to combat the CO2 challenge, we really need to think about solutions that can scale,” Varanasi says. “Starting with this mindset enables us to identify critical bottlenecks and develop innovative approaches that can make a meaningful impact in solving the problem. 
Our hierarchically conductive electrode is a result of such thinking.”The research team included MIT graduate students Michael Nitzsche and Sanjay Garimella,  as well as Jack Lake PhD ’23. The work was supported by Shell, through the MIT Energy Initiative. More

  • in

    Tackling the energy revolution, one sector at a time

    As a major contributor to global carbon dioxide (CO2) emissions, the transportation sector has immense potential to advance decarbonization. However, a zero-emissions global supply chain requires re-imagining reliance on a heavy-duty trucking industry that emits 810,000 tons of CO2, or 6 percent of the United States’ greenhouse gas emissions, and consumes 29 billion gallons of diesel annually in the U.S. alone.A new study by MIT researchers, presented at the recent American Society of Mechanical Engineers 2024 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, quantifies the impact of a zero-emission truck’s design range on its energy storage requirements and operational revenue. The multivariable model outlined in the paper allows fleet owners and operators to better understand the design choices that impact the economic feasibility of battery-electric and hydrogen fuel cell heavy-duty trucks for commercial application, equipping stakeholders to make informed fleet transition decisions.“The whole issue [of decarbonizing trucking] is like a very big, messy pie. One of the things we can do, from an academic standpoint, is quantify some of those pieces of pie with modeling, based on information and experience we’ve learned from industry stakeholders,” says ZhiYi Liang, PhD student on the renewable hydrogen team at the MIT K. Lisa Yang Global Engineering and Research Center (GEAR) and lead author of the study. Co-authored by Bryony Dupont, visiting scholar at GEAR, and Amos Winter, the Germeshausen Professor in the MIT Department of Mechanical Engineering, the paper elucidates operational and socioeconomic factors that need to be considered in efforts to decarbonize heavy-duty vehicles (HDVs).Operational and infrastructure challengesThe team’s model shows that a technical challenge lies in the amount of energy that needs to be stored on the truck to meet the range and towing performance needs of commercial trucking applications. Due to the high energy density and low cost of diesel, existing diesel drivetrains remain more competitive than alternative lithium battery-electric vehicle (Li-BEV) and hydrogen fuel-cell-electric vehicle (H2 FCEV) drivetrains. Although Li-BEV drivetrains have the highest energy efficiency of all three, they are limited to short-to-medium range routes (under 500 miles) with low freight capacity, due to the weight and volume of the onboard energy storage needed. In addition, the authors note that existing electric grid infrastructure will need significant upgrades to support large-scale deployment of Li-BEV HDVs.While the hydrogen-powered drivetrain has a significant weight advantage that enables higher cargo capacity and routes over 750 miles, the current state of hydrogen fuel networks limits economic viability, especially once operational cost and projected revenue are taken into account. Deployment will most likely require government intervention in the form of incentives and subsidies to reduce the price of hydrogen by more than half, as well as continued investment by corporations to ensure a stable supply. 
Also, as H2-FCEVs are still a relatively new technology, the ongoing design of conformal onboard hydrogen storage systems — one of which is the subject of Liang’s PhD — is crucial to successful adoption into the HDV market.The current efficiency of diesel systems is a result of technological developments and manufacturing processes established over many decades, a precedent that suggests similar strides can be made with alternative drivetrains. However, interactions with fleet owners, automotive manufacturers, and refueling network providers reveal another major hurdle in the way that each “slice of the pie” is interrelated — issues must be addressed simultaneously because of how they affect each other, from renewable fuel infrastructure to technological readiness and capital cost of new fleets, among other considerations. And first steps into an uncertain future, where no one sector is fully in control of potential outcomes, is inherently risky. “Besides infrastructure limitations, we only have prototypes [of alternative HDVs] for fleet operator use, so the cost of procuring them is high, which means there isn’t demand for automakers to build manufacturing lines up to a scale that would make them economical to produce,” says Liang, describing just one step of a vicious cycle that is difficult to disrupt, especially for industry stakeholders trying to be competitive in a free market. Quantifying a path to feasibility“Folks in the industry know that some kind of energy transition needs to happen, but they may not necessarily know for certain what the most viable path forward is,” says Liang. Although there is no singular avenue to zero emissions, the new model provides a way to further quantify and assess at least one slice of pie to aid decision-making.Other MIT-led efforts aimed at helping industry stakeholders navigate decarbonization include an interactive mapping tool developed by Danika MacDonell, Impact Fellow at the MIT Climate and Sustainability Consortium (MCSC); alongside Florian Allroggen, executive director of MITs Zero Impact Aviation Alliance; and undergraduate researchers Micah Borrero, Helena De Figueiredo Valente, and Brooke Bao. The MCSC’s Geospatial Decision Support Tool supports strategic decision-making for fleet operators by allowing them to visualize regional freight flow densities, costs, emissions, planned and available infrastructure, and relevant regulations and incentives by region.While current limitations reveal the need for joint problem-solving across sectors, the authors believe that stakeholders are motivated and ready to tackle climate problems together. Once-competing businesses already appear to be embracing a culture shift toward collaboration, with the recent agreement between General Motors and Hyundai to explore “future collaboration across key strategic areas,” including clean energy. Liang believes that transitioning the transportation sector to zero emissions is just one part of an “energy revolution” that will require all sectors to work together, because “everything is connected. In order for the whole thing to make sense, we need to consider ourselves part of that pie, and the entire system needs to change,” says Liang. “You can’t make a revolution succeed by yourself.” The authors acknowledge the MIT Climate and Sustainability Consortium for connecting them with industry members in the HDV ecosystem; and the MIT K. Lisa Yang Global Engineering and Research Center and MIT Morningside Academy for Design for financial support. More

  • in

    Startup turns mining waste into critical metals for the U.S.

    At the heart of the energy transition is a metal transition. Wind farms, solar panels, and electric cars require many times more copper, zinc, and nickel than their gas-powered alternatives. They also require more exotic metals with unique properties, known as rare earth elements, which are essential for the magnets that go into things like wind turbines and EV motors.Today, China dominates the processing of rare earth elements, refining around 60 percent of those materials for the world. With demand for such materials forecasted to skyrocket, the Biden administration has said the situation poses national and economic security threats.Substantial quantities of rare earth metals are sitting unused in the United States and many other parts of the world today. The catch is they’re mixed with vast quantities of toxic mining waste.Phoenix Tailings is scaling up a process for harvesting materials, including rare earth metals and nickel, from mining waste. The company uses water and recyclable solvents to collect oxidized metal, then puts the metal into a heated molten salt mixture and applies electricity.The company, co-founded by MIT alumni, says its pilot production facility in Woburn, Massachusetts, is the only site in the world producing rare earth metals without toxic byproducts or carbon emissions. The process does use electricity, but Phoenix Tailings currently offsets that with renewable energy contracts.The company expects to produce more than 3,000 tons of the metals by 2026, which would have represented about 7 percent of total U.S. production last year.Now, with support from the Department of Energy, Phoenix Tailings is expanding the list of metals it can produce and accelerating plans to build a second production facility.For the founding team, including MIT graduates Tomás Villalón ’14 and Michelle Chao ’14 along with Nick Myers and Anthony Balladon, the work has implications for geopolitics and the planet.“Being able to make your own materials domestically means that you’re not at the behest of a foreign monopoly,” Villalón says. “We’re focused on creating critical materials for the next generation of technologies. More broadly, we want to get these materials in ways that are sustainable in the long term.”Tackling a global problemVillalón got interested in chemistry and materials science after taking Course 3.091 (Introduction to Solid-State Chemistry) during his first year at MIT. In his senior year, he got a chance to work at Boston Metal, another MIT spinoff that uses an electrochemical process to decarbonize steelmaking at scale. The experience got Villalón, who majored in materials science and engineering, thinking about creating more sustainable metallurgical processes.But it took a chance meeting with Myers at a 2018 Bible study for Villalón to act on the idea.“We were discussing some of the major problems in the world when we came to the topic of electrification,” Villalón recalls. “It became a discussion about how the U.S. gets its materials and how we should think about electrifying their production. I was finally like, ‘I’ve been working in the space for a decade, let’s go do something about it.’ Nick agreed, but I thought he just wanted to feel good about himself. Then in July, he randomly called me and said, ‘I’ve got [$7,000]. 
When do we start?’”Villalón brought in Chao, his former MIT classmate and fellow materials science and engineering major, and Myers brought Balladon, a former co-worker, and the founders started experimenting with new processes for producing rare earth metals.“We went back to the base principles, the thermodynamics I learned with MIT professors Antoine Allanore and Donald Sadoway, and understanding the kinetics of reactions,” Villalón says. “Classes like Course 3.022 (Microstructural Evolution in Materials) and 3.07 (Introduction to Ceramics) were also really useful. I touched on every aspect I studied at MIT.”The founders also received guidance from MIT’s Venture Mentoring Service (VMS) and went through the U.S. National Science Foundation’s I-Corps program. Sadoway served as an advisor for the company.After drafting one version of their system design, the founders bought an experimental quantity of mining waste, known as red sludge, and set up a prototype reactor in Villalón’s backyard. The founders ended up with a small amount of product, but they had to scramble to borrow the scientific equipment needed to determine what exactly it was. It turned out to be a small amount of rare earth concentrate along with pure iron.Today, at the company’s refinery in Woburn, Phoenix Tailings puts mining waste rich in rare earth metals into its mixture and heats it to around 1,300 degrees Fahrenheit. When it applies an electric current to the mixture, pure metal collects on an electrode. The process leaves minimal waste behind.“The key for all of this isn’t just the chemistry, but how everything is linked together, because with rare earths, you have to hit really high purities compared to a conventionally produced metal,” Villalón explains. “As a result, you have to be thinking about the purity of your material the entire way through.”From rare earths to nickel, magnesium, and moreVillalón says the process is economical compared to conventional production methods, produces no toxic byproducts, and is completely carbon free when renewable energy sources are used for electricity.The Woburn facility is currently producing several rare earth elements for customers, including neodymium and dysprosium, which are important in magnets. Customers are using the materials for things likewind turbines, electric cars, and defense applications.The company has also received two grants with the U.S. Department of Energy’s ARPA-E program totaling more than $2 million. Its 2023 grant supports the development of a system to extract nickel and magnesium from mining waste through a process that uses carbonization and recycled carbon dioxide. Both nickel and magnesium are critical materials for clean energy applications like batteries.The most recent grant will help the company adapt its process to produce iron from mining waste without emissions or toxic byproducts. Phoenix Tailings says its process is compatible with a wide array of ore types and waste materials, and the company has plenty of material to work with: Mining and processing mineral ores generates about 1.8 billion tons of waste in the U.S. each year.“We want to take our knowledge from processing the rare earth metals and slowly move it into other segments,” Villalón explains. “We simply have to refine some of these materials here. There’s no way we can’t. So, what does that look like from a regulatory perspective? How do we create approaches that are economical and environmentally compliant not just now, but 30 years from now?” More

  • in

    Study: Fusion energy could play a major role in the global response to climate change

    For many decades, fusion has been touted as the ultimate source of abundant, clean electricity. Now, as the world faces the need to reduce carbon emissions to prevent catastrophic climate change, making commercial fusion power a reality takes on new importance. In a power system dominated by low-carbon variable renewable energy sources (VREs) such as solar and wind, “firm” electricity sources are needed to kick in whenever demand exceeds supply — for example, when the sun isn’t shining or the wind isn’t blowing and energy storage systems aren’t up to the task. What is the potential role and value of fusion power plants (FPPs) in such a future electric power system — a system that is not only free of carbon emissions but also capable of meeting the dramatically increased global electricity demand expected in the coming decades?Working together for a year-and-a-half, investigators in the MIT Energy Initiative (MITEI) and the MIT Plasma Science and Fusion Center (PSFC) have been collaborating to answer that question. They found that — depending on its future cost and performance — fusion has the potential to be critically important to decarbonization. Under some conditions, the availability of FPPs could reduce the global cost of decarbonizing by trillions of dollars. More than 25 experts together examined the factors that will impact the deployment of FPPs, including costs, climate policy, operating characteristics, and other factors. They present their findings in a new report funded through MITEI and entitled “The Role of Fusion Energy in a Decarbonized Electricity System.”“Right now, there is great interest in fusion energy in many quarters — from the private sector to government to the general public,” says the study’s principal investigator (PI) Robert C. Armstrong, MITEI’s former director and the Chevron Professor of Chemical Engineering, Emeritus. “In undertaking this study, our goal was to provide a balanced, fact-based, analysis-driven guide to help us all understand the prospects for fusion going forward.” Accordingly, the study takes a multidisciplinary approach that combines economic modeling, electric grid modeling, techno-economic analysis, and more to examine important factors that are likely to shape the future deployment and utilization of fusion energy. The investigators from MITEI provided the energy systems modeling capability, while the PSFC participants provided the fusion expertise.Fusion technologies may be a decade away from commercial deployment, so the detailed technology and costs of future commercial FPPs are not known at this point. As a result, the MIT research team focused on determining what cost levels fusion plants must reach by 2050 to achieve strong market penetration and make a significant contribution to the decarbonization of global electricity supply in the latter half of the century.The value of having FPPs available on an electric grid will depend on what other options are available, so to perform their analyses, the researchers needed estimates of the future cost and performance of those options, including conventional fossil fuel generators, nuclear fission power plants, VRE generators, and energy storage technologies, as well as electricity demand for specific regions of the world. 
To find the most reliable data, they searched the published literature as well as results of previous MITEI and PSFC analyses.Overall, the analyses showed that — while the technology demands of harnessing fusion energy are formidable — so are the potential economic and environmental payoffs of adding this firm, low-carbon technology to the world’s portfolio of energy options.Perhaps the most remarkable finding is the “societal value” of having commercial FPPs available. “Limiting warming to 1.5 degrees C requires that the world invest in wind, solar, storage, grid infrastructure, and everything else needed to decarbonize the electric power system,” explains Randall Field, executive director of the fusion study and MITEI’s director of research. “The cost of that task can be far lower when FPPs are available as a source of clean, firm electricity.” And the benefit varies depending on the cost of the FPPs. For example, assuming that the cost of building a FPP is $8,000 per kilowatt (kW) in 2050 and falls to $4,300/kW in 2100, the global cost of decarbonizing electric power drops by $3.6 trillion. If the cost of a FPP is $5,600/kW in 2050 and falls to $3,000/kW in 2100, the savings from having the fusion plants available would be $8.7 trillion. (Those calculations are based on differences in global gross domestic product and assume a discount rate of 6 percent. The undiscounted value is about 20 times larger.)The goal of other analyses was to determine the scale of deployment worldwide at selected FPP costs. Again, the results are striking. For a deep decarbonization scenario, the total global share of electricity generation from fusion in 2100 ranges from less than 10 percent if the cost of fusion is high to more than 50 percent if the cost of fusion is low.Other analyses showed that the scale and timing of fusion deployment vary in different parts of the world. Early deployment of fusion can be expected in wealthy nations such as European countries and the United States that have the most aggressive decarbonization policies. But certain other locations — for example, India and the continent of Africa — will have great growth in fusion deployment in the second half of the century due to a large increase in demand for electricity during that time. “In the U.S. and Europe, the amount of demand growth will be low, so it’ll be a matter of switching away from dirty fuels to fusion,” explains Sergey Paltsev, deputy director of the MIT Center for Sustainability Science and Strategy and a senior research scientist at MITEI. “But in India and Africa, for example, the tremendous growth in overall electricity demand will be met with significant amounts of fusion along with other low-carbon generation resources in the later part of the century.”A set of analyses focusing on nine subregions of the United States showed that the availability and cost of other low-carbon technologies, as well as how tightly carbon emissions are constrained, have a major impact on how FPPs would be deployed and used. In a decarbonized world, FPPs will have the highest penetration in locations with poor diversity, capacity, and quality of renewable resources, and limits on carbon emissions will have a big impact. For example, the Atlantic and Southeast subregions have low renewable resources. In those subregions, wind can produce only a small fraction of the electricity needed, even with maximum onshore wind buildout. 
Thus, fusion is needed in those subregions, even when carbon constraints are relatively lenient, and any available FPPs would be running much of the time. In contrast, the Central subregion of the United States has excellent renewable resources, especially wind. Thus, fusion competes in the Central subregion only when limits on carbon emissions are very strict, and FPPs will typically be operated only when the renewables can’t meet demand.An analysis of the power system that serves the New England states provided remarkably detailed results. Using a modeling tool developed at MITEI, the fusion team explored the impact of using different assumptions about not just cost and emissions limits but even such details as potential land-use constraints affecting the use of specific VREs. This approach enabled them to calculate the FPP cost at which fusion units begin to be installed. They were also able to investigate how that “threshold” cost changed with changes in the cap on carbon emissions. The method can even show at what price FPPs begin to replace other specific generating sources. In one set of runs, they determined the cost at which FPPs would begin to displace floating platform offshore wind and rooftop solar.“This study is an important contribution to fusion commercialization because it provides economic targets for the use of fusion in the electricity markets,” notes Dennis G. Whyte, co-PI of the fusion study, former director of the PSFC, and the Hitachi America Professor of Engineering in the Department of Nuclear Science and Engineering. “It better quantifies the technical design challenges for fusion developers with respect to pricing, availability, and flexibility to meet changing demand in the future.”The researchers stress that while fission power plants are included in the analyses, they did not perform a “head-to-head” comparison between fission and fusion, because there are too many unknowns. Fusion and nuclear fission are both firm, low-carbon electricity-generating technologies; but unlike fission, fusion doesn’t use fissile materials as fuels, and it doesn’t generate long-lived nuclear fuel waste that must be managed. As a result, the regulatory requirements for FPPs will be very different from the regulations for today’s fission power plants — but precisely how they will differ is unclear. Likewise, the future public perception and social acceptance of each of these technologies cannot be projected, but could have a major influence on what generation technologies are used to meet future demand.The results of the study convey several messages about the future of fusion. For example, it’s clear that regulation can be a potentially large cost driver. This should motivate fusion companies to minimize their regulatory and environmental footprint with respect to fuels and activated materials. It should also encourage governments to adopt appropriate and effective regulatory policies to maximize their ability to use fusion energy in achieving their decarbonization goals. And for companies developing fusion technologies, the study’s message is clearly stated in the report: “If the cost and performance targets identified in this report can be achieved, our analysis shows that fusion energy can play a major role in meeting future electricity needs and achieving global net-zero carbon goals.” More

  • in

    Study finds mercury pollution from human activities is declining

    MIT researchers have some good environmental news: Mercury emissions from human activity have been declining over the past two decades, despite global emissions inventories that indicate otherwise.In a new study, the researchers analyzed measurements from all available monitoring stations in the Northern Hemisphere and found that atmospheric concentrations of mercury declined by about 10 percent between 2005 and 2020.They used two separate modeling methods to determine what is driving that trend. Both techniques pointed to a decline in mercury emissions from human activity as the most likely cause.Global inventories, on the other hand, have reported opposite trends. These inventories estimate atmospheric emissions using models that incorporate average emission rates of polluting activities and the scale of these activities worldwide.“Our work shows that it is very important to learn from actual, on-the-ground data to try and improve our models and these emissions estimates. This is very relevant for policy because, if we are not able to accurately estimate past mercury emissions, how are we going to predict how mercury pollution will evolve in the future?” says Ari Feinberg, a former postdoc in the Institute for Data, Systems, and Society (IDSS) and lead author of the study.The new results could help inform scientists who are embarking on a collaborative, global effort to evaluate pollution models and develop a more in-depth understanding of what drives global atmospheric concentrations of mercury.However, due to a lack of data from global monitoring stations and limitations in the scientific understanding of mercury pollution, the researchers couldn’t pinpoint a definitive reason for the mismatch between the inventories and the recorded measurements.“It seems like mercury emissions are moving in the right direction, and could continue to do so, which is heartening to see. But this was as far as we could get with mercury. We need to keep measuring and advancing the science,” adds co-author Noelle Selin, an MIT professor in the IDSS and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).Feinberg and Selin, his MIT postdoctoral advisor, are joined on the paper by an international team of researchers that contributed atmospheric mercury measurement data and statistical methods to the study. The research appears this week in the Proceedings of the National Academy of Sciences.Mercury mismatchThe Minamata Convention is a global treaty that aims to cut human-caused emissions of mercury, a potent neurotoxin that enters the atmosphere from sources like coal-fired power plants and small-scale gold mining.The treaty, which was signed in 2013 and went into force in 2017, is evaluated every five years. The first meeting of its conference of parties coincided with disheartening news reports that said global inventories of mercury emissions, compiled in part from information from national inventories, had increased despite international efforts to reduce them.This was puzzling news for environmental scientists like Selin. 
“The big question we wanted to answer was: What is actually happening to mercury in the atmosphere and what does that say about anthropogenic emissions over time?” Selin says.

Modeling mercury emissions is especially tricky. First, mercury is the only metal that is in liquid form at room temperature, so it has unique properties. Moreover, mercury that has been removed from the atmosphere by sinks like the ocean or land can be re-emitted later, making it hard to identify primary emission sources.

At the same time, mercury is more difficult to study in laboratory settings than many other air pollutants, especially due to its toxicity, so scientists have limited understanding of all chemical reactions mercury can undergo. There is also a much smaller network of mercury monitoring stations, compared to other polluting gases like methane and nitrous oxide.

“One of the challenges of our study was to come up with statistical methods that can address those data gaps, because available measurements come from different time periods and different measurement networks,” Feinberg says.

Multifaceted models

The researchers compiled data from 51 stations in the Northern Hemisphere. They used statistical techniques to aggregate data from nearby stations, which helped them overcome data gaps and evaluate regional trends. By combining data from 11 regions, their analysis indicated that Northern Hemisphere atmospheric mercury concentrations declined by about 10 percent between 2005 and 2020.
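As a rough illustration of what that kind of aggregation can look like, the sketch below groups a few invented station records into regional medians, averages the regions into one hemispheric series, and fits a linear trend to estimate the percent change. The station names, values, and regional groupings are fabricated for the example; the statistical methods in the paper, which also have to cope with gaps and records of different lengths, are considerably more sophisticated.

```python
# Hypothetical illustration of aggregating station records into regional and
# hemispheric trends. All data and groupings are invented; this is not the
# paper's method.
from statistics import median

years = list(range(2005, 2021))  # 2005 through 2020

def synthetic_series(start, annual_change):
    """Fabricated annual mean Hg(0) concentrations (ng/m^3) for one station."""
    return [start + annual_change * (y - years[0]) for y in years]

stations = {
    "region_A": {"station_1": synthetic_series(1.60, -0.010),
                 "station_2": synthetic_series(1.55, -0.012)},
    "region_B": {"station_3": synthetic_series(1.70, -0.008)},
}

def regional_median(series_by_station):
    """Collapse nearby stations into one regional series (median by year)."""
    return [median(s[i] for s in series_by_station.values())
            for i in range(len(years))]

def linear_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

regional = {name: regional_median(group) for name, group in stations.items()}

# Average the regional series so regions with many stations do not swamp
# sparsely monitored ones.
hemispheric = [sum(vals) / len(vals) for vals in zip(*regional.values())]

slope = linear_slope(years, hemispheric)
pct = 100 * slope * (years[-1] - years[0]) / hemispheric[0]
print(f"estimated change, 2005-2020: {pct:.1f}%")
```

Even this toy version shows why sustained station coverage matters: remove region_B and the estimated hemispheric trend changes, which is one reason the authors stress keeping long-term monitoring sites in operation.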
Then the researchers used two modeling methods — biogeochemical box modeling and chemical transport modeling — to explore possible causes of that decline. Box modeling was used to run hundreds of thousands of simulations to evaluate a wide array of emission scenarios. Chemical transport modeling is more computationally expensive, but it enables researchers to assess the impacts of meteorology and spatial variations on trends in selected scenarios.

For instance, they tested the hypothesis that an additional environmental sink is removing more mercury from the atmosphere than previously thought. The models can indicate whether an unknown sink of that magnitude is plausible.

“As we went through each hypothesis systematically, we were pretty surprised that we could really point to declines in anthropogenic emissions as being the most likely cause,” Selin says.

Their work underscores the importance of long-term mercury monitoring stations, Feinberg adds. Many of the stations the researchers evaluated are no longer operational because of a lack of funding.

While their analysis couldn’t zero in on exactly why the emissions inventories didn’t match the measurements, they have a few hypotheses.

One possibility is that global inventories are missing key information from certain countries. For instance, the researchers resolved some discrepancies when they used a more detailed regional inventory from China. But there was still a gap between observations and estimates.

They also suspect the discrepancy might result from changes in two large sources of mercury that are particularly uncertain: emissions from small-scale gold mining and mercury-containing products.

Small-scale gold mining involves using mercury to extract gold from soil and is often performed in remote parts of developing countries, making it hard to estimate. Yet small-scale gold mining contributes about 40 percent of human-made emissions.

In addition, it is difficult to determine how long it takes the pollutant to be released into the atmosphere from discarded products like thermometers or scientific equipment.

“We’re not there yet where we can really pinpoint which source is responsible for this discrepancy,” Feinberg says.

In the future, researchers from multiple countries, including MIT, will collaborate to study and improve the models they use to estimate and evaluate emissions. This research will be influential in helping that project move the needle on monitoring mercury, he says.

This research was funded by the Swiss National Science Foundation, the U.S. National Science Foundation, and the U.S. Environmental Protection Agency.