More stories

  • How to make small modular reactors more cost-effective

    When Youyeon Choi was in high school, she discovered she really liked “thinking in geometry.” The shapes, the dimensions … she was into all of it. Today, geometry plays a prominent role in her doctoral work under the guidance of Professor Koroush Shirvan, as she explores ways to increase the competitiveness of small modular reactors (SMRs). Central to the thesis is metallic nuclear fuel in a helical cruciform shape, which increases surface area and lowers heat flux compared to the traditional cylindrical equivalent.

    A childhood in a prominent nuclear energy country

    Her passion for geometry notwithstanding, Choi admits she was not “really into studying” in middle school. But that changed when she started excelling in technical subjects in her high school years. And because it was the natural sciences that first caught Choi’s eye, she assumed she would major in the subject when she went to university.

    This focus, too, would change. Growing up in Seoul, Choi was becoming increasingly aware of the critical role nuclear energy played in meeting her native country’s energy needs. Twenty-six reactors provide nearly a third of South Korea’s electricity, according to the World Nuclear Association, and the country is one of the world’s most prominent players in nuclear energy.

    In such an ecosystem, Choi understood the stakes at play, especially with electricity-guzzling technologies such as AI and electric vehicles on the rise. Her father also discussed energy-related topics with Choi when she was in high school. Being immersed in that atmosphere eventually led Choi to nuclear engineering.

    Youyeon Choi: Making small modular reactors more cost-effective

    Early work in South Korea

    Excelling in high school math and science, Choi was a shoo-in for college at Seoul National University. Initially intent on studying nuclear fusion, Choi switched to fission because she saw that the path to fusion was more convoluted and still in the early stages of exploration. Choi went on to complete her bachelor’s and master’s degrees in nuclear engineering from the university. As part of her master’s thesis, she worked on a multi-physics modeling project involving high-fidelity simulations of reactor physics and thermal hydraulics to analyze reactor cores.

    South Korea exports its nuclear know-how widely, so work in the field can be immensely rewarding. Indeed, after graduate school, Choi moved to Daejeon, which has the moniker “Science City.” As an intern at the Korea Atomic Energy Research Institute (KAERI), she conducted experimental studies on the passive safety systems of nuclear reactors. Choi then moved to the Korea Institute of Nuclear Nonproliferation and Control, where she worked as a researcher developing nuclear security programs for other countries. Given South Korea’s dominance in the field, other countries would tap its expertise to build up their own nuclear energy programs. The focus was on international training programs, an arm of which involved cybersecurity and physical protection.

    While the work was impactful, Choi found she missed the modeling work she did as part of her master’s thesis. Looking to return to technical research, she applied to the MIT Department of Nuclear Science and Engineering (NSE). “MIT has the best nuclear engineering program in the States, and maybe even the world,” Choi says, explaining her decision to enroll as a doctoral student.

    Innovative research at MIT

    At NSE, Choi is working to make SMRs more price-competitive than traditional nuclear power plants. Due to their smaller size, SMRs are able to serve areas where larger reactors might not work, but they’re more expensive. One way to address costs is to squeeze more electricity out of a unit of fuel — to increase the power density. Choi is doing so by replacing the traditional uranium dioxide ceramic fuel in a cylindrical shape with a metal one in a helical cruciform. Such a replacement potentially offers twin advantages: the metal fuel has high thermal conductivity, which means the fuel will operate even more safely, at lower temperatures, and the twisted shape gives more surface area and lower heat flux. The net result is more electricity for the same volume (a rough back-of-envelope illustration of the geometry effect follows this story).

    The project receives funding from a collaboration between Lightbridge Corp., which is exploring how advanced fuel technologies can improve the performance of water-cooled SMRs, and the U.S. Department of Energy Nuclear Energy University Program.

    With SMR efficiencies in mind, Choi is indulging her love of multi-physics modeling, focusing on reactor physics, thermal hydraulics, and fuel performance simulation. “The goal of this modeling and simulation is to see if we can really use this fuel in the SMR,” Choi says. “I’m really enjoying doing the simulations because the geometry is really hard to model. Because the shape is twisted, there’s no symmetry at all,” she says. Always up for a challenge, Choi learned the various aspects of physics and a variety of computational tools, including Monte Carlo codes for reactor physics.

    Being at MIT has a whole roster of advantages, Choi says, and she especially appreciates the respect researchers have for each other. She values being able to discuss projects with Shirvan and his focus on practical applications of research. At the same time, Choi appreciates the “exotic” nature of her project. “Even assessing if this SMR fuel is at all feasible is really hard, but I think it’s all possible because it’s MIT and my PI [principal investigator] is really invested in innovation,” she says.

    It’s an exciting time to be in nuclear engineering, Choi says. She serves as one of the board members of the student section of the American Nuclear Society and is an NSE representative on the Graduate Student Council for the 2024-25 academic year. Choi is excited about the global momentum toward nuclear as more countries explore the energy source and try to build more nuclear power plants on the path to decarbonization. “I really do believe nuclear energy is going to be a leading carbon-free energy. It’s very important for our collective futures,” Choi says.
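    The heat-flux advantage described above comes down to simple geometry: at a fixed linear power, surface heat flux scales inversely with the wetted perimeter of the fuel cross-section. The sketch below illustrates the effect; the rod diameter, linear power, and the assumed 40 percent perimeter gain for the cruciform are illustrative stand-ins, not Lightbridge or MIT design values.

    ```python
    import math

    Q_LINEAR_W_PER_M = 18e3     # assumed linear heat rate, typical order for a water-cooled fuel rod
    ROD_RADIUS_M = 4.75e-3      # assumed cylindrical rod radius (~9.5 mm diameter)
    PERIMETER_GAIN = 1.40       # assumed: cruciform wetted perimeter 40% larger at equal cross-section

    perimeter_cyl = 2.0 * math.pi * ROD_RADIUS_M
    flux_cyl = Q_LINEAR_W_PER_M / perimeter_cyl                        # W/m2 at the fuel surface
    flux_cruciform = Q_LINEAR_W_PER_M / (perimeter_cyl * PERIMETER_GAIN)

    print(f"cylindrical rod surface heat flux:   {flux_cyl / 1e6:.2f} MW/m2")
    print(f"assumed cruciform surface heat flux: {flux_cruciform / 1e6:.2f} MW/m2")
    print(f"flux reduction at the same linear power: {100.0 * (1.0 - 1.0 / PERIMETER_GAIN):.0f}%")
    # Lower flux at the same linear power leaves margin that can be spent on safety
    # or on raising power density, the cost lever described in the story.
    ```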

  • The role of modeling in the energy transition

    Joseph F. DeCarolis, administrator for the U.S. Energy Information Administration (EIA), has one overarching piece of advice for anyone poring over long-term energy projections.

    “Whatever you do, don’t start believing the numbers,” DeCarolis said at the MIT Energy Initiative (MITEI) Fall Colloquium. “There’s a tendency when you sit in front of the computer and you’re watching the model spit out numbers at you … that you’ll really start to believe those numbers with high precision. Don’t fall for it. Always remain skeptical.”

    This event was part of MITEI’s new speaker series, MITEI Presents: Advancing the Energy Transition, which connects the MIT community with the energy experts and leaders who are working on scientific, technological, and policy solutions that are urgently needed to accelerate the energy transition.

    The point of DeCarolis’s talk, titled “Stay humble and prepare for surprises: Lessons for the energy transition,” was not that energy models are unimportant. On the contrary, DeCarolis said, energy models give stakeholders a framework that allows them to consider present-day decisions in the context of potential future scenarios. However, he repeatedly stressed the importance of accounting for uncertainty, and not treating these projections as “crystal balls.”

    “We can use models to help inform decision strategies,” DeCarolis said. “We know there’s a bunch of future uncertainty. We don’t know what’s going to happen, but we can incorporate that uncertainty into our model and help come up with a path forward.”

    Dialogue, not forecasts

    EIA is the statistical and analytic agency within the U.S. Department of Energy, with a mission to collect, analyze, and disseminate independent and impartial energy information to help stakeholders make better-informed decisions. Although EIA analyzes the impacts of energy policies, the agency does not make or advise on policy itself. DeCarolis, who was previously professor and University Faculty Scholar in the Department of Civil, Construction, and Environmental Engineering at North Carolina State University, noted that EIA does not need to seek approval from anyone else in the federal government before publishing its data and reports. “That independence is very important to us, because it means that we can focus on doing our work and providing the best information we possibly can,” he said.

    Among the many reports produced by EIA is the agency’s Annual Energy Outlook (AEO), which projects U.S. energy production, consumption, and prices. Every other year, the agency also produces the AEO Retrospective, which shows the relationship between past projections and actual energy indicators.

    “The first question you might ask is, ‘Should we use these models to produce a forecast?’” DeCarolis said. “The answer for me to that question is: No, we should not do that. When models are used to produce forecasts, the results are generally pretty dismal.”

    DeCarolis pointed to wildly inaccurate past projections about the proliferation of nuclear energy in the United States as an example of the problems inherent in forecasting. However, he noted, there are “still lots of really valuable uses” for energy models. Rather than using them to predict future energy consumption and prices, DeCarolis said, stakeholders should use models to inform their own thinking.

    “[Models] can simply be an aid in helping us think and hypothesize about the future of energy,” DeCarolis said. “They can help us create a dialogue among different stakeholders on complex issues. If we’re thinking about something like the energy transition, and we want to start a dialogue, there has to be some basis for that dialogue. If you have a systematic representation of the energy system that you can advance into the future, we can start to have a debate about the model and what it means. We can also identify key sources of uncertainty and knowledge gaps.”

    Modeling uncertainty

    The key to working with energy models is not to try to eliminate uncertainty, DeCarolis said, but rather to account for it. One way to better understand uncertainty, he noted, is to look at past projections, and consider how they ended up differing from real-world results. DeCarolis pointed to two “surprises” over the past several decades: the exponential growth of shale oil and natural gas production (which had the impact of limiting coal’s share of the energy market and therefore reducing carbon emissions), as well as the rapid rise in wind and solar energy. In both cases, market conditions changed far more quickly than energy modelers anticipated, leading to inaccurate projections.

    “For all those reasons, we ended up with [projected] CO2 [carbon dioxide] emissions that were quite high compared to actual,” DeCarolis said. “We’re a statistical agency, so we’re really looking carefully at the data, but it can take some time to identify the signal through the noise.”

    Although EIA does not produce forecasts in the AEO, people have sometimes interpreted the reference case in the agency’s reports as predictions. In an effort to illustrate the unpredictability of future outcomes in the 2023 edition of the AEO, the agency added “cones of uncertainty” to its projection of energy-related carbon dioxide emissions, with ranges of outcomes based on the difference between past projections and actual results. One cone captures 50 percent of historical projection errors, while another represents 95 percent of historical errors.

    “They capture whatever bias there is in our projections,” DeCarolis said of the uncertainty cones. “It’s being captured because we’re comparing actual [emissions] to projections. The weakness of this, though, is: who’s to say that those historical projection errors apply to the future? We don’t know that, but I still think that there’s something useful to be learned from this exercise.”

    The future of energy modeling

    Looking ahead, DeCarolis said, there is a “laundry list of things that keep me up at night as a modeler.” These include the impacts of climate change; how those impacts will affect demand for renewable energy; how quickly industry and government will overcome obstacles to building out clean energy infrastructure and supply chains; technological innovation; and increased energy demand from data centers running compute-intensive workloads.

    “What about enhanced geothermal? Fusion? Space-based solar power?” DeCarolis asked. “Should those be in the model? What sorts of technology breakthroughs are we missing? And then, of course, there are the unknown unknowns — the things that I can’t conceive of to put on this list, but are probably going to happen.”

    In addition to capturing the fullest range of outcomes, DeCarolis said, EIA wants to be flexible, nimble, transparent, and accessible — creating reports that can easily incorporate new model features and produce timely analyses. To that end, the agency has undertaken two new initiatives. First, the 2025 AEO will use a revamped version of the National Energy Modeling System that includes modules for hydrogen production and pricing, carbon management, and hydrocarbon supply. Second, an effort called Project BlueSky is aiming to develop the agency’s next-generation energy system model, which DeCarolis said will be modular and open source.

    DeCarolis noted that the energy system is both highly complex and rapidly evolving, and he warned that “mental shortcuts” and the fear of being wrong can lead modelers to ignore possible future developments. “We have to remain humble and intellectually honest about what we know,” DeCarolis said. “That way, we can provide decision-makers with an honest assessment of what we think could happen in the future.”
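    The “cones of uncertainty” described above lend themselves to a simple numerical sketch: collect past projection errors, take the percentile band that would have covered 50 or 95 percent of them, and wrap that band around a new reference projection. The error values and reference number below are made up for illustration and are not EIA data; the agency’s actual construction has more detail.

    ```python
    import numpy as np

    # Hypothetical historical projection errors: (projected - actual) / actual,
    # pooled across past Annual Energy Outlooks. These values are made up.
    historical_errors = np.array([-0.18, -0.11, -0.06, -0.02, 0.01, 0.04,
                                   0.07, 0.10, 0.15, 0.22, 0.30])

    def uncertainty_cone(reference, errors, coverage_pct):
        """Band around a reference projection wide enough to have covered coverage_pct of past errors."""
        tail = (100.0 - coverage_pct) / 2.0
        lo, hi = np.percentile(errors, [tail, 100.0 - tail])
        return reference * (1.0 + lo), reference * (1.0 + hi)

    reference_emissions = 4.2   # made-up reference-case projection, Gt CO2 in some future year
    for coverage in (50, 95):
        low, high = uncertainty_cone(reference_emissions, historical_errors, coverage)
        print(f"{coverage}% cone around {reference_emissions} Gt: {low:.2f} to {high:.2f} Gt")
    ```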

  • How hard is it to prevent recurring blackouts in Puerto Rico?

    Researchers at MIT’s Laboratory for Information and Decision Systems (LIDS) have shown that using decision-making software and dynamic monitoring of weather and energy use can significantly improve resiliency in the face of weather-related outages, and can also help to efficiently integrate renewable energy sources into the grid.

    The researchers point out that the system they suggest might have prevented or at least lessened the kind of widespread power outage that Puerto Rico experienced last week by providing analysis to guide rerouting of power through different lines and thus limit the spread of the outage.

    The computer platform, which the researchers describe as DyMonDS, for Dynamic Monitoring and Decision Systems, can be used to enhance the existing operating and planning practices used in the electric industry. The platform supports interactive information exchange and decision-making between the grid operators and grid-edge users — all the distributed power sources, storage systems, and software that contribute to the grid. It also supports optimization of available resources and controllable grid equipment as system conditions vary. It further lends itself to implementing cooperative decision-making by different utility- and non-utility-owned electric power grid users, including portfolios of mixed resources, users, and storage. Operating and planning the interactions of the end-to-end high-voltage transmission grid with local distribution grids and microgrids represents another major potential use of this platform.

    This general approach was illustrated using a set of publicly available data on both meteorology and details of electricity production and distribution in Puerto Rico. An extended AC Optimal Power Flow software package developed by SmartGridz Inc. is used for system-level optimization of controllable equipment. This provides real-time guidance for deciding how much power, and through which transmission lines, should be channeled by adjusting plant dispatch and voltage-related set points, and in extreme cases, where to reduce or cut power in order to maintain physically implementable service for as many customers as possible.

    The team found that the use of such a system can help to ensure that the greatest number of critical services maintain power even during a hurricane, and at the same time can lead to a substantial decrease in the need for construction of new power plants thanks to more efficient use of existing resources. The findings are described in a paper in the journal Foundations and Trends in Electric Energy Systems, by MIT LIDS researchers Marija Ilic and Laurentiu Anton, along with recent alumna Ramapathi Jaddivada.

    “Using this software,” Ilic says, they show that “even during bad weather, if you predict equipment failures, and by using that information exchange, you can localize the effect of equipment failures and still serve a lot of customers, 50 percent of customers, when otherwise things would black out.”

    Anton says that “the way many grids today are operated is sub-optimal.” As a result, “we showed how much better they could do even under normal conditions, without any failures, by utilizing this software.” The savings resulting from this optimization, under everyday conditions, could be in the tens of percent, they say.

    The way utility systems plan currently, Ilic says, “usually the standard is that they have to build enough capacity and operate in real time so that if one large piece of equipment fails, like a large generator or transmission line, you still serve customers in an uninterrupted way. That’s what’s called N-minus-1.” Under this policy, if one major component of the system fails, they should be able to maintain service for at least 30 minutes. That system allows utilities to plan for how much reserve generating capacity they need to have on hand. That’s expensive, Ilic points out, because it means maintaining this reserve capacity all the time, even under normal operating conditions when it’s not needed.

    In addition, “right now there are no criteria for what I call N-minus-K,” she says. If bad weather causes five pieces of equipment to fail at once, “there is no software to help utilities decide what to schedule” in terms of keeping the most customers, and the most important services such as hospitals and emergency services, provided with power. They showed that even with 50 percent of the infrastructure out of commission, it would still be possible to keep power flowing to a large proportion of customers.

    Their work on analyzing the power situation in Puerto Rico started after the island had been devastated by hurricanes Irma and Maria. Most of the electric generation capacity is in the south, yet the largest loads are in San Juan, in the north, and Mayaguez in the west. When transmission lines get knocked down, a lot of rerouting of power needs to happen quickly. With the new systems, “the software finds the optimal adjustments for set points,” for example, changing voltages can allow for power to be redirected through less-congested lines, or can be increased to lessen power losses, Anton says.

    The software also helps in the long-term planning for the grid. As many fossil-fuel power plants are scheduled to be decommissioned soon in Puerto Rico, as they are in many other places, planning for how to replace that power without having to resort to greenhouse gas-emitting sources is a key to achieving carbon-reduction goals. And by analyzing usage patterns, the software can guide the placement of new renewable power sources where they can most efficiently provide power where and when it’s needed.

    As plants are retired or as components are affected by weather, “We wanted to ensure the dispatchability of power when the load changes,” Anton says, “but also when crucial components are lost, to ensure the robustness at each step of the retirement schedule.” One thing they found was that “if you look at how much generating capacity exists, it’s more than the peak load, even after you retire a few fossil plants,” Ilic says. “But it’s hard to deliver.” Strategic planning of new distribution lines could make a big difference.

    Jaddivada, director of innovation at SmartGridz, says that “we evaluated different possible architectures in Puerto Rico, and we showed the ability of this software to ensure uninterrupted electricity service. This is the most important challenge utilities have today. They have to go through a computationally tedious process to make sure the grid functions for any possible outage in the system. And that can be done in a much more efficient way through the software that the company developed.”

    The project was a collaborative effort between the MIT LIDS researchers and others at MIT Lincoln Laboratory and the Pacific Northwest National Laboratory, with support from SmartGridz software.
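    The N-minus-1 criterion described above can be illustrated with a toy calculation: for each single-line outage, check whether the remaining import capacity into a load area still covers its demand. The network, limits, and demand below are invented for illustration; this is not the DyMonDS or SmartGridz software.

    ```python
    # Hypothetical import limits (MW) on the transmission corridors feeding one
    # load area, and that area's demand. These numbers are invented.
    lines_into_area = {
        "south_corridor_1": 450.0,
        "south_corridor_2": 400.0,
        "west_tie": 250.0,
    }
    area_demand_mw = 700.0

    def n_minus_1_violations(line_limits, demand):
        """Return (outaged line, shortfall in MW) for every single outage that leaves demand unserved."""
        violations = []
        for outaged in line_limits:
            remaining = sum(limit for name, limit in line_limits.items() if name != outaged)
            if remaining < demand:
                violations.append((outaged, demand - remaining))
        return violations

    print(n_minus_1_violations(lines_into_area, area_demand_mw))
    # An empty list means the area rides through any single line outage; each entry
    # reports the megawatts of load that would have to be shed or rerouted.
    ```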

  • Advancing urban tree monitoring with AI-powered digital twins

    The Irish philosopher George Berkeley, best known for his theory of immaterialism, once famously mused, “If a tree falls in a forest and no one is around to hear it, does it make a sound?”

    What about AI-generated trees? They probably wouldn’t make a sound, but they will be critical nonetheless for applications such as adaptation of urban flora to climate change. To that end, the novel “Tree-D Fusion” system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Google, and Purdue University merges AI and tree-growth models with Google’s Auto Arborist data to create accurate 3D models of existing urban trees. The project has produced the first-ever large-scale database of 600,000 environmentally aware, simulation-ready tree models across North America.

    “We’re bridging decades of forestry science with modern AI capabilities,” says Sara Beery, MIT electrical engineering and computer science (EECS) assistant professor, MIT CSAIL principal investigator, and a co-author on a new paper about Tree-D Fusion. “This allows us to not just identify trees in cities, but to predict how they’ll grow and impact their surroundings over time. We’re not ignoring the past 30 years of work in understanding how to build these 3D synthetic models; instead, we’re using AI to make this existing knowledge more useful across a broader set of individual trees in cities around North America, and eventually the globe.”

    Tree-D Fusion builds on previous urban forest monitoring efforts that used Google Street View data, but branches it forward by generating complete 3D models from single images. While earlier attempts at tree modeling were limited to specific neighborhoods, or struggled with accuracy at scale, Tree-D Fusion can create detailed models that include typically hidden features, such as the back side of trees that aren’t visible in street-view photos.

    The technology’s practical applications extend far beyond mere observation. City planners could use Tree-D Fusion to one day peer into the future, anticipating where growing branches might tangle with power lines, or identifying neighborhoods where strategic tree placement could maximize cooling effects and air quality improvements. These predictive capabilities, the team says, could change urban forest management from reactive maintenance to proactive planning.

    A tree grows in Brooklyn (and many other places)

    The researchers took a hybrid approach to their method, using deep learning to create a 3D envelope of each tree’s shape, then using traditional procedural models to simulate realistic branch and leaf patterns based on the tree’s genus. This combo helped the model predict how trees would grow under different environmental conditions and climate scenarios, such as different possible local temperatures and varying access to groundwater.

    Now, as cities worldwide grapple with rising temperatures, this research offers a new window into the future of urban forests. In a collaboration with MIT’s Senseable City Lab, the Purdue University and Google team is embarking on a global study that re-imagines trees as living climate shields. Their digital modeling system captures the intricate dance of shade patterns throughout the seasons, revealing how strategic urban forestry could hopefully change sweltering city blocks into more naturally cooled neighborhoods.

    “Every time a street mapping vehicle passes through a city now, we’re not just taking snapshots — we’re watching these urban forests evolve in real-time,” says Beery. “This continuous monitoring creates a living digital forest that mirrors its physical counterpart, offering cities a powerful lens to observe how environmental stresses shape tree health and growth patterns across their urban landscape.”

    AI-based tree modeling has emerged as an ally in the quest for environmental justice: By mapping urban tree canopy in unprecedented detail, a sister project from the Google AI for Nature team has helped uncover disparities in green space access across different socioeconomic areas. “We’re not just studying urban forests — we’re trying to cultivate more equity,” says Beery. The team is now working closely with ecologists and tree health experts to refine these models, ensuring that as cities expand their green canopies, the benefits branch out to all residents equally.

    It’s a breeze

    While Tree-D Fusion marks some major “growth” in the field, trees can be uniquely challenging for computer vision systems. Unlike the rigid structures of buildings or vehicles that current 3D modeling techniques handle well, trees are nature’s shape-shifters — swaying in the wind, interweaving branches with neighbors, and constantly changing their form as they grow. The Tree-D Fusion models are “simulation-ready” in that they can estimate the shape of the trees in the future, depending on the environmental conditions.

    “What makes this work exciting is how it pushes us to rethink fundamental assumptions in computer vision,” says Beery. “While 3D scene understanding techniques like photogrammetry or NeRF [neural radiance fields] excel at capturing static objects, trees demand new approaches that can account for their dynamic nature, where even a gentle breeze can dramatically alter their structure from moment to moment.”

    The team’s approach of creating rough structural envelopes that approximate each tree’s form has proven remarkably effective, but certain issues remain unsolved. Perhaps the most vexing is the “entangled tree problem”: when neighboring trees grow into each other, their intertwined branches create a puzzle that no current AI system can fully unravel.

    The scientists see their dataset as a springboard for future innovations in computer vision, and they’re already exploring applications beyond street view imagery, looking to extend their approach to platforms like iNaturalist and wildlife camera traps.

    “This marks just the beginning for Tree-D Fusion,” says Jae Joong Lee, a Purdue University PhD student who developed, implemented, and deployed the Tree-D Fusion algorithm. “Together with my collaborators, I envision expanding the platform’s capabilities to a planetary scale. Our goal is to use AI-driven insights in service of natural ecosystems — supporting biodiversity, promoting global sustainability, and ultimately, benefiting the health of our entire planet.”

    Beery and Lee’s co-authors are Jonathan Huang, Scaled Foundations head of AI (formerly of Google), and four others from Purdue University: PhD student Bosheng Li, Professor and Dean’s Chair of Remote Sensing Songlin Fei, Assistant Professor Raymond Yeh, and Professor and Associate Head of Computer Science Bedrich Benes. Their work is based on efforts supported by the United States Department of Agriculture’s (USDA) Natural Resources Conservation Service and is directly supported by the USDA’s National Institute of Food and Agriculture. The researchers presented their findings at the European Conference on Computer Vision this month.
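    The hybrid pipeline described above (a learned crown envelope constraining a genus-conditioned procedural growth model) can be sketched in a few lines. Everything below is a hypothetical stand-in with made-up shapes and rules, meant only to show how the two stages fit together; it is not the Tree-D Fusion code.

    ```python
    import math
    import random

    def predicted_envelope(height_m=10.0, max_radius_m=3.0):
        """Stand-in for the learned stage: return tree height and a crown-radius profile r(z)."""
        def radius_at(z):
            crown_base = 0.3 * height_m
            if z < crown_base:                        # bare trunk below the crown
                return 0.0
            t = (z - crown_base) / (height_m - crown_base)
            return max_radius_m * math.sin(math.pi * min(t, 1.0))   # simple rounded crown
        return height_m, radius_at

    def grow_branches(height, radius_at, levels=3, children=4, seed=0):
        """Stand-in for the procedural stage: grow branch segments, keeping tips inside the envelope."""
        random.seed(seed)
        tips = [(0.0, 0.0, 0.3 * height)]             # start at the base of the crown
        segments = []
        for _ in range(levels):
            next_tips = []
            for (x, y, z) in tips:
                for _ in range(children):
                    angle = random.uniform(0.0, 2.0 * math.pi)
                    nx = x + 0.6 * math.cos(angle)
                    ny = y + 0.6 * math.sin(angle)
                    nz = min(z + random.uniform(0.5, 1.5), height)
                    if math.hypot(nx, ny) <= radius_at(nz):   # envelope constraint
                        segments.append(((x, y, z), (nx, ny, nz)))
                        next_tips.append((nx, ny, nz))
            tips = next_tips or tips
        return segments

    height, radius_at = predicted_envelope()
    print(len(grow_branches(height, radius_at)), "branch segments kept inside the predicted envelope")
    ```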

  • Reality check on technologies to remove carbon dioxide from the air

    In 2015, 195 nations plus the European Union signed the Paris Agreement and pledged to undertake plans designed to limit the global temperature increase to 1.5 degrees Celsius. Yet in 2023, the world exceeded that target for most, if not all, of the year — calling into question the long-term feasibility of achieving that target.

    To do so, the world must reduce the levels of greenhouse gases in the atmosphere, and strategies for achieving levels that will “stabilize the climate” have been both proposed and adopted. Many of those strategies combine dramatic cuts in carbon dioxide (CO2) emissions with the use of direct air capture (DAC), a technology that removes CO2 from the ambient air. As a reality check, a team of researchers in the MIT Energy Initiative (MITEI) examined those strategies, and what they found was alarming: The strategies rely on overly optimistic — indeed, unrealistic — assumptions about how much CO2 could be removed by DAC. As a result, the strategies won’t perform as predicted. Nevertheless, the MITEI team recommends that work to develop the DAC technology continue so that it’s ready to help with the energy transition — even if it’s not the silver bullet that solves the world’s decarbonization challenge.

    DAC: The promise and the reality

    Including DAC in plans to stabilize the climate makes sense. Much work is now under way to develop DAC systems, and the technology looks promising. While companies may never run their own DAC systems, they can already buy “carbon credits” based on DAC. Today, a multibillion-dollar market exists on which entities or individuals that face high costs or excessive disruptions to reduce their own carbon emissions can pay others to take emissions-reducing actions on their behalf. Those actions can involve undertaking new renewable energy projects or “carbon-removal” initiatives such as DAC or afforestation/reforestation (planting trees in areas that have never been forested or that were forested in the past).

    DAC-based credits are especially appealing for several reasons, explains Howard Herzog, a senior research engineer at MITEI. With DAC, measuring and verifying the amount of carbon removed is straightforward; the removal is immediate, unlike with planting forests, which may take decades to have an impact; and when DAC is coupled with CO2 storage in geologic formations, the CO2 is kept out of the atmosphere essentially permanently — in contrast to, for example, sequestering it in trees, which may one day burn and release the stored CO2.

    Will current plans that rely on DAC be effective in stabilizing the climate in the coming years? To find out, Herzog and his colleagues Jennifer Morris and Angelo Gurgel, both MITEI principal research scientists, and Sergey Paltsev, a MITEI senior research scientist — all affiliated with the MIT Center for Sustainability Science and Strategy (CS3) — took a close look at the modeling studies on which those plans are based. Their investigation identified three unavoidable engineering challenges that together lead to a fourth challenge — high costs for removing a single ton of CO2 from the atmosphere. The details of their findings are reported in a paper published in the journal One Earth on Sept. 20.

    Challenge 1: Scaling up

    When it comes to removing CO2 from the air, nature presents “a major, non-negotiable challenge,” notes the MITEI team: The concentration of CO2 in the air is extremely low — just 420 parts per million, or roughly 0.04 percent. In contrast, the CO2 concentration in flue gases emitted by power plants and industrial processes ranges from 3 percent to 20 percent. Companies now use various carbon capture and sequestration (CCS) technologies to capture CO2 from their flue gases, but capturing CO2 from the air is much more difficult. To explain, the researchers offer the following analogy: “The difference is akin to needing to find 10 red marbles in a jar of 25,000 marbles of which 24,990 are blue [the task representing DAC] versus needing to find about 10 red marbles in a jar of 100 marbles of which 90 are blue [the task for CCS].”

    Given that low concentration, removing a single metric ton (tonne) of CO2 from air requires processing about 1.8 million cubic meters of air, which is roughly equivalent to the volume of 720 Olympic-sized swimming pools. And all that air must be moved across a CO2-capturing sorbent — a feat requiring large equipment. For example, one recently proposed design for capturing 1 million tonnes of CO2 per year would require an “air contactor” equivalent in size to a structure about three stories high and three miles long.

    Recent modeling studies project DAC deployment on the scale of 5 to 40 gigatonnes of CO2 removed per year. (A gigatonne equals 1 billion metric tonnes.) But in their paper, the researchers conclude that the likelihood of deploying DAC at the gigatonne scale is “highly uncertain.”

    Challenge 2: Energy requirement

    Given the low concentration of CO2 in the air and the need to move large quantities of air to capture it, it’s no surprise that even the best DAC processes proposed today would consume large amounts of energy — energy that’s generally supplied by a combination of electricity and heat. Including the energy needed to compress the captured CO2 for transportation and storage, most proposed processes require an equivalent of at least 1.2 megawatt-hours of electricity for each tonne of CO2 removed.

    The source of that electricity is critical. For example, using coal-based electricity to drive an all-electric DAC process would generate 1.2 tonnes of CO2 for each tonne of CO2 captured. The result would be a net increase in emissions, defeating the whole purpose of the DAC. So clearly, the energy requirement must be satisfied using either low-carbon electricity or electricity generated using fossil fuels with CCS. All-electric DAC deployed at large scale — say, 10 gigatonnes of CO2 removed annually — would require 12,000 terawatt-hours of electricity, which is more than 40 percent of total global electricity generation today.

    Electricity consumption is expected to grow due to increasing overall electrification of the world economy, so low-carbon electricity will be in high demand for many competing uses — for example, in power generation, transportation, industry, and building operations. Using clean electricity for DAC instead of for reducing CO2 emissions in other critical areas raises concerns about the best uses of clean electricity.

    Many studies assume that a DAC unit could also get energy from “waste heat” generated by some industrial process or facility nearby. In the MITEI researchers’ opinion, “that may be more wishful thinking than reality.” The heat source would need to be within a few miles of the DAC plant for transporting the heat to be economical; given its high capital cost, the DAC plant would need to run nonstop, requiring constant heat delivery; and heat at the temperature required by the DAC plant would have competing uses, for example, for heating buildings. Finally, if DAC is deployed at the gigatonne-per-year scale, waste heat will likely be able to provide only a small fraction of the needed energy.

    Challenge 3: Siting

    Some analysts have asserted that, because air is everywhere, DAC units can be located anywhere. But in reality, siting a DAC plant involves many complex issues. As noted above, DAC plants require significant amounts of energy, so having access to enough low-carbon energy is critical. Likewise, having nearby options for storing the removed CO2 is also critical. If storage sites or pipelines to such sites don’t exist, major new infrastructure will need to be built, and building new infrastructure of any kind is expensive and complicated, involving issues related to permitting, environmental justice, and public acceptability — issues that are, in the words of the researchers, “commonly underestimated in the real world and neglected in models.”

    Two more siting needs must be considered. First, meteorological conditions must be acceptable. By definition, any DAC unit will be exposed to the elements, and factors like temperature and humidity will affect process performance and process availability. And second, a DAC plant will require some dedicated land — though how much is unclear, as the optimal spacing of units is as yet unresolved. Like wind turbines, DAC units need to be properly spaced to ensure maximum performance, such that one unit is not sucking in CO2-depleted air from another unit.

    Challenge 4: Cost

    Considering the first three challenges, the final challenge is clear: the cost per tonne of CO2 removed is inevitably high. Recent modeling studies assume DAC costs as low as $100 to $200 per tonne of CO2 removed. But the researchers found evidence suggesting far higher costs.

    To start, they cite typical costs for power plants and industrial sites that now use CCS to remove CO2 from their flue gases. The cost of CCS in such applications is estimated to be in the range of $50 to $150 per tonne of CO2 removed. As explained above, the far lower concentration of CO2 in the air will lead to substantially higher costs. As explained under Challenge 1, the DAC units needed to capture the required amount of air are massive. The capital cost of building them will be high, given labor, materials, permitting costs, and so on. Some estimates in the literature exceed $5,000 per tonne captured per year.

    Then there are the ongoing costs of energy. As noted under Challenge 2, removing 1 tonne of CO2 requires the equivalent of 1.2 megawatt-hours of electricity. If that electricity costs $0.10 per kilowatt-hour, the cost of just the electricity needed to remove 1 tonne of CO2 is $120 (a short back-of-envelope check of these figures follows this story). The researchers point out that assuming such a low price is “questionable,” given the expected increase in electricity demand, future competition for clean energy, and higher costs on a system dominated by renewable — but intermittent — energy sources. Then there’s the cost of storage, which is ignored in many DAC cost estimates.

    Clearly, many considerations show that prices of $100 to $200 per tonne are unrealistic, and assuming such low prices will distort assessments of strategies, leading them to underperform going forward.

    The bottom line

    In their paper, the MITEI team calls DAC a “very seductive concept.” Using DAC to suck CO2 out of the air and generate high-quality carbon-removal credits can offset reduction requirements for industries that have hard-to-abate emissions. By doing so, DAC would minimize disruptions to key parts of the world’s economy, including air travel, certain carbon-intensive industries, and agriculture. However, the world would need to generate billions of tonnes of CO2 credits at an affordable price. That prospect doesn’t look likely. The largest DAC plant in operation today removes just 4,000 tonnes of CO2 per year, and the price to buy the company’s carbon-removal credits on the market today is $1,500 per tonne.

    The researchers recognize that there is room for energy efficiency improvements in the future, but DAC units will always be subject to higher work requirements than CCS applied to power plant or industrial flue gases, and there is not a clear pathway to reducing work requirements much below the levels of current DAC technologies.

    Nevertheless, the researchers recommend that work to develop DAC continue “because it may be needed for meeting net-zero emissions goals, especially given the current pace of emissions.” But their paper concludes with this warning: “Given the high stakes of climate change, it is foolhardy to rely on DAC to be the hero that comes to our rescue.”
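    As promised above, here is a quick back-of-envelope check of the quoted figures. The 420 ppm concentration, the 1.2 MWh-per-tonne energy requirement, and the $0.10 per kilowatt-hour price come from the story; the CO2 density, the assumed 70 percent capture fraction, and the figure used for current global electricity generation are illustrative assumptions added here.

    ```python
    CO2_MOLE_FRACTION = 420e-6     # ~420 ppm of CO2 in ambient air (from the story)
    CO2_DENSITY_KG_M3 = 1.87       # kg of CO2 per m3 of pure CO2 near room temperature (assumed)
    CAPTURE_FRACTION = 0.70        # assumed fraction of incoming CO2 actually captured

    # Air volume that must be processed to capture one tonne (1,000 kg) of CO2.
    air_m3_per_tonne = 1000.0 / (CO2_MOLE_FRACTION * CO2_DENSITY_KG_M3 * CAPTURE_FRACTION)
    print(f"Air processed per tonne of CO2: {air_m3_per_tonne / 1e6:.1f} million m3 "
          f"(~{air_m3_per_tonne / 2500:.0f} Olympic pools of ~2,500 m3 each)")

    # Electricity cost per tonne at the figures quoted in the story.
    ENERGY_MWH_PER_TONNE = 1.2
    PRICE_PER_KWH = 0.10
    print(f"Electricity cost per tonne: ${ENERGY_MWH_PER_TONNE * 1000 * PRICE_PER_KWH:.0f}")

    # Electricity needed for 10 gigatonnes of removal per year.
    removal_gt = 10
    twh_needed = removal_gt * 1e9 * ENERGY_MWH_PER_TONNE / 1e6   # MWh -> TWh
    GLOBAL_GENERATION_TWH = 29_000                               # rough recent global total (assumed)
    print(f"Electricity for {removal_gt} Gt/yr: {twh_needed:,.0f} TWh "
          f"(~{100 * twh_needed / GLOBAL_GENERATION_TWH:.0f}% of global generation)")
    ```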

  • Study finds mercury pollution from human activities is declining

    MIT researchers have some good environmental news: Mercury emissions from human activity have been declining over the past two decades, despite global emissions inventories that indicate otherwise.

    In a new study, the researchers analyzed measurements from all available monitoring stations in the Northern Hemisphere and found that atmospheric concentrations of mercury declined by about 10 percent between 2005 and 2020. They used two separate modeling methods to determine what is driving that trend. Both techniques pointed to a decline in mercury emissions from human activity as the most likely cause. Global inventories, on the other hand, have reported opposite trends. These inventories estimate atmospheric emissions using models that incorporate average emission rates of polluting activities and the scale of these activities worldwide.

    “Our work shows that it is very important to learn from actual, on-the-ground data to try and improve our models and these emissions estimates. This is very relevant for policy because, if we are not able to accurately estimate past mercury emissions, how are we going to predict how mercury pollution will evolve in the future?” says Ari Feinberg, a former postdoc in the Institute for Data, Systems, and Society (IDSS) and lead author of the study.

    The new results could help inform scientists who are embarking on a collaborative, global effort to evaluate pollution models and develop a more in-depth understanding of what drives global atmospheric concentrations of mercury. However, due to a lack of data from global monitoring stations and limitations in the scientific understanding of mercury pollution, the researchers couldn’t pinpoint a definitive reason for the mismatch between the inventories and the recorded measurements.

    “It seems like mercury emissions are moving in the right direction, and could continue to do so, which is heartening to see. But this was as far as we could get with mercury. We need to keep measuring and advancing the science,” adds co-author Noelle Selin, an MIT professor in the IDSS and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    Feinberg and Selin, his MIT postdoctoral advisor, are joined on the paper by an international team of researchers that contributed atmospheric mercury measurement data and statistical methods to the study. The research appears this week in the Proceedings of the National Academy of Sciences.

    Mercury mismatch

    The Minamata Convention is a global treaty that aims to cut human-caused emissions of mercury, a potent neurotoxin that enters the atmosphere from sources like coal-fired power plants and small-scale gold mining. The treaty, which was signed in 2013 and went into force in 2017, is evaluated every five years. The first meeting of its conference of parties coincided with disheartening news reports that said global inventories of mercury emissions, compiled in part from national inventories, had increased despite international efforts to reduce them.

    This was puzzling news for environmental scientists like Selin. Data from monitoring stations showed atmospheric mercury concentrations declining during the same period. Bottom-up inventories combine emission factors, such as the amount of mercury that enters the atmosphere when coal mined in a certain region is burned, with estimates of pollution-causing activities, like how much of that coal is burned in power plants.

    “The big question we wanted to answer was: What is actually happening to mercury in the atmosphere and what does that say about anthropogenic emissions over time?” Selin says.

    Modeling mercury emissions is especially tricky. First, mercury is the only metal that is in liquid form at room temperature, so it has unique properties. Moreover, mercury that has been removed from the atmosphere by sinks like the ocean or land can be re-emitted later, making it hard to identify primary emission sources. At the same time, mercury is more difficult to study in laboratory settings than many other air pollutants, especially due to its toxicity, so scientists have limited understanding of all the chemical reactions mercury can undergo. There is also a much smaller network of mercury monitoring stations, compared to other polluting gases like methane and nitrous oxide.

    “One of the challenges of our study was to come up with statistical methods that can address those data gaps, because available measurements come from different time periods and different measurement networks,” Feinberg says.

    Multifaceted models

    The researchers compiled data from 51 stations in the Northern Hemisphere. They used statistical techniques to aggregate data from nearby stations, which helped them overcome data gaps and evaluate regional trends. By combining data from 11 regions, their analysis indicated that Northern Hemisphere atmospheric mercury concentrations declined by about 10 percent between 2005 and 2020.

    Then the researchers used two modeling methods — biogeochemical box modeling and chemical transport modeling — to explore possible causes of that decline. Box modeling was used to run hundreds of thousands of simulations to evaluate a wide array of emission scenarios. Chemical transport modeling is more computationally expensive but enables researchers to assess the impacts of meteorology and spatial variations on trends in selected scenarios.

    For instance, they tested one hypothesis that there may be an additional environmental sink that is removing more mercury from the atmosphere than previously thought. The models would indicate the feasibility of an unknown sink of that magnitude. “As we went through each hypothesis systematically, we were pretty surprised that we could really point to declines in anthropogenic emissions as being the most likely cause,” Selin says.

    Their work underscores the importance of long-term mercury monitoring stations, Feinberg adds. Many stations the researchers evaluated are no longer operational because of a lack of funding.

    While their analysis couldn’t zero in on exactly why the emissions inventories didn’t match up with actual data, they have a few hypotheses. One possibility is that global inventories are missing key information from certain countries. For instance, the researchers resolved some discrepancies when they used a more detailed regional inventory from China. But there was still a gap between observations and estimates.

    They also suspect the discrepancy might be the result of changes in two large sources of mercury that are particularly uncertain: emissions from small-scale gold mining and mercury-containing products. Small-scale gold mining involves using mercury to extract gold from soil and is often performed in remote parts of developing countries, making it hard to estimate. Yet small-scale gold mining contributes about 40 percent of human-made emissions. In addition, it’s difficult to determine how long it takes the pollutant to be released into the atmosphere from discarded products like thermometers or scientific equipment.

    “We’re not there yet where we can really pinpoint which source is responsible for this discrepancy,” Feinberg says.

    In the future, researchers from multiple countries, including MIT, will collaborate to study and improve the models they use to estimate and evaluate emissions. This research will be influential in helping that project move the needle on monitoring mercury, he says. This research was funded by the Swiss National Science Foundation, the U.S. National Science Foundation, and the U.S. Environmental Protection Agency.
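    Two of the ideas above lend themselves to a toy calculation: a bottom-up inventory is a sum of activity times emission factor over source categories, and because mercury’s atmospheric lifetime is assumed here to be short relative to the 15-year study window, its atmospheric burden roughly tracks total emissions. The category names, emission factors, lifetime, and scenario numbers below are invented for illustration; they are not values from the study.

    ```python
    # Illustrative only: a miniature bottom-up inventory (activity x emission factor)
    # and a steady-state one-box relation between emissions and atmospheric burden.
    # All numbers are made up and are not values from the MIT study.

    def bottom_up_inventory(sources):
        """sources maps category -> (activity level, emission factor in tonnes Hg per unit activity)."""
        return sum(activity * factor for activity, factor in sources.values())

    emissions_2005 = bottom_up_inventory({
        "coal_combustion":    (5.0e9, 1.5e-7),  # tonnes of coal burned, tonnes Hg per tonne of coal
        "small_scale_gold":   (1.0, 800.0),     # lumped term; highly uncertain, as the story notes
        "products_and_other": (1.0, 650.0),     # thermometers, other products, remaining sources
    })

    # With a short atmospheric lifetime, steady-state burden is emissions x lifetime,
    # so the percentage change in burden roughly mirrors the change in emissions.
    LIFETIME_YR = 0.8   # assumed effective atmospheric lifetime of mercury
    burden_2005 = emissions_2005 * LIFETIME_YR

    scenarios_2020 = {
        "rising inventory (+15%)":    1.15 * emissions_2005,
        "declining emissions (-10%)": 0.90 * emissions_2005,
    }
    for label, e2020 in scenarios_2020.items():
        change = 100 * (e2020 * LIFETIME_YR - burden_2005) / burden_2005
        print(f"{label}: implied burden change {change:+.0f}% "
              f"(observed concentration change: about -10%)")
    ```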

  • Aligning economic and regulatory frameworks for today’s nuclear reactor technology

    Liam Hines ’22 didn’t move to Sarasota, Florida, until high school, but he’s a Floridian through and through. He jokes that he’s even got a floral shirt, what he calls a “Florida formal,” for every occasion.

    Which is why it broke his heart when blooms of toxic red algae devastated the Sunshine State’s coastline, including at his favorite beach, Caspersen. The outbreak made headline news during his high school years, with the blooms destroying marine wildlife and adversely impacting the state’s tourism-driven economy.

    In Florida, Hines says, environmental awareness is pretty high because everyday citizens are being directly impacted by climate change. After all, it’s hard not to worry when beautiful white sand beaches are covered in dead fish. Ongoing concerns about the climate cemented Hines’ resolve to pick a career that would have a strong “positive environmental impact.” He chose nuclear, as he saw it as “a green, low-carbon-emissions energy source with a pretty straightforward path to implementation.”

    Liam Hines: Ensuring that nuclear policy keeps up with nuclear technology.

    Undergraduate studies at MIT

    Knowing he wanted a career in the sciences, Hines applied and got accepted to MIT for undergraduate studies in fall 2018. An orientation program hosted by the Department of Nuclear Science and Engineering (NSE) sold him on the idea of pursuing the field. “The department is just a really tight-knit community, and that really appealed to me,” Hines says.

    During his undergraduate years, Hines realized he needed a job to pay part of his bills. “Instead of answering calls at the dorm front desk or working in the dining halls, I decided I’m going to become a licensed nuclear operator onsite,” he says. “Reactor operations offer so much hands-on experience with real nuclear systems. It doesn’t hurt that it pays better.” Becoming a licensed nuclear reactor operator is hard work, however, involving a year-long training process studying maintenance, operations, and equipment oversight. A bonus: The job, supervising the MIT Nuclear Reactor Laboratory, taught him the fundamentals of nuclear physics and engineering.

    Always interested in research, Hines got an early start by exploring the regulatory challenges of advanced fusion systems. There have been questions related to licensing requirements and the safety consequences of the onsite radionuclide inventory. Hines’ undergraduate research work involved studying precedent for such fusion facilities and comparing them to experimental facilities such as the Tokamak Fusion Test Reactor at the Princeton Plasma Physics Laboratory.

    Doctoral focus on legal and regulatory frameworks

    When scientists want to make technologies as safe as possible, they have to do two things in concert: First they evaluate the safety of the technology, and then make sure legal and regulatory structures take into account the evolution of these advanced technologies. Hines is taking such a two-pronged approach to his doctoral work on nuclear fission systems.

    Under the guidance of Professor Koroush Shirvan, Hines is conducting systems modeling of various reactor cores that include graphite, and simulating operations over long time spans. He then studies radionuclide transport from low-level waste facilities — the consequences of offsite storage after 50, 100, or even 10,000 years. The work has to hit safety and engineering margins, but also tread a fine line. “You want to make sure you’re not over-engineering systems and adding undue cost, but also making sure to assess the unique hazards of these advanced technologies as accurately as possible,” Hines says.

    On a parallel track, under Professor Haruko Wainwright’s advisement, Hines is applying the current science on radionuclide geochemistry to track radionuclide wastes and map their profile for hazards. One of the challenges fission reactors face is that existing low-level waste regulations were fine-tuned to old reactors. Regulations have not kept up: “Now that we have new technologies with new wastes, some of the hazards of the new waste are completely missed by existing standards,” Hines says. He is working to seal these gaps.

    A philosophy-driven outlook

    Hines is grateful for the dynamic learning environment at NSE. “A lot of the faculty have that go-getter attitude,” he points out, impressed by the entrepreneurial spirit on campus. “It’s made me confident to really tackle the things that I care about.”

    An ethics class as an undergraduate made Hines realize there were discussions in class he could apply to the nuclear realm, especially when it came to teasing apart the implications of the technology — where the devices would be built and who they would serve. He eventually went on to double-major in NSE and philosophy.

    The framework style of reading and reasoning involved in studying philosophy is particularly relevant in his current line of work, where he has to extract key points regarding nuclear regulatory issues. Much like philosophy discussions today that involve going over material that has been discussed for centuries and framing it through new perspectives, nuclear regulatory issues too need to take the long view.

    “In philosophy, we have to insert ourselves into very large conversations. Similarly, in nuclear engineering, you have to understand how to take apart the discourse that’s most relevant to your research and frame it,” Hines says. This technique is especially necessary because most of the time nuclear regulatory issues might seem like wading in the weeds of nitty-gritty technical matters, but they can have a huge impact on the public and public perception, Hines adds.

    As for Florida, Hines visits every chance he can get. The red tide still surfaces, but not as consistently as it once did. And since he started his job as a nuclear operator in his undergraduate days, Hines has progressed to senior reactor operator. This time around he gets to sign off on the checklists. “It’s much like when I was shift lead at Dunkin’ Donuts in high school,” Hines says. “Everyone is kind of doing the same thing, but you get to be in charge for the afternoon.”
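    The storage horizons mentioned above (50, 100, or even 10,000 years) matter because different radionuclides decay at vastly different rates. The sketch below uses standard half-lives; choosing Cs-137 and C-14 as examples is an illustrative assumption (C-14 is a common concern for graphite-bearing waste), not a statement about the specific waste streams analyzed in this research.

    ```python
    # Fraction of initial activity remaining after each assessment horizon,
    # using N(t)/N0 = 0.5 ** (t / half_life). Half-lives are standard values.
    HALF_LIVES_YR = {"Cs-137": 30.1, "C-14": 5730.0}

    def fraction_remaining(half_life_yr, t_yr):
        return 0.5 ** (t_yr / half_life_yr)

    for t in (50, 100, 10_000):
        remaining = ", ".join(f"{nuclide}: {fraction_remaining(hl, t):.2e}"
                              for nuclide, hl in HALF_LIVES_YR.items())
        print(f"after {t:>6} years: {remaining}")

    # A short-lived fission product is essentially gone after a few centuries, while
    # a long-lived nuclide like C-14 barely changes even after 10,000 years, which
    # is why such long assessment horizons matter for some waste forms.
    ```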

  • 3 Questions: The past, present, and future of sustainability science

    It was 1978, over a decade before the word “sustainable” would infiltrate environmental nomenclature, and Ronald Prinn, MIT professor of atmospheric science, had just founded the Advanced Global Atmospheric Gases Experiment (AGAGE). Today, AGAGE provides real-time measurements for well over 50 environmentally harmful trace gases, enabling us to determine emissions at the country level, a key element in verifying national adherence to the Montreal Protocol and the Paris Agreement. This, Prinn says, started him thinking about doing science that informed decision making.

    Much like global interest in sustainability, Prinn’s interest and involvement continued to grow into what would become three decades’ worth of achievements in sustainability science. The Center for Global Change Science (CGCS) and the Joint Program on the Science and Policy of Global Change, respectively founded and co-founded by Prinn, have recently joined forces to create the MIT School of Science’s new Center for Sustainability Science and Strategy (CS3), led by former CGCS postdoc turned MIT professor Noelle Selin. As he prepares to pass the torch, Prinn reflects on how far sustainability has come, and where it all began.

    Q: Tell us about the motivation for the MIT centers you helped to found around sustainability.

    A: In 1990, after I founded the Center for Global Change Science, I also co-founded the Joint Program on the Science and Policy of Global Change with a very important partner, [Henry] “Jake” Jacoby. He’s now retired, but at that point he was a professor in the MIT Sloan School of Management. Together, we determined that in order to answer questions related to what we now call sustainability of human activities, you need to combine the natural and social sciences involved in these processes. Based on this, we decided to make a joint program between the CGCS and a center that he directed, the Center for Energy and Environmental Policy Research (CEEPR).

    It was called the “joint program” and was joint for two reasons — not only were two centers joining, but two disciplines were joining. It was not about simply doing the same science. It was about bringing a team of people together that could tackle these coupled issues of environment, human development, and economy. We were the first group in the world to fully integrate these elements together.

    Q: What has been your most impactful contribution, and what effect did it have on the greater public’s overall understanding?

    A: Our biggest contribution is the development, and more importantly, the application of the Integrated Global System Model [IGSM] framework, looking at human development in both developing and developed countries, which had a significant impact on the way people thought about climate issues. With IGSM, we were able to look at the interactions among human and natural components, studying the feedbacks and impacts that climate change had on human systems; like how it would alter agriculture and other land activities, how it would alter things we derive from the ocean, and so on.

    Policies were being developed largely by economists or climate scientists working independently, and we started showing how the real answers and analysis required a coupling of all of these components. We showed, and I think convincingly, that what people used to study independently must be coupled together, because the impacts of climate change and air pollution affected so many things.

    To address the value of policy, despite the uncertainty in climate projections, we ran multiple runs of the IGSM with and without policy, with different choices for uncertain IGSM variables. For public communication, around 2005, we introduced our signature Greenhouse Gamble interactive visualization tools; these have been renewed over time as science and policies evolved.

    Q: What can MIT provide now at this critical juncture in understanding climate change and its impact?

    A: We need to further push the boundaries of integrated global system modeling to ensure full sustainability of human activity and all of its beneficial dimensions, which is the exciting focus that the CS3 is designed to address. We need to focus on sustainability as a central core element and use it to not just analyze existing policies but to propose new ones. Sustainability is not just climate or air pollution; it’s got to do with human impacts in general. Human health is central to sustainability, and equally important to equity. We need to expand the capability for credibly assessing what impact policies have, not just on developed countries but on developing countries, taking into account that many places around the world are at artisanal levels of their economies. They cannot be blamed for anything that is changing climate and causing air pollution and other detrimental things that are currently going on. They need our help. That’s what sustainability is in its full dimensions.

    Our capabilities are evolving toward a modeling system so detailed that we can find out detrimental things about policies even at local levels before investing in changing infrastructure. This is going to require collaboration among even more disciplines and creating a seamless connection between research and decision making; not just for policies enacted in the public sector, but also for decisions that are made in the private sector.