More stories

  • To boost emissions reductions from electric vehicles, know when to charge

    Transportation-related emissions are increasing globally. Currently, light-duty vehicles — namely passenger cars, such as sedans, SUVs, or minivans — contribute about 20 percent of the net greenhouse gas emissions in the United States. But studies have shown that switching out your conventional gas-guzzling car for a vehicle powered by electricity can make a significant dent in reducing these emissions.
    A recent study published in Environmental Science &amp; Technology takes this a step further by examining how to reduce the emissions associated with the electricity source used to charge an electric vehicle (EV). Taking into account regional charging patterns and the effect of ambient temperature on car fuel economy, researchers at the MIT Energy Initiative (MITEI) find that the time of day when an EV is charged significantly impacts the vehicle’s emissions.
    “If you facilitate charging at particular times, you can really boost the emissions reductions that result from growth in renewables and EVs,” says Ian Miller, the lead author of the study and a research associate at MITEI. “So how do we do this? Time-of-use electricity rates are spreading, and can dramatically shift the time of day when EV drivers charge. If we inform policymakers of these large time-of-charging impacts, they can then design electricity rates to discount charging when our power grids are renewable-heavy. In solar-heavy regions, that’s midday. In wind-heavy regions, like the Midwest, it’s overnight.”
    According to their research, in solar-heavy California, charging an electric vehicle overnight produces 70 percent more emissions than if it were charged midday (when more solar energy powers the grid). Meanwhile, in New York, where nuclear and hydro power constitute a larger share of the electricity mix during the night, the best charging time is the opposite. In this region, charging a vehicle overnight actually reduces emissions by 20 percent relative to daytime charging.
    “Charging infrastructure is another big determinant when it comes to facilitating charging at specific times — during the day especially,” adds Emre Gençer, co-author and a research scientist at MITEI. “If you need to charge your EV midday, then you need to have enough charging stations at your workplace. Today, most people charge their vehicles in their garages overnight, which is going to produce higher emissions in places where it is best to charge during the day.”
    In the study, Miller, Gençer, and Maryam Arbabzadeh, a postdoc at MITEI, make these observations in part by calculating the percentage of error in two common EV emission modeling approaches, which ignore hourly variation in the grid and temperature-driven variation in fuel economy. Their results find that the combined error from these standard methods exceeds 10 percent in 30 percent of the cases, and reaches 50 percent in California, which is home to half of the EVs in the United States.
    “If you don’t model time of charging, and instead assume charging with annual average power, you can mis-estimate EV emissions,” says Arbabzadeh. “To be sure, it’s great to get more solar on the grid and more electric vehicles using that grid. No matter when you charge your EV in the U.S., its emissions will be lower than a similar gasoline-powered car; but if EV charging occurs mainly when the sun is down, you won’t get as much benefit when it comes to reducing emissions as you think when using an annual average.”
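    The size of this time-of-charging effect can be sketched with a toy calculation; every number below is hypothetical, not the study's data. The time-resolved estimate multiplies each hour's charging demand by that hour's grid carbon intensity, while the annual-average method applies one flat emission factor to all hours.

```python
# Hypothetical hourly grid carbon intensity (kg CO2 per kWh) for a
# solar-heavy region: cleaner at midday, dirtier overnight.
intensity = [0.40] * 6 + [0.30] * 4 + [0.15] * 4 + [0.30] * 4 + [0.40] * 6

# Hypothetical charging profiles (kWh drawn in each hour; both deliver
# the same 10 kWh per day).
overnight = [10 / 8 if (h < 6 or h >= 22) else 0.0 for h in range(24)]
midday = [10 / 4 if 10 <= h < 14 else 0.0 for h in range(24)]

def emissions(profile):
    """Time-resolved estimate: kWh in each hour times that hour's intensity."""
    return sum(kwh * ef for kwh, ef in zip(profile, intensity))

# Annual-average method: one flat emission factor applied to the 10 kWh.
annual_avg = sum(intensity) / 24 * 10

print(f"overnight charging: {emissions(overnight):.2f} kg CO2/day")
print(f"midday charging:    {emissions(midday):.2f} kg CO2/day")
print(f"annual-average est: {annual_avg:.2f} kg CO2/day")
```

    In this toy grid, overnight charging produces roughly 2.7 times the midday emissions, while the flat annual-average factor lands in between: exactly the kind of mis-estimate described above.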
    Seeking to lessen this margin of error, the researchers use hourly grid data from 2018 and 2019 — along with hourly charging, driving, and temperature data — to estimate emissions from EV use in 60 cases across the United States. They then introduce and validate a novel method (with less than 1 percent margin of error) to accurately estimate EV emissions. They call it the “average day” method.
    “We found that you can ignore seasonality in grid emissions and fuel economy, and still accurately estimate yearly EV emissions and charging-time impacts,” says Miller. “This was a pleasant surprise. In Kansas last year, daily grid emissions rose about 80 percent between seasons, while EV power demand rose about 50 percent due to temperature changes. Previous studies speculated that ignoring such seasonal swings would hurt accuracy in EV emissions estimates, but never actually quantified the error. We did — across diverse grid mixes and climates — and found the error to be negligible.”
    This finding has useful implications for modeling future EV emissions scenarios. “You can get accuracy without computational complexity,” says Arbabzadeh. “With the average-day method, you can accurately estimate EV emissions and charging impacts in a future year without needing to simulate 8,760 values of grid emissions for each hour of the year. All you need is one average-day profile, which means only 24 hourly values, for grid emissions and other key variables. You don’t need to know the seasonal variation around those average-day profiles.”
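    A minimal sketch of the average-day idea, using synthetic data rather than the study's: collapse a year of hourly grid intensities into one 24-value profile and estimate yearly charging emissions from it. When the daily charging pattern repeats all year, the collapsed estimate matches the full 8,760-hour sum; seasonal variation in charging would introduce the small error the study quantifies.

```python
import random

random.seed(0)
HOURS = 24 * 365

# Synthetic hourly grid intensity (kg CO2/kWh) with a daily midday
# solar dip plus noise -- illustrative, not measured data.
year_intensity = [
    0.35 - 0.15 * (10 <= h % 24 < 16) + random.uniform(-0.02, 0.02)
    for h in range(HOURS)
]

# Daily charging profile repeated all year (kWh per hour, 10 kWh/day).
day_profile = [10 / 4 if 10 <= h < 14 else 0.0 for h in range(24)]
year_profile = day_profile * 365

# Exact yearly emissions: 8,760 hourly multiplications.
exact = sum(kwh * ef for kwh, ef in zip(year_profile, year_intensity))

# Average-day method: just 24 values for grid intensity.
avg_day = [sum(year_intensity[h::24]) / 365 for h in range(24)]
approx = 365 * sum(kwh * ef for kwh, ef in zip(day_profile, avg_day))

print(f"exact (8,760 values): {exact:.1f} kg CO2")
print(f"average-day (24 values): {approx:.1f} kg CO2")
```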
    The researchers demonstrate the utility of the average-day method by conducting a case study in the southeastern United States from 2018 to 2032 to examine how renewable growth in this region may impact future EV emissions. Assuming a conservative grid projection from the U.S. Energy Information Administration, the results show that EV emissions decline only 16 percent if charging occurs overnight, but more than 50 percent if charging occurs midday. In 2032, compared to a similar hybrid car, EV emissions per mile are 30 percent lower if charged overnight, and 65 percent lower if charged midday.
    The model used in this study is one module in a larger modeling program called the Sustainable Energy Systems Analysis Modeling Environment (SESAME). This tool, developed at MITEI, takes a systems-level approach to assess the complete carbon footprint of today’s evolving global energy system.
    “The idea behind SESAME is to make better decisions for decarbonization and to understand the energy transition from a systems perspective,” says Gençer. “One of the key elements of SESAME is how you can connect different sectors together — ‘sector coupling’ — and in this study, we are seeing a very interesting example from the transportation and electric power sectors. Right now, as we’ve been claiming, it’s impossible to treat these two sector systems independently, and this is a clear demonstration of why MITEI’s new modeling approach is really important, as well as how we can tackle some of these impending issues.”
    In ongoing and future research, the team is expanding their charging analysis from individual vehicles to whole fleets of passenger cars in order to develop fleet-level decarbonization strategies. Their work seeks to answer questions such as how California’s proposed ban of gasoline car sales in 2035 would impact transportation emissions. They are also exploring what fleet electrification could mean — not only for greenhouse gases, but also the demand for natural resources such as cobalt — and whether EV batteries could provide significant grid energy storage.
    “To mitigate climate change, we need to decarbonize both the transportation and electric power sectors,” says Gençer. “We can electrify transportation, and it will significantly reduce emissions, but what this paper shows is how you can do it more effectively.”
    This research was sponsored by ExxonMobil Research and Engineering through the MIT Energy Initiative Low-Carbon Energy Centers.

  • Want cheaper nuclear energy? Turn the design process into a game

    Nuclear energy provides more carbon-free electricity in the United States than solar and wind combined, making it a key player in the fight against climate change. But the U.S. nuclear fleet is aging, and operators are under pressure to streamline their operations to compete with coal- and gas-fired plants.
    One of the key places to cut costs is deep in the reactor core, where energy is produced. If the fuel rods that drive reactions there are ideally placed, they burn less fuel and require less maintenance. Through decades of trial and error, nuclear engineers have learned to design better layouts to extend the life of pricey fuel rods. Now, artificial intelligence is poised to give them a boost.
    Researchers at MIT and Exelon show that by turning the design process into a game, an AI system can be trained to generate dozens of optimal configurations that can make each rod last about 5 percent longer, saving a typical power plant an estimated $3 million a year. The AI system can also find optimal solutions faster than a human, and quickly modify designs in a safe, simulated environment. Their results appear this month in the journal Nuclear Engineering and Design.
    “This technology can be applied to any nuclear reactor in the world,” says the study’s senior author, Koroush Shirvan, an assistant professor in MIT’s Department of Nuclear Science and Engineering. “By improving the economics of nuclear energy, which supplies 20 percent of the electricity generated in the U.S., we can help limit the growth of global carbon emissions and attract the best young talents to this important clean-energy sector.”
    In a typical reactor, fuel rods are arranged on a grid, or assembly, like chess pieces on a board, positioned according to their levels of uranium and gadolinium oxide, with radioactive uranium driving reactions and rare-earth gadolinium slowing them down. In an ideal layout, these competing impulses balance out to drive efficient reactions. Engineers have tried using traditional algorithms to improve on human-devised layouts, but in a standard 100-rod assembly there might be an astronomical number of options to evaluate. So far, they’ve had limited success.
    The researchers wondered if deep reinforcement learning, an AI technique that has achieved superhuman mastery at games like chess and Go, could make the screening process go faster. Deep reinforcement learning combines deep neural networks, which excel at picking out patterns in reams of data, with reinforcement learning, which ties learning to a reward signal like winning a game, as in Go, or reaching a high score, as in Super Mario Bros.
    Here, the researchers trained their agent to position the fuel rods under a set of constraints, earning more points with each favorable move. Each constraint, or rule, picked by the researchers reflects decades of expert knowledge rooted in the laws of physics. The agent might score points, for example, by positioning low-uranium rods on the edges of the assembly, to slow reactions there; by spreading out the gadolinium “poison” rods to maintain consistent burn levels; and by limiting the number of poison rods to between 16 and 18.
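    The scoring scheme can be illustrated with a toy reward function. The rule names and point values below are hypothetical stand-ins for the study's constraints, not its actual reward design; a reinforcement-learning agent would propose layouts and learn to maximize a score of this kind.

```python
import random

SIZE = 10  # a hypothetical 10 x 10 assembly, 100 rod positions

def score(layout):
    """Score a layout where layout[r][c] is 'low', 'high', or 'poison'."""
    points = 0
    poison = sum(cell == "poison" for row in layout for cell in row)
    # Rule 1: low-enrichment rods on the assembly edge slow edge reactions.
    for r in range(SIZE):
        for c in range(SIZE):
            on_edge = r in (0, SIZE - 1) or c in (0, SIZE - 1)
            if on_edge and layout[r][c] == "low":
                points += 1
    # Rule 2: keep the number of gadolinium "poison" rods between 16 and 18.
    if 16 <= poison <= 18:
        points += 10
    # Rule 3: spread poison rods out -- penalize adjacent poison pairs.
    for r in range(SIZE):
        for c in range(SIZE):
            if layout[r][c] == "poison":
                for dr, dc in ((0, 1), (1, 0)):
                    rr, cc = r + dr, c + dc
                    if rr < SIZE and cc < SIZE and layout[rr][cc] == "poison":
                        points -= 5
    return points

# Score one random layout; an RL agent would iterate toward high scorers.
random.seed(1)
layout = [[random.choice(["low", "high", "poison"]) for _ in range(SIZE)]
          for _ in range(SIZE)]
print(score(layout))
```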
    “After you wire in rules, the neural networks start to take very good actions,” says the study’s lead author Majdi Radaideh, a postdoc in Shirvan’s lab. “They’re not wasting time on random processes. It was fun to watch them learn to play the game like a human would.”
    Through reinforcement learning, AI has learned to play increasingly complex games as well as or better than humans. But its capabilities remain relatively untested in the real world. Here, the researchers show that reinforcement learning has potentially powerful applications.
    “This study is an exciting example of transferring an AI technique for playing board games and video games to helping us solve practical problems in the world,” says study co-author Joshua Joseph, a research scientist at the MIT Quest for Intelligence.
    Exelon is now testing a beta version of the AI system in a virtual environment that mimics an assembly within a boiling water reactor, and about 200 assemblies within a pressurized water reactor, which is globally the most common type of reactor. Based in Chicago, Illinois, Exelon owns and operates 21 nuclear reactors across the United States. It could be ready to implement the system in a year or two, a company spokesperson says.
    The study’s other authors are Isaac Wolverton, an MIT senior who joined the project through the Undergraduate Research Opportunities Program; Nicholas Roy and Benoit Forget of MIT; and James Tusar and Ugi Otgonbaatar of Exelon.

  • Fikile Brushett is looking for new ways to store energy

    Fikile Brushett, an MIT associate professor of chemical engineering, had an unusual source of inspiration for his career in the chemical sciences: the character played by Nicolas Cage in the 1996 movie “The Rock.” In the film, Cage portrays an FBI chemist who hunts down a group of rogue U.S. soldiers who have commandeered chemical weapons and taken over the island of Alcatraz.
    “For a really long time, I really wanted to be a chemist and work for the FBI with chemical warfare agents. That was the goal: to be Nick Cage,” recalls Brushett, who first saw the movie as a high school student living in Silver Spring, Maryland, a suburb of Washington.
    Though he did not end up joining the FBI or working with chemical weapons — which he says is probably for the best — Brushett did pursue his love of chemistry. In his lab at MIT, Brushett leads a group dedicated to developing more efficient and sustainable ways to store energy, including batteries that could be used to store the electricity generated by wind and solar power. He is also exploring new ways to convert carbon dioxide to useful fuels.
    “The backbone of our global energy economy is based upon liquid fossil fuels right now, and energy demand is increasing,” he says. “The challenge we’re facing is that carbon emissions are tied very tightly to this increasing energy demand, and carbon emissions are linked to climate volatility, as well as pollution and health effects. To me, this is an incredibly urgent, important, and inspiring problem to go after.”
    “A body of knowledge”
    Brushett’s parents immigrated to the United States in the early 1980s, before he was born. His mother, an English as a second language teacher, is from South Africa, and his father, an economist, is from the United Kingdom. Brushett grew up mostly in the Washington area, with the exception of four years spent living in Zimbabwe, due to his father’s work at the World Bank.
    Brushett remembers this as an idyllic time, saying, “School ended at 1 p.m., so you almost had the whole afternoon to do sports at school, or you could go home and just play in the garden.”
    His family returned to the Washington area while he was in sixth grade, and in high school, he started to get interested in chemistry, as well as other scientific subjects and math.
    At the University of Pennsylvania, he decided to major in chemical engineering because someone had advised him that if he liked chemistry and math, chemical engineering would be a good fit. While he enjoyed some of his chemical engineering classes, he struggled with others at first.
    “I remember really having a hard time with chemE for a while, and I was fortunate enough to have a really good academic advisor who said, ‘Listen, chemE is hard for some people. Some people get it immediately, for some people it takes a little while for it to sink in,’” he says. Around his junior year, concepts started to fall into place, he recalls. “Rather than looking at courses as self-contained units, the units started coming together and flowing into a body of knowledge. I was able to see the interconnections between courses.”
    While he was originally most interested in molecular biotechnology — the field of engineering proteins and other biological molecules — he ended up working in a reaction engineering lab with his academic advisor, John Vohs. There, he studied how catalytic surfaces influence chemical reactions. At Vohs’ recommendation, he applied to the University of Illinois at Urbana-Champaign for graduate school, where he worked on electrochemistry projects. With his PhD advisor, Paul Kenis, he developed microfluidic fuel cells that could run on a variety of different fuels as portable power sources.
    During his third year of graduate school, he began applying for faculty positions and was offered a job at MIT, which he accepted but deferred for two years so he could do a postdoc at Argonne National Laboratory. There, he worked with scientists and engineers doing a wide range of research on electrochemical energy storage, and became interested in flow batteries, which is now one of the major focus areas of his lab at MIT.
    Modeling new technology
    Unlike the rechargeable lithium-ion batteries that power our cell phones and laptops, flow batteries use large tanks of liquid to store energy. Such batteries have traditionally been prohibitively expensive because they rely on pricey electroactive metal salts. Brushett is working on alternative approaches that use less expensive electroactive materials derived from organic compounds.
    Such batteries could be used to store the power intermittently produced by wind turbines and solar panels, making them a more reliable, efficient, and cost-effective source of energy. His lab also works on new processes for converting carbon dioxide, a waste product and greenhouse gas, into useful fuels.
    In a related area of research, Brushett’s lab performs “techno-economic” modeling of potential new technologies, to help them assess what aspects of the technology need the most improvement to make them economically feasible.
    “With techno-economic modeling, we can devise targets for basic science,” he says. “We’re always looking for the rate-limiting step. What is it that’s preventing us from moving forward? In some cases it could be a catalyst, in other cases it could be a membrane. In other cases it could be the architecture for the device.”
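    A toy techno-economic model (with hypothetical costs, not figures from Brushett's lab) shows how such an analysis flags the dominant cost component. In a flow battery, energy-related costs scale with the electrolyte tanks while power-related costs scale with the stack, so which term limits the system cost depends on the storage duration.

```python
def system_cost_per_kwh(electrolyte_usd_per_kwh, stack_usd_per_kw, hours):
    """Installed cost in $/kWh for a battery discharging over `hours`."""
    return electrolyte_usd_per_kwh + stack_usd_per_kw / hours

# Hypothetical inputs for an organic flow battery.
electrolyte = 50.0   # $ per kWh of stored energy (tanks + active material)
stack = 300.0        # $ per kW of power capacity (reactor stack)

for hours in (2, 4, 10):
    total = system_cost_per_kwh(electrolyte, stack, hours)
    dominant = "stack" if stack / hours > electrolyte else "electrolyte"
    print(f"{hours:>2} h duration: ${total:.0f}/kWh (dominant cost: {dominant})")
```

    In this sketch, the stack is the rate-limiting cost at short durations and the electrolyte at long ones, which is the kind of target-setting insight the modeling is meant to deliver.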
    Once those targets are identified, researchers working in those areas have a better idea of what they need to focus on to make a particular technology work, Brushett says.
    “That’s the thing I’ve been most proud of from our research — hopefully opening up or demystifying the field and allowing a more diverse set of researchers to enter and to add value, which I think is important in terms of growing the science and developing new ideas,” he says.

  • Researchers decipher structure of promising battery materials

    A class of materials called metal organic frameworks, or MOFs, has attracted considerable interest over the last several years for a variety of potential energy-related applications — especially since researchers discovered that these typically insulating materials could also be made electrically conductive.
    Thanks to MOFs’ extraordinary combination of porosity and conductivity, this finding opened the possibility of new applications in batteries, fuel cells, supercapacitors, electrocatalysts, and specialized chemical sensors. But the process of developing specific MOF materials that possess the desired characteristics has been slow. That’s largely because it’s been hard to figure out their exact molecular structure and how it influences the material’s properties.
    Now, researchers at MIT and other institutions have found a way to control the growth of crystals of several kinds of MOFs. This made it possible to produce crystals large enough to be probed by a battery of tests, enabling the team to finally decode the structure of these materials, which resemble the two-dimensional hexagonal lattices of materials like graphene.
    The findings are described today in the journal Nature Materials, in a paper by a team of 20 at MIT and other universities in the U.S., China, and Sweden, led by W. M. Keck Professor of Energy Mircea Dincă from MIT’s Department of Chemistry.
    Since conductive MOFs were first discovered a few years ago, Dincă says, many teams have been working to develop versions for many different applications, “but nobody had been able to get a structure of the material with so much detail.” When the details of those structures are better understood, he says, “it helps you design better materials, and much faster. And that’s what we’ve done here: We provided the first detailed crystal structure at atomic resolution.”
    The difficulty in growing crystals that were large enough for such studies, he says, lies in the chemical bonds within the MOFs. These materials consist of a lattice of metal atoms and organic molecules that tend to form into crooked needle- or thread-like crystals, because the chemical bonds that connect the atoms in the plane of their hexagonal lattice are harder to form and harder to break. In contrast, the bonds in the vertical direction are much weaker and so keep breaking and reforming at a faster rate, causing the structures to rise faster than they can spread out. The resulting spindly crystals were far too small to be characterized by most available tools.
    The team solved that problem by changing the molecular structure of one of the organic compounds in the MOF so that it changed the balance of electron density and the way it interacts with the metal. This reversed the imbalance in the bond strengths and growth rates, thus allowing much larger crystal sheets to form. These larger crystals were then analyzed using a battery of high-resolution diffraction-based imaging techniques.
    As was the case with graphene, finding ways to produce larger sheets of the material could be a key to unlocking the potential of this type of MOF, Dincă says. Initially graphene could only be produced by using sticky tape to peel off single-atom-thick layers from a block of graphite, but over time methods have been developed to directly produce sheets large enough to be useful. The hope is that the techniques developed in this study could help pave the way to similar advances for MOFs, Dincă says.
    “This is basically providing a basis and a blueprint for making large crystals of two-dimensional MOFs,” he says.
    As with graphene, but unlike most other conductive materials, the conductive MOFs have a strong directionality to their electrical conductivity: They conduct much more freely along the plane of the sheet of material than in the perpendicular direction.
    This property, combined with the material’s very high porosity, could make it a strong candidate to be used as an electrode material for batteries, fuel cells, or supercapacitors. And when its organic components have certain groups of atoms attached to them that bond to particular other compounds, they could be used as very sensitive chemical detectors.
    Graphene and the handful of other 2D materials known have opened up a wide swath of research in potential applications in electronics and other fields, but those materials have essentially fixed properties. Because MOFs share many of those materials’ characteristics, but form a broad family of possible variations with varying properties, they should allow researchers to design the specific kinds of materials needed for a particular use, Dincă says.
    For fuel cells, for example, “you want something that has a lot of active sites” for reactivity on the large surface area provided by the structure with its open latticework, he says. Or for a sensor to monitor levels of a particular gas such as carbon dioxide, “you want something that is specific and doesn’t give false positives.” These kinds of properties can be engineered in through the selection of the organic compounds used to make the MOFs, he says.
    The team included researchers from MIT’s departments of Chemistry, Biology, and Electrical Engineering and Computer Science; Peking University and the Shanghai Advanced Research University in China; Stockholm University in Sweden; the University of Oregon; and Purdue University. The work was supported by the U.S. Army Research Office.

  • Powering through the coming energy transition

    Aiming to avoid the worst effects of climate change, from severe droughts to extreme coastal flooding, the nearly 200 nations that signed the 2015 Paris Agreement set a long-term goal of keeping global warming well below 2 degrees Celsius. Achieving that goal will require dramatic reductions in greenhouse gas emissions, primarily through a global transition to low-carbon energy technologies. In the power sector, these include solar, wind, biomass, nuclear, and carbon capture and storage (CCS). According to more than half of the models cited in the Intergovernmental Panel on Climate Change’s (IPCC) Fifth Assessment Report, CCS will be required to realize the Paris goal, but to what extent will it need to be deployed to ensure that outcome?
    A new study in Climate Change Economics, led by the MIT Joint Program on the Science and Policy of Global Change, projects the likely role of CCS in the power sector in a portfolio of low-carbon technologies. Using the Joint Program’s multi-region, multi-sector energy-economic modeling framework to quantify the economic and technological competition among low-carbon technologies as well as the impact of technology transfers between countries, the study assessed the potential of CCS and its competitors in mitigating carbon emissions in the power sector under a policy scenario aligned with the 2 C Paris goal.
    The researchers found that under this scenario and the model’s baseline estimates of technology costs and performance, CCS will likely be incorporated in nearly 40 percent of global electricity production by 2100 — one-third in coal-fired power plants, and two-thirds in those run on natural gas.
    “Our projections show that CCS can play a major role in the second half of this century in mitigating carbon emissions in the power sector,” says Jennifer Morris, an MIT Joint Program research scientist and the lead author of the study. “But in order for CCS to be well-positioned to provide stable and reliable power during that time frame, research and development will need to be scaled up.”
    That would require a considerable expansion of today’s nearly four dozen commercial-scale carbon capture projects around the globe, about half of which are in development.
    The study also found that the extent of CCS deployment, especially coal CCS, depends on the assumed fraction of carbon captured in CCS power plants. Under a stringent climate policy with high carbon prices, the penalty on uncaptured emissions can make CCS technologies uneconomical and hinder their expansion. Adding options for higher capture rates or offsetting uncaptured emissions (e.g., by co-firing with biomass, which has already captured carbon through its cultivation and so would produce net negative emissions when combusted) can lead to greater deployment of CCS.
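    The capture-rate effect can be sketched with a toy calculation (hypothetical plant costs, not the study's model): a carbon price applied to the uncaptured share of emissions raises a CCS plant's effective cost of electricity, and that penalty shrinks sharply as the capture rate rises.

```python
def cost_with_carbon_price(base_cost, emissions_rate, capture_rate, co2_price):
    """Effective $/MWh: generation cost plus carbon price on uncaptured CO2.

    emissions_rate is tonnes of CO2 per MWh before capture.
    """
    uncaptured = emissions_rate * (1 - capture_rate)
    return base_cost + uncaptured * co2_price

# Hypothetical coal-CCS plant: $70/MWh, 0.9 t CO2/MWh before capture.
for capture in (0.90, 0.99):
    for price in (50, 200):  # $ per tonne of CO2
        c = cost_with_carbon_price(70.0, 0.9, capture, price)
        print(f"capture {capture:.0%}, CO2 ${price}/t: ${c:.1f}/MWh")
```

    In this sketch, at a $200/t carbon price the jump from 90 to 99 percent capture cuts the carbon penalty from $18 to under $2 per MWh, illustrating why higher capture rates (or biomass co-firing) favor greater CCS deployment under stringent policy.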
    According to the study, CCS deployment will likely vary on a regional basis, with the United States and Europe depending primarily on gas CCS, China on coal CCS, and India embracing both options. Comparing projections of demands for CCS to an assessment of the planet’s capacity to store CO2, the authors found that CO2 storage potential is larger than storage demand at both global and regional scales.
    Finally, in evaluating the comparative costs of competing low-carbon technologies, the study found that nuclear generation, if public acceptance and economic issues are resolved, could substitute for CCS in providing clean dispatchable power. Renewables could also outcompete CCS, depending on how the costs of intermittency (i.e., systems that keep the lights on when the sun doesn’t shine or the wind doesn’t blow) are defined. Progress in resolving technical and economic challenges related to intermittency could reduce the need for accelerated CCS deployment.
    Ultimately, the authors determined that the power sector will continue to rely on a mix of technological options, and the conditions that favor a particular mix of technologies differ by region.
    “This suggests that policymakers should not pick a winner, but rather create an environment where all technologies compete on an economic basis,” says Sergey Paltsev, deputy director of the MIT Joint Program and a co-author of the study. “CCS has great potential to be a competitive option, and that potential can increase with additional research and development related to capture rates, CO2 transport and storage, and applications of CCS technologies to areas outside of power generation.”
    To that end, MIT Joint Program researchers are pursuing an in-depth analysis of the options and costs for the transportation and long-term storage of CO2 emissions captured by CCS technology. They are also assessing the potential of CCS in hard-to-abate economic sectors such as cement, iron and steel, and fertilizer production.
    This research was supported by sponsors of the MIT Joint Program and by ExxonMobil through its membership in the MIT Energy Initiative.