More stories

  • Pricing carbon, valuing people

    In November, inflation hit a 39-year high in the United States. The consumer price index was up 6.8 percent from the previous year due to major increases in the cost of rent, food, motor vehicles, gasoline, and other common household expenses. While inflation impacts the entire country, its effects are not felt equally. At greatest risk are low- and middle-income Americans who may lack sufficient financial reserves to absorb such economic shocks.

    Meanwhile, scientists, economists, and activists across the political spectrum continue to advocate for another potential systemic economic change that many fear will also put lower-income Americans at risk: the imposition of a national carbon price, fee, or tax. Framed by proponents as the most efficient and cost-effective way to reduce greenhouse gas emissions and meet climate targets, a carbon penalty would incentivize producers and consumers to shift expenditures away from carbon-intensive products and services (e.g., coal or natural gas-generated electricity) and toward low-carbon alternatives (e.g., 100 percent renewable electricity). But if not implemented in a way that takes differences in household income into account, this policy strategy, like inflation, could place an unequal and untenable economic burden on low- and middle-income Americans.         

    To garner support from policymakers, carbon-penalty proponents have advocated for policies that recycle revenues from carbon penalties to all or lower-income taxpayers in the form of payroll tax reductions or lump-sum payments. And yet some of these proposed policies run the risk of reducing the overall efficiency of the U.S. economy, which would lower the nation’s GDP and impede its economic growth.

    This raises the question: Is there a sweet spot at which a national carbon-penalty revenue-recycling policy can both avoid inflicting economic harm on lower-income Americans at the household level and preserve economic efficiency at the national level?

    In search of that sweet spot, researchers at the MIT Joint Program on the Science and Policy of Global Change assess the economic impacts of four different carbon-penalty revenue-recycling policies: direct rebates from revenues to households via lump-sum transfers; indirect refunding of revenues to households via a proportional reduction in payroll taxes; direct rebates from revenues to households, but only for low- and middle-income groups, with remaining revenues recycled via a proportional reduction in payroll taxes; and direct, higher rebates for poor households, with remaining revenues recycled via a proportional reduction in payroll taxes.
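The contrast between the first two policy options can be illustrated with a toy calculation (hypothetical numbers, not figures from the study): a lump-sum rebate returns the same dollar amount to every household, while a payroll-tax cut returns revenue roughly in proportion to income.

```python
# Illustrative sketch (made-up numbers, not the study's data): compare the
# net household impact of two ways to recycle carbon-penalty revenue.

# Stylized income groups of equal size: annual income and carbon footprint.
households = {
    "low":    {"income": 30_000,  "tons_co2": 10},
    "middle": {"income": 70_000,  "tons_co2": 15},
    "high":   {"income": 150_000, "tons_co2": 25},
}
carbon_price = 50  # dollars per ton of CO2

revenue = sum(h["tons_co2"] * carbon_price for h in households.values())

# Policy 1: lump-sum rebate -- every household receives an equal share.
lump_sum = revenue / len(households)
net_lump = {k: lump_sum - h["tons_co2"] * carbon_price
            for k, h in households.items()}

# Policy 2: payroll-tax cut -- refund roughly proportional to income.
total_income = sum(h["income"] for h in households.values())
net_payroll = {k: revenue * h["income"] / total_income
                  - h["tons_co2"] * carbon_price
               for k, h in households.items()}

for k in households:
    print(f"{k}: lump-sum {net_lump[k]:+.0f}, payroll cut {net_payroll[k]:+.0f}")
```

With these assumed numbers the lump-sum policy leaves the low-income group better off and the high-income group worse off (progressive), while the income-proportional refund does the reverse (regressive), which is the trade-off the study quantifies.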

    To perform the assessment, the Joint Program researchers integrate a U.S. economic model (MIT U.S. Regional Energy Policy) with a dataset (Bureau of Labor Statistics’ Consumer Expenditure Survey) providing consumption patterns and other socioeconomic characteristics for 15,000 U.S. households. Using the combined model, they evaluate the distributional impacts and potential trade-offs between economic equity and efficiency of all four carbon-penalty revenue-recycling policies.

    The researchers find that household rebates have progressive impacts on consumers’ financial well-being, with the greatest benefits going to the lowest-income households, while policies centered on improving the efficiency of the economy (e.g., payroll tax reductions) have slightly regressive household-level financial impacts. In a nutshell, the trade-off is between rebates that provide more equity and less economic efficiency versus tax cuts that deliver the opposite result. The latter two policy options, which combine rebates to lower-income households with payroll tax reductions, result in an optimal blend of sufficiently progressive financial results at the household level and economic efficiency at the national level. Results of the study are published in the journal Energy Economics.

    “We have determined that only a portion of carbon-tax revenues is needed to compensate low-income households and thus reduce inequality, while the rest can be used to improve the economy by reducing payroll or other distortionary taxes,” says Xaquin García-Muros, lead author of the study, a postdoc at the MIT Joint Program who is affiliated with the Basque Centre for Climate Change in Spain. “Therefore, we can eliminate potential trade-offs between efficiency and equity, and promote a just and efficient energy transition.”

    “If climate policies increase the gap between rich and poor households or reduce the affordability of energy services, then these policies might be rejected by the public and, as a result, attempts to decarbonize the economy will be less efficient,” says Joint Program Deputy Director Sergey Paltsev, a co-author of the study. “Our findings provide guidance to decision-makers to advance more well-designed policies that deliver economic benefits to the nation as a whole.” 

    The study’s novel integration of a national economic model with household microdata creates a new and powerful platform to further investigate key differences among households that can help inform policies aimed at a just transition to a low-carbon economy.

  • An energy-storage solution that flows like soft-serve ice cream

    Batteries made from an electrically conductive mixture the consistency of molasses could help solve a critical piece of the decarbonization puzzle. An interdisciplinary team from MIT has found that an electrochemical technology called a semisolid flow battery can be a cost-competitive form of energy storage and backup for variable renewable energy (VRE) sources such as wind and solar. The group’s research is described in a paper published in Joule.

    “The transition to clean energy requires energy storage systems of different durations for when the sun isn’t shining and the wind isn’t blowing,” says Emre Gençer, a research scientist with the MIT Energy Initiative (MITEI) and a member of the team. “Our work demonstrates that a semisolid flow battery could be a lifesaving as well as economical option when these VRE sources can’t generate power for a day or longer — in the case of natural disasters, for instance.”

    The rechargeable zinc-manganese dioxide (Zn-MnO2) battery the researchers created beat out other long-duration energy storage contenders. “We performed a comprehensive, bottom-up analysis to understand how the battery’s composition affects performance and cost, looking at all the trade-offs,” says Thaneer Malai Narayanan SM ’18, PhD ’21. “We showed that our system can be cheaper than others, and can be scaled up.”

    Narayanan, who conducted this work at MIT as part of his doctorate in mechanical engineering, is the lead author of the paper. Additional authors include Gençer, Yunguang Zhu, a postdoc in the MIT Electrochemical Energy Lab; Gareth McKinley, the School of Engineering Professor of Teaching Innovation and professor of mechanical engineering at MIT; and Yang Shao-Horn, the JR East Professor of Engineering, a professor of mechanical engineering and of materials science and engineering, and a member of the Research Laboratory of Electronics (RLE), who directs the MIT Electrochemical Energy Lab.

    Going with the flow

    In 2016, Narayanan began his graduate studies, joining the Electrochemical Energy Lab, a hotbed of research on solutions to mitigate climate change centered on innovative battery chemistry and the decarbonization of fuels and chemicals. One exciting opportunity for the lab: developing low- and no-carbon backup energy systems suitable for grid-scale needs when VRE generation flags.

    While the lab cast a wide net, investigating energy conversion and storage using solid oxide fuel cells, lithium-ion batteries, and metal-air batteries, among others, Narayanan took a particular interest in flow batteries. In these systems, two different chemical (electrolyte) solutions with either negative or positive ions are pumped from separate tanks, meeting across a membrane (called the stack). Here, the ion streams react, converting electrical energy to chemical energy — in effect, charging the battery. When there is demand for this stored energy, the solution gets pumped back to the stack to convert chemical energy into electrical energy again.

    The duration of time that flow batteries can discharge, releasing the stored electricity, is determined by the volume of positively and negatively charged electrolyte solutions streaming through the stack. In theory, as long as these solutions keep flowing, reacting, and converting the chemical energy to electrical energy, the battery systems can provide electricity.
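The relationship described above can be sketched in a back-of-the-envelope calculation (illustrative numbers, not values from the paper): stored energy scales with electrolyte volume, so discharge duration at a given power does too.

```python
# Minimal sketch of the volume/duration relationship in a flow battery
# (illustrative numbers): energy capacity lives in the tanks, power in the
# stack, so the two can be sized independently.

def discharge_hours(tank_volume_l, energy_density_wh_per_l, power_kw):
    """Hours of discharge sustainable at a given power draw."""
    stored_kwh = tank_volume_l * energy_density_wh_per_l / 1000
    return stored_kwh / power_kw

# Doubling tank volume doubles duration without touching the stack.
assert discharge_hours(2000, 50, 10) == 2 * discharge_hours(1000, 50, 10)
print(discharge_hours(1000, 50, 10))  # 5.0 hours at 10 kW
```

This decoupling of energy capacity (tanks) from power rating (stack) is what makes flow architectures attractive for long-duration storage: extending duration mostly means adding inexpensive tank volume.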

    “For backup lasting more than a day, the architecture of flow batteries suggests they can be a cheap option,” says Narayanan. “You recharge the solution in the tanks from sun and wind power sources.” This renders the entire system carbon free.

    But while the promise of flow battery technologies has beckoned for at least a decade, the uneven performance and expense of materials required for these battery systems have slowed their implementation. So, Narayanan set out on an ambitious journey: to design and build a flow battery that could back up VRE systems for a day or more, storing and discharging energy with the same or greater efficiency than backup rivals; and to determine, through rigorous cost analysis, whether such a system could prove economically viable as a long-duration energy option.

    Multidisciplinary collaborators

    To attack this multipronged challenge, Narayanan’s project brought together, in his words, “three giants, scientists all well-known in their fields”: Shao-Horn, who specializes in chemical physics, electrochemical science, and materials design; Gençer, who creates detailed economic models of emergent energy systems at MITEI; and McKinley, an expert in rheology, the physics of flow. These three also served as his thesis advisors.

    “I was excited to work in such an interdisciplinary team, which offered a unique opportunity to create a novel battery architecture by designing charge transfer and ion transport within flowable semi-solid electrodes, and to guide battery engineering using techno-economics of such flowable batteries,” says Shao-Horn.

    While other flow battery systems in contention, such as the vanadium redox flow battery, offer the storage capacity and energy density to back up megawatt and larger power systems, they depend on expensive chemical ingredients that make them bad bets for long-duration purposes. Narayanan was on the hunt for less-pricey chemical components that also feature rich energy potential.

    Through a series of bench experiments, the researchers came up with a novel electrode (electrical conductor) for the battery system: a mixture containing dispersed manganese dioxide (MnO2) particles, shot through with an electrically conductive additive, carbon black. This compound reacts with a conductive zinc solution or zinc plate at the stack, enabling efficient electrochemical energy conversion. The fluid properties of this battery are far removed from the watery solutions used by other flow batteries.

    “It’s a semisolid — a slurry,” says Narayanan. “Like thick, black paint, or perhaps a soft-serve ice cream,” suggests McKinley. The carbon black adds the pigment and the electric punch. To arrive at the optimal electrochemical mix, the researchers tweaked their formula many times.

    “These systems have to be able to flow under reasonable pressures, but also have a weak yield stress so that the active MnO2 particles don’t sink to the bottom of the flow tanks when the system isn’t being used, as well as not separate into a clear fluid phase and a dense paste of carbon particles and MnO2,” says McKinley.

    This series of experiments informed the technoeconomic analysis. By “connecting the dots between composition, performance, and cost,” says Narayanan, he and Gençer were able to make system-level cost and efficiency calculations for the Zn-MnO2 battery.

    “Assessing the cost and performance of early technologies is very difficult, and this was an example of how to develop a standard method to help researchers at MIT and elsewhere,” says Gençer. “One message here is that when you include the cost analysis at the development stage of your experimental work, you get an important early understanding of your project’s cost implications.”

    In their final round of studies, Gençer and Narayanan compared the Zn-MnO2 battery to a set of equivalent electrochemical battery and hydrogen backup systems, looking at the capital costs of running them at durations of eight, 24, and 72 hours. Their findings surprised them: For battery discharges longer than a day, their semisolid flow battery beat out lithium-ion batteries and vanadium redox flow batteries. This was true even when factoring in the heavy expense of pumping the MnO2 slurry from tank to stack. “I was skeptical, and not expecting this battery would be competitive, but once I did the cost calculation, it was plausible,” says Gençer.
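The duration economics behind this comparison can be sketched with a simple two-part cost model (hypothetical costs, not the paper's figures): a storage system's capital cost splits into a power component (stack, pumps, power electronics) and an energy component (tanks, electrolyte), so a technology with cheap energy capacity wins as duration grows even if its power-side equipment is expensive.

```python
# Hedged sketch of storage duration economics (hypothetical dollar figures,
# not results from the paper): total capital cost per kW of discharge power
# is a fixed power-side cost plus an energy-side cost that scales with hours
# of duration.

def capital_cost_per_kw(power_cost, energy_cost_per_kwh, duration_h):
    return power_cost + energy_cost_per_kwh * duration_h

# Hypothetical: li-ion has low power-side cost but a high per-kWh cost;
# a semisolid flow battery the reverse.
li_ion = lambda d: capital_cost_per_kw(300, 200, d)
flow   = lambda d: capital_cost_per_kw(1500, 50, d)

for d in (8, 24, 72):
    cheaper = "flow" if flow(d) < li_ion(d) else "li-ion"
    print(f"{d} h: li-ion ${li_ion(d)}, flow ${flow(d)} -> {cheaper}")
```

With these assumed numbers the two curves cross near eight hours, and the flow battery pulls ahead at 24 and 72 hours, mirroring the qualitative finding that discharges longer than a day favor the semisolid system.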

    But carbon-free battery backup is a very Goldilocks-like business: Different situations require different-duration solutions, whether an anticipated overnight loss of solar power, or a longer-term, climate-based disruption in the grid. “Lithium-ion is great for backup of eight hours and under, but the materials are too expensive for longer periods,” says Gençer. “Hydrogen is super expensive for very short durations, and good for very long durations, and we will need all of them.” This means it makes sense to continue working on the Zn-MnO2 system to see where it might fit in.

    “The next step is to take our battery system and build it up,” says Narayanan, who is working now as a battery engineer. “Our research also points the way to other chemistries that could be developed under the semi-solid flow battery platform, so we could be seeing this kind of technology used for energy storage in our lifetimes.”

    This research was supported by Eni S.p.A. through MITEI. Thaneer Malai Narayanan received an Eni-sponsored MIT Energy Fellowship during his work on the project.

  • Design’s new frontier

    In the 1960s, the advent of computer-aided design (CAD) sparked a revolution in design. For his PhD thesis in 1963, MIT Professor Ivan Sutherland developed Sketchpad, a game-changing software program that enabled users to draw, move, and resize shapes on a computer. Over the course of the next few decades, CAD software reshaped how everything from consumer products to buildings and airplanes was designed.

    “CAD was part of the first wave in computing in design. The ability of researchers and practitioners to represent and model designs using computers was a major breakthrough and still is one of the biggest outcomes of design research, in my opinion,” says Maria Yang, Gail E. Kendall Professor and director of MIT’s Ideation Lab.

    Innovations in 3D printing during the 1980s and 1990s expanded CAD’s capabilities beyond traditional injection molding and casting methods, providing designers even more flexibility. Designers could sketch, ideate, and develop prototypes or models faster and more efficiently. Meanwhile, with the push of a button, software like that developed by Professor Emeritus David Gossard of MIT’s CAD Lab could solve equations simultaneously to produce a new geometry on the fly.

    In recent years, mechanical engineers have expanded the computing tools they use to ideate, design, and prototype. More sophisticated algorithms and the explosion of machine learning and artificial intelligence technologies have sparked a second revolution in design engineering.

    Researchers and faculty at MIT’s Department of Mechanical Engineering are utilizing these technologies to re-imagine how the products, systems, and infrastructures we use are designed. These researchers are at the forefront of the new frontier in design.

    Computational design

    Faez Ahmed wants to reinvent the wheel, or at least the bicycle wheel. He and his team at MIT’s Design Computation & Digital Engineering Lab (DeCoDE) use an artificial intelligence-driven design method that can generate entirely novel and improved designs for a range of products — including the traditional bicycle. They create advanced computational methods to blend human-driven design with simulation-based design.

    “The focus of our DeCoDE lab is computational design. We are looking at how we can create machine learning and AI algorithms to help us discover new designs that are optimized based on specific performance parameters,” says Ahmed, an assistant professor of mechanical engineering at MIT.

    For their work using AI-driven design for bicycles, Ahmed and his collaborator Professor Daniel Frey wanted to make it easier to design customizable bicycles, and by extension, encourage more people to use bicycles over transportation methods that emit greenhouse gases.

    To start, the group gathered a dataset of 4,500 bicycle designs. Using this massive dataset, they tested the limits of what machine learning could do. First, they developed algorithms to group similar-looking bicycles together and explore the design space. They then created machine learning models that could successfully predict what components are key in identifying a bicycle style, such as a road bike versus a mountain bike.

    Once the algorithms were good enough at identifying bicycle designs and parts, the team proposed novel machine learning tools that could use this data to create a unique and creative design for a bicycle based on certain performance parameters and rider dimensions.

    Ahmed used a generative adversarial network — or GAN — as the basis of this model. GAN models utilize neural networks that can create new designs based on vast amounts of data. However, using GAN models alone would result in homogeneous designs that lack novelty and can’t be assessed in terms of performance. To address these issues in design problems, Ahmed has developed a new method he calls “PaDGAN,” a performance-augmented diverse GAN.
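The core idea of rewarding both performance and diversity can be illustrated conceptually (this is a toy sketch, not the paper's actual formulation): score a batch of generated designs by their average predicted quality plus a term measuring how different they are from one another, so that identical high-quality clones lose to a varied set.

```python
# Conceptual sketch only (not PaDGAN's real loss): augment a quality score
# with a diversity term so homogeneous batches of designs are penalized.
import math

def performance(design):
    # Stand-in surrogate; a real system would use a learned performance model.
    return -sum((x - 0.5) ** 2 for x in design)

def diversity(designs):
    # Mean pairwise Euclidean distance between generated designs.
    pairs = [(a, b) for i, a in enumerate(designs) for b in designs[i + 1:]]
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

def augmented_score(designs, weight=1.0):
    quality = sum(performance(d) for d in designs) / len(designs)
    return quality + weight * diversity(designs)

clones = [(0.5, 0.5)] * 3                      # high quality, zero diversity
varied = [(0.4, 0.5), (0.5, 0.6), (0.6, 0.4)]  # slightly worse, more diverse
assert augmented_score(varied) > augmented_score(clones)
```

The actual method uses a more principled diversity formulation inside GAN training, but the trade-off it encodes is the same: give up a little per-design quality to cover more of the design space.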

    “When we apply this type of model, what we see is that we can get large improvements in the diversity, quality, as well as novelty of the designs,” Ahmed explains.

    Using this approach, Ahmed’s team developed an open-source computational design tool for bicycles freely available on their lab website. They hope to further develop a set of generalizable tools that can be used across industries and products.

    Longer term, Ahmed has his sights set on loftier goals. He hopes the computational design tools he develops could lead to “design democratization,” putting more power in the hands of the end user.

    “With these algorithms, you can have more individualization where the algorithm assists a customer in understanding their needs and helps them create a product that satisfies their exact requirements,” he adds.

    Using algorithms to democratize the design process is a goal shared by Stefanie Mueller, an associate professor in electrical engineering and computer science and mechanical engineering.

    Personal fabrication

    Platforms like Instagram give users the freedom to instantly edit their photographs or videos using filters. In one click, users can alter the palette, tone, and brightness of their content by applying filters that range from bold colors to sepia-toned or black-and-white. Mueller, X-Window Consortium Career Development Professor, wants to bring this concept of the Instagram filter to the physical world.

    “We want to explore how digital capabilities can be applied to tangible objects. Our goal is to bring reprogrammable appearance to the physical world,” explains Mueller, director of the HCI Engineering Group based out of MIT’s Computer Science and Artificial Intelligence Laboratory.

    Mueller’s team utilizes a combination of smart materials, optics, and computation to advance personal fabrication technologies that would allow end users to alter the design and appearance of the products they own. They tested this concept in a project they dubbed “Photo-Chromeleon.”

    First, a mix of photochromic cyan, magenta, and yellow dyes are airbrushed onto an object — in this instance, a 3D sculpture of a chameleon. Using software they developed, the team sketches the exact color pattern they want to achieve on the object itself. An ultraviolet light shines on the object to activate the dyes.

    To actually create the physical pattern on the object, Mueller has developed an optimization algorithm to use alongside a normal office projector outfitted with red, green, and blue LED lights. These lights shine on specific pixels on the object for a given period of time to physically change the makeup of the photochromic pigments.

    “This fancy algorithm tells us exactly how long we have to shine the red, green, and blue light on every single pixel of an object to get the exact pattern we’ve programmed in our software,” says Mueller.
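The per-pixel timing idea can be sketched under a simplifying assumption (first-order bleaching kinetics with made-up rate constants; the team's actual optimization is more sophisticated): after UV saturates the dyes, each colored LED channel desaturates one dye, and the dwell time follows from how much of that dye the target color should retain.

```python
# Minimal sketch (assumed first-order bleaching kinetics, hypothetical rate
# constants -- not the actual Photo-Chromeleon algorithm): compute how long
# a colored LED must shine on a pixel to bleach its dye down to a target.
import math

# Hypothetical bleaching rate constants (1/s) for the dye each LED affects.
RATES = {"red": 0.10, "green": 0.08, "blue": 0.12}

def exposure_seconds(channel, target_saturation):
    """Seconds of light to bleach a dye from full saturation to target."""
    if not 0 < target_saturation <= 1:
        raise ValueError("target saturation must be in (0, 1]")
    # First-order decay: saturation(t) = exp(-k * t)
    return -math.log(target_saturation) / RATES[channel]

t = exposure_seconds("red", 0.5)  # pixel should keep 50% of this dye
print(f"{t:.1f} s")  # ~6.9 s with these assumed rates
```

Solving this per pixel and per channel yields the exposure schedule; the projector then simply dwells on each pixel for the computed time.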

    Giving this freedom to the end user enables limitless possibilities. Mueller’s team has applied this technology to iPhone cases, shoes, and even cars. In the case of shoes, Mueller envisions a shoebox embedded with UV and LED light projectors. Users could put their shoes in the box overnight and the next day have a pair of shoes in a completely new pattern.

    Mueller wants to expand her personal fabrication methods to the clothes we wear. Rather than utilize the light projection technique developed in the Photo-Chromeleon project, her team is exploring the possibility of weaving LEDs directly into clothing fibers, allowing people to change their shirt’s appearance as they wear it. These personal fabrication technologies could completely alter consumer habits.

    “It’s very interesting for me to think about how these computational techniques will change product design on a high level,” adds Mueller. “In the future, a consumer could buy a blank iPhone case and update the design on a weekly or daily basis.”

    Computational fluid dynamics and participatory design

    Another team of mechanical engineers, including Sili Deng, the Brit (1961) & Alex (1949) d’Arbeloff Career Development Professor, is developing a different kind of design tool that could have a large impact on individuals in low- and middle-income countries across the world.

    As Deng walked down the hallway of Building 1 on MIT’s campus, a monitor playing a video caught her eye. The video featured work done by mechanical engineers and MIT D-Lab on developing cleaner burning briquettes for cookstoves in Uganda. Deng immediately knew she wanted to get involved.

    “As a combustion scientist, I’ve always wanted to work on such a tangible real-world problem, but the field of combustion tends to focus more heavily on the academic side of things,” explains Deng.

    After reaching out to colleagues in MIT D-Lab, Deng joined a collaborative effort to develop a new cookstove design tool for the 3 billion people across the world who burn solid fuels to cook and heat their homes. These stoves often emit soot and carbon monoxide, leading not only to millions of deaths each year, but also worsening the world’s greenhouse gas emission problem.

    The team is taking a three-pronged approach to developing this solution, using a combination of participatory design, physical modeling, and experimental validation to create a tool that will lead to the production of high-performing, low-cost energy products.

    Deng and her team in the Deng Energy and Nanotechnology Group use physics-based modeling for the combustion and emission process in cookstoves.

    “My team is focused on computational fluid dynamics. We use computational and numerical studies to understand the flow field where the fuel is burned and releases heat,” says Deng.

    These flow mechanics are crucial to understanding how to minimize heat loss and make cookstoves more efficient, as well as learning how dangerous pollutants are formed and released in the process.

    Using computational methods, Deng’s team performs three-dimensional simulations of the complex chemistry and transport coupling at play in the combustion and emission processes. They then use these simulations to build a combustion model for how fuel is burned and a pollution model that predicts carbon monoxide emissions.
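The kind of chemistry such models rest on can be illustrated with textbook Arrhenius kinetics (a generic example, not the group's actual cookstove model): reaction rate constants depend exponentially on temperature, which is why resolving the temperature field inside a stove matters so much for predicting whether carbon monoxide burns out or escapes.

```python
# Generic textbook illustration (not the Deng group's model): Arrhenius
# kinetics show how strongly reaction rates respond to temperature.
import math

R = 8.314  # J/(mol*K), universal gas constant

def arrhenius(pre_exponential, activation_energy_j_mol, temp_k):
    """Rate constant k = A * exp(-Ea / (R*T))."""
    return pre_exponential * math.exp(-activation_energy_j_mol / (R * temp_k))

# Hypothetical parameters: a ~200 K rise in flame temperature speeds the
# oxidation rate by more than an order of magnitude.
k_cool = arrhenius(1e10, 1.5e5, 1100)
k_hot = arrhenius(1e10, 1.5e5, 1300)
assert k_hot > 10 * k_cool
```

A cool, poorly mixed region in a stove can therefore quench CO oxidation almost entirely, which is exactly the coupling between flow field and emissions that the 3D simulations resolve.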

    Deng’s models are used by a group led by Daniel Sweeney in MIT D-Lab for experimental validation in stove prototypes. Finally, Professor Maria Yang uses participatory design methods to integrate user feedback, ensuring the design tool can actually be used by people across the world.

    The end goal for this collaborative team is to not only provide local manufacturers with a prototype they could produce themselves, but to also provide them with a tool that can tweak the design based on local needs and available materials.

    Deng sees wide-ranging applications for the computational fluid dynamics her team is developing.

    “We see an opportunity to use physics-based modeling, augmented with a machine learning approach, to come up with chemical models for practical fuels that help us better understand combustion. Therefore, we can design new methods to minimize carbon emissions,” she adds.

    While Deng is utilizing simulations and machine learning at the molecular level to improve designs, others are taking a more macro approach.

    Designing intelligent systems

    When it comes to intelligent design, Navid Azizan thinks big. He hopes to help create future intelligent systems that are capable of making decisions autonomously by using the enormous amounts of data emerging from the physical world. From smart robots and autonomous vehicles to smart power grids and smart cities, Azizan focuses on the analysis, design, and control of intelligent systems.

    Achieving such massive feats takes a truly interdisciplinary approach that draws upon various fields such as machine learning, dynamical systems, control, optimization, statistics, and network science, among others.

    “Developing intelligent systems is a multifaceted problem, and it really requires a confluence of disciplines,” says Azizan, assistant professor of mechanical engineering with a dual appointment in MIT’s Institute for Data, Systems, and Society (IDSS). “To create such systems, we need to go beyond standard approaches to machine learning, such as those commonly used in computer vision, and devise algorithms that can enable safe, efficient, real-time decision-making for physical systems.”

    For robot control to work in the complex dynamic environments that arise in the real world, real-time adaptation is key. If, for example, an autonomous vehicle is going to drive in icy conditions or a drone is operating in windy conditions, they need to be able to adapt to their new environment quickly.

    To address this challenge, Azizan and his collaborators at MIT and Stanford University have developed a new algorithm that combines adaptive control, a powerful methodology from control theory, with meta learning, a new machine learning paradigm.

    “This ‘control-oriented’ learning approach outperforms the existing ‘regression-oriented’ methods, which are mostly focused on just fitting the data, by a wide margin,” says Azizan.

    Another critical aspect of deploying machine learning algorithms in physical systems that Azizan and his team hope to address is safety. Deep neural networks are a crucial part of autonomous systems. They are used for interpreting complex visual inputs and making data-driven predictions of future behavior in real time. However, Azizan urges caution.

    “These deep neural networks are only as good as their training data, and their predictions can often be untrustworthy in scenarios not covered by their training data,” he says. Making decisions based on such untrustworthy predictions could lead to fatal accidents in autonomous vehicles or other safety-critical systems.

    To avoid these potentially catastrophic events, Azizan proposes that it is imperative to equip neural networks with a measure of their uncertainty. When the uncertainty is high, they can then be switched to a “safe policy.”

    In pursuit of this goal, Azizan and his collaborators have developed a new algorithm known as SCOD — Sketching Curvature for Out-of-Distribution Detection. This framework could be embedded within any deep neural network to equip it with a measure of its uncertainty.
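The "safe policy" pattern this enables is simple to sketch (the uncertainty score below is a stand-in; SCOD itself derives it from the network's curvature): act on the network's prediction only when its uncertainty is low enough, and otherwise fall back to a conservative action.

```python
# Sketch of uncertainty-gated decision-making (stand-in uncertainty score,
# hypothetical threshold and actions -- not the SCOD implementation itself).

UNCERTAINTY_THRESHOLD = 0.3  # assumed tuning parameter

def choose_action(prediction, uncertainty, safe_action="slow_and_stop"):
    """Fall back to a conservative action when the model is unsure."""
    if uncertainty > UNCERTAINTY_THRESHOLD:
        return safe_action
    return prediction

assert choose_action("proceed", 0.05) == "proceed"       # confident: act
assert choose_action("proceed", 0.90) == "slow_and_stop" # unsure: be safe
```

The value of a framework like SCOD is precisely that it supplies the uncertainty signal this gate needs without retraining the underlying network.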

    “This algorithm is model-agnostic and can be applied to neural networks used in various kinds of autonomous systems, whether it’s drones, vehicles, or robots,” says Azizan.

    Azizan hopes to continue working on algorithms for even larger-scale systems. He and his team are designing efficient algorithms to better control supply and demand in smart energy grids. According to Azizan, even if we create the most efficient solar panels and batteries, we can never achieve a sustainable grid powered by renewable resources without the right control mechanisms.

    Mechanical engineers like Ahmed, Mueller, Deng, and Azizan serve as the key to realizing the next revolution of computing in design.

    “MechE is in a unique position at the intersection of the computational and physical worlds,” Azizan says. “Mechanical engineers build a bridge between theoretical, algorithmic tools and real, physical world applications.”

    Sophisticated computational tools, coupled with the ground truth mechanical engineers have in the physical world, could unlock limitless possibilities for design engineering, well beyond what could have been imagined in those early days of CAD.

  • Coupling power and hydrogen sector pathways to benefit decarbonization

    Governments and companies worldwide are increasing their investments in hydrogen research and development, indicating a growing recognition that hydrogen could play a significant role in meeting global energy system decarbonization goals. Since hydrogen is light, energy-dense, storable, and produces no direct carbon dioxide emissions at the point of use, this versatile energy carrier has the potential to be harnessed in a variety of ways in a future clean energy system.

    Often considered in the context of grid-scale energy storage, hydrogen has garnered renewed interest, in part due to expectations that our future electric grid will be dominated by variable renewable energy (VRE) sources such as wind and solar, as well as decreasing costs for water electrolyzers — both of which could make clean, “green” hydrogen more cost-competitive with fossil-fuel-based production. But hydrogen’s versatility as a clean energy fuel also makes it an attractive option to meet energy demand and to open pathways for decarbonization in hard-to-abate sectors where direct electrification is difficult, such as transportation, buildings, and industry.

    “We’ve seen a lot of progress and analysis around pathways to decarbonize electricity, but we may not be able to electrify all end uses. This means that just decarbonizing electricity supply is not sufficient, and we must develop other decarbonization strategies as well,” says Dharik Mallapragada, a research scientist at the MIT Energy Initiative (MITEI). “Hydrogen is an interesting energy carrier to explore, but understanding the role for hydrogen requires us to study the interactions between the electricity system and a future hydrogen supply chain.”

    In a recent paper, researchers from MIT and Shell present a framework to systematically study the role and impact of hydrogen-based technology pathways in a future low-carbon, integrated energy system, taking into account interactions with the electric grid and the spatio-temporal variations in energy demand and supply. The developed framework co-optimizes infrastructure investment and operation across the electricity and hydrogen supply chain under various emissions price scenarios. When applied to a Northeast U.S. case study, the researchers find this approach results in substantial benefits — in terms of costs and emissions reduction — as it takes advantage of hydrogen’s potential to provide the electricity system with a large flexible load when produced through electrolysis, while also enabling decarbonization of difficult-to-electrify, end-use sectors.

    The research team includes Mallapragada; Guannan He, a postdoc at MITEI; Abhishek Bose, a graduate research assistant at MITEI; Clara Heuberger-Austin, a researcher at Shell; and Emre Gençer, a research scientist at MITEI. Their findings are published in the journal Energy & Environmental Science.

    Cross-sector modeling

    “We need a cross-sector framework to analyze each energy carrier’s economics and role across multiple systems if we are to really understand the cost/benefits of direct electrification or other decarbonization strategies,” says He.

    To do that analysis, the team developed the Decision Optimization of Low-carbon Power-HYdrogen Network (DOLPHYN) model, which allows the user to study the role of hydrogen in low-carbon energy systems, the effects of coupling the power and hydrogen sectors, and the trade-offs between various technology options across both supply chains — spanning production, transport, storage, and end use, and their impact on decarbonization goals.

    “We are seeing great interest from industry and government, because they are all asking questions about where to invest their money and how to prioritize their decarbonization strategies,” says Gençer. Heuberger-Austin adds, “Being able to assess the system-level interactions between electricity and the emerging hydrogen economy is of paramount importance to drive technology development and support strategic value chain decisions. The DOLPHYN model can be instrumental in tackling those kinds of questions.”

    For a predefined set of electricity and hydrogen demand scenarios, the model determines the least-cost technology mix across the power and hydrogen sectors while adhering to a variety of operation and policy constraints. The model can incorporate a range of technology options — from VRE generation to carbon capture and storage (CCS) used with both power and hydrogen generation to trucks and pipelines used for hydrogen transport. With its flexible structure, the model can be readily adapted to represent emerging technology options and evaluate their long-term value to the energy system.

    As an important addition, the model takes into account process-level carbon emissions by allowing the user to add a cost penalty on emissions in both sectors. “If you have a limited emissions budget, we are able to explore the question of where to prioritize the limited emissions to get the best bang for your buck in terms of decarbonization,” says Mallapragada.
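
    The effect of such an emissions penalty can be sketched as a merit-order comparison: each production route’s cost is augmented by its emissions times the carbon price, and the cheapest route wins. This is a toy illustration, not the DOLPHYN formulation; the route names, costs, and emission factors below are all hypothetical placeholders.

```python
# Toy merit-order comparison: how a rising emissions penalty shifts the
# least-cost hydrogen production route. All numbers are illustrative.

def effective_cost(base_cost, emissions, carbon_price):
    """Cost per kg of H2 including a penalty on process emissions."""
    return base_cost + emissions * carbon_price

# route: (base cost $/kg H2, emissions kg CO2 per kg H2) -- hypothetical
routes = {
    "natural gas reforming":       (1.5, 9.0),
    "reforming + carbon capture":  (2.0, 1.5),
    "grid electrolysis":           (4.0, 20.0),
    "electrolysis on surplus VRE": (2.5, 0.0),
}

for carbon_price in (0.0, 0.10, 0.40):  # $ per kg CO2
    best = min(routes, key=lambda r: effective_cost(*routes[r], carbon_price))
    print(f"carbon price ${carbon_price:.2f}/kg CO2 -> cheapest: {best}")
```

    As the penalty rises, the optimum moves from unabated reforming to reforming with capture and finally to electrolysis on surplus renewables, mirroring the kind of prioritization the model performs at much larger scale.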

    Insights from a case study

    To test their model, the researchers investigated the Northeast U.S. energy system under a variety of demand, technology, and carbon price scenarios. While their major conclusions can be generalized for other regions, the Northeast proved to be a particularly interesting case study. This region has current legislation and regulatory support for renewable generation, as well as increasing emission-reduction targets, a number of which are quite stringent. It also has a high demand for energy for heating — a sector that is difficult to electrify and could particularly benefit from hydrogen and from coupling the power and hydrogen systems.

    The researchers find that when combining the power and hydrogen sectors through electrolysis or hydrogen-based power generation, there is more operational flexibility to support VRE integration in the power sector and a reduced need for alternative grid-balancing supply-side resources such as battery storage or dispatchable gas generation, which in turn reduces the overall system cost. This increased VRE penetration also leads to a reduction in emissions compared to scenarios without sector-coupling. “The flexibility that electricity-based hydrogen production provides in terms of balancing the grid is as important as the hydrogen it is going to produce for decarbonizing other end uses,” says Mallapragada. They found this type of grid interaction to be more favorable than conventional hydrogen-based electricity storage, which can incur additional capital costs and efficiency losses when converting hydrogen back to power. This suggests that the role of hydrogen in the grid could be more beneficial as a source of flexible demand than as storage.
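
    The flexible-load behavior described here can be illustrated with a toy hourly dispatch in which an electrolyzer absorbs surplus renewable generation that would otherwise be curtailed. The profiles, the 60 MW electrolyzer size, and the roughly 50 kWh-per-kilogram conversion figure are illustrative assumptions, not values from the study.

```python
# Toy hourly dispatch: an electrolyzer acting as a flexible load that
# absorbs surplus variable renewable generation. Numbers are illustrative.

def dispatch(vre_mw, demand_mw, electrolyzer_mw, kwh_per_kg=50.0):
    """Return (hydrogen produced in kg, curtailed energy in MWh)."""
    h2_kg, curtailed_mwh = 0.0, 0.0
    for gen, load in zip(vre_mw, demand_mw):      # one entry per hour
        surplus = max(gen - load, 0.0)            # MW beyond demand this hour
        absorbed = min(surplus, electrolyzer_mw)  # flexible load soaks it up
        h2_kg += absorbed * 1000.0 / kwh_per_kg   # MWh -> kWh -> kg of H2
        curtailed_mwh += surplus - absorbed       # the rest is wasted
    return h2_kg, curtailed_mwh

vre = [120, 150, 180, 90, 40, 30]      # MW of wind/solar output
demand = [100, 100, 110, 100, 90, 90]  # MW of load

_, base_curtailed = dispatch(vre, demand, electrolyzer_mw=0)
h2, curtailed = dispatch(vre, demand, electrolyzer_mw=60)
print(f"curtailment without electrolyzer: {base_curtailed:.0f} MWh")
print(f"with a 60 MW electrolyzer: {curtailed:.0f} MWh curtailed, {h2:.0f} kg H2 made")
```

    Even this simple sketch shows the core mechanism: the electrolyzer converts would-be curtailment into hydrogen, reducing the need for other grid-balancing resources during surplus hours.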

    The researchers’ multi-sector modeling approach also highlighted that CCS is more cost-effective when used in the hydrogen supply chain than in the power sector. They note, however, that current deployment plans run counter to this finding: by the end of the decade, six times more CCS projects are expected to be deployed in the power sector than in hydrogen production — a mismatch that underscores the need for more cross-sectoral modeling when planning future energy systems.

    In this study, the researchers tested the robustness of their conclusions against a number of factors, such as how the inclusion of non-combustion greenhouse gas emissions (including methane emissions) from natural gas used in power and hydrogen production impacts the model outcomes. They find that including the upstream emissions footprint of natural gas within the model boundary does not impact the value of sector coupling in regards to VRE integration and cost savings for decarbonization; in fact, the value actually grows because of the increased emphasis on electricity-based hydrogen production over natural gas-based pathways.

    “You cannot achieve climate targets unless you take a holistic approach,” says Gençer. “This is a systems problem. There are sectors that you cannot decarbonize with electrification, and there are other sectors that you cannot decarbonize without carbon capture, and if you think about everything together, there is a synergistic solution that significantly minimizes the infrastructure costs.”

    This research was supported, in part, by Shell Global Solutions International B.V. in Amsterdam, the Netherlands, and MITEI’s Low-Carbon Energy Centers for Electric Power Systems and Carbon Capture, Utilization, and Storage.


    Making the case for hydrogen in a zero-carbon economy

    As the United States races to achieve its goal of zero-carbon electricity generation by 2035, energy providers are swiftly ramping up renewable resources such as solar and wind. But because these technologies churn out electrons only when the sun shines and the wind blows, they need backup from other energy sources, especially during seasons of high electric demand. Currently, plants burning fossil fuels, primarily natural gas, fill in the gaps.

    “As we move to more and more renewable penetration, this intermittency will make a greater impact on the electric power system,” says Emre Gençer, a research scientist at the MIT Energy Initiative (MITEI). That’s because grid operators will increasingly resort to fossil-fuel-based “peaker” plants that compensate for the intermittency of the variable renewable energy (VRE) sources of sun and wind. “If we’re to achieve zero-carbon electricity, we must replace all greenhouse gas-emitting sources,” Gençer says.

    Low- and zero-carbon alternatives to greenhouse-gas emitting peaker plants are in development, such as arrays of lithium-ion batteries and hydrogen power generation. But each of these evolving technologies comes with its own set of advantages and constraints, and it has proven difficult to frame the debate about these options in a way that’s useful for policymakers, investors, and utilities engaged in the clean energy transition.

    Now, Gençer and Drake D. Hernandez SM ’21 have come up with a model that makes it possible to pin down the pros and cons of these peaker-plant alternatives with greater precision. Their hybrid technological and economic analysis, based on a detailed inventory of California’s power system, was published online last month in Applied Energy. While their work focuses on the most cost-effective solutions for replacing peaker power plants, it also contains insights intended to contribute to the larger conversation about transforming energy systems.

    “Our study’s essential takeaway is that hydrogen-fired power generation can be the more economical option when compared to lithium-ion batteries — even today, when the costs of hydrogen production, transmission, and storage are very high,” says Hernandez, who worked on the study while a graduate research assistant for MITEI. Adds Gençer, “If there is a place for hydrogen in the cases we analyzed, that suggests there is a promising role for hydrogen to play in the energy transition.”

    Adding up the costs

    California serves as a prime example of a swiftly shifting power system. The state draws more than 20 percent of its electricity from solar and approximately 7 percent from wind, with more VRE coming online rapidly. This means its peaker plants already play a pivotal role, coming online each evening when the sun goes down or when events such as heat waves drive up electricity use for days at a time.

    “We looked at all the peaker plants in California,” recounts Gençer. “We wanted to know the cost of electricity if we replaced them with hydrogen-fired turbines or with lithium-ion batteries.” The researchers used a core metric called the levelized cost of electricity (LCOE) as a way of comparing the costs of different technologies to each other. LCOE measures the average total cost of building and operating a particular energy-generating asset per unit of total electricity generated over the hypothetical lifetime of that asset.
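
    As a rough sketch, LCOE divides discounted lifetime costs by discounted lifetime generation. The inputs below are illustrative placeholders, not the study’s California data; the 15 percent capacity factor stands in for a plant that runs 15 percent of the year.

```python
# Minimal LCOE: discounted lifetime costs divided by discounted lifetime
# generation, in $/MWh. All inputs are illustrative placeholders.

def lcoe(capex, annual_opex, annual_mwh, lifetime_yrs, discount_rate):
    """Levelized cost of electricity in dollars per MWh."""
    discounted_costs = capex + sum(
        annual_opex / (1 + discount_rate) ** t
        for t in range(1, lifetime_yrs + 1))
    discounted_mwh = sum(
        annual_mwh / (1 + discount_rate) ** t
        for t in range(1, lifetime_yrs + 1))
    return discounted_costs / discounted_mwh

# A hypothetical 100 MW peaker running 15 percent of the year:
annual_mwh = 100 * 8760 * 0.15
print(f"LCOE: ${lcoe(150e6, 4e6, annual_mwh, 20, 0.07):,.0f}/MWh")
```

    Because the denominator shrinks with a low capacity factor, a plant that runs only a fraction of the year carries a much higher LCOE than the same asset run continuously, which is why peaker replacement economics are so sensitive to these assumptions.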

    Selecting 2019 as their base study year, the team looked at the costs of running natural gas-fired peaker plants, which they defined as plants operating 15 percent of the year in response to gaps in intermittent renewable electricity. In addition, they determined the amount of carbon dioxide released by these plants and the expense of abating these emissions. Much of this information was publicly available.

    Coming up with prices for replacing peaker plants with massive arrays of lithium-ion batteries was also relatively straightforward: “There are no technical limitations to lithium-ion, so you can build as many as you want; but they are super expensive in terms of their footprint for energy storage and the mining required to manufacture them,” says Gençer.

    But then came the hard part: nailing down the costs of hydrogen-fired electricity generation. “The most difficult thing is finding cost assumptions for new technologies,” says Hernandez. “You can’t do this through a literature review, so we had many conversations with equipment manufacturers and plant operators.”

    The team considered two different forms of hydrogen fuel to replace natural gas, one produced through electrolyzer facilities that convert water and electricity into hydrogen, and another that reforms natural gas, yielding hydrogen and carbon waste that can be captured to reduce emissions. They also ran the numbers on retrofitting natural gas plants to burn hydrogen as opposed to building entirely new facilities. Their model includes identification of likely locations throughout the state and expenses involved in constructing these facilities.

    The researchers spent months compiling a giant dataset before setting out on the task of analysis. The results from their modeling were clear: “Hydrogen can be a more cost-effective alternative to lithium-ion batteries for peaking operations on a power grid,” says Hernandez. In addition, notes Gençer, “While certain technologies worked better in particular locations, we found that on average, reforming hydrogen rather than electrolytic hydrogen turned out to be the cheapest option for replacing peaker plants.”

    A tool for energy investors

    When he began this project, Gençer admits he “wasn’t hopeful” about hydrogen replacing natural gas in peaker plants. “It was kind of shocking to see in our different scenarios that there was a place for hydrogen.” That’s because the overall price tag for converting a fossil-fuel based plant to one based on hydrogen is very high, and such conversions likely won’t take place until more sectors of the economy embrace hydrogen, whether as a fuel for transportation or for varied manufacturing and industrial purposes.

    A nascent hydrogen production infrastructure does exist, mainly in the production of ammonia for fertilizer. But enormous investments will be necessary to expand this framework to meet grid-scale needs, driven by purposeful incentives. “With any of the climate solutions proposed today, we will need a carbon tax or carbon pricing; otherwise nobody will switch to new technologies,” says Gençer.

    The researchers believe studies like theirs could help key energy stakeholders make better-informed decisions. To that end, they have integrated their analysis into SESAME, a life cycle and techno-economic assessment tool for a range of energy systems that was developed by MIT researchers. Users can leverage this sophisticated modeling environment to compare costs of energy storage and emissions from different technologies, for instance, or to determine whether it is cost-efficient to replace a natural gas-powered plant with one powered by hydrogen.

    “As utilities, industry, and investors look to decarbonize and achieve zero-emissions targets, they have to weigh the costs of investing in low-carbon technologies today against the potential impacts of climate change moving forward,” says Hernandez, who is currently a senior associate in the energy practice at Charles River Associates. Hydrogen, he believes, will become increasingly cost-competitive as its production costs decline and markets expand.

    A study group member of MITEI’s soon-to-be published Future of Storage study, Gençer knows that hydrogen alone will not usher in a zero-carbon future. But, he says, “Our research shows we need to seriously consider hydrogen in the energy transition, start thinking about key areas where hydrogen should be used, and start making the massive investments necessary.”

    Funding for this research was provided by MITEI’s Low-Carbon Energy Centers and Future of Storage study.


    Designing better batteries for electric vehicles

    The urgent need to cut carbon emissions is prompting a rapid move toward electrified mobility and expanded deployment of solar and wind on the electric grid. If those trends escalate as expected, the need for better methods of storing electrical energy will intensify.

    “We need all the strategies we can get to address the threat of climate change,” says Elsa Olivetti PhD ’07, the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering. “Obviously, developing technologies for grid-based storage at a large scale is critical. But for mobile applications — in particular, transportation — much research is focusing on adapting today’s lithium-ion battery to make versions that are safer, smaller, and can store more energy for their size and weight.”

    Traditional lithium-ion batteries continue to improve, but they have limitations that persist, in part because of their structure. A lithium-ion battery consists of two electrodes — one positive and one negative — sandwiched around an organic (carbon-containing) liquid. As the battery is charged and discharged, electrically charged particles (or ions) of lithium pass from one electrode to the other through the liquid electrolyte.

    One problem with that design is that at certain voltages and temperatures, the liquid electrolyte can become volatile and catch fire. “Batteries are generally safe under normal usage, but the risk is still there,” says Kevin Huang PhD ’15, a research scientist in Olivetti’s group.

    Another problem is that lithium-ion batteries are not well-suited for use in vehicles. Large, heavy battery packs take up space and increase a vehicle’s overall weight, reducing fuel efficiency. But it’s proving difficult to make today’s lithium-ion batteries smaller and lighter while maintaining their energy density — that is, the amount of energy they store per gram of weight.

    To solve those problems, researchers are changing key features of the lithium-ion battery to make an all-solid, or “solid-state,” version. They replace the liquid electrolyte in the middle with a thin, solid electrolyte that’s stable at a wide range of voltages and temperatures. With that solid electrolyte, they use a high-capacity positive electrode and a high-capacity, lithium metal negative electrode that’s far thinner than the usual layer of porous carbon. Those changes make it possible to shrink the overall battery considerably while maintaining its energy-storage capacity, thereby achieving a higher energy density.

    “Those features — enhanced safety and greater energy density — are probably the two most-often-touted advantages of a potential solid-state battery,” says Huang. He then quickly clarifies that “all of these things are prospective, hoped-for, and not necessarily realized.” Nevertheless, the possibility has many researchers scrambling to find materials and designs that can deliver on that promise.

    Thinking beyond the lab

    Researchers have come up with many intriguing options that look promising — in the lab. But Olivetti and Huang believe that additional practical considerations may be important, given the urgency of the climate change challenge. “There are always metrics that we researchers use in the lab to evaluate possible materials and processes,” says Olivetti. Examples might include energy-storage capacity and charge/discharge rate. When performing basic research — which she deems both necessary and important — those metrics are appropriate. “But if the aim is implementation, we suggest adding a few metrics that specifically address the potential for rapid scaling,” she says.

    Based on industry’s experience with current lithium-ion batteries, the MIT researchers and their colleague Gerbrand Ceder, the Daniel M. Tellep Distinguished Professor of Engineering at the University of California at Berkeley, suggest three broad questions that can help identify potential constraints on future scale-up as a result of materials selection. First, with this battery design, could materials availability, supply chains, or price volatility become a problem as production scales up? (Note that the environmental and other concerns raised by expanded mining are outside the scope of this study.) Second, will fabricating batteries from these materials involve difficult manufacturing steps during which parts are likely to fail? And third, do manufacturing measures needed to ensure a high-performance product based on these materials ultimately lower or raise the cost of the batteries produced?

    To demonstrate their approach, Olivetti, Ceder, and Huang examined some of the electrolyte chemistries and battery structures now being investigated by researchers. To select their examples, they turned to previous work in which they and their collaborators used text- and data-mining techniques to gather information on materials and processing details reported in the literature. From that database, they selected a few frequently reported options that represent a range of possibilities.

    Materials and availability

    In the world of solid inorganic electrolytes, there are two main classes of materials — the oxides, which contain oxygen, and the sulfides, which contain sulfur. Olivetti, Ceder, and Huang focused on one promising electrolyte option in each class and examined key elements of concern for each of them.

    The sulfide they considered was LGPS, which combines lithium, germanium, phosphorus, and sulfur. Based on availability considerations, they focused on the germanium, an element that raises concerns in part because it’s not generally mined on its own. Instead, it’s a byproduct produced during the mining of coal and zinc.

    To investigate its availability, the researchers looked at how much germanium was produced annually in the past six decades during coal and zinc mining and then at how much could have been produced. The outcome suggested that 100 times more germanium could have been produced, even in recent years. Given that supply potential, the availability of germanium is not likely to constrain the scale-up of a solid-state battery based on an LGPS electrolyte.

    The situation looked less promising with the researchers’ selected oxide, LLZO, which consists of lithium, lanthanum, zirconium, and oxygen. Extraction and processing of lanthanum are largely concentrated in China, and there’s limited data available, so the researchers didn’t try to analyze its availability. The other three elements are abundantly available. However, in practice, a small quantity of another element — called a dopant — must be added to make LLZO easy to process. So the team focused on tantalum, the most frequently used dopant, as the main element of concern for LLZO.

    Tantalum is produced as a byproduct of tin and niobium mining. Historical data show that the amount of tantalum produced during tin and niobium mining was much closer to the potential maximum than was the case with germanium. So the availability of tantalum is more of a concern for the possible scale-up of an LLZO-based battery.

    But knowing the availability of an element in the ground doesn’t address the steps required to get it to a manufacturer. So the researchers investigated a follow-on question concerning the supply chains for critical elements — mining, processing, refining, shipping, and so on. Assuming that abundant supplies are available, can the supply chains that deliver those materials expand quickly enough to meet the growing demand for batteries?

    In sample analyses, they looked at how much supply chains for germanium and tantalum would need to grow year to year to provide batteries for a projected fleet of electric vehicles in 2030. As an example, an electric vehicle fleet often cited as a goal for 2030 would require production of enough batteries to deliver a total of 100 gigawatt hours of energy. To meet that goal using just LGPS batteries, the supply chain for germanium would need to grow by 50 percent from year to year — a stretch, since the maximum growth rate in the past has been about 7 percent. Using just LLZO batteries, the supply chain for tantalum would need to grow by about 30 percent — a growth rate well above the historical high of about 10 percent.
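
    The growth rates quoted here are compound annual growth rates: given today’s production and a target, the required year-over-year expansion follows directly. The quantities below are illustrative, not the study’s data.

```python
# Required year-over-year supply-chain growth, expressed as a compound
# annual growth rate. Quantities are illustrative, not the study's data.

def required_cagr(current, target, years):
    """Constant annual growth rate that takes `current` to `target`."""
    return (target / current) ** (1.0 / years) - 1.0

# e.g., if supply had to expand 25-fold over 8 years:
rate = required_cagr(current=1.0, target=25.0, years=8)
print(f"required growth: {rate:.1%} per year")
```

    Growth in the tens of percent per year, sustained for a decade, is what makes the germanium and tantalum cases look like a stretch against historical highs of roughly 7 to 10 percent.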

    Those examples demonstrate the importance of considering both materials availability and supply chains when evaluating different solid electrolytes for their scale-up potential. “Even when the quantity of a material available isn’t a concern, as is the case with germanium, scaling all the steps in the supply chain to match the future production of electric vehicles may require a growth rate that’s literally unprecedented,” says Huang.

    Materials and processing

    In assessing the potential for scale-up of a battery design, another factor to consider is the difficulty of the manufacturing process and how it may impact cost. Fabricating a solid-state battery inevitably involves many steps, and a failure at any step raises the cost of each battery successfully produced. As Huang explains, “You’re not shipping those failed batteries; you’re throwing them away. But you’ve still spent money on the materials and time and processing.”

    As a proxy for manufacturing difficulty, Olivetti, Ceder, and Huang explored the impact of failure rate on overall cost for selected solid-state battery designs in their database. In one example, they focused on the oxide LLZO. LLZO is extremely brittle, and at the high temperatures involved in manufacturing, a large sheet that’s thin enough to use in a high-performance solid-state battery is likely to crack or warp.

    To determine the impact of such failures on cost, they modeled four key processing steps in assembling LLZO-based batteries. At each step, they calculated cost based on an assumed yield — that is, the fraction of total units that were successfully processed without failing. With the LLZO, the yield was far lower than with the other designs they examined; and, as the yield went down, the cost of each kilowatt-hour (kWh) of battery energy went up significantly. For example, when 5 percent more units failed during the final cathode heating step, cost increased by about $30/kWh — a nontrivial change considering that a commonly accepted target cost for such batteries is $100/kWh. Clearly, manufacturing difficulties can have a profound impact on the viability of a design for large-scale adoption.
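
    The way per-step failures compound into the cost of each surviving battery can be sketched as follows; the step costs and yields are illustrative placeholders, not the study’s figures.

```python
# How per-step yield losses compound into the cost of each battery that
# survives the full process chain. Costs and yields are illustrative.

def cost_per_good_unit(steps):
    """steps: (cost added in $/kWh, yield fraction) in process order.
    Money spent on units that later fail is still spent, so survivors
    carry the expected total spend divided by the overall yield."""
    total_spend, in_process = 0.0, 1.0
    for cost, step_yield in steps:
        total_spend += cost * in_process  # pay only for units still alive
        in_process *= step_yield          # some fraction fails here
    return total_spend / in_process

base = [(20, 0.95), (30, 0.95), (25, 0.90), (25, 0.90)]
worse_final = [(20, 0.95), (30, 0.95), (25, 0.90), (25, 0.85)]  # final yield 5% worse

print(f"baseline cost:          ${cost_per_good_unit(base):.0f}/kWh")
print(f"lower final-step yield: ${cost_per_good_unit(worse_final):.0f}/kWh")
```

    Failures late in the chain discard units that have already absorbed most of the processing cost, so a yield drop at the final step is disproportionately expensive, which mirrors the sensitivity reported for the LLZO cathode heating step.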

    Materials and performance

    One of the main challenges in designing an all-solid battery comes from “interfaces” — that is, where one component meets another. During manufacturing or operation, materials at those interfaces can become unstable. “Atoms start going places that they shouldn’t, and battery performance declines,” says Huang.

    As a result, much research is devoted to coming up with methods of stabilizing interfaces in different battery designs. Many of the methods proposed do increase performance; and as a result, the cost of the battery in dollars per kWh goes down. But implementing such solutions generally involves added materials and time, increasing the cost per kWh during large-scale manufacturing.

    To illustrate that trade-off, the researchers first examined their oxide, LLZO. Here, the goal is to stabilize the interface between the LLZO electrolyte and the negative electrode by inserting a thin layer of tin between the two. They analyzed the impacts — both positive and negative — on cost of implementing that solution. They found that adding the tin separator increases energy-storage capacity and improves performance, which reduces the unit cost in dollars/kWh. But the cost of including the tin layer exceeds the savings so that the final cost is higher than the original cost.

    In another analysis, they looked at a sulfide electrolyte called LPSCl, which consists of lithium, phosphorus, and sulfur with a bit of added chlorine. In this case, the positive electrode incorporates particles of the electrolyte material — a method of ensuring that the lithium ions can find a pathway through the electrolyte to the other electrode. However, the added electrolyte particles are not compatible with other particles in the positive electrode — another interface problem. In this case, a standard solution is to add a “binder,” another material that makes the particles stick together.

    Their analysis confirmed that without the binder, performance is poor, and the cost of the LPSCl-based battery is more than $500/kWh. Adding the binder improves performance significantly, and the cost drops by almost $300/kWh. In this case, the cost of adding the binder during manufacturing is so low that essentially all of the cost decrease from the improved performance is realized. Here, the method implemented to solve the interface problem pays off in lower costs.
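
    Both interface examples reduce to the same comparison: total cost divided by energy delivered, with and without the fix. The figures below are hypothetical, chosen only to loosely echo the binder case.

```python
# Unit-cost trade-off for an interface fix: added processing cost
# versus improved energy delivered. All figures are hypothetical.

def unit_cost(total_cost, kwh_delivered):
    """Battery cost in dollars per kWh of deliverable energy."""
    return total_cost / kwh_delivered

without_fix = unit_cost(total_cost=500.0, kwh_delivered=1.0)
with_fix = unit_cost(total_cost=520.0, kwh_delivered=2.4)  # cheap additive, large gain

print(f"without fix: ${without_fix:.0f}/kWh")
print(f"with fix:    ${with_fix:.0f}/kWh")
```

    When the additive is cheap and the performance gain large, as with the binder, unit cost falls; when the added layer is expensive relative to its gain, as with the tin separator, the same arithmetic pushes unit cost up.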

    The researchers performed similar studies of other promising solid-state batteries reported in the literature, and their results were consistent: The choice of battery materials and processes can affect not only near-term outcomes in the lab but also the feasibility and cost of manufacturing the proposed solid-state battery at the scale needed to meet future demand. The results also showed that considering all three factors together — availability, processing needs, and battery performance — is important because there may be collective effects and trade-offs involved.

    Olivetti is proud of the range of concerns the team’s approach can probe. But she stresses that it’s not meant to replace traditional metrics used to guide materials and processing choices in the lab. “Instead, it’s meant to complement those metrics by also looking broadly at the sorts of things that could get in the way of scaling” — an important consideration given what Huang calls “the urgent ticking clock” of clean energy and climate change.

    This research was supported by the Seed Fund Program of the MIT Energy Initiative (MITEI) Low-Carbon Energy Center for Energy Storage; by Shell, a founding member of MITEI; and by the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Office, under the Advanced Battery Materials Research Program. The text mining work was supported by the National Science Foundation, the Office of Naval Research, and MITEI.

    This article appears in the Spring 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.


    Electrifying cars and light trucks to meet Paris climate goals

    On Aug. 5, the White House announced that it seeks to ensure that 50 percent of all new passenger vehicles sold in the United States by 2030 are powered by electricity. The purpose of this target is to enable the U.S. to remain competitive with China in the growing electric vehicle (EV) market and meet its international climate commitments. Setting ambitious EV sales targets and transitioning to zero-carbon power sources in the United States and other nations could lead to significant reductions in carbon dioxide and other greenhouse gas emissions in the transportation sector and move the world closer to achieving the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius relative to preindustrial levels.

    At this time, electrification of the transportation sector is occurring primarily in private light-duty vehicles (LDVs). In 2020, the global EV fleet exceeded 10 million, but that’s a tiny fraction of the cars and light trucks on the road. How much of the LDV fleet will need to go electric to keep the Paris climate goal in play? 

    To help answer that question, researchers at the MIT Joint Program on the Science and Policy of Global Change and MIT Energy Initiative have assessed the potential impacts of global efforts to reduce carbon dioxide emissions on the evolution of LDV fleets over the next three decades.

    Using an enhanced version of the multi-region, multi-sector MIT Economic Projection and Policy Analysis (EPPA) model that includes a representation of the household transportation sector, they projected changes for the 2020-50 period in LDV fleet composition, carbon dioxide emissions, and related impacts for 18 different regions. Projections were generated under four increasingly ambitious climate mitigation scenarios: a “Reference” scenario based on current market trends and fuel efficiency policies, a “Paris Forever” scenario in which current Paris Agreement commitments (Nationally Determined Contributions, or NDCs) are maintained but not strengthened after 2030, a “Paris to 2 C” scenario in which decarbonization actions are enhanced to be consistent with capping global warming at 2 C, and an “Accelerated Actions” scenario that caps global warming at 1.5 C through much more aggressive emissions targets than the current NDCs.

    Based on projections spanning the first three scenarios, the researchers found that the global EV fleet will likely grow to about 95-105 million EVs by 2030, and 585-823 million EVs by 2050. In the Accelerated Actions scenario, global EV stock reaches more than 200 million vehicles in 2030, and more than 1 billion in 2050, accounting for two-thirds of the global LDV fleet. The research team also determined that EV uptake will likely grow but vary across regions over the 30-year study time frame, with China, the United States, and Europe remaining the largest markets. Finally, the researchers found that while EVs play a role in reducing oil use, a more substantial reduction in oil consumption comes from economy-wide carbon pricing. The results appear in a study in the journal Economics of Energy & Environmental Policy.

    “Our study shows that EVs can contribute significantly to reducing global carbon emissions at a manageable cost,” says MIT Joint Program Deputy Director and MIT Energy Initiative Senior Research Scientist Sergey Paltsev, the lead author. “We hope that our findings will help decision-makers to design efficient pathways to reduce emissions.”  

    To boost the EV share of the global LDV fleet, the study’s co-authors recommend more ambitious policies to mitigate climate change and decarbonize the electric grid. They also envision an “integrated system approach” to transportation that emphasizes making internal combustion engine vehicles more efficient, a long-term shift to low- and net-zero carbon fuels, and systemic efficiency improvements through digitalization, smart pricing, and multi-modal integration. While the study focuses on EV deployment, the authors also stress the need for investment in all possible decarbonization options related to transportation, including enhancing public transportation, avoiding urban sprawl through strategic land-use planning, and reducing the use of private motorized transport by mode switching to walking, biking, and mass transit.

    This research is an extension of the authors’ contribution to the MIT Mobility of the Future study.