More stories

  • 3 Questions: What a single car can say about traffic

    Vehicle traffic has long defied description. Once measured roughly through visual inspection and traffic cameras, traffic is now being quantified far more precisely by new smartphone crowdsourcing tools. This popular method, however, presents a problem: Accurate measurements require a lot of data and users.

    Meshkat Botshekan, an MIT PhD student in civil and environmental engineering and research assistant at the MIT Concrete Sustainability Hub, has sought to expand on crowdsourcing methods by looking into the physics of traffic. During his time as a doctoral candidate, he has helped develop Carbin, a smartphone-based roadway crowdsourcing tool created by MIT CSHub and the University of Massachusetts Dartmouth, and used its data to offer more insight into the physics of traffic — from the formation of traffic jams to the inference of traffic phase and driving behavior. Here, he explains how recent findings can allow smartphones to infer traffic properties from the measurements of a single vehicle.  

    Q: Numerous navigation apps already measure traffic. Why do we need alternatives?

    A: Traffic characteristics have always been tough to measure. In the past, visual inspection and cameras were used to produce traffic metrics. So, there’s no denying that today’s navigation apps offer a superior alternative. Yet even these modern tools have gaps.

    Chief among them is their dependence on spatially distributed user counts: Essentially, these apps tally up their users on road segments to estimate the density of traffic. While this approach may seem adequate, it is vulnerable to manipulation, as demonstrated in some viral videos, and it requires immense quantities of data for reliable estimates. Processing these data is so time- and resource-intensive that, despite their availability, they can’t be used to quantify traffic effectively across a whole road network. As a result, this immense quantity of traffic data isn’t actually optimal for traffic management.

    Q: How could new technologies improve how we measure traffic?

    A: New alternatives have the potential to offer two improvements over existing methods: First, they can extrapolate far more about traffic with far fewer data. Second, they can cost a fraction of the price while offering a far simpler method of data collection. Just like Waze and Google Maps, they rely on crowdsourcing data from users. Yet, they are grounded in the incorporation of high-level statistical physics into data analysis.

    For instance, the Carbin app, which we are developing in collaboration with UMass Dartmouth, applies principles of statistical physics to existing traffic models to entirely forgo the need for user counts. Instead, it can infer traffic density and driver behavior using the input of a smartphone mounted in a single vehicle.

    The method at the heart of the app, which was published last fall in Physical Review E, treats vehicles like particles in a many-body system. Just as, by the ergodic theorem of statistical physics, the behavior of a closed many-body system can be understood by observing a single particle over time, traffic can be characterized through the fluctuations in speed and position of a single vehicle traveling along a road. As a result, we can infer the behavior and density of traffic on a road segment.
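    To make the single-vehicle idea concrete, here is a minimal, purely illustrative sketch — not the published Physical Review E model. It assumes the classic Greenshields linear speed-density relation and invents a simple fluctuation threshold for labeling the traffic phase; all constants are made up for illustration:

```python
import statistics

def infer_density(speeds_kmh, v_free=100.0, rho_jam=120.0):
    """Toy single-vehicle traffic inference (illustrative only).

    Assumes Greenshields' linear speed-density relation,
    v = v_free * (1 - rho / rho_jam), and inverts the vehicle's
    mean speed to estimate density in vehicles/km. The relative
    speed fluctuation serves as a crude traffic-phase indicator.
    """
    v_mean = statistics.mean(speeds_kmh)
    v_std = statistics.pstdev(speeds_kmh)
    density = rho_jam * (1.0 - v_mean / v_free)  # invert Greenshields
    phase = "congested" if v_std / max(v_mean, 1e-9) > 0.3 else "free-flow"
    return density, phase

# A vehicle cruising near free-flow speed with small fluctuations:
density, phase = infer_density([92, 95, 90, 97, 94])
```

    A smooth, fast speed trace maps to low density and a "free-flow" label, while a jerky, slow trace maps to high density and "congested" — echoing the idea that one vehicle's fluctuations carry information about the whole flow.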

    Because far less data is required, this method is faster and makes data management simpler. Most importantly, it also has the potential to make traffic data less expensive and more accessible to those who need it.

    Q: Who are some of the parties that would benefit from new technologies?

    A: More accessible and sophisticated traffic data would benefit more than just drivers seeking smoother, faster routes. It would also enable state and city departments of transportation (DOTs) to make local and collective interventions that advance the critical transportation objectives of equity, safety, and sustainability.

    As a safety solution, new data collection technologies could pinpoint dangerous driving conditions on a much finer scale to inform improved traffic calming measures. And since socially vulnerable communities experience traffic violence disproportionately, these interventions would have the added benefit of addressing pressing equity concerns. 

    There would also be an environmental benefit. DOTs could mitigate vehicle emissions by identifying minute deviations in traffic flow. This would present them with more opportunities to mitigate the idling and congestion that generate excess fuel consumption.  

    As we’ve seen, these three challenges have become increasingly acute, especially in urban areas. Yet, the data needed to address them exists already — and is being gathered by smartphones and telematics devices all over the world. So, to ensure a safer, more sustainable road network, it will be crucial to incorporate these data collection methods into our decision-making.

  • Students dive into research with the MIT Climate and Sustainability Consortium

    Throughout the fall 2021 semester, the MIT Climate and Sustainability Consortium (MCSC) supported several undergraduate research projects on climate and sustainability topics related to the consortium’s work, through the MIT Undergraduate Research Opportunities Program (UROP). These students, who represent a range of disciplines, had the opportunity to work with MCSC Impact Fellows on topics tied directly to ongoing work and collaborations with MCSC member companies and the broader MIT community, from carbon capture to value-chain resilience to biodegradables. Many of these students are continuing their work this spring semester.

    Hannah Spilman, who is studying chemical engineering, worked with postdoc Glen Junor, an MCSC Impact Fellow, to investigate carbon capture, utilization, and storage (CCUS), with the goal of facilitating CCUS on a gigaton scale, a much larger capacity than what currently exists. “Scientists agree CCUS will be an important tool in combating climate change, but the largest CCUS facility only captures CO2 on a megaton scale, and very few facilities are actually operating,” explains Spilman. 

    Throughout her UROP, she worked on analyzing the currently deployed technology in the CCUS field, using National Carbon Capture Center post-combustion project reports to synthesize the results and outline those technologies. Examining projects like the RTI-NAS experiment, which showcased innovation with carbon capture technology, was especially helpful. “We must first understand where we are, and as we continue to conduct analyses, we will be able to understand the field’s current state and path forward,” she concludes.

    Fellow chemical engineering students Claire Kim and Alfonso Restrepo are working with postdoc and MCSC Impact Fellow Xiangkun (Elvis) Cao, also on investigating CCUS technology. Kim’s focus is on life cycle assessment (LCA), while Restrepo’s focus is on techno-economic assessment (TEA). They have been working together to use the two tools to evaluate multiple CCUS technologies. While LCA and TEA are not new tools themselves, their application in CCUS has not been comprehensively defined and described. “CCUS can play an important role in the flexible, low-carbon energy systems,” says Kim, which was part of the motivation behind her project choice.

    Through TEA, Restrepo has been investigating how various startups and larger companies are incorporating CCUS technology in their processes. “In order to reduce CO2 emissions before it’s too late to act, there is a strong need for resources that effectively evaluate CCUS technology, to understand the effectiveness and viability of emerging technology for future implementation,” he explains. For their next steps, Kim and Restrepo will apply LCA and TEA to the analysis of a specific capture (for example, direct ocean capture) or conversion (for example, CO2-to-fuel conversion) process​ in CCUS.

    Cameron Dougal, a first-year student, and James Santoro, studying management, both worked with postdoc and MCSC Impact Fellow Paloma Gonzalez-Rojas on biodegradable materials. Dougal explored biodegradable packaging film in urban systems. “I have had a longstanding interest in sustainability, with a newer interest in urban planning and design, which motivated me to work on this project,” Dougal says. “Bio-based plastics are a promising step for the future.”

    Dougal spent time conducting internet and print research, as well as speaking with faculty on their relevant work. From these efforts, Dougal has identified important historical context for the current recycling landscape — as well as key case studies and cities around the world to explore further. In addition to conducting more research, Dougal plans to create a summary and statistic sheet.

    Santoro dove into the production angle, working on evaluating the economic viability of the startups that are creating biodegradable materials. “Non-renewable plastics (created with fossil fuels) continue to pollute and irreparably damage our environment,” he says. “As we look for innovative solutions, a key question to answer is how can we determine a more effective way to evaluate the economic viability and probability of success for new startups and technologies creating biodegradable plastics?” The project aims to develop an effective framework to begin to answer this.

    At this point, Santoro has focused on understanding the overall ecosystem, how these biodegradable materials are developed, and the economics involved. He plans to have conversations with company founders, investors, and experts, and to identify the major challenges biodegradable technology startups face in creating high-performance products with attractive unit economics. There is also still a lot to research about new technologies and trends in the industry, the profitability of different products, and specific individual companies doing this type of work.

    Tess Buchanan, who is studying materials science and engineering, is working with Katharina Fransen and Sarah Av-Ron, MIT graduate students in the Department of Chemical Engineering, and principal investigator Professor Bradley Olsen, to also explore biodegradables by looking into their development from biomass. “This is critical work, given the current plastics sustainability crisis, and the potential of bio-based polymers,” Buchanan says.

    The objective of the project is to explore new sustainable polymers through a biodegradation assay using clear zone growth analysis to yield degradation rates. For next steps, Buchanan is diving into synthesis expansion and using machine learning to understand the relationship between biodegradation and polymer chemistry.

    Kezia Hector, studying chemical engineering, and Tamsin Nottage, a first-year student, worked with postdoc and MCSC Impact Fellow Sydney Sroka to explore sustainable solutions for value-chain resilience. Hector’s focus was understanding how wildfires can affect supply chains, specifically identifying sources of economic loss. She reviewed academic literature and news articles, and looked at the Amazon, California, Siberia, and Washington, finding that wildfires cause millions of dollars in damage every year and impact supply chains by cutting off or slowing down freight activity. She will continue to identify ways to make supply chains more resilient and sustainable.

    Nottage focused on the economic impact of typhoons, closely studying Typhoon Mangkhut, a powerful and catastrophic tropical cyclone that caused $593 million in damage in Guam, the Philippines, and South China in September 2018. “As a Bahamian, I’ve witnessed the ferocity of hurricanes and challenges of rebuilding after them,” says Nottage. “I used this project to identify the tropical cyclones that caused the most extensive damage for further investigation.” She compiled the causes of damage and their costs to inform targets of supply chain resiliency reform (shipping, building materials, power supply, etc.). As a next step, Nottage will focus on modeling extreme events like Mangkhut to develop frameworks that companies can learn from and use to build more sustainable supply chains in the future.

    Ellie Vaserman, a first-year student working with postdoc and MCSC Impact Fellow Poushali Maji, also explored a topic related to value chains: unlocking circularity across the entire value chain through quality improvement, inclusive policy, and behavior to improve materials recovery. Specifically, her objectives have been to learn more about methods of chemolysis and the viability of their products, to compare methods of chemical recycling of polyethylene terephthalate (PET) using quantitative metrics, and to design qualitative visuals to make the steps in PET chemical recycling processes more understandable.

    To do so, she conducted a literature review to identify main methods of chemolysis that are utilized in the field (and collect data about these methods) and created graphics for some of the more common processes. Moving forward, she hopes to compare the processes using other metrics and research the energy intensity of the monomer purification processes.

    The work of these students, as well as many others, continued over MIT’s Independent Activities Period in January.

  • Energizing communities in Africa

    Growing up in Lagos, Nigeria, Ayomikun Ayodeji enjoyed the noisy hustle and bustle of his neighborhood. The cacophony included everything from vendors hawking water sachets and mini sausages, to commuters shouting for the next bus.

    Another common sound was the cry of “Up NEPA!” — an acronym for the Nigerian Electrical Power Authority — which Ayodeji would chant in unison with other neighborhood children when power had been restored after an outage. He remembers these moments fondly because, despite the difficulties of the frequent outages, the call also meant that people finally did have long-awaited electricity in their homes.

    “I grew up without reliable electricity, so power is something I’ve always been interested in,” says Ayodeji, who is now a senior studying chemical engineering. He hopes to use the knowledge he has gained during his time at MIT to expand energy access in his home country and elsewhere in Africa.

    Before coming to MIT, Ayodeji spent two years in Italy at United World College, where he embarked on chemistry projects, specifically focusing on dye-sensitized solar cells. He then transferred to the Institute, seeking a more technical grounding. He hoped that the knowledge gained in and out of the classroom would equip him with the tools to help combat the energy crisis in Lagos.

    “The questions that remained in the back of my mind were: How can I give back to the community I came from? How can I use the resources around me to help others?”  he says.

    This community-oriented mindset led Ayodeji to team up with a group of friends and brainstorm ideas for how they could help communities close to them. They eventually partnered with the Northeast Children’s Trust (NECT), an organization that helps children affected by the extremist group Boko Haram. Ayodeji and his friends looked at how to expand NECT’s educational program, and decided to build an offline, portable classroom server with a repository of books, animations, and activities for students at the primary and secondary education levels. The project was sponsored by Davis Projects for Peace and MIT’s PKG Center.

    Because of travel restrictions, Ayodeji was the only member of his team able to fly to Nigeria in the summer of 2019 to facilitate installing the servers. He says he wished his team could have been there, but he appreciated the opportunity to speak with the children directly, inspired by their excitement to learn and grow. The experience reaffirmed Ayodeji’s desire to pursue social impact projects, especially in Nigeria.

    “We knew we hadn’t just taken a step in providing the kids with a well-rounded education, but we also supported the center, NECT, in raising the next generation of future leaders that would guide that region to a sustainable, peaceful future,” he says.

    Ayodeji has also sought out energy-related opportunities on campus, pursuing an undergraduate research program (UROP) in the Buonassisi Lab in his sophomore year. He was tasked with testing perovskite solar cells, which have the potential to reach high efficiencies at low production costs. He characterized the cells using X-ray diffraction, studying their stability and degradation pathways. While Ayodeji enjoyed his first experience doing hands-on energy research, he found he was more curious about how energy technologies were implemented to reach various communities. “I wanted to see how things were being done in the industry,” he says.

    In the summer after his sophomore year, Ayodeji interned with Pioneer Natural Resources, an independent oil and gas company in Texas. Ayodeji worked as part of the completions projects team to assess the impact of design changes on cluster efficiency, that is, how evenly fluid is distributed along the wellbore. By using fiberoptic and photographic data to analyze perforation erosion, he discovered ways to lower costs while maintaining environmental stability during completions. The experience taught Ayodeji about the corporate side of the energy industry and enabled him to observe how approaches to alternative energy sources differ across countries, especially in the U.S. and Nigeria.

    “Some developing economies don’t have the capacity to pour resources into expanding renewable energy infrastructure at the rate that most developed economies do. While it is important to think sustainably for the long run, it is also important for us to understand that a clean energy transition is not something that can be done overnight,” he says.

    Ayodeji also employs his community-oriented mindset on campus. He is currently the vice president of the African Students’ Association (ASA), where he formerly chaired the African Learning Circle, a weekly discussion panel spotlighting key development and innovation events taking place on the African continent. He is also involved with student outreach, both within the ASA and as an international orientation student coordinator for the International Students Office.

    As a member of Cru, a Christian community on campus, Ayodeji helps lead a bible study and says the group supports him as he navigates college life. “It is a wonderful community of people I explore faith with and truly lean on when things get tough,” he says.

    After graduating, Ayodeji plans to start work at Boston Consulting Group, where he interned last summer. He expects he’ll have opportunities to engage with private equity issues and tackle energy-related cases while learning more about where the industry is headed.

    His long-term goal is to help expand renewable energy access and production across the African continent.

    “A key element of what the world needs to develop and grow is access to reliable energy. I hope to keep expanding my problem-solving toolkit so that, one day, it can be useful in electrifying communities back home,” he says.

  • Preparing global online learners for the clean energy transition

    After a career devoted to making the electric power system more efficient and resilient, Marija Ilic came to MIT in 2018 eager not just to extend her research in new directions, but to prepare a new generation for the challenges of the clean-energy transition.

    To that end, Ilic, a senior research scientist in MIT’s Laboratory for Information and Decision Systems (LIDS) and a senior staff member at Lincoln Laboratory in the Energy Systems Group, designed an edX course that captures her methods and vision: Principles of Modeling, Simulation, and Control for Electric Energy Systems.

    EdX is a provider of massive open online courses produced in partnership with MIT, Harvard University, and other leading universities. Ilic’s class made its online debut in June 2021, running for 12 weeks, and it is one of an expanding set of online courses funded by the MIT Energy Initiative (MITEI) to provide global learners with a view of the shifting energy landscape.

    Ilic first taught a version of the class while a professor at Carnegie Mellon University, rolled out a second iteration at MIT just as the pandemic struck, and then revamped the class for its current online presentation. But no matter the course location, Ilic focuses on a central theme: “With the need for decarbonization, which will mean accommodating new energy sources such as solar and wind, we must rethink how we operate power systems,” she says. “This class is about how to pose and solve the kinds of problems we will face during this transformation.”

    Hot global topic

    The edX class has been designed to welcome a broad mix of students. In summer 2021, more than 2,000 signed up from 109 countries, ranging from high school students to retirees. In surveys, some said they were drawn to the class by the opportunity to advance their knowledge of modeling. Many others hoped to learn about the move to decarbonize energy systems.

    “The energy transition is a hot topic everywhere in the world, not just in the U.S.,” says teaching assistant Miroslav Kosanic. “In the class, there were veterans of the oil industry and others working in investment and finance jobs related to energy who wanted to understand the potential impacts of changes in energy systems, as well as students from different fields and professors seeking to update their curricula — all gathered into a community.”

    Kosanic, who is currently a PhD student at MIT in electrical engineering and computer science, had taken this class remotely in the spring semester of 2021, while he was still in college in Serbia. “I knew I was interested in power systems, but this course was eye-opening for me, showing how to apply control theory and to model different components of these systems,” he says. “I finished the course and thought, this is just the beginning, and I’d like to learn a lot more.” Kosanic performed so well online that Ilic recruited him to MIT, as a LIDS researcher and edX course teaching assistant, where he grades homework assignments and moderates a lively learner community forum.

    A platform for problem-solving

    The course starts with fundamental concepts in electric power systems operations and management, and it steadily adds layers of complexity, posing real-world problems along the way. Ilic explains how voltage travels from point to point across transmission lines and how grid managers modulate systems to ensure that enough, but not too much, electricity flows. “To deliver power from one location to the next one, operators must constantly make adjustments to ensure that the receiving end can handle the voltage transmitted, optimizing voltage to avoid overheating the wires,” she says.

    In her early lectures, Ilic notes the fundamental constraints of current grid operations, organized around a hierarchy of regional managers dealing with a handful of very large oil, gas, coal, and nuclear power plants, and occupied primarily with the steady delivery of megawatt-hours to far-flung customers. But historically, this top-down structure doesn’t do a good job of preventing loss of energy due to sub-optimal transmission conditions or due to outages related to extreme weather events.

    These issues promise to grow for grid operators as distributed resources such as solar and wind enter the picture, Ilic tells students. In the United States, under new rules dictated by the Federal Energy Regulatory Commission, utilities must begin to integrate the distributed, intermittent electricity produced by wind farms, solar complexes, and even by homes and cars, which flows at voltages much lower than electricity produced by large power plants.

    Finding ways to optimize existing energy systems and to accommodate low- and zero-carbon energy sources requires powerful new modes of analysis and problem-solving. This is where Ilic’s toolbox comes in: a mathematical modeling strategy and companion software that simplifies the input and output of electrical systems, no matter how large or how small. “In the last part of the course, we take up modeling different solutions to electric service in a way that is technology-agnostic, where it only matters how much a black-box energy source produces, and the rates of production and consumption,” says Ilic.

    This black-box modeling approach, which Ilic pioneered in her research, enables students to see, for instance, “what is happening with their own household consumption, and how it affects the larger system,” says Rupamathi Jaddivada PhD ’20, a co-instructor of the edX class and a postdoc in electrical engineering and computer science. “Without getting lost in details of current or voltage, or how different components work, we think about electric energy systems as dynamical components interacting with each other, at different spatial scales.” This means that with just a basic knowledge of physical laws, high school and undergraduate students can take advantage of the course “and get excited about cleaner and more reliable energy,” adds Ilic.
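    A minimal sketch can convey the spirit of this technology-agnostic view; the component names and numbers below are invented for illustration and are not drawn from Ilic's actual modeling framework or software:

```python
class Component:
    """A technology-agnostic 'black box': all that matters is the net
    power it injects (+) or draws (-) from the grid at each time step."""
    def __init__(self, name, net_kw):
        self.name = name
        self.net_kw = net_kw  # list of net kW, one entry per time step

def net_imbalance(components):
    """Per-step sum across all components; zero means supply
    exactly meets demand at that step."""
    steps = len(components[0].net_kw)
    return [sum(c.net_kw[t] for c in components) for t in range(steps)]

# A household, a solar array, and a battery interacting over 3 steps:
grid = [Component("solar array", [5, 8, 3]),
        Component("household load", [-4, -6, -5]),
        Component("battery", [-1, -2, 2])]
```

    Here the battery charges while solar output is high and discharges when it drops, keeping the per-step imbalance at zero — without the model needing to know anything about how any component works internally.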

    What Jaddivada and Ilic describe as “zoom in, zoom out” systems thinking leverages the ubiquity of digital communications and the so-called “internet of things.” Energy devices of all scales can link directly to other devices in a network instead of just to a central operations hub, allowing for real-time adjustments in voltage, for instance, vastly improving the potential for optimizing energy flows.

    “In the course, we discuss how information exchange will be key to integrating new end-to-end energy resources and, because of this interactivity, how we can model better ways of controlling entire energy networks,” says Ilic. “It’s a big lesson of the course to show the value of information and software in enabling us to decarbonize the system and build resilience, rather than just building hardware.”

    By the end of the course, students are invited to pursue independent research projects. Some might model the impact of a new energy source on a local grid or investigate different options for reducing energy loss in transmission lines.

    “It would be nice if they see that we don’t have to rely on hardware or large-scale solutions to bring about improved electric service and a clean and resilient grid, but instead on information technologies such as smart components exchanging data in real time, or microgrids in neighborhoods that sustain themselves even when they lose power,” says Ilic. “I hope students walk away convinced that it does make sense to rethink how we operate our basic power systems and that with systematic, physics-based modeling and IT methods we can enable better, more flexible operation in the future.”

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • 3 Questions: Anuradha Annaswamy on building smart infrastructures

    Much of Anuradha Annaswamy’s research hinges on uncertainty. How does cloudy weather affect a grid powered by solar energy? How do we ensure that electricity is delivered to the consumer if a grid is powered by wind and the wind does not blow? What’s the best course of action if a bird hits a plane engine on takeoff? How can you predict the behavior of a cyber attacker?

    A senior research scientist in MIT’s Department of Mechanical Engineering, Annaswamy spends most of her research time dealing with decision-making under uncertainty. Designing smart infrastructures that are resilient to uncertainty can lead to safer, more reliable systems, she says.

    Annaswamy serves as the director of MIT’s Active Adaptive Control Laboratory. A world-leading expert in adaptive control theory, she was named president of the Institute of Electrical and Electronics Engineers Control Systems Society for 2020. Her team uses adaptive control and optimization to account for various uncertainties and anomalies in autonomous systems. In particular, they are developing smart infrastructures in the energy and transportation sectors.

    Using a combination of control theory, cognitive science, economic modeling, and cyber-physical systems, Annaswamy and her team have designed intelligent systems that could someday transform the way we travel and consume energy. Their research includes a diverse range of topics such as safer autopilot systems on airplanes, the efficient dispatch of resources in electrical grids, better ride-sharing services, and price-responsive railway systems.

    In a recent interview, Annaswamy spoke about how these smart systems could help support a safer and more sustainable future.

    Q: How is your team using adaptive control to make air travel safer?

    A: We want to develop an advanced autopilot system that can safely recover the airplane in the event of a severe anomaly — such as the wing becoming damaged mid-flight, or a bird flying into the engine. In the airplane, you have a pilot and autopilot to make decisions. We’re asking: How do you combine those two decision-makers?

    The answer we landed on was developing a shared pilot-autopilot control architecture. We collaborated with David Woods, an expert in cognitive engineering at The Ohio State University, to develop an intelligent system that takes the pilot’s behavior into account. For example, all humans have something known as “capacity for maneuver” and “graceful command degradation” that inform how we react in the face of adversity. Using mathematical models of pilot behavior, we proposed a shared control architecture where the pilot and the autopilot work together to make an intelligent decision on how to react in the face of uncertainties. In this system, the pilot reports the anomaly to an adaptive autopilot system that ensures resilient flight control.

    Q: How does your research on adaptive control fit into the concept of smart cities?

    A: Smart cities are an interesting way we can use intelligent systems to promote sustainability. Our team is looking at ride-sharing services in particular. Services like Uber and Lyft have provided new transportation options, but their impact on the carbon footprint has to be considered. We’re looking at developing a system where the number of passenger-miles per unit of energy is maximized through something called “shared mobility on demand services.” Using the alternating minimization approach, we’ve developed an algorithm that can determine the optimal route for multiple passengers traveling to various destinations.
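    As a rough illustration of the alternating-minimization idea — not the team's actual algorithm — one can alternate between two sub-problems: fix the passenger-to-vehicle assignment and optimize each route, then fix the routes and reassign passengers. The Manhattan distances, exhaustive route ordering, and all coordinates below are toy assumptions:

```python
import itertools

def route_length(depot, stops):
    """Total Manhattan distance of visiting stops in order from a depot."""
    dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    total, cur = 0, depot
    for s in stops:
        total += dist(cur, s)
        cur = s
    return total

def best_order(depot, stops):
    """Exhaustive best visiting order (fine for a handful of stops)."""
    return min(itertools.permutations(stops),
               key=lambda p: route_length(depot, p))

def alternating_min(depots, passengers, iters=5):
    """Toy alternating minimization for shared rides."""
    assign = {d: [] for d in depots}
    for i, p in enumerate(passengers):   # arbitrary initial split
        assign[depots[i % len(depots)]].append(p)
    for _ in range(iters):
        # (a) fix the assignment, re-optimize each vehicle's visiting order
        for d in depots:
            if assign[d]:
                assign[d] = list(best_order(d, assign[d]))
        # (b) fix the routes, move each passenger to the vehicle whose
        #     route grows least when that passenger is appended
        new = {d: [] for d in depots}
        for d in depots:
            for p in assign[d]:
                target = min(depots,
                             key=lambda t: route_length(t, new[t] + [p])
                                           - route_length(t, new[t]))
                new[target].append(p)
        assign = new
    return assign

# Two vehicles at opposite corners; passengers cluster to the nearer one:
result = alternating_min([(0, 0), (10, 10)],
                         [(1, 1), (2, 0), (9, 9), (10, 8)])
```

    Each half-step can only lower (or keep) the total route cost, which is the essence of alternating minimization; the real system would of course handle pickups, drop-offs, and timing constraints far beyond this sketch.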

    As with the pilot-autopilot dynamic, human behavior is at play here. In behavioral economics there is an interesting concept of behavioral dynamics known as Prospect Theory. If we give passengers options with regards to which route their shared ride service will take, we are empowering them with free will to accept or reject a route. Prospect Theory shows that if you can use pricing as an incentive, people are much more loss-averse, so they would be willing to walk a bit extra or wait a few minutes longer to join a low-cost ride with an optimized route. If everyone utilized a system like this, the carbon footprint of ride-sharing services could decrease substantially.

    Q: What other ways are you using intelligent systems to promote sustainability?

    A: Renewable energy and sustainability are huge drivers for our research. To enable a world where all of our energy is coming from renewable sources like solar or wind, we need to develop a smart grid that can account for the fact that the sun isn’t always shining and wind isn’t always blowing. These uncertainties are the biggest hurdles to achieving an all-renewable grid. Of course, there are many technologies being developed for batteries that can help store renewable energy, but we are taking a different approach.

    We have created algorithms that can optimally schedule distributed energy resources within the grid — this includes making decisions on when to use onsite generators, how to operate storage devices, and when to call upon demand response technologies, all in response to the economics of using such resources and their physical constraints. If we can develop an interconnected smart grid where, for example, the air conditioning setting in a house is automatically set to 72 degrees instead of 69 degrees when demand is high, there could be substantial savings in energy use without impacting human comfort. In one of our studies, we applied a distributed proximal atomic coordination algorithm to the grid in Tokyo to demonstrate how this intelligent system could account for the uncertainties present in a grid powered by renewable resources.
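    The scheduling idea can be sketched in miniature — a deliberately simplified illustration, not the team's proximal atomic coordination algorithm. Here a flexible load (say, pre-cooling a building) must consume a fixed amount of energy over the day, and we shift it toward the cheapest periods of a hypothetical price curve; with a simple per-period cap, greedily filling the cheapest periods first is optimal.

```python
import numpy as np

# Hypothetical day-ahead prices ($/kWh) for six periods, and a flexible
# load that must consume 10 kWh total, at most 3 kWh in any one period.
prices = np.array([0.10, 0.30, 0.50, 0.40, 0.20, 0.15])
total_kwh, cap = 10.0, 3.0

schedule = np.zeros_like(prices)
remaining = total_kwh
for i in np.argsort(prices):          # cheapest periods first
    schedule[i] = min(cap, remaining)
    remaining -= schedule[i]

print("schedule (kWh):", schedule)
print(f"cost: ${schedule @ prices:.2f}")
```

    Real grid scheduling adds network constraints, storage dynamics, and coordination across many such devices, which is where distributed algorithms like the one described above come in.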


    Overcoming a bottleneck in carbon dioxide conversion

    If researchers could find a way to chemically convert carbon dioxide into fuels or other products, they might make a major dent in greenhouse gas emissions. But many such processes that have seemed promising in the lab haven’t performed as expected in scaled-up formats that would be suitable for use with a power plant or other emissions sources.

    Now, researchers at MIT have identified, quantified, and modeled a major reason for poor performance in such conversion systems. The culprit turns out to be a local depletion of the carbon dioxide gas right next to the electrodes being used to catalyze the conversion. The problem can be alleviated, the team found, by simply pulsing the current off and on at specific intervals, allowing time for the gas to build back up to the needed levels next to the electrode.

    The findings, which could spur progress on developing a variety of materials and designs for electrochemical carbon dioxide conversion systems, were published today in the journal Langmuir, in a paper by MIT postdoc Álvaro Moreno Soto, graduate student Jack Lake, and professor of mechanical engineering Kripa Varanasi.

    “Carbon dioxide mitigation is, I think, one of the important challenges of our time,” Varanasi says. While much of the research in the area has focused on carbon capture and sequestration, in which the gas is pumped into some kind of deep underground reservoir or converted to an inert solid such as limestone, another promising avenue has been converting the gas into other carbon compounds such as methane or ethanol, to be used as fuel, or ethylene, which serves as a precursor to useful polymers.

    There are several ways to do such conversions, including electrochemical, thermocatalytic, photothermal, or photochemical processes. “Each of these has problems or challenges,” Varanasi says. The thermal processes require very high temperatures, and they don’t produce very high-value chemical products, which is a challenge with the light-activated processes as well, he says. “Efficiency is always at play, always an issue.”

    The team has focused on the electrochemical approaches, with a goal of getting “higher-C products” — compounds that contain more carbon atoms and tend to be higher-value fuels because of their energy per weight or volume. In these reactions, the biggest challenge has been curbing competing reactions that can take place at the same time, especially the splitting of water molecules into oxygen and hydrogen.

    The reactions take place as a stream of liquid electrolyte with the carbon dioxide dissolved in it passes over a metal catalytic surface that is electrically charged. But as the carbon dioxide gets converted, it leaves behind a region in the electrolyte stream where it has essentially been used up, and so the reaction within this depleted zone turns toward water splitting instead. This unwanted reaction uses up energy and greatly reduces the overall efficiency of the conversion process, the researchers found.

    “There’s a number of groups working on this, and a number of catalysts that are out there,” Varanasi says. “In all of these, I think the hydrogen co-evolution becomes a bottleneck.”

    One way of counteracting this depletion, they found, is a pulsed system — a cycle of simply turning off the voltage, stopping the reaction, giving the carbon dioxide time to spread back into the depleted zone and reach usable levels again, and then resuming the reaction.

    Often, the researchers say, groups have found promising catalyst materials but haven’t run their lab tests long enough to observe these depletion effects, and thus have been frustrated in trying to scale up their systems. Furthermore, the concentration of carbon dioxide next to the catalyst dictates the products that are made. Hence, depletion can also change the mix of products that are produced and can make the process unreliable. “If you want to be able to make a system that works at industrial scale, you need to be able to run things over a long period of time,” Varanasi says, “and you need to not have these kinds of effects that reduce the efficiency or reliability of the process.”

    The team studied three different catalyst materials, including copper, and “we really focused on making sure that we understood and can quantify the depletion effects,” Lake says. In the process they were able to develop a simple and reliable way of monitoring the efficiency of the conversion process as it happens, by measuring the changing pH levels, a measure of acidity, in the system’s electrolyte.

    In their tests, they used more sophisticated analytical tools to characterize reaction products, including gas chromatography for analysis of the gaseous products, and nuclear magnetic resonance characterization for the system’s liquid products. But their analysis showed that the simple pH measurement of the electrolyte next to the electrode during operation could provide a sufficient measure of the efficiency of the reaction as it progressed.

    This ability to easily monitor the reaction in real time could ultimately lead to a system optimized by machine-learning methods, controlling the production rate of the desired compounds through continuous feedback, Moreno Soto says.

    Now that the process is understood and quantified, other approaches to mitigating the carbon dioxide depletion might be developed, the researchers say, and could easily be tested using their methods.

    This work shows, Lake says, that “no matter what your catalyst material is” in such an electrocatalytic system, “you’ll be affected by this problem.” And now, by using the model they developed, it’s possible to determine exactly what kind of time window needs to be evaluated to get an accurate sense of the material’s overall efficiency and what kind of system operations could maximize its effectiveness.

    The research was supported by Shell, through the MIT Energy Initiative.


    A dirt cheap solution? Common clay materials may help curb methane emissions

    Methane is a far more potent greenhouse gas than carbon dioxide, and it has a pronounced effect within the first two decades of its presence in the atmosphere. At the recent international climate negotiations in Glasgow, abatement of methane emissions was identified as a major priority in attempts to curb global climate change quickly.

    Now, a team of researchers at MIT has come up with a promising approach to controlling methane emissions and removing the gas from the air, using an inexpensive and abundant type of clay called zeolite. The findings are described in the journal ACS Environmental Au, in a paper by doctoral student Rebecca Brenneis, Associate Professor Desiree Plata, and two others.

    Although many people associate atmospheric methane with drilling and fracking for oil and natural gas, those sources only account for about 18 percent of global methane emissions, Plata says. The vast majority of emitted methane comes from such sources as slash-and-burn agriculture, dairy farming, coal and ore mining, wetlands, and melting permafrost. “A lot of the methane that comes into the atmosphere is from distributed and diffuse sources, so we started to think about how you could take that out of the atmosphere,” she says.

    The answer the researchers found was something dirt cheap — in fact, a special kind of “dirt,” or clay. They used zeolite clays, a material so inexpensive that it is currently used to make cat litter. Treating the zeolite with a small amount of copper, the team found, makes the material very effective at absorbing methane from the air, even at extremely low concentrations.

    The system is simple in concept, though much work remains on the engineering details. In their lab tests, tiny particles of the copper-enhanced zeolite material, similar to cat litter, were packed into a reaction tube, which was then heated from the outside as the stream of gas, with methane levels ranging from just 2 parts per million up to 2 percent concentration, flowed through the tube. That range covers everything that might exist in the atmosphere, down to subflammable levels that cannot be burned or flared directly.

    The process has several advantages over other approaches to removing methane from air, Plata says. Other methods tend to use expensive catalysts such as platinum or palladium, require high temperatures of at least 600 degrees Celsius, and tend to require complex cycling between methane-rich and oxygen-rich streams, making the devices both more complicated and more risky, as methane and oxygen are highly combustible on their own and in combination.

    “The 600 degrees where they run these reactors makes it almost dangerous to be around the methane,” as well as the pure oxygen, Brenneis says. “They’re solving the problem by just creating a situation where there’s going to be an explosion.” Other engineering complications also arise from the high operating temperatures. Unsurprisingly, such systems have not found much use.

    As for the new process, “I think we’re still surprised at how well it works,” says Plata, who is the Gilbert W. Winslow Associate Professor of Civil and Environmental Engineering. The process seems to have its peak effectiveness at about 300 degrees Celsius, which requires far less energy for heating than other methane capture processes. It also can work at concentrations of methane lower than other methods can address, even small fractions of 1 percent, which most methods cannot remove, and does so in air rather than pure oxygen, a major advantage for real-world deployment.

    The method converts the methane into carbon dioxide. That might sound like a bad thing, given the worldwide efforts to combat carbon dioxide emissions. “A lot of people hear ‘carbon dioxide’ and they panic; they say ‘that’s bad,’” Plata says. But she points out that carbon dioxide is much less impactful in the atmosphere than methane, which is about 80 times stronger as a greenhouse gas over the first 20 years, and about 25 times stronger over the first century. This effect arises from the fact that methane turns into carbon dioxide naturally over time in the atmosphere. By accelerating that process, this method would drastically reduce the near-term climate impact, she says. And even converting half of the atmosphere’s methane to carbon dioxide would increase levels of the latter by less than 1 part per million (about 0.2 percent of today’s atmospheric carbon dioxide) while avoiding about 16 percent of total radiative warming.
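    The arithmetic behind that last figure is easy to check. Assuming roughly 1.9 ppm of atmospheric methane and 420 ppm of carbon dioxide — approximate present-day values not stated in the article — and noting that oxidizing one CH4 molecule yields one CO2 molecule:

```python
# Back-of-envelope check of the claim, using approximate mixing ratios.
ch4_ppm = 1.9     # approximate atmospheric methane (assumed)
co2_ppm = 420.0   # approximate atmospheric CO2 (assumed)

# Converting half of atmospheric methane adds half its mixing ratio as CO2.
added_co2 = 0.5 * ch4_ppm
relative_increase = added_co2 / co2_ppm

print(f"CO2 added: {added_co2:.2f} ppm "
      f"({100 * relative_increase:.2f}% of current CO2)")
```

    The result — under 1 ppm of added CO2, or roughly 0.2 percent of current levels — matches the figures quoted above.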

    The ideal location for such systems, the team concluded, would be in places where there is a relatively concentrated source of methane, such as dairy barns and coal mines. These sources already tend to have powerful air-handling systems in place, since a buildup of methane can be a fire, health, and explosion hazard. To surmount the outstanding engineering details, the team has just been awarded a $2 million grant from the U.S. Department of Energy to continue to develop specific equipment for methane removal in these types of locations.

    “The key advantage of mining air is that we move a lot of it,” she says. “You have to pull fresh air in to enable miners to breathe, and to reduce explosion risks from enriched methane pockets. So, the volumes of air that are moved in mines are enormous.” The concentration of methane is too low to ignite, but it’s in the catalysts’ sweet spot, she says.

    Adapting the technology to specific sites should be relatively straightforward. The lab setup the team used in their tests consisted of “only a few components, and the technology you would put in a cow barn could be pretty simple as well,” Plata says. However, large volumes of gas do not flow that easily through clay, so the next phase of the research will focus on ways of structuring the clay material in a multiscale, hierarchical configuration that will aid air flow.

    “We need new technologies for oxidizing methane at concentrations below those used in flares and thermal oxidizers,” says Rob Jackson, a professor of earth systems science at Stanford University, who was not involved in this work. “There isn’t a cost-effective technology today for oxidizing methane at concentrations below about 2,000 parts per million.”

    Jackson adds, “Many questions remain for scaling this and all similar work: How quickly will the catalyst foul under field conditions? Can we get the required temperatures closer to ambient conditions? How scalable will such technologies be when processing large volumes of air?”

    One potential major advantage of the new system is that the chemical process involved releases heat. By catalytically oxidizing the methane, in effect the process is a flame-free form of combustion. If the methane concentration is above 0.5 percent, the heat released is greater than the heat used to get the process started, and this heat could be used to generate electricity.

    The team’s calculations show that “at coal mines, you could potentially generate enough heat to generate electricity at the power plant scale, which is remarkable because it means that the device could pay for itself,” Plata says. “Most air-capture solutions cost a lot of money and would never be profitable. Our technology may one day be a counterexample.”

    Using the new grant money, she says, “over the next 18 months we’re aiming to demonstrate a proof of concept that this can work in the field,” where conditions can be more challenging than in the lab. Ultimately, they hope to be able to make devices that would be compatible with existing air-handling systems and could simply be an extra component added in place. “The coal mining application is meant to be at a stage that you could hand to a commercial builder or user three years from now,” Plata says.

    In addition to Plata and Brenneis, the team included Yale University PhD student Eric Johnson and former MIT postdoc Wenbo Shi. The work was supported by the Gerstner Philanthropies, Vanguard Charitable Trust, the Betty Moore Inventor Fellows Program, and MIT’s Research Support Committee.


    Seeing the plasma edge of fusion experiments in new ways with artificial intelligence

    To make fusion energy a viable resource for the world’s energy grid, researchers need to understand the turbulent motion of plasmas: a mix of ions and electrons swirling around in reactor vessels. The plasma particles, following magnetic field lines in toroidal chambers known as tokamaks, must be confined long enough for fusion devices to produce significant gains in net energy, a challenge when the hot edge of the plasma (over 1 million degrees Celsius) is just centimeters away from the much cooler solid walls of the vessel.

    Abhilash Mathews, a PhD candidate in the Department of Nuclear Science and Engineering working at MIT’s Plasma Science and Fusion Center (PSFC), believes this plasma edge to be a particularly rich source of unanswered questions. A turbulent boundary, it is central to understanding plasma confinement, fueling, and the potentially damaging heat fluxes that can strike material surfaces — factors that impact fusion reactor designs.

    To better understand edge conditions, scientists focus on modeling turbulence at this boundary using numerical simulations that will help predict the plasma’s behavior. However, “first principles” simulations of this region are among the most challenging and time-consuming computations in fusion research. Progress could be accelerated if researchers could develop “reduced” computer models that run much faster, but with quantified levels of accuracy.

    For decades, tokamak physicists have regularly used a reduced “two-fluid theory” rather than higher-fidelity models to simulate boundary plasmas in experiments, despite uncertainty about its accuracy. In a pair of recent publications, Mathews begins directly testing the accuracy of this reduced plasma turbulence model in a new way: by combining physics with machine learning.

    “A successful theory is supposed to predict what you’re going to observe,” explains Mathews, “for example, the temperature, the density, the electric potential, the flows. And it’s the relationships between these variables that fundamentally define a turbulence theory. What our work essentially examines is the dynamic relationship between two of these variables: the turbulent electric field and the electron pressure.”

    In the first paper, published in Physical Review E, Mathews employs a novel deep-learning technique that uses artificial neural networks to build representations of the equations governing the reduced fluid theory. With this framework, he demonstrates a way to compute the turbulent electric field from an electron pressure fluctuation in the plasma consistent with the reduced fluid theory. Models commonly used to relate the electric field to pressure break down when applied to turbulent plasmas, but this one is robust even to noisy pressure measurements.
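    The flavor of this physics-informed approach can be sketched on a toy problem — not the plasma equations, just the principle: choose model parameters so that the governing equation’s residual vanishes at collocation points, rather than fitting measured data directly. Here the “network” is a simple polynomial and the “physics” is the equation du/dx = -u with u(0) = 1, whose exact solution is exp(-x); because the model is linear in its parameters, the physics-informed fit reduces to a least-squares solve.

```python
import numpy as np

# Collocation points where the "physics" (the ODE residual) is enforced.
x = np.linspace(0.0, 2.0, 50)
deg = 8  # polynomial "network": u(x) = sum_i w_i x^i

# Design matrices for u and du/dx under the polynomial model.
Phi = np.vander(x, deg + 1, increasing=True)        # Phi[j, i] = x_j**i
dPhi = np.zeros_like(Phi)
dPhi[:, 1:] = Phi[:, :-1] * np.arange(1, deg + 1)   # d/dx x^i = i x^(i-1)

# Physics residual rows: du/dx + u = 0  ->  (dPhi + Phi) w = 0
A_res = dPhi + Phi
b_res = np.zeros(len(x))

# Boundary-condition row (weighted): u(0) = 1
bc_weight = 10.0
A_bc = bc_weight * np.vander([0.0], deg + 1, increasing=True)
b_bc = np.array([bc_weight * 1.0])

# Solve the combined physics + boundary least-squares problem.
A = np.vstack([A_res, A_bc])
b = np.concatenate([b_res, b_bc])
w, *_ = np.linalg.lstsq(A, b, rcond=None)

u = Phi @ w
err = np.max(np.abs(u - np.exp(-x)))
print(f"max |u - exp(-x)| on [0, 2]: {err:.2e}")
```

    The fitted model recovers the exact solution without ever seeing “data” for u itself — only the governing equation and a boundary value — which is the essence of building representations of the governing equations rather than interpolating measurements.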

    In the second paper, published in Physics of Plasmas, Mathews further investigates this connection, contrasting it against higher-fidelity turbulence simulations. This comparison of turbulence across models — the first of its kind — had previously been difficult, if not impossible, to carry out precisely. Mathews finds that in plasmas relevant to existing fusion devices, the reduced fluid model’s predicted turbulent fields are consistent with high-fidelity calculations. In this sense, the reduced turbulence theory works. But to fully validate it, “one should check every connection between every variable,” says Mathews.

    Mathews’ advisor, Principal Research Scientist Jerry Hughes, notes that plasma turbulence is notoriously difficult to simulate, more so than the familiar turbulence seen in air and water. “This work shows that, under the right set of conditions, physics-informed machine-learning techniques can paint a very full picture of the rapidly fluctuating edge plasma, beginning from a limited set of observations. I’m excited to see how we can apply this to new experiments, in which we essentially never observe every quantity we want.”

    These physics-informed deep-learning methods pave new ways in testing old theories and expanding what can be observed from new experiments. David Hatch, a research scientist at the Institute for Fusion Studies at the University of Texas at Austin, believes these applications are the start of a promising new technique.

    “Abhi’s work is a major achievement with the potential for broad application,” he says. “For example, given limited diagnostic measurements of a specific plasma quantity, physics-informed machine learning could infer additional plasma quantities in a nearby domain, thereby augmenting the information provided by a given diagnostic. The technique also opens new strategies for model validation.”

    Mathews sees exciting research ahead.

    “Translating these techniques into fusion experiments for real edge plasmas is one goal we have in sight, and work is currently underway,” he says. “But this is just the beginning.”

    Mathews was supported in this work by the Manson Benedict Fellowship, Natural Sciences and Engineering Research Council of Canada, and U.S. Department of Energy Office of Science under the Fusion Energy Sciences program.