More stories

  • Using artificial intelligence to find anomalies hiding in massive datasets

    Identifying a malfunction in the nation’s power grid can be like trying to find a needle in an enormous haystack. Hundreds of thousands of interrelated sensors spread across the U.S. capture data on electric current, voltage, and other critical information in real time, often taking multiple recordings per second.

    Researchers at the MIT-IBM Watson AI Lab have devised a computationally efficient method that can automatically pinpoint anomalies in those data streams in real time. They demonstrated that their artificial intelligence method, which learns to model the interconnectedness of the power grid, is much better at detecting these glitches than some other popular techniques.

    Because the machine-learning model they developed does not require annotated data on power grid anomalies for training, it would be easier to apply in real-world situations where high-quality, labeled datasets are often hard to come by. The model is also flexible and can be applied to other situations where a vast number of interconnected sensors collect and report data, like traffic monitoring systems. It could, for example, identify traffic bottlenecks or reveal how traffic jams cascade.

    “In the case of a power grid, people have tried to capture the data using statistics and then define detection rules with domain knowledge to say that, for example, if the voltage surges by a certain percentage, then the grid operator should be alerted. Such rule-based systems, even empowered by statistical data analysis, require a lot of labor and expertise. We show that we can automate this process and also learn patterns from the data using advanced machine-learning techniques,” says senior author Jie Chen, a research staff member and manager of the MIT-IBM Watson AI Lab.

    The co-author is Enyan Dai, an MIT-IBM Watson AI Lab intern and graduate student at the Pennsylvania State University. This research will be presented at the International Conference on Learning Representations.

    Probing probabilities

    The researchers began by defining an anomaly as an event that has a low probability of occurring, like a sudden spike in voltage. They treat the power grid data as a probability distribution, so if they can estimate the probability densities, they can identify the low-density values in the dataset. The data points that are least likely to occur correspond to anomalies.
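
    As a minimal sketch of that idea (not the researchers' actual model), one can fit a simple density estimator to sensor readings and flag the lowest-density points; the kernel density estimator, the percentile threshold, and the toy voltage data below are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.neighbors import KernelDensity

    # Toy sensor readings: mostly nominal voltage values plus a few injected spikes.
    rng = np.random.default_rng(0)
    readings = np.concatenate([rng.normal(120.0, 0.5, 1000),   # nominal voltage
                               [128.0, 131.5, 112.0]])         # injected anomalies

    # Fit a simple density model (the paper uses a learned normalizing flow instead).
    kde = KernelDensity(bandwidth=0.3).fit(readings.reshape(-1, 1))
    log_density = kde.score_samples(readings.reshape(-1, 1))

    # Flag the lowest-density readings as candidate anomalies.
    threshold = np.percentile(log_density, 1)      # bottom 1% of density
    anomalies = readings[log_density < threshold]
    print(anomalies)
    ```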

    Estimating those probabilities is no easy task, especially since each sample captures multiple time series, and each time series is a set of multidimensional data points recorded over time. In addition, the sensors that capture all that data depend on one another: they are connected in a particular configuration, and one sensor can sometimes influence others.

    To learn the complex conditional probability distribution of the data, the researchers used a special type of deep-learning model called a normalizing flow, which is particularly effective at estimating the probability density of a sample.

    They augmented that normalizing flow model using a type of graph, known as a Bayesian network, which can learn the complex, causal relationship structure between different sensors. This graph structure enables the researchers to see patterns in the data and estimate anomalies more accurately, Chen explains.

    “The sensors are interacting with each other, and they have causal relationships and depend on each other. So, we have to be able to inject this dependency information into the way that we compute the probabilities,” he says.

    This Bayesian network factorizes, or breaks down, the joint probability of the multiple time series data into less complex, conditional probabilities that are much easier to parameterize, learn, and evaluate. This allows the researchers to estimate the likelihood of observing certain sensor readings, and to identify those readings that have a low probability of occurring, meaning they are anomalies.
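
    The sketch below illustrates that factorization in miniature. The three-sensor graph, the Gaussian stand-in for each conditional, and the variable names are illustrative assumptions; in the actual model the graph is learned and each conditional density is a normalizing flow. The joint log-probability is simply the sum of per-sensor conditional terms, and unusually low values mark candidate anomalies.

    ```python
    import math

    # Toy learned graph: each sensor lists its parent sensors (a DAG).
    parents = {"s1": [], "s2": ["s1"], "s3": ["s1", "s2"]}

    def cond_log_prob(value, parent_values):
        """Stand-in for a learned conditional density (the paper uses normalizing
        flows per node): a unit-variance Gaussian centered on the parents' mean."""
        mean = sum(parent_values) / len(parent_values) if parent_values else 0.0
        return -0.5 * math.log(2 * math.pi) - 0.5 * (value - mean) ** 2

    def joint_log_prob(readings):
        """log p(x) = sum_i log p(x_i | parents(x_i)) under the Bayesian network."""
        return sum(
            cond_log_prob(readings[s], [readings[p] for p in parents[s]])
            for s in parents
        )

    # Readings with low joint log-probability are flagged as anomalies.
    print(joint_log_prob({"s1": 0.1, "s2": 0.0, "s3": 0.2}))   # typical pattern
    print(joint_log_prob({"s1": 0.1, "s2": 5.0, "s3": 0.2}))   # suspicious spike
    ```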

    Their method is especially powerful because this complex graph structure does not need to be defined in advance — the model can learn the graph on its own, in an unsupervised manner.

    A powerful technique

    They tested this framework by seeing how well it could identify anomalies in power grid data, traffic data, and water system data. The datasets they used for testing contained anomalies that had been identified by humans, so the researchers were able to compare the anomalies their model identified with real glitches in each system.

    Their model outperformed all the baselines by detecting a higher percentage of true anomalies in each dataset.

    “For the baselines, a lot of them don’t incorporate graph structure. That perfectly corroborates our hypothesis. Figuring out the dependency relationships between the different nodes in the graph is definitely helping us,” Chen says.

    Their methodology is also flexible. Armed with a large, unlabeled dataset, they can tune the model to make effective anomaly predictions in other situations, like traffic patterns.

    Once the model is deployed, it would continue to learn from a steady stream of new sensor data, adapting to possible drift of the data distribution and maintaining accuracy over time, says Chen.

    Though this particular project is close to its end, he looks forward to applying the lessons he learned to other areas of deep-learning research, particularly on graphs.

    Chen and his colleagues could use this approach to develop models that map other complex, conditional relationships. They also want to explore how they can efficiently learn these models when the graphs become enormous, perhaps with millions or billions of interconnected nodes. And rather than finding anomalies, they could also use this approach to improve the accuracy of forecasts based on datasets or streamline other classification techniques.

    This work was funded by the MIT-IBM Watson AI Lab and the U.S. Department of Energy.

  • Tuning in to invisible waves on the JET tokamak

    Research scientist Alex Tinguely is readjusting to Cambridge and Boston.

    As a postdoc with the Plasma Science and Fusion Center (PSFC), the MIT graduate spent the last two years in Oxford, England, a city he recalls can be traversed entirely “in the time it takes to walk from MIT to Harvard.” With its ancient stone walls, cathedrals, cobblestone streets, and winding paths, that small city was his home base for a big project: JET, a tokamak that is currently the largest operating magnetic fusion energy experiment in the world.

    Located at the Culham Center for Fusion Energy (CCFE), part of the U.K. Atomic Energy Authority, this key research center of the European Fusion Program has recently announced historic success. Using a 50-50 deuterium-tritium fuel mixture for the first time since 1997, JET set a record fusion power output of 10 megawatts sustained over five seconds, producing 59 megajoules of fusion energy and more than doubling the 22-megajoule record it set in 1997. As a member of the JET Team, Tinguely has overseen the measurement and instrumentation systems (diagnostics) contributed by the MIT group.

    A lucky chance

    The postdoctoral opportunity arose just as Tinguely was graduating with a PhD in physics from MIT. Managed by Professor Miklos Porkolab as the principal investigator for over 20 years, this postdoctoral program has prepared multiple young researchers for careers in fusion facilities around the world. The collaborative research provided Tinguely the chance to work on a fusion device that would be adding tritium to the usual deuterium fuel.

    Fusion, the process that fuels the sun and other stars, could provide a long-term source of carbon-free power on Earth, if it can be harnessed. For decades researchers have tried to create an artificial star in a doughnut-shaped bottle, or “tokamak,” using magnetic fields to keep the turbulent plasma fuel confined and away from the walls of its container long enough for fusion to occur.

    In his graduate student days at MIT, Tinguely worked on the PSFC’s Alcator C-Mod tokamak, now decommissioned, which, like most magnetic fusion devices, used deuterium to create the plasmas for experiments. JET, since beginning operation in 1983, has done the same, later joining a small number of facilities that added tritium, a radioactive isotope of hydrogen. While this addition increases the amount of fusion, it also creates much more radiation and activation.

    Tinguely considers himself fortunate to have been placed at JET.

    “There aren’t that many operating tokamaks in the U.S. right now,” says Tinguely, “not to mention one that would be running deuterium-tritium (DT), which hasn’t been run for over 20 years, and which would be making some really important measurements. I got a very lucky spot where I was an MIT postdoc, but I lived in Oxford, working on a very international project.”

    Strumming magnetic field lines

    The measurements that interest Tinguely are of low-frequency electromagnetic waves in tokamak plasmas. Tinguely uses an antenna diagnostic developed by MIT, EPFL Swiss Plasma Center, and CCFE to probe the so-called Alfvén eigenmodes when they are stable, before the energetic alpha particles produced by DT fusion plasmas can drive them toward instability.

    What makes MIT’s “Alfvén Eigenmode Active Diagnostic” essential is that without it researchers cannot see, or measure, stable eigenmodes. Unstable modes show up clearly as magnetic fluctuations in the data, but stable waves are invisible without prompting from the antenna. These measurements help researchers understand the physics of Alfvén waves and their potential for degrading fusion performance, providing insights that will be increasingly important for future DT fusion devices.

    Tinguely likens the diagnostic to fingers on guitar strings.

    “The magnetic field lines in the tokamak are like guitar strings. If you have nothing to give energy to the strings — or give energy to the waves of the magnetic field lines — they just sit there, they don’t do anything. The energetic plasma particles can essentially ‘play the guitar strings,’ strum the magnetic field lines of the plasma, and that’s when you can see the waves in your plasma. But if the energetic particle drive of the waves is not strong enough you won’t see them, so you need to come along and ‘pluck the strings’ with our antenna. And that’s how you learn some information about the waves.”

    Much of Tinguely’s experience on JET took place during the Covid-19 pandemic, when off-site operation and analysis were the norm. However, because the MIT diagnostic needed to be physically turned on and off, someone from Tinguely’s team needed to be on site twice a day, a routine that became even less convenient when tritium was introduced.

    “When you have deuterium and tritium, you produce a lot of neutrons. So, some of the buildings became off-limits during operation, which meant they had to be turned on really early in the morning, like 6:30 a.m., and then turned off very late at night, around 10:30 p.m.”

    Looking to the future

    Now a research scientist at the PSFC, Tinguely continues to work at JET remotely. He sometimes wishes he could again ride that train from Oxford to Culham — which he fondly remembers for its clean, comfortable efficiency — to see work colleagues and to visit local friends. The life he created for himself in England included practice and performance with the 125-year-old Oxford Bach Choir, as well as weekly dinner service at The Gatehouse, a facility that offers free support for the local homeless and low-income communities.

    “Being back is exciting too,” he says. “It’s fun to see how things have changed, how people and projects have grown, what new opportunities have arrived.”

    He refers specifically to a project that is beginning to take up more of his time: SPARC, the tokamak the PSFC supports in collaboration with Commonwealth Fusion Systems. Designed to use deuterium-tritium to make net fusion gains, SPARC will be able to use the latest research on JET to advantage. Tinguely is already exploring how his expertise with Alfvén eigenmodes can support the experiment.

    “I actually had an opportunity to do my PhD — or DPhil as they would call it — at Oxford University, but I went to MIT for grad school instead,” Tinguely reveals. “So, this is almost like closure, in a sense. I got to have my Oxford experience in the end, just in a different way, and have the MIT experience too.”

    He adds, “And I see myself being here at MIT for some time.”

  • More sensitive X-ray imaging

    Scintillators are materials that emit light when bombarded with high-energy particles or X-rays. In medical or dental X-ray systems, they convert incoming X-ray radiation into visible light that can then be captured using film or photosensors. They’re also used for night-vision systems and for research, such as in particle detectors or electron microscopes.

    Researchers at MIT have now shown how one could improve the efficiency of scintillators by at least tenfold, and perhaps even a hundredfold, by changing the material’s surface to create certain nanoscale configurations, such as arrays of wave-like ridges. While past attempts to develop more efficient scintillators have focused on finding new materials, the new approach could in principle work with any of the existing materials.

    Though it will require more time and effort to integrate their scintillators into existing X-ray machines, the team believes that this method might lead to improvements in medical diagnostic X-rays or CT scans, to reduce dose exposure and improve image quality. In other applications, such as X-ray inspection of manufactured parts for quality control, the new scintillators could enable inspections with higher accuracy or at faster speeds.

    The findings are described today in the journal Science, in a paper by MIT doctoral students Charles Roques-Carmes and Nicholas Rivera; MIT professors Marin Soljacic, Steven Johnson, and John Joannopoulos; and 10 others.

    While scintillators have been in use for some 70 years, much of the research in the field has focused on developing new materials that produce brighter or faster light emissions. The new approach instead applies advances in nanotechnology to existing materials. By creating patterns in scintillator materials at a length scale comparable to the wavelengths of the light being emitted, the team found that it was possible to dramatically change the material’s optical properties.

    To make what they coined “nanophotonic scintillators,” Roques-Carmes says, “you can directly make patterns inside the scintillators, or you can glue on another material that would have holes on the nanoscale. The specifics depend on the exact structure and material.” For this research, the team took a scintillator and made holes spaced apart by roughly one optical wavelength, or about 500 nanometers (billionths of a meter).

    “The key to what we’re doing is a general theory and framework we have developed,” Rivera says. This allows the researchers to calculate the scintillation levels that would be produced by any arbitrary configuration of nanophotonic structures. The scintillation process itself involves a series of steps, making it complicated to unravel. The framework the team developed involves integrating three different types of physics, Roques-Carmes says. Using this system they have found a good match between their predictions and the results of their subsequent experiments.

    The experiments showed a tenfold improvement in emission from the treated scintillator. “So, this is something that might translate into applications for medical imaging, which are optical photon-starved, meaning the conversion of X-rays to optical light limits the image quality. [In medical imaging,] you do not want to irradiate your patients with too much of the X-rays, especially for routine screening, and especially for young patients as well,” Roques-Carmes says.

    “We believe that this will open a new field of research in nanophotonics,” he adds. “You can use a lot of the existing work and research that has been done in the field of nanophotonics to improve significantly on existing materials that scintillate.”

    “The research presented in this paper is hugely significant,” says Rajiv Gupta, chief of neuroradiology at Massachusetts General Hospital and an associate professor at Harvard Medical School, who was not associated with this work. “Nearly all detectors used in the $100 billion [medical X-ray] industry are indirect detectors,” which is the type of detector the new findings apply to, he says. “Everything that I use in my clinical practice today is based on this principle. This paper improves the efficiency of this process by 10 times. If this claim is even partially true, say the improvement is two times instead of 10 times, it would be transformative for the field!”

    While their experiments proved that a tenfold improvement in emission could be achieved in particular systems, Soljacic says that by further fine-tuning the design of the nanoscale patterning, “we also show that you can get up to 100 times [improvement] in certain scintillator systems, and we believe we also have a path toward making it even better.”

    Soljacic points out that in other areas of nanophotonics, a field that deals with how light interacts with materials that are structured at the nanometer scale, the development of computational simulations has enabled rapid, substantial improvements, for example in the development of solar cells and LEDs. The new models this team developed for scintillating materials could facilitate similar leaps in this technology, he says.

    Nanophotonics techniques “give you the ultimate power of tailoring and enhancing the behavior of light,” Soljacic says. “But until now, this promise, this ability to do this with scintillation was unreachable because modeling the scintillation was very challenging. Now, this work for the first time opens up this field of scintillation, fully opens it, for the application of nanophotonics techniques.” More generally, the team believes that the combination of nanophotonics and scintillators might ultimately enable higher resolution, reduced X-ray dose, and energy-resolved X-ray imaging.

    This work is “very original and excellent,” says Eli Yablonovitch, a professor of Electrical Engineering and Computer Sciences at the University of California at Berkeley, who was not associated with this research. “New scintillator concepts are very important in medical imaging and in basic research.”

    While the concept still needs to be proven in a practical device, Yablonovitch says, “After years of research on photonic crystals in optical communication and other fields, it’s long overdue that photonic crystals should be applied to scintillators, which are of great practical importance yet have been overlooked” until this work.

    The research team included Ali Ghorashi, Steven Kooi, Yi Yang, Zin Lin, Justin Beroz, Aviram Massuda, Jamison Sloan, and Nicolas Romeo at MIT; Yang Yu at Raith America, Inc.; and Ido Kaminer at Technion in Israel. The work was supported, in part, by the U.S. Army Research Office and the U.S. Army Research Laboratory through the Institute for Soldier Nanotechnologies, by the Air Force Office of Scientific Research, and by a Mathworks Engineering Fellowship.

  • Solar-powered system offers a route to inexpensive desalination

    An estimated two-thirds of humanity is affected by shortages of water, and many such areas in the developing world also face a lack of dependable electricity. Widespread research efforts have thus focused on ways to desalinate seawater or brackish water using just solar heat. Many such efforts have run into problems with fouling of equipment caused by salt buildup, however, which often adds complexity and expense.

    Now, a team of researchers at MIT and in China has come up with a solution to the problem of salt accumulation — and in the process developed a desalination system that is both more efficient and less expensive than previous solar desalination methods. The process could also be used to treat contaminated wastewater or to generate steam for sterilizing medical instruments, all without requiring any power source other than sunlight itself.

    The findings are described today in the journal Nature Communications, in a paper by MIT graduate student Lenan Zhang, postdoc Xiangyu Li, professor of mechanical engineering Evelyn Wang, and four others.

    “There have been a lot of demonstrations of really high-performing, salt-rejecting, solar-based evaporation designs of various devices,” Wang says. “The challenge has been the salt fouling issue, that people haven’t really addressed. So, we see these very attractive performance numbers, but they’re often limited because of longevity. Over time, things will foul.”

    Many attempts at solar desalination systems rely on some kind of wick to draw the saline water through the device, but these wicks are vulnerable to salt accumulation and relatively difficult to clean. The team focused on developing a wick-free system instead. The result is a layered system, with dark material at the top to absorb the sun’s heat, then a thin layer of water above a perforated layer of material, sitting atop a deep reservoir of the salty water such as a tank or a pond. After careful calculations and experiments, the researchers determined the optimal size for the holes drilled through the perforated material, which in their tests was made of polyurethane. At 2.5 millimeters across, these holes can be easily made using commonly available waterjets.

    The holes are large enough to allow for a natural convective circulation between the warmer upper layer of water and the colder reservoir below. That circulation naturally draws the salt from the thin layer above down into the much larger body of water below, where it becomes well-diluted and no longer a problem. “It allows us to achieve high performance and yet also prevent this salt accumulation,” says Wang, who is the Ford Professor of Engineering and head of the Department of Mechanical Engineering.

    Li says that the advantages of this system are “both the high performance and the reliable operation, especially under extreme conditions, where we can actually work with near-saturation saline water. And that means it’s also very useful for wastewater treatment.”

    He adds that much work on such solar-powered desalination has focused on novel materials. “But in our case, we use really low-cost, almost household materials.” The key was analyzing and understanding the convective flow that drives this entirely passive system, he says. “People say you always need new materials, expensive ones, or complicated structures or wicking structures to do that. And this is, I believe, the first one that does this without wicking structures.”

    This new approach “provides a promising and efficient path for desalination of high salinity solutions, and could be a game changer in solar water desalination,” says Hadi Ghasemi, a professor of chemical and biomolecular engineering at the University of Houston, who was not associated with this work. “Further work is required for assessment of this concept in large settings and in long runs,” he adds.

    Just as hot air rises and cold air falls, Zhang explains, natural convection drives the desalination process in this device. In the confined water layer near the top, “the evaporation happens at the very top interface. Because of the salt, the density of water at the very top interface is higher, and the bottom water has lower density. So, this is an original driving force for this natural convection because the higher density at the top drives the salty liquid to go down.” The water evaporated from the top of the system can then be collected on a condensing surface, providing pure fresh water.

    The rejection of salt to the water below could also cause heat to be lost in the process, so preventing that required careful engineering, including making the perforated layer out of highly insulating material to keep the heat concentrated above. The solar heating at the top is accomplished through a simple layer of black paint.

    An accompanying animation visualizes the flow with food dye: colored de-ionized water on the left drifts only slowly from the top layer into the bulk water below, while colored saline water on the right sinks rapidly, driven by the natural convection effect.

    So far, the team has proven the concept using small benchtop devices, so the next step will be starting to scale up to devices that could have practical applications. Based on their calculations, a system with just 1 square meter (about a square yard) of collecting area should be sufficient to provide a family’s daily needs for drinking water, they say. Zhang says they calculated that the necessary materials for a 1-square-meter device would cost only about $4.
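
    A rough back-of-envelope check (the insolation, collection efficiency, and latent-heat values below are generic assumptions, not numbers from the paper) suggests why roughly one square meter is the right order of magnitude for a family’s daily drinking water:

    ```python
    # Back-of-envelope estimate of daily freshwater yield from 1 m^2 of collector.
    # All inputs are illustrative assumptions, not figures from the paper.
    daily_solar_energy_kwh = 5.0      # typical solar energy per m^2 per day
    collector_area_m2 = 1.0
    evaporation_efficiency = 0.8      # assumed fraction of heat driving evaporation
    latent_heat_mj_per_kg = 2.45      # energy needed to evaporate 1 kg of water

    energy_mj = daily_solar_energy_kwh * collector_area_m2 * 3.6 * evaporation_efficiency
    freshwater_liters = energy_mj / latent_heat_mj_per_kg
    print(f"~{freshwater_liters:.1f} liters per day")   # roughly 5-6 L under these assumptions
    ```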

    Their test apparatus operated for a week with no signs of any salt accumulation, Li says. And the device is remarkably stable. “Even if we apply some extreme perturbation, like waves on the seawater or the lake,” where such a device could be installed as a floating platform, “it can return to its original equilibrium position very fast,” he says.

    The necessary work to translate this lab-scale proof of concept into workable commercial devices, and to improve the overall water production rate, should be possible within a few years, Zhang says. The first applications are likely to be providing safe water in remote off-grid locations, or for disaster relief after hurricanes, earthquakes, or other disruptions of normal water supplies.

    Zhang adds that “if we can concentrate the sunlight a little bit, we could use this passive device to generate high-temperature steam to do medical sterilization” for off-grid rural areas.

    “I think a real opportunity is the developing world,” Wang says. “I think that is where there’s most probable impact near-term, because of the simplicity of the design.” But, she adds, “if we really want to get it out there, we also need to work with the end users, to really be able to adopt the way we design it so that they’re willing to use it.”

    “This is a new strategy toward solving the salt accumulation problem in solar evaporation,” says Peng Wang, a professor at King Abdullah University of Science and Technology in Saudi Arabia, who was not associated with this research. “This elegant design will inspire new innovations in the design of advanced solar evaporators. The strategy is very promising due to its high energy efficiency, operation durability, and low cost, which contributes to low-cost and passive water desalination to produce fresh water from various source water with high salinity, e.g., seawater, brine, or brackish groundwater.”

    The team also included Yang Zhong, Arny Leroy, and Lin Zhao at MIT, and Zhenyuan Xu at Shanghai Jiao Tong University in China. The work was supported by the Singapore-MIT Alliance for Research and Technology, the U.S.-Egypt Science and Technology Joint Fund, and used facilities supported by the National Science Foundation.

  • First-ever Climate Grand Challenges recognizes 27 finalists

    All-carbon buildings, climate-resilient crops, and new tools to improve the prediction of extreme weather events are just a few of the 27 bold, interdisciplinary research projects selected as finalists from a field of almost 100 proposals in the first MIT Climate Grand Challenges competition. Each of the finalist teams received $100,000 to develop a comprehensive research and innovation plan.

    A subset of the finalists will make up a portfolio of multiyear projects that will receive additional funding and other support to develop high-impact, science-based mitigation and adaptation solutions on an accelerated basis. These flagship projects, which will be announced later this spring, will augment the work of the many MIT units already pursuing climate-related research activities.

    “Climate change poses a suite of challenges of immense urgency, complexity and scale. At MIT, we are bringing our particular strengths to bear through our community — a rare concentration of ingenuity and determination, rooted in a vibrant innovation ecosystem,” President L. Rafael Reif says. “Through MIT’s Climate Grand Challenges, we are engaging hundreds of our brilliant faculty and researchers in the search for solutions with enormous potential for impact.”

    The Climate Grand Challenges launched in July 2020 with the goal of mobilizing the entire MIT research community around developing solutions to some of the most complex unsolved problems in emissions reduction, climate change adaptation and resilience, risk forecasting, carbon removal, and understanding the human impacts of climate change.

    An event in April will showcase the flagship projects, bringing together public and private sector partners with the MIT teams to begin assembling the necessary resources for developing, implementing, and scaling these solutions rapidly.

    A whole-of-MIT effort

    Part of a wide array of major climate programs outlined last year in “Fast Forward: MIT’s Climate Action Plan for the Decade,” the Climate Grand Challenges focuses on problems where progress depends on the application of forefront knowledge in the physical, life, and social sciences and the advancement of cutting-edge technologies.

    “We don’t have the luxury of time in responding to the intensifying climate crisis,” says Vice President for Research Maria Zuber, who oversees the implementation of MIT’s climate action plan. “The Climate Grand Challenges are about marshaling the wide and deep knowledge and methods of the MIT community around transformative research that can help accelerate our collective response to climate change.”

    If successful, the solutions will have tangible effects, changing the way people live and work. Examples of these new approaches range from developing cost-competitive long-term energy-storage systems to using drone technologies and artificial intelligence to study the role of the deep ocean in the climate crisis. Many projects also aim to increase the humanistic understanding of these phenomena, recognizing that technological advances alone will not address the widespread impacts of climate change, and a comparable behavioral and cultural shift is needed to stave off future threats.

    “To achieve net-zero emissions later this century we must deploy the tools and technologies we already have,” says Richard Lester, associate provost for international activities. “But we’re still far from having everything needed to get there in ways that are equitable and affordable. Nor do we have the solutions in hand that will allow communities — especially the most vulnerable ones — to adapt to the disruptions that will occur even if the world does get to net-zero. Climate Grand Challenges is creating a new opportunity for the MIT research community to attack some of these hard, unsolved problems, and to engage with partners in industry, government, and the nonprofit sector to accelerate the whole cycle of activities needed to implement solutions at scale.” 

    Selecting the finalist projects

    A 24-person faculty committee convened by Lester and Zuber with members from all five of MIT’s schools and the MIT Schwarzman College of Computing led the planning and initial call for ideas. A smaller group of committee members was charged with evaluating nearly 100 letters of interest, representing 90 percent of MIT departments and involving almost 400 MIT faculty members and senior researchers as well as colleagues from other research institutions.

    “Effectively confronting the climate emergency requires risk taking and sustained investment over a period of many decades,” says Anantha Chandrakasan, dean of the School of Engineering. “We have a responsibility to use our incredible resources and expertise to tackle some of the most challenging problems in climate mitigation and adaptation, and the opportunity to make major advances globally.”

    Lester and Zuber charged a second faculty committee with organizing a rigorous and thorough evaluation of the plans developed by the 27 finalist teams. Drawing on an extensive review process involving international panels of prominent experts, MIT will announce a small group of flagship Grand Challenge projects in April. 

    Each of the 27 finalist teams is addressing one of four broad Grand Challenge problems:

    Building equity and fairness into climate solutions

    Policy innovation and experimentation for effective and equitable climate solutions, led by Abhijit Banerjee, Iqbal Dhaliwal, and Claire Walsh
    Protecting and enhancing natural carbon sinks – Natural Climate and Community Solutions (NCCS), led by John Fernandez, Daniela Rus, and Joann de Zegher
    Reducing group-based disparities in climate adaptation, led by Evan Lieberman, Danielle Wood, and Siqi Zheng
    Reinventing climate change adaptation – The Climate Resilience Early Warning System (CREWSnet), led by John Aldridge and Elfatih Eltahir
    The Deep Listening Project: Communication infrastructure for collaborative adaptation, led by Eric Gordon, Yihyun Lim, and James Paradis
    The Equitable Resilience Framework, led by Janelle Knox-Hayes

    Decarbonizing complex industries and processes

    Carbon >Building, led by Mark Goulthorpe
    Center for Electrification and Decarbonization of Industry, led by Yet-Ming Chiang and Bilge Yildiz
    Decarbonizing and strengthening the global energy infrastructure using nuclear batteries, led by Jacopo Buongiorno
    Emissions reduction through innovation in the textile industry, led by Yuly Fuentes-Medel and Greg Rutledge
    Rapid decarbonization of freight mobility, led by Yossi Sheffi and Matthias Winkenbach
    Revolutionizing agriculture with low-emissions, resilient crops, led by Christopher Voigt
    Solar fuels as a vector for climate change mitigation, led by Yuriy Román-Leshkov and Yogesh Surendranath
    The MIT Low-Carbon Co-Design Institute, led by Audun Botterud, Dharik Mallapragada, and Robert Stoner
    Tough to Decarbonize Transportation, led by Steven Barrett and William Green

    Removing, managing, and storing greenhouse gases

    Demonstrating safe, globally distributed geological CO2 storage at scale, led by Bradford Hager, Howard Herzog, and Ruben Juanes
    Deploying versatile carbon capture technologies and storage at scale, led by Betar Gallant, Bradford Hager, and T. Alan Hatton
    Directed Evolution of Biological Carbon Fixation Working Group at MIT (DEBC-MIT), led by Edward Boyden and Matthew Shoulders
    Managing sources and sinks of carbon in terrestrial and coastal ecosystems, led by Charles Harvey, Tami Lieberman, and Heidi Nepf
    Strategies to Reduce Atmospheric Methane, led by Desiree Plata
    The Advanced Carbon Mineralization Initiative, led by Edward Boyden, Matěj Peč, and Yogesh Surendranath

    Using data and science to forecast climate-related risk

    Bringing computation to the climate challenge, led by Noelle Eckley Selin and Raffaele Ferrari
    Ocean vital signs, led by Christopher Hill and Ryan Woosley
    Preparing for a new world of weather and climate extremes, led by Kerry Emanuel, Miho Mazereeuw, and Paul O’Gorman
    Quantifying and managing the risks of sea-level rise, led by Brent Minchew
    Stratospheric Airborne Climate Observatory System to initiate a climate risk forecasting revolution, led by R. John Hansman and Brent Minchew
    The future of coasts – Changing flood risk for coastal communities in the developing world, led by Dara Entekhabi, Miho Mazereeuw, and Danielle Wood

    To learn more about the MIT Climate Grand Challenges, visit climategrandchallenges.mit.edu.

  • Students dive into research with the MIT Climate and Sustainability Consortium

    Throughout the fall 2021 semester, the MIT Climate and Sustainability Consortium (MCSC) supported several undergraduate research projects on climate and sustainability topics through the MIT Undergraduate Research Opportunities Program (UROP). These students, who represent a range of disciplines, had the opportunity to work with MCSC Impact Fellows on topics tied directly to the consortium’s ongoing work and collaborations with MCSC member companies and the broader MIT community, from carbon capture to value-chain resilience to biodegradables. Many of these students are continuing their work this spring semester.

    Hannah Spilman, who is studying chemical engineering, worked with postdoc Glen Junor, an MCSC Impact Fellow, to investigate carbon capture, utilization, and storage (CCUS), with the goal of facilitating CCUS on a gigaton scale, a much larger capacity than what currently exists. “Scientists agree CCUS will be an important tool in combating climate change, but the largest CCUS facility only captures CO2 on a megaton scale, and very few facilities are actually operating,” explains Spilman. 

    Throughout her UROP, she worked on analyzing the currently deployed technology in the CCUS field, using National Carbon Capture Center post-combustion project reports to synthesize the results and outline those technologies. Examining projects like the RTI-NAS experiment, which showcased innovation with carbon capture technology, was especially helpful. “We must first understand where we are, and as we continue to conduct analyses, we will be able to understand the field’s current state and path forward,” she concludes.

    Fellow chemical engineering students Claire Kim and Alfonso Restrepo are working with postdoc and MCSC Impact Fellow Xiangkun (Elvis) Cao, also investigating CCUS technology. Kim’s focus is on life cycle assessment (LCA), while Restrepo’s focus is on techno-economic assessment (TEA). They have been working together to use the two tools to evaluate multiple CCUS technologies. While LCA and TEA are not new tools themselves, their application in CCUS has not been comprehensively defined and described. “CCUS can play an important role in the flexible, low-carbon energy systems,” says Kim, which was part of the motivation behind her project choice.

    Through TEA, Restrepo has been investigating how various startups and larger companies are incorporating CCUS technology in their processes. “In order to reduce CO2 emissions before it’s too late to act, there is a strong need for resources that effectively evaluate CCUS technology, to understand the effectiveness and viability of emerging technology for future implementation,” he explains. For their next steps, Kim and Restrepo will apply LCA and TEA to the analysis of a specific capture (for example, direct ocean capture) or conversion (for example, CO2-to-fuel conversion) process​ in CCUS.
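
    As a simplified sketch of what a techno-economic assessment boils down to (the capital-recovery formulation and every number below are generic assumptions for illustration, not figures from the students’ analyses), the levelized cost of capture weighs annualized capital and operating costs against the CO2 captured:

    ```python
    # Simplified TEA sketch: levelized cost of CO2 capture ($/tonne).
    # All inputs are illustrative assumptions, not results from the MCSC projects.
    capex = 500e6                 # capital cost of the capture plant, $
    opex_per_year = 40e6          # fixed + variable operating cost, $/yr
    co2_captured_per_year = 1e6   # tonnes CO2/yr (a megaton-scale facility)
    discount_rate = 0.08
    lifetime_years = 25

    # Capital recovery factor annualizes the upfront investment.
    crf = discount_rate * (1 + discount_rate) ** lifetime_years / \
          ((1 + discount_rate) ** lifetime_years - 1)

    levelized_cost = (capex * crf + opex_per_year) / co2_captured_per_year
    print(f"Levelized cost of capture: ~${levelized_cost:.0f}/tonne CO2")
    ```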

    Cameron Dougal, a first-year student, and James Santoro, studying management, both worked with postdoc and MCSC Impact Fellow Paloma Gonzalez-Rojas on biodegradable materials. Dougal explored biodegradable packaging film in urban systems. “I have had a longstanding interest in sustainability, with a newer interest in urban planning and design, which motivated me to work on this project,” Dougal says. “Bio-based plastics are a promising step for the future.”

    Dougal spent time conducting internet and print research, as well as speaking with faculty on their relevant work. From these efforts, Dougal has identified important historical context for the current recycling landscape — as well as key case studies and cities around the world to explore further. In addition to conducting more research, Dougal plans to create a summary and statistic sheet.

    Santoro dove into the production angle, working on evaluating the economic viability of the startups that are creating biodegradable materials. “Non-renewable plastics (created with fossil fuels) continue to pollute and irreparably damage our environment,” he says. “As we look for innovative solutions, a key question to answer is how can we determine a more effective way to evaluate the economic viability and probability of success for new startups and technologies creating biodegradable plastics?” The project aims to develop an effective framework to begin to answer this.

    So far, Santoro has been mapping the overall ecosystem, learning how these biodegradable materials are developed, and analyzing the economics. He plans to speak with company founders, investors, and experts to identify the major challenges biodegradable-technology startups face in creating high-performance products with attractive unit economics. There is still much to research on new technologies and industry trends, the profitability of different products, and the specific companies doing this type of work.

    Tess Buchanan, who is studying materials science and engineering, is working with Katharina Fransen and Sarah Av-Ron, MIT graduate students in the Department of Chemical Engineering, and principal investigator Professor Bradley Olsen, to explore biodegradables by looking into their development from biomass. “This is critical work, given the current plastics sustainability crisis, and the potential of bio-based polymers,” Buchanan says.

    The objective of the project is to explore new sustainable polymers through a biodegradation assay using clear zone growth analysis to yield degradation rates. For next steps, Buchanan is diving into synthesis expansion and using machine learning to understand the relationship between biodegradation and polymer chemistry.
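
    As a rough sketch of how a clear-zone assay can yield a degradation rate (the measurement values and the linear-fit approach below are illustrative assumptions, not data from the project), the growth of the clear halo around a polymer sample can be tracked over time and its slope reported as a rate:

    ```python
    import numpy as np

    # Illustrative clear-zone measurements: halo radius (mm) around a polymer film
    # sampled over several days. Values are made up for this sketch.
    days = np.array([0, 2, 4, 6, 8, 10])
    clear_zone_radius_mm = np.array([0.0, 0.8, 1.7, 2.4, 3.3, 4.1])

    # A linear fit gives a simple degradation rate (mm of clear zone per day);
    # faster-growing halos indicate more readily biodegradable polymers.
    rate_mm_per_day, intercept = np.polyfit(days, clear_zone_radius_mm, 1)
    print(f"Estimated degradation rate: {rate_mm_per_day:.2f} mm/day")
    ```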

    Kezia Hector, studying chemical engineering, and Tamsin Nottage, a first-year student, working with postdoc and MCSC Impact Fellow Sydney Sroka, explored advancing and establishing sustainable solutions for value chain resilience. Hector’s focus was understanding how wildfires can affect supply chains, specifically identifying sources of economic loss. She reviewed academic literature and news articles, and looked at the Amazon, California, Siberia, and Washington, finding that wildfires cause millions of dollars in damage every year and impact supply chains by cutting off or slowing down freight activity. She will continue to identify ways to make supply chains more resilient and sustainable.

    Nottage focused on the economic impact of typhoons, closely studying Typhoon Mangkhut, a powerful and catastrophic tropical cyclone that caused $593 million in damage across Guam, the Philippines, and South China in September 2018. “As a Bahamian, I’ve witnessed the ferocity of hurricanes and challenges of rebuilding after them,” says Nottage. “I used this project to identify the tropical cyclones that caused the most extensive damage for further investigation.” She compiled the causes of damage and their costs to inform targets of supply chain resiliency reform (shipping, building materials, power supply, etc.). As a next step, Nottage will focus on modeling extreme events like Mangkhut to develop frameworks that companies can learn from and use to build more sustainable supply chains in the future.

    Ellie Vaserman, a first-year student working with postdoc and MCSC Impact Fellow Poushali Maji, also explored a topic related to value chains: unlocking circularity across the entire value chain through quality improvement, inclusive policy, and behavior to improve materials recovery. Specifically, her objectives have been to learn more about methods of chemolysis and the viability of their products, to compare methods of chemical recycling of polyethylene terephthalate (PET) using quantitative metrics, and to design qualitative visuals to make the steps in PET chemical recycling processes more understandable.

    To do so, she conducted a literature review to identify main methods of chemolysis that are utilized in the field (and collect data about these methods) and created graphics for some of the more common processes. Moving forward, she hopes to compare the processes using other metrics and research the energy intensity of the monomer purification processes.

    The work of these students, as well as many others, continued over MIT’s Independent Activities Period in January.

  • MIT Energy Initiative launches the Future Energy Systems Center

    The MIT Energy Initiative (MITEI) has launched a new research consortium — the Future Energy Systems Center — to address the climate crisis and the role energy systems can play in solving it. This integrated effort engages researchers from across all of MIT to help the global community reach its goal of net-zero carbon emissions. The center examines the accelerating energy transition and collaborates with industrial leaders to reform the world’s energy systems. The center is part of “Fast Forward: MIT’s Climate Action Plan for the Decade,” MIT’s multi-pronged effort announced last year to address the climate crisis.

    The Future Energy Systems Center investigates the emerging technology, policy, demographics, and economics reshaping the landscape of energy supply and demand. The center conducts integrative analysis of the entire energy system — a holistic approach essential to understanding the cross-sectorial impact of the energy transition.

    “We must act quickly to get to net-zero greenhouse gas emissions. At the same time, we have a billion people around the world with inadequate access, or no access, to electricity — and we need to deliver it to them,” says MITEI Director Robert C. Armstrong, the Chevron Professor of Chemical Engineering. “The Future Energy Systems Center combines MIT’s deep knowledge of energy science and technology with advanced tools for systems analysis to examine how advances in technology and system economics may respond to various policy scenarios.”  

    The overarching focus of the center is integrative analysis of the entire energy system, providing insights into the complex multi-sectorial transformations needed to alter the three major energy-consuming sectors of the economy — transportation, industry, and buildings — in conjunction with three major decarbonization-enabling technologies — electricity, energy storage and low-carbon fuels, and carbon management. “Deep decarbonization of our energy system requires an economy-wide perspective on the technology options, energy flows, materials flows, life-cycle emissions, costs, policies, and socioeconomic consequences,” says Randall Field, the center’s executive director. “A systems approach is essential in enabling cross-disciplinary teams to work collaboratively together to address the existential crisis of climate change.”

    Through techno-economic and systems-oriented research, the center analyzes these important interactions. For example:

    •  Increased reliance on variable renewable energy, such as wind and solar, and greater electrification of transportation, industry, and buildings will require expansion of demand management and other solutions for balancing of electricity supply and demand across these areas.

    •  Likewise, balancing supply and demand will require deploying grid-scale energy storage and converting the electricity to low-carbon fuels (hydrogen and liquid fuels), which can in turn play a vital role in the energy transition for hard-to-decarbonize segments of transportation, industry, and buildings.

    •  Carbon management (carbon dioxide capture from industry point sources and from air and oceans; utilization/conversion to valuable products; transport; storage) will also play a critical role in decarbonizing industry, electricity, and fuels — both as carbon-mitigation and negative-carbon solutions.

    As a member-supported research consortium, the center collaborates with industrial experts and leaders — from both energy’s consumer and supplier sides — to gain insights to help researchers anticipate challenges and opportunities of deploying technology at the scale needed to achieve decarbonization. “The Future Energy Systems Center gives us a powerful way to engage with industry to accelerate the energy transition,” says Armstrong. “Working together, we can better understand how our current technology toolbox can be more effectively put to use now to reduce emissions, and what new technologies and policies will ultimately be needed to reach net-zero.”

    A steering committee, made up of 11 MIT professors and led by Armstrong, selects projects to create a research program with high impact on decarbonization, while leveraging MIT strengths and addressing interests of center members in pragmatic and scalable solutions. “MIT — through our recently released climate action plan — is committed to moving with urgency and speed to help wring carbon dioxide emissions out of the global economy to resolve the growing climate crisis,” says Armstrong. “We have no time to waste.”

    The center members to date are: AECI, Analog Devices, Chevron, ConocoPhillips, Copec, Dominion, Duke Energy, Enerjisa, Eneva, Eni, Equinor, Eversource, Exelon, ExxonMobil, Ferrovial, Iberdrola, IHI, National Grid, Raizen, Repsol, Rio Tinto, Shell, Tata Power, Toyota Research Institute, and Washington Gas.

  • Pricing carbon, valuing people

    In November, inflation hit a 39-year high in the United States. The consumer price index was up 6.8 percent from the previous year due to major increases in the cost of rent, food, motor vehicles, gasoline, and other common household expenses. While inflation impacts the entire country, its effects are not felt equally. At greatest risk are low- and middle-income Americans who may lack sufficient financial reserves to absorb such economic shocks.

    Meanwhile, scientists, economists, and activists across the political spectrum continue to advocate for another potential systemic economic change that many fear will also put lower-income Americans at risk: the imposition of a national carbon price, fee, or tax. Framed by proponents as the most efficient and cost-effective way to reduce greenhouse gas emissions and meet climate targets, a carbon penalty would incentivize producers and consumers to shift expenditures away from carbon-intensive products and services (e.g., coal or natural gas-generated electricity) and toward low-carbon alternatives (e.g., 100 percent renewable electricity). But if not implemented in a way that takes differences in household income into account, this policy strategy, like inflation, could place an unequal and untenable economic burden on low- and middle-income Americans.         

    To garner support from policymakers, carbon-penalty proponents have advocated for policies that recycle revenues from carbon penalties to all or lower-income taxpayers in the form of payroll tax reductions or lump-sum payments. And yet some of these proposed policies run the risk of reducing the overall efficiency of the U.S. economy, which would lower the nation’s GDP and impede its economic growth.

    This raises the question: Is there a sweet spot at which a national carbon-penalty revenue-recycling policy can avoid both inflicting economic harm on lower-income Americans at the household level and degrading economic efficiency at the national level?

    In search of that sweet spot, researchers at the MIT Joint Program on the Science and Policy of Global Change assess the economic impacts of four different carbon-penalty revenue-recycling policies: direct rebates from revenues to households via lump-sum transfers; indirect refunding of revenues to households via a proportional reduction in payroll taxes; direct rebates from revenues to households, but only for low- and middle-income groups, with remaining revenues recycled via a proportional reduction in payroll taxes; and direct, higher rebates for poor households, with remaining revenues recycled via a proportional reduction in payroll taxes.

    To perform the assessment, the Joint Program researchers integrate a U.S. economic model (MIT U.S. Regional Energy Policy) with a dataset (Bureau of Labor Statistics’ Consumer Expenditure Survey) providing consumption patterns and other socioeconomic characteristics for 15,000 U.S. households. Using the combined model, they evaluate the distributional impacts and potential trade-offs between economic equity and efficiency of all four carbon-penalty revenue-recycling policies.
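
    A toy version of the kind of distributional comparison such a model enables is sketched below. The income groups, carbon-cost rate, and household figures are invented for illustration and are not the study’s data, and the payroll-tax reduction is approximated crudely as a refund proportional to labor income: each household pays a carbon cost roughly proportional to its consumption, and the net impact depends on how the revenue comes back.

    ```python
    # Toy distributional comparison of two revenue-recycling schemes.
    # All numbers are invented for illustration, not from the Joint Program study.
    households = {                      # annual consumption and labor income, $
        "low income":    {"consumption": 25_000, "labor_income": 20_000},
        "middle income": {"consumption": 50_000, "labor_income": 60_000},
        "high income":   {"consumption": 90_000, "labor_income": 150_000},
    }
    carbon_cost_rate = 0.01             # carbon penalty passed through as ~1% of consumption

    carbon_cost = {h: d["consumption"] * carbon_cost_rate for h, d in households.items()}
    revenue = sum(carbon_cost.values())

    # Option 1: equal lump-sum rebate to every household (progressive).
    lump_sum = revenue / len(households)

    # Option 2: refund proportional to labor income, a stand-in for a payroll tax cut
    # (slightly regressive at the household level).
    total_labor = sum(d["labor_income"] for d in households.values())

    for h, d in households.items():
        rebate_net = lump_sum - carbon_cost[h]
        payroll_net = revenue * d["labor_income"] / total_labor - carbon_cost[h]
        print(f"{h:>13}: lump-sum net {rebate_net:+7.0f} $, payroll-cut net {payroll_net:+7.0f} $")
    ```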

    The researchers find that household rebates have progressive impacts on consumers’ financial well-being, with the greatest benefits going to the lowest-income households, while policies centered on improving the efficiency of the economy (e.g., payroll tax reductions) have slightly regressive household-level financial impacts. In a nutshell, the trade-off is between rebates, which provide more equity but less economic efficiency, and tax cuts, which deliver the opposite result. The latter two policy options, which combine rebates to lower-income households with payroll tax reductions, result in an optimal blend of sufficiently progressive financial results at the household level and economic efficiency at the national level. Results of the study are published in the journal Energy Economics.

    “We have determined that only a portion of carbon-tax revenues is needed to compensate low-income households and thus reduce inequality, while the rest can be used to improve the economy by reducing payroll or other distortionary taxes,” says Xaquin García-Muros, lead author of the study, a postdoc at the MIT Joint Program who is affiliated with the Basque Centre for Climate Change in Spain. “Therefore, we can eliminate potential trade-offs between efficiency and equity, and promote a just and efficient energy transition.”

    “If climate policies increase the gap between rich and poor households or reduce the affordability of energy services, then these policies might be rejected by the public and, as a result, attempts to decarbonize the economy will be less efficient,” says Joint Program Deputy Director Sergey Paltsev, a co-author of the study. “Our findings provide guidance to decision-makers to advance more well-designed policies that deliver economic benefits to the nation as a whole.” 

    The study’s novel integration of a national economic model with household microdata creates a new and powerful platform to further investigate key differences among households that can help inform policies aimed at a just transition to a low-carbon economy.