More stories

  • Q&A: Randolph Kirchain on how cool pavements can mitigate climate change

    As cities search for climate change solutions, many have turned to one burgeoning technology: cool pavements. By reflecting a greater proportion of solar radiation, cool pavements can offer an array of climate change mitigation benefits, from direct radiative forcing to reduced building energy demand.

    Yet, scientists from the MIT Concrete Sustainability Hub (CSHub) have found that cool pavements are not just a summertime solution. Here, Randolph Kirchain, a principal research scientist at CSHub, discusses how implementing cool pavements can offer myriad greenhouse gas reductions in cities — some of which occur even in the winter.

    Q: What exactly are cool pavements? 

    A: There are two ways to make a cool pavement: changing the pavement formulation to make the pavement porous like a sponge (a so-called “pervious pavement”), or paving with reflective materials. The latter method has been applied extensively because it can be easily adopted across the current road network, at a range of traffic volumes, while sustaining — and sometimes improving — road longevity. To the average observer, surface reflectivity usually corresponds to the color of a pavement — the lighter, the more reflective.

    We can quantify this surface reflectivity through a measurement called albedo, which refers to the percentage of light a surface reflects. Typically, a reflective pavement has an albedo of 0.3 or higher, meaning that it reflects 30 percent of the light it receives.
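
    As a rough back-of-the-envelope illustration of what an albedo number implies for a pavement’s energy balance (a minimal sketch with invented values, not figures from the CSHub research):

    ```python
    # Hypothetical comparison of a dark and a reflective pavement at midday.
    incident_solar = 1000.0  # W/m^2 of sunlight striking the surface (assumed)
    pavements = {"dark asphalt": 0.10, "cool pavement": 0.30}  # assumed albedos

    for name, albedo in pavements.items():
        reflected = albedo * incident_solar      # fraction of sunlight sent back
        absorbed = incident_solar - reflected    # remainder absorbed, then re-emitted as heat
        print(f"{name}: reflects {reflected:.0f} W/m^2, absorbs {absorbed:.0f} W/m^2")
    ```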

    To attain this reflectivity, there are a number of techniques at our disposal. The most common approach is to simply paint a brighter coating atop existing pavements. But it’s also possible to pave with materials that possess naturally greater reflectivity, such as concrete or lighter-colored binders and aggregates.

    Q: How can cool pavements mitigate climate change?

    A: Cool pavements generate several, often unexpected, effects. The most widely known is a reduction in surface and local air temperatures. This occurs because cool pavements absorb less radiation and, consequently, emit less of that radiation as heat. In the summer, this means they can lower urban air temperatures by several degrees Fahrenheit.

    By changing air temperatures or reflecting light into adjacent structures, cool pavements can also alter the need for heating and cooling in those structures, which can change their energy demand and, therefore, mitigate the climate change impacts associated with building energy demand.

    However, depending on how densely the neighborhood is built, a proportion of the radiation cool pavements reflect doesn’t strike buildings; instead, it travels back into the atmosphere and out into space. This process, called radiative forcing, shifts the Earth’s energy balance and effectively offsets some of the radiation trapped by greenhouse gases (GHGs).

    Perhaps the least-known impact of cool pavements is on vehicle fuel consumption. Certain cool pavements, namely concrete, possess a combination of structural properties and longevity that can minimize the excess fuel consumption of vehicles caused by road quality. Over the lifetime of a pavement, these fuel savings can add up — often offsetting the higher initial footprint of paving with more durable materials.

    Q: With these impacts in mind, how do the effects of cool pavements vary seasonally and by location?

    A: Many view cool pavements as a solution to summer heat. But research has shown that they can offer climate change benefits throughout the year.

    On roads with high traffic volumes, the most prominent climate change benefit of cool pavements is not their reflectivity but their impact on vehicle fuel consumption. As such, cool pavement alternatives that minimize fuel consumption can continue to cut GHG emissions in winter, assuming traffic remains constant.

    Even in winter, pavement reflectivity still contributes greatly to the climate change mitigation benefits of cool pavements. We found that roughly a third of the annual CO2-equivalent emissions reductions from the radiative forcing effects of cool pavements occurred in the fall and winter.

    It’s important to note, too, that the direction — not just the magnitude — of cool pavement impacts varies seasonally. The most prominent seasonal variation is the change in building energy demand. By lowering air temperatures, cool pavements can lessen the demand for cooling in buildings in the summer; conversely, they can cause buildings to consume more energy and generate more emissions for heating in the winter.

    Interestingly, the radiation reflected by cool pavements can also strike adjacent buildings, heating them up. In the summer, this can increase building energy demand significantly, yet in the winter it can also warm structures and reduce their need for heating. In that sense, cool pavements can warm — as well as cool — their surroundings, depending on the building insolation [solar exposure] systems and neighborhood density.

    Q: How can cities manage these many impacts?

    A: As you can imagine, such different and often competing impacts can complicate the implementation of cool pavements. In some contexts, for instance, a cool pavement might even generate more emissions over its life than a conventional pavement — despite lowering air temperatures.

    To ensure that the lowest-emitting pavement is selected, then, cities should use a life-cycle perspective that considers all potential impacts. When they do, research has shown that they can reap sizeable benefits. The city of Phoenix, for instance, could see its projected emissions fall by as much as 6 percent, while Boston would experience a reduction of up to 3 percent.

    These benefits don’t just demonstrate the potential of cool pavements: they also reflect the outsized impact of pavements on our built environment and, moreover, our climate. As cities move to fight climate change, they should know that one of their most extensive assets also presents an opportunity for greater sustainability.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

  • Toward batteries that pack twice as much energy per pound

    In the endless quest to pack more energy into batteries without increasing their weight or volume, one especially promising technology is the solid-state battery. In these batteries, the usual liquid electrolyte that carries charges back and forth between the electrodes is replaced with a solid electrolyte layer. Such batteries could potentially not only deliver twice as much energy for their size but also virtually eliminate the fire hazard associated with today’s lithium-ion batteries.

    But one thing has held back solid-state batteries: Instabilities at the boundary between the solid electrolyte layer and the two electrodes on either side can dramatically shorten the lifetime of such batteries. Some studies have used special coatings to improve the bonding between the layers, but this adds the expense of extra coating steps in the fabrication process. Now, a team of researchers at MIT and Brookhaven National Laboratory has come up with a way of achieving results that equal or surpass the durability of the coated surfaces, but with no need for any coatings.

    The new method simply requires eliminating any carbon dioxide present during a critical manufacturing step, called sintering, where the battery materials are heated to create bonding between the cathode and electrolyte layers, which are made of ceramic compounds. Even though the amount of carbon dioxide present is vanishingly small in air, measured in parts per million, its effects turn out to be dramatic and detrimental. Carrying out the sintering step in pure oxygen creates bonds that match the performance of the best coated surfaces, without that extra cost of the coating, the researchers say.

    The findings are reported in the journal Advanced Energy Materials, in a paper by MIT doctoral student Younggyu Kim, professor of nuclear science and engineering and of materials science and engineering Bilge Yildiz, and Iradikanari Waluyo and Adrian Hunt at Brookhaven National Laboratory.

    “Solid-state batteries have been desirable for different reasons for a long time,” Yildiz says. “The key motivating points for solid batteries are they are safer and have higher energy density,” but they have been held back from large-scale commercialization by two factors, she says: the lower conductivity of the solid electrolyte, and the interface instability issues.

    The conductivity issue has been effectively tackled, and reasonably high-conductivity materials have already been demonstrated, according to Yildiz. But overcoming the instabilities that arise at the interface has been far more challenging. These instabilities can occur during both the manufacturing and the electrochemical operation of such batteries, but for now the researchers have focused on the manufacturing, and specifically the sintering process.

    Sintering is needed because if the ceramic layers are simply pressed onto each other, the contact between them is far from ideal: there are far too many gaps, and the electrical resistance across the interface is high. Sintering, which is usually done at temperatures of 1,000 degrees Celsius or above for ceramic materials, causes atoms from each material to migrate into the other to form bonds. The team’s experiments showed that at temperatures anywhere above a few hundred degrees, detrimental reactions take place that increase the resistance at the interface — but only if carbon dioxide is present, even in tiny amounts. They demonstrated that avoiding carbon dioxide, and in particular maintaining a pure oxygen atmosphere during sintering, could create very good bonding at temperatures up to 700 degrees, with none of the detrimental compounds formed.

    The performance of the cathode-electrolyte interface made using this method, Yildiz says, was “comparable to the best interface resistances we have seen in the literature,” but those were all achieved using the extra step of applying coatings. “We are finding that you can avoid that additional fabrication step, which is typically expensive.”

    The potential gains in energy density that solid-state batteries provide come from the fact that they enable the use of pure lithium metal as one of the electrodes, which is much lighter than the currently used electrodes made of lithium-infused graphite.

    The team is now studying the next part of the performance of such batteries, which is how these bonds hold up over the long run during battery cycling. Meanwhile, the new findings could potentially be applied rapidly to battery production, she says. “What we are proposing is a relatively simple process in the fabrication of the cells. It doesn’t add much energy penalty to the fabrication. So, we believe that it can be adopted relatively easily into the fabrication process,” and the added costs, they have calculated, should be negligible.

    Large companies such as Toyota are already at work commercializing early versions of solid-state lithium-ion batteries, and these new findings could quickly help such companies improve the economics and durability of the technology.

    The research was supported by the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies. The team used facilities supported by the National Science Foundation and facilities at Brookhaven National Laboratory supported by the Department of Energy.

  • Q&A: Climate Grand Challenges finalists on building equity and fairness into climate solutions

    Note: This is the first in a four-part interview series that will highlight the work of the Climate Grand Challenges finalists, ahead of the April announcement of several multiyear, flagship projects.

    The finalists in MIT’s first-ever Climate Grand Challenges competition each received $100,000 to develop bold, interdisciplinary research and innovation plans designed to attack some of the world’s most difficult and unresolved climate problems. The 27 teams are addressing four Grand Challenge problem areas: building equity and fairness into climate solutions; decarbonizing complex industries and processes; removing, managing, and storing greenhouse gases; and using data and science for improved climate risk forecasting.  

    In a conversation prepared for MIT News, faculty from three of the teams in the competition’s “Building equity and fairness into climate solutions” category share their thoughts on the need for inclusive solutions that prioritize disadvantaged and vulnerable populations, and discuss how they are working to accelerate their research to achieve the greatest impact. The following responses have been edited for length and clarity.

    The Equitable Resilience Framework

    Any effort to solve the most complex global climate problems must recognize the unequal burdens borne by different groups, communities, and societies — and should be equitable as well as effective. Janelle Knox-Hayes, associate professor in the Department of Urban Studies and Planning, leads a team that is developing processes and practices for equitable resilience, starting with a local pilot project in Boston over the next five years and extending to other cities and regions of the country. The Equitable Resilience Framework (ERF) is designed to create long-term economic, social, and environmental transformations by increasing the capacity of interconnected systems and communities to respond to a broad range of climate-related events. 

    Q: What is the problem you are trying to solve?

    A: Inequity is one of the severe impacts of climate change and resonates in both mitigation and adaptation efforts. It is important for climate strategies to address challenges of inequity and, if possible, to design strategies that enhance justice, equity, and inclusion, while also enhancing the efficacy of mitigation and adaptation efforts. Our framework offers a blueprint for how communities, cities, and regions can begin to undertake this work.

    Q: What are the most significant barriers that have impacted progress to date?

    A: There is considerable inertia in policymaking. Climate change requires a rethinking not only of directives but also of the pathways and techniques of policymaking. This is an obstacle and part of the reason our project was designed to scale up from local pilot projects. Another consideration is that the private sector can be more adaptive and nimble in its adoption of creative techniques. Working with the MIT Climate and Sustainability Consortium, there may be ways in which we could modify the ERF to help companies address similar internal adaptation and resilience challenges.

    Protecting and enhancing natural carbon sinks

    Deforestation and forest degradation of strategic ecosystems in the Amazon, Central Africa, and Southeast Asia continue to reduce capacity to capture and store carbon through natural systems and threaten even the most aggressive decarbonization plans. John Fernandez, professor in the Department of Architecture and director of the Environmental Solutions Initiative, reflects on his work with Daniela Rus, professor of electrical engineering and computer science and director of the Computer Science and Artificial Intelligence Laboratory, and Joann de Zegher, assistant professor of Operations Management at MIT Sloan, to protect tropical forests by deploying a three-part solution that integrates targeted technology breakthroughs, deep community engagement, and innovative bioeconomic opportunities. 

    Q: Why is the problem you seek to address a “grand challenge”?

    A: We are trying to bring the latest technology to monitoring, assessing, and protecting tropical forests, as well as other carbon-rich and highly biodiverse ecosystems. This is a grand challenge because natural sinks around the world are threatening to release enormous quantities of stored carbon that could lead to runaway global warming. When combined with deep community engagement, particularly with indigenous and afro-descendant communities, this integrated approach promises to deliver substantially enhanced efficacy in conservation coupled to robust and sustainable local development.

    Q: What is known about this problem and what questions remain unanswered?

    A: Satellites, drones, and other technologies are acquiring more data about natural carbon sinks than ever before. The problem is well described in certain locations, such as the eastern Amazon, which has shifted from a net carbon sink to a net carbon emitter. It is also well known that indigenous peoples are the most effective stewards of the ecosystems that store the greatest amounts of carbon. One of the key questions that remains to be answered is identifying the bioeconomy opportunities inherent in the natural wealth of tropical forests and other critical ecosystems, opportunities that are essential to sustained protection and conservation.

    Reducing group-based disparities in climate adaptation

    Race, ethnicity, caste, religion, and nationality are often linked to vulnerability to the adverse effects of climate change, and if left unchecked, threaten to exacerbate longstanding inequities. A team led by Evan Lieberman, professor of political science and director of the MIT Global Diversity Lab and MIT International Science and Technology Initiatives, Danielle Wood, assistant professor in the Program in Media Arts and Sciences and the Department of Aeronautics and Astronautics, and Siqi Zheng, professor of urban and real estate sustainability in the Center for Real Estate and the Department of Urban Studies and Planning, is seeking to reduce ethnic and racial group-based disparities in the capacity of urban communities to adapt to the changing climate. Working with partners in nine coastal cities, they will measure the distribution of climate-related burdens and resiliency through satellites, a custom mobile app, and natural language processing of social media, to help design and test communication campaigns that provide accurate information about risks and remediation to impacted groups.

    Q: How has this problem evolved?

    A: Group-based disparities continue to intensify within and across countries, owing in part to some randomness in the location of adverse climate events, as well as deep legacies of unequal human development. In turn, economically and politically privileged groups routinely hoard resources for adaptation. In a few cases — notably the United States, Brazil, and with respect to climate-related migrancy, in South Asia — there has been a great deal of research documenting the extent of such disparities. However, we lack common metrics, and for the most part, such disparities are only understood where key actors have politicized the underlying problems. In much of the world, relatively vulnerable and excluded groups may not even be fully aware of the nature of the challenges they face or the resources they require.

    Q: Who will benefit most from your research? 

    A: The greatest beneficiaries will be members of those vulnerable groups who lack the resources and infrastructure to withstand adverse climate shocks. We believe that it will be important to develop solutions such that relatively privileged groups do not perceive them as punitive or zero-sum, but rather as long-term solutions for collective benefit that are both sound and just.

  • Can the world meet global climate targets without coordinated global action?

    Like many of its predecessors, the 2021 United Nations Climate Change Conference (COP26) in Glasgow, Scotland, concluded with bold promises on international climate action aimed at keeping global warming well below 2 degrees Celsius, but few concrete plans to ensure that those promises will be kept. While it’s not too late for the Paris Agreement’s nearly 200 signatory nations to take concerted action to cap global warming at 2 C — if not 1.5 C — there is simply no guarantee that they will do so. If they fail, how much warming is the Earth likely to see in the 21st century and beyond?

    A new study by researchers at the MIT Joint Program on the Science and Policy of Global Change and the Shell Scenarios Team projects that without a globally coordinated mitigation effort to reduce greenhouse gas emissions, the planet’s average surface temperature will reach 2.8 C, much higher than the “well below 2 C” level to which the Paris Agreement aspires, but a lot lower than what many widely used “business-as-usual” scenarios project.  

    Recognizing the limitations of such scenarios, which generally assume that historical trends in energy technology choices and climate policy inaction will persist for decades to come, the researchers have designed a “Growing Pressures” scenario that accounts for mounting social, technological, business, and political pressures that are driving a transition away from fossil-fuel use and toward a low-carbon future. Such pressures have already begun to expand low-carbon technology and policy options, which, in turn, have escalated demand to utilize those options — a trend that’s expected to self-reinforce. Under this scenario, an array of future actions and policies cause renewable energy and energy storage costs to decline; fossil fuels to be phased out; electrification to proliferate; and emissions from agriculture and industry to be sharply reduced.

    Incorporating these growing pressures in the MIT Joint Program’s integrated model of Earth and human systems, the study’s co-authors project future energy use, greenhouse gas emissions, and global average surface temperatures in a world that fails to implement coordinated, global climate mitigation policies, and instead pursues piecemeal actions at mostly local and national levels.

    “Few, if any, previous studies explore scenarios of how piecemeal climate policies might plausibly unfold into the future and impact global temperature,” says MIT Joint Program research scientist Jennifer Morris, the study’s lead author. “We offer such a scenario, considering a future in which the increasingly visible impacts of climate change drive growing pressure from voters, shareholders, consumers, and investors, which in turn drives piecemeal action by governments and businesses that steer investments away from fossil fuels and toward low-carbon alternatives.”

    In the study’s central case (representing the mid-range climate response to greenhouse gas emissions), fossil fuels persist in the global energy mix through 2060 and then slowly decline toward zero by 2130; global carbon dioxide emissions reach near-zero levels by 2130 (total greenhouse gas emissions decline to near-zero by 2150); and global surface temperatures stabilize at 2.8 C by 2150, 2.5 C lower than a widely used “business-as-usual” projection. The results appear in the journal Environmental Economics and Policy Studies.

    Such a transition could bring the global energy system to near-zero emissions, but more aggressive climate action would be needed to keep global temperatures well below 2 C in alignment with the Paris Agreement.

    “While we fully support the need to decarbonize as fast as possible, it is critical to assess realistic alternative scenarios of world development,” says Joint Program Deputy Director Sergey Paltsev, a co-author of the study. “We investigate plausible actions that could bring society closer to the long-term goals of the Paris Agreement. To actually meet those goals will require an accelerated transition away from fossil energy through a combination of R&D, technology deployment, infrastructure development, policy incentives, and business practices.”

    The study was funded by government, foundation, and industrial sponsors of the MIT Joint Program, including Shell International Ltd.

  • Using artificial intelligence to find anomalies hiding in massive datasets

    Identifying a malfunction in the nation’s power grid can be like trying to find a needle in an enormous haystack. Hundreds of thousands of interrelated sensors spread across the U.S. capture data on electric current, voltage, and other critical information in real time, often taking multiple recordings per second.

    Researchers at the MIT-IBM Watson AI Lab have devised a computationally efficient method that can automatically pinpoint anomalies in those data streams in real time. They demonstrated that their artificial intelligence method, which learns to model the interconnectedness of the power grid, is much better at detecting these glitches than some other popular techniques.

    Because the machine-learning model they developed does not require annotated data on power grid anomalies for training, it would be easier to apply in real-world situations where high-quality, labeled datasets are often hard to come by. The model is also flexible and can be applied to other situations where a vast number of interconnected sensors collect and report data, like traffic monitoring systems. It could, for example, identify traffic bottlenecks or reveal how traffic jams cascade.

    “In the case of a power grid, people have tried to capture the data using statistics and then define detection rules with domain knowledge to say that, for example, if the voltage surges by a certain percentage, then the grid operator should be alerted. Such rule-based systems, even empowered by statistical data analysis, require a lot of labor and expertise. We show that we can automate this process and also learn patterns from the data using advanced machine-learning techniques,” says senior author Jie Chen, a research staff member and manager of the MIT-IBM Watson AI Lab.

    The co-author is Enyan Dai, an MIT-IBM Watson AI Lab intern and graduate student at the Pennsylvania State University. This research will be presented at the International Conference on Learning Representations.

    Probing probabilities

    The researchers began by defining an anomaly as an event that has a low probability of occurring, like a sudden spike in voltage. They treat the power grid data as a probability distribution, so if they can estimate the probability densities, they can identify the low-density values in the dataset. The data points that are least likely to occur correspond to anomalies.
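
    To make that idea concrete, here is a minimal, hypothetical sketch of density-based flagging on a single simulated voltage stream; it uses a kernel density estimate as a simple stand-in for the learned model described below, and every value and variable name is invented for illustration:

    ```python
    # Fit a density to mostly "normal" readings, then flag the rarest ones.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    voltage = rng.normal(loc=120.0, scale=1.0, size=1000)  # simulated readings
    voltage = np.append(voltage, 150.0)                    # one injected spike

    kde = gaussian_kde(voltage)              # nonparametric density estimate
    density = kde(voltage)                   # estimated density at each reading
    threshold = np.quantile(density, 0.01)   # treat the rarest ~1 percent as anomalies
    print(voltage[density < threshold])      # the 150 V spike is among them
    ```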

    Estimating those probabilities is no easy task, especially since each sample captures multiple time series, and each time series is a set of multidimensional data points recorded over time. Plus, the sensors that capture all that data are conditional on one another, meaning they are connected in a certain configuration and one sensor can sometimes impact others.

    To learn the complex conditional probability distribution of the data, the researchers used a special type of deep-learning model called a normalizing flow, which is particularly effective at estimating the probability density of a sample.
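
    The sketch below illustrates the principle with a deliberately tiny flow (a single learned affine transformation, not the architecture used in the paper), trained by maximum likelihood on synthetic data and then used to score readings by their log-density under the change-of-variables formula; it assumes PyTorch is available:

    ```python
    import math
    import torch
    import torch.nn as nn

    class AffineFlow(nn.Module):
        """Toy flow: invertible map z = (x - shift) * exp(-log_scale)."""
        def __init__(self, dim):
            super().__init__()
            self.shift = nn.Parameter(torch.zeros(dim))
            self.log_scale = nn.Parameter(torch.zeros(dim))

        def log_prob(self, x):
            # Change of variables: log p(x) = log N(z; 0, I) + log|det dz/dx|
            z = (x - self.shift) * torch.exp(-self.log_scale)
            base = (-0.5 * z ** 2 - 0.5 * math.log(2 * math.pi)).sum(dim=1)
            return base - self.log_scale.sum()

    torch.manual_seed(0)
    readings = 5.0 + 2.0 * torch.randn(1000, 3)      # synthetic "normal" sensor data
    flow = AffineFlow(dim=3)
    opt = torch.optim.Adam(flow.parameters(), lr=0.05)
    for _ in range(300):
        opt.zero_grad()
        loss = -flow.log_prob(readings).mean()       # negative log-likelihood
        loss.backward()
        opt.step()

    # A typical reading gets a much higher log-density than an extreme one.
    print(flow.log_prob(torch.tensor([[5.0, 5.0, 5.0], [50.0, 50.0, 50.0]])))
    ```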

    They augmented that normalizing flow model using a type of graph, known as a Bayesian network, which can learn the complex, causal relationship structure between different sensors. This graph structure enables the researchers to see patterns in the data and estimate anomalies more accurately, Chen explains.

    “The sensors are interacting with each other, and they have causal relationships and depend on each other. So, we have to be able to inject this dependency information into the way that we compute the probabilities,” he says.

    This Bayesian network factorizes, or breaks down, the joint probability of the multiple time series data into less complex, conditional probabilities that are much easier to parameterize, learn, and evaluate. This allows the researchers to estimate the likelihood of observing certain sensor readings, and to identify those readings that have a low probability of occurring, meaning they are anomalies.
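
    As a toy illustration of that factorization (the three-sensor structure and the numbers here are invented; in the actual system the graph is learned from data and each conditional is modeled by the flow), the joint log-probability is simply a sum of easier conditional terms:

    ```python
    import math

    def log_gaussian(x, mean, std):
        # Log-density of a single Gaussian observation.
        return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

    def joint_log_prob(a, b, c):
        # Factorization over a hypothetical graph a -> b -> c:
        # log p(a, b, c) = log p(a) + log p(b | a) + log p(c | b)
        return (log_gaussian(a, mean=0.0, std=1.0)
                + log_gaussian(b, mean=0.8 * a, std=0.5)
                + log_gaussian(c, mean=0.5 * b, std=0.5))

    print(joint_log_prob(0.1, 0.1, 0.0))  # readings consistent with the dependencies
    print(joint_log_prob(0.1, 5.0, 0.0))  # "b" violates its dependency on "a": far lower
    ```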

    Their method is especially powerful because this complex graph structure does not need to be defined in advance — the model can learn the graph on its own, in an unsupervised manner.

    A powerful technique

    They tested this framework by seeing how well it could identify anomalies in power grid data, traffic data, and water system data. The datasets they used for testing contained anomalies that had been identified by humans, so the researchers were able to compare the anomalies their model identified with real glitches in each system.

    Their model outperformed all the baselines by detecting a higher percentage of true anomalies in each dataset.

    “For the baselines, a lot of them don’t incorporate graph structure. That perfectly corroborates our hypothesis. Figuring out the dependency relationships between the different nodes in the graph is definitely helping us,” Chen says.

    Their methodology is also flexible. Armed with a large, unlabeled dataset, they can tune the model to make effective anomaly predictions in other situations, like traffic patterns.

    Once the model is deployed, it would continue to learn from a steady stream of new sensor data, adapting to possible drift of the data distribution and maintaining accuracy over time, says Chen.

    Though this particular project is close to its end, he looks forward to applying the lessons he learned to other areas of deep-learning research, particularly on graphs.

    Chen and his colleagues could use this approach to develop models that map other complex, conditional relationships. They also want to explore how they can efficiently learn these models when the graphs become enormous, perhaps with millions or billions of interconnected nodes. And rather than finding anomalies, they could also use this approach to improve the accuracy of forecasts based on datasets or streamline other classification techniques.

    This work was funded by the MIT-IBM Watson AI Lab and the U.S. Department of Energy.

  • Tuning in to invisible waves on the JET tokamak

    Research scientist Alex Tinguely is readjusting to Cambridge and Boston.

    As a postdoc with the Plasma Science and Fusion Center (PSFC), the MIT graduate spent the last two years in Oxford, England, a city he recalls can be traversed entirely “in the time it takes to walk from MIT to Harvard.” With its ancient stone walls, cathedrals, cobblestone streets, and winding paths, that small city was his home base for a big project: JET, a tokamak that is currently the largest operating magnetic fusion energy experiment in the world.

    Located at the Culham Center for Fusion Energy (CCFE), part of the U.K. Atomic Energy Authority, this key research center of the European Fusion Program has recently announced historic success. Using a 50-50 deuterium-tritium fuel mixture for the first time since 1997, JET established a fusion power record of 10 megawatts output over five seconds. It produced 59 megajoules of fusion energy, more than doubling the 22 megajoule record it set in 1997. As a member of the JET Team, Tinguely has overseen the measurement and instrumentation systems (diagnostics) contributed by the MIT group.

    A lucky chance

    The postdoctoral opportunity arose just as Tinguely was graduating with a PhD in physics from MIT. Managed by Professor Miklos Porkolab as the principal investigator for over 20 years, this postdoctoral program has prepared multiple young researchers for careers in fusion facilities around the world. The collaborative research provided Tinguely the chance to work on a fusion device that would be adding tritium to the usual deuterium fuel.

    Fusion, the process that fuels the sun and other stars, could provide a long-term source of carbon-free power on Earth, if it can be harnessed. For decades researchers have tried to create an artificial star in a doughnut-shaped bottle, or “tokamak,” using magnetic fields to keep the turbulent plasma fuel confined and away from the walls of its container long enough for fusion to occur.

    In his graduate student days at MIT, Tinguely worked on the PSFC’s Alcator C-Mod tokamak, now decommissioned, which, like most magnetic fusion devices, used deuterium to create the plasmas for experiments. JET, since beginning operation in 1983, has done the same, later joining a small number of facilities that added tritium, a radioactive isotope of hydrogen. While this addition increases the amount of fusion, it also creates much more radiation and activation.

    Tinguely considers himself fortunate to have been placed at JET.

    “There aren’t that many operating tokamaks in the U.S. right now,” says Tinguely, “not to mention one that would be running deuterium-tritium (DT), which hasn’t been run for over 20 years, and which would be making some really important measurements. I got a very lucky spot where I was an MIT postdoc, but I lived in Oxford, working on a very international project.”

    Strumming magnetic field lines

    The measurements that interest Tinguely are of low-frequency electromagnetic waves in tokamak plasmas. Tinguely uses an antenna diagnostic developed by MIT, EPFL Swiss Plasma Center, and CCFE to probe the so-called Alfvén eigenmodes when they are stable, before the energetic alpha particles produced by DT fusion plasmas can drive them toward instability.

    What makes MIT’s “Alfvén Eigenmode Active Diagnostic” essential is that without it researchers cannot see, or measure, stable eigenmodes. Unstable modes show up clearly as magnetic fluctuations in the data, but stable waves are invisible without prompting from the antenna. These measurements help researchers understand the physics of Alfvén waves and their potential for degrading fusion performance, providing insights that will be increasingly important for future DT fusion devices.

    Tinguely likens the diagnostic to fingers on guitar strings.

    “The magnetic field lines in the tokamak are like guitar strings. If you have nothing to give energy to the strings — or give energy to the waves of the magnetic field lines — they just sit there, they don’t do anything. The energetic plasma particles can essentially ‘play the guitar strings,’ strum the magnetic field lines of the plasma, and that’s when you can see the waves in your plasma. But if the energetic particle drive of the waves is not strong enough you won’t see them, so you need to come along and ‘pluck the strings’ with our antenna. And that’s how you learn some information about the waves.”

    Much of Tinguely’s experience on JET took place during the Covid-19 pandemic, when off-site operation and analysis were the norm. However, because the MIT diagnostic needed to be physically turned on and off, someone from Tinguely’s team needed to be on site twice a day, a routine that became even less convenient when tritium was introduced.

    “When you have deuterium and tritium, you produce a lot of neutrons. So, some of the buildings became off-limits during operation, which meant they had to be turned on really early in the morning, like 6:30 a.m., and then turned off very late at night, around 10:30 p.m.”

    Looking to the future

    Now a research scientist at the PSFC, Tinguely continues to work at JET remotely. He sometimes wishes he could again ride that train from Oxford to Culham — which he fondly remembers for its clean, comfortable efficiency — to see work colleagues and to visit local friends. The life he created for himself in England included practice and performance with the 125-year-old Oxford Bach Choir, as well as weekly dinner service at The Gatehouse, a facility that offers free support for the local homeless and low-income communities.

    “Being back is exciting too,” he says. “It’s fun to see how things have changed, how people and projects have grown, what new opportunities have arrived.”

    He refers specifically to a project that is beginning to take up more of his time: SPARC, the tokamak the PSFC supports in collaboration with Commonwealth Fusion Systems. Designed to use deuterium-tritium to make net fusion gains, SPARC will be able to use the latest research on JET to advantage. Tinguely is already exploring how his expertise with Alfvén eigenmodes can support the experiment.

    “I actually had an opportunity to do my PhD — or DPhil as they would call it — at Oxford University, but I went to MIT for grad school instead,” Tinguely reveals. “So, this is almost like closure, in a sense. I got to have my Oxford experience in the end, just in a different way, and have the MIT experience too.”

    He adds, “And I see myself being here at MIT for some time.”

  • More sensitive X-ray imaging

    Scintillators are materials that emit light when bombarded with high-energy particles or X-rays. In medical or dental X-ray systems, they convert incoming X-ray radiation into visible light that can then be captured using film or photosensors. They’re also used for night-vision systems and for research, such as in particle detectors or electron microscopes.

    Researchers at MIT have now shown how one could improve the efficiency of scintillators by at least tenfold, and perhaps even a hundredfold, by changing the material’s surface to create certain nanoscale configurations, such as arrays of wave-like ridges. While past attempts to develop more efficient scintillators have focused on finding new materials, the new approach could in principle work with any of the existing materials.

    Though it will require more time and effort to integrate their scintillators into existing X-ray machines, the team believes that this method might lead to improvements in medical diagnostic X-rays or CT scans, to reduce dose exposure and improve image quality. In other applications, such as X-ray inspection of manufactured parts for quality control, the new scintillators could enable inspections with higher accuracy or at faster speeds.

    The findings are described today in the journal Science, in a paper by MIT doctoral students Charles Roques-Carmes and Nicholas Rivera; MIT professors Marin Soljacic, Steven Johnson, and John Joannopoulos; and 10 others.

    While scintillators have been in use for some 70 years, much of the research in the field has focused on developing new materials that produce brighter or faster light emissions. The new approach instead applies advances in nanotechnology to existing materials. By creating patterns in scintillator materials at a length scale comparable to the wavelengths of the light being emitted, the team found that it was possible to dramatically change the material’s optical properties.

    To make what they coined “nanophotonic scintillators,” Roques-Carmes says, “you can directly make patterns inside the scintillators, or you can glue on another material that would have holes on the nanoscale. The specifics depend on the exact structure and material.” For this research, the team took a scintillator and made holes spaced apart by roughly one optical wavelength, or about 500 nanometers (billionths of a meter).

    “The key to what we’re doing is a general theory and framework we have developed,” Rivera says. This allows the researchers to calculate the scintillation levels that would be produced by any arbitrary configuration of nanophotonic structures. The scintillation process itself involves a series of steps, making it complicated to unravel. The framework the team developed involves integrating three different types of physics, Roques-Carmes says. Using this system they have found a good match between their predictions and the results of their subsequent experiments.

    The experiments showed a tenfold improvement in emission from the treated scintillator. “So, this is something that might translate into applications for medical imaging, which are optical photon-starved, meaning the conversion of X-rays to optical light limits the image quality. [In medical imaging,] you do not want to irradiate your patients with too much of the X-rays, especially for routine screening, and especially for young patients as well,” Roques-Carmes says.

    “We believe that this will open a new field of research in nanophotonics,” he adds. “You can use a lot of the existing work and research that has been done in the field of nanophotonics to improve significantly on existing materials that scintillate.”

    “The research presented in this paper is hugely significant,” says Rajiv Gupta, chief of neuroradiology at Massachusetts General Hospital and an associate professor at Harvard Medical School, who was not associated with this work. “Nearly all detectors used in the $100 billion [medical X-ray] industry are indirect detectors,” which is the type of detector the new findings apply to, he says. “Everything that I use in my clinical practice today is based on this principle. This paper improves the efficiency of this process by 10 times. If this claim is even partially true, say the improvement is two times instead of 10 times, it would be transformative for the field!”

    Soljacic says that while their experiments proved a tenfold improvement in emission could be achieved in particular systems, by further fine-tuning the design of the nanoscale patterning, “we also show that you can get up to 100 times [improvement] in certain scintillator systems, and we believe we also have a path toward making it even better.”

    Soljacic points out that in other areas of nanophotonics, a field that deals with how light interacts with materials that are structured at the nanometer scale, the development of computational simulations has enabled rapid, substantial improvements, for example in the development of solar cells and LEDs. The new models this team developed for scintillating materials could facilitate similar leaps in this technology, he says.

    Nanophotonics techniques “give you the ultimate power of tailoring and enhancing the behavior of light,” Soljacic says. “But until now, this promise, this ability to do this with scintillation was unreachable because modeling the scintillation was very challenging. Now, this work for the first time opens up this field of scintillation, fully opens it, for the application of nanophotonics techniques.” More generally, the team believes that the combination of nanophotonics and scintillators might ultimately enable higher resolution, reduced X-ray dose, and energy-resolved X-ray imaging.

    This work is “very original and excellent,” says Eli Yablonovitch, a professor of Electrical Engineering and Computer Sciences at the University of California at Berkeley, who was not associated with this research. “New scintillator concepts are very important in medical imaging and in basic research.”

    Yablonovitch adds that while the concept still needs to be proven in a practical device, “After years of research on photonic crystals in optical communication and other fields, it’s long overdue that photonic crystals should be applied to scintillators, which are of great practical importance yet have been overlooked” until this work.

    The research team included Ali Ghorashi, Steven Kooi, Yi Yang, Zin Lin, Justin Beroz, Aviram Massuda, Jamison Sloan, and Nicolas Romeo at MIT; Yang Yu at Raith America, Inc.; and Ido Kaminer at Technion in Israel. The work was supported, in part, by the U.S. Army Research Office and the U.S. Army Research Laboratory through the Institute for Soldier Nanotechnologies, by the Air Force Office of Scientific Research, and by a Mathworks Engineering Fellowship.

  • New power sources

    In the mid-1990s, a few energy activists in Massachusetts had a vision: What if citizens had choice about the energy they consumed? Instead of being force-fed electricity sources selected by a utility company, what if cities, towns, and groups of individuals could purchase power that was cleaner and cheaper?

    The small group of activists — including a journalist, the head of a small nonprofit, a local county official, and a legislative aide — drafted model legislation along these lines that reached the state Senate in 1995. The measure stalled out. In 1997, they tried again. Massachusetts legislators were busy passing a bill to reform the state power industry in other ways, and this time the activists got their low-profile policy idea included in it — as a provision so marginal it only got a brief mention in The Boston Globe’s coverage of the bill.

    Today, this idea, often known as Community Choice Aggregation (CCA), is used by roughly 36 million people in the U.S., or 11 percent of the population. Local residents, as a bloc, purchase energy with certain specifications attached, and over 1,800 communities have adopted CCA in six states, with others testing CCA pilot programs. From such modest beginnings, CCA has become a big deal.

    “It started small, then had a profound impact,” says David Hsu, an associate professor at MIT who studies energy policy issues. Indeed, the trajectory of CCA is so striking that Hsu has researched its origins, combing through a variety of archival sources and interviewing the principals. He has now written a journal article examining the lessons and implications of this episode.

    Hsu’s paper, “Straight out of Cape Cod: The origin of community choice aggregation and its spread to other states,” appears in advance online form in the journal Energy Research and Social Science, and in the April print edition of the publication.

    “I wanted to show people that a small idea could take off into something big,” Hsu says. “For me that’s a really hopeful democratic story, where people could do something without feeling they had to take on a whole giant system that wouldn’t immediately respond to only one person.”

    Local control

    Aggregating consumers to purchase energy was not a novelty in the 1990s. Companies within many industries have long joined forces to gain purchasing power for energy. And Rhode Island tried a form of CCA slightly earlier than Massachusetts did.

    However, it is the Massachusetts model that has been adopted widely: Cities or towns can require power purchases from, say, renewable sources, while individual citizens can opt out of those agreements. More state funding (for things like efficiency improvements) is redirected to cities and towns as well.

    In both ways, CCA policies provide more local control over energy delivery. They have been adopted in California, Illinois, New Jersey, New York, and Ohio. Meanwhile, Maryland, New Hampshire, and Virginia have recently passed similar legislation (also known as municipal or government aggregation, or community choice energy).

    For cities and towns, Hsu says, “Maybe you don’t own outright the whole energy system, but let’s take away one particular function of the utility, which is procurement.”

    That vision motivated a handful of Massachusetts activists and policy experts in the 1990s, including journalist Scott Ridley, who co-wrote a 1986 book, “Power Struggle,” with the University of Massachusetts historian Richard Rudolph and had spent years thinking about ways to reconfigure the energy system; Matt Patrick, chair of a local nonprofit focused on energy efficiency; Rob O’Leary, a local official in Barnstable County, on Cape Cod; and Paul Fenn, a staff aide to the state senator who chaired the legislature’s energy committee.

    “It started with these political activists,” Hsu says.

    Hsu’s research emphasizes several lessons to be learned from the fact that the legislation first failed in 1995, before unexpectedly passing in 1997. Ridley remained an author and public figure; Patrick and O’Leary would each eventually be elected to the state legislature, but only after 2000; and Fenn had left his staff position by 1995 and worked with the group long-distance from California (where he became a long-term advocate on the issue). Thus, at the time CCA passed in 1997, none of its main advocates held an insider position in state politics. How did it succeed?

    Lessons of the legislation

    In the first place, Hsu believes, a legislative process resembles what the political theorist John Kingdon has called a “multiple streams framework,” in which “many elements of the policymaking process are separate, meandering, and uncertain.” Legislation isn’t entirely controlled by big donors or other interest groups, and “policy entrepreneurs” can find success in unpredictable windows of opportunity.

    “It’s the most true-to-life theory,” says Hsu.  

    Second, Hsu emphasizes, finding allies is crucial. In the case of CCA, that came about in a few ways. Many towns in Massachusetts have a town-level legislature known as Town Meeting; the activists got those bodies in about 20 towns to pass nonbinding resolutions in favor of community choice. O’Leary helped create a regional county commission in Barnstable County, while Patrick crafted an energy plan for it. High electricity rates were affecting all of Cape Cod at the time, so community choice also served as an economic benefit for Cape Cod’s working-class service-industry employees. The activists also found that adding an opt-out clause to the 1997 version appealed to legislators, who would support CCA if their constituents were not all bound to it.

    “You really have to stick with it, and you have to look for coalition partners,” Hsu says. “It’s fun to hear them [the activists] talk about going to Town Meetings, and how they tried to build grassroots support. If you look for allies, you can get things done. [I hope] the people can see [themselves] in other people’s activism even if they’re not exactly the same as you are.”

    By 1997, the CCA legislation had more geographic support, was understood as both an economic and environmental benefit for voters, and would not force membership upon anyone. The activists, while giving media interviews and holding conferences, had found additional traction in the principle of citizen choice.

    “It’s interesting to me how the rhetoric of [citizen] choice and the rhetoric of democracy proves to be effective,” Hsu says. “Legislators feel like they have to give everyone some choice. And it expresses a collective desire for a choice that the utilities take away by being monopolies.”

    He adds: “We need to set out principles that shape systems, rather than just taking the system as a given and trying to justify principles that are 150 years old.”

    One last element in CCA passage was good timing. The governor and legislature in Massachusetts were already seeking a “grand bargain” to restructure electricity delivery and loosen the grip of utilities; the CCA fit in as part of this larger reform movement. Still, CCA adoption has been gradual; about one-third of Massachusetts towns with CCA have only adopted it within the last five years.

    CCA’s growth does not mean it’s invulnerable to repeal or utility-funded opposition efforts — “In California there’s been pretty intense pushback,” Hsu notes. Still, Hsu concludes, the fact that a handful of activists could start a national energy-policy movement is a useful reminder that everyone’s actions can make a difference.

    “It wasn’t like they went charging through a barricade, they just found a way around it,” Hsu says. “I want my students to know you can organize and rethink the future. It takes some commitment and work over a long time.”