More stories

  • Energy hackers give a glimpse of a postpandemic future

    After going virtual in 2020, the MIT EnergyHack was back on campus last weekend in a brand-new hybrid format that saw teams participate both in person and virtually from across the globe. While the hybrid format presented new challenges to the organizing team, it also allowed for one of the most diverse and inspiring iterations of the event to date.

    “Organizing a hybrid event was a challenging but important goal in 2021 as we slowly come out of the pandemic, but it was great to realize the benefits of the format this year,” says Kailin Graham, a graduate student in MIT’s Technology and Policy Program and one of the EnergyHack communications directors. “Not only were we able to get students back on campus and taking advantage of those important in-person interactions, but preserving a virtual avenue meant that we were still able to hear brilliant ideas from those around the world who might not have had the opportunity to contribute otherwise, and that’s what the EnergyHack is really about.”

    In fact, of the over 300 participants registered for the event, more than a third participated online, and two of the three grand prize winners participated entirely virtually. Teams of students at any degree level from any institution were welcome, and the event saw participants with an incredible range of backgrounds and expertise, from undergraduates to MBAs, put their heads together to create innovative solutions.

    This year’s event was supported by a host of energy partners both in industry and within MIT. The MIT Energy and Climate Club worked with sponsoring organizations Smartflower, Chargepoint, Edison Energy, Line Vision, Chevron, Shell, and Sterlite Power to develop seven problem statements for hackers, with each judged by representatives from their respective organizations. The challenges ranged from envisioning the future of electric vehicle fueling to quantifying the social and environmental benefits of renewable energy projects.

    Hackers had 36 hours to come up with a solution to one challenge, and teams then presented these solutions in a short pitch to a judging panel. Finalists from each challenge progressed to the final judging round to pitch against each other in pursuit of three grand prizes. Team COPrs came in third, receiving $1,000 for their solution to the Line Vision challenge; Crown Joules snagged second place and $1,500 for their approach to the Chargepoint problem; and Feel AMPowered took first place and $2,000 for their innovative solution to the Smartflower challenge.

    In addition to a new format, this year’s EnergyHack also featured a new emphasis on climate change impacts and the energy transition. According to Arina Khotimsky, co-managing director of EnergyHack 2021, “Moving forward after this year’s rebranding of the MIT Energy and Climate Club, we were hoping to carry this aim to EnergyHack. It was incredibly exciting to have ChargePoint and SmartFlower leading as our Sustainability Circle-tier sponsors and bringing their impactful innovations to the conversations at EnergyHack 2021.”

    To the organizing team, whose members range from sophomores to MBAs, this aspect of the event was especially important, and their hope was for the event to inspire a generation of young energy and climate leaders — a hope, according to them, that seems to have been fulfilled.

    “I was floored by the positive feedback we received from hackers, both in-person and virtual, about how much they enjoyed the hackathon,” says Graham. “It’s all thanks to our team of incredibly hardworking organizing directors who made EnergyHack 2021 what it was. It was incredibly rewarding seeing everyone’s impact on the event, and we are looking forward to seeing how it evolves in the future.”

  • Timber or steel? Study helps builders reduce carbon footprint of truss structures

    Buildings are a big contributor to global warming, not just in their ongoing operations but in the materials used in their construction. Truss structures — those crisscross arrays of diagonal struts used throughout modern construction, in everything from antenna towers to support beams for large buildings — are typically made of steel or wood or a combination of both. But little quantitative research has been done on how to pick the right materials to minimize these structures’ contribution to global warming.

    The “embodied carbon” in a construction material includes the fuel used in the material’s production (for mining and smelting steel, for example, or for felling and processing trees) and in transporting the materials to a site. It also includes the equipment used for the construction itself.

    Now, researchers at MIT have done a detailed analysis and created a set of computational tools to enable architects and engineers to design truss structures in a way that can minimize their embodied carbon while maintaining all needed properties for a given building application. While in general wood produces a much lower carbon footprint, using steel in places where its properties can provide maximum benefit can provide an optimized result, they say.

    The analysis is described in a paper published today in the journal Engineering Structures, by graduate student Ernest Ching and MIT assistant professor of civil and environmental engineering Josephine Carstensen.

    “Construction is a huge greenhouse gas emitter that has kind of been flying under the radar for the past decades,” says Carstensen. But in recent years building designers “are starting to be more focused on how to not just reduce the operating energy associated with building use, but also the important carbon associated with the structure itself.” And that’s where this new analysis comes in.

    The two main options in reducing the carbon emissions associated with truss structures, she says, are substituting materials or changing the structure. However, there has been “surprisingly little work” on tools to help designers figure out emissions-minimizing strategies for a given situation, she says.

    The new system makes use of a technique called topology optimization, which allows for the input of basic parameters, such as the amount of load to be supported and the dimensions of the structure, and can be used to produce designs optimized for different characteristics, such as weight, cost, or, in this case, global warming impact.

    Wood performs very well under forces of compression, but not as well as steel when it comes to tension — that is, a tendency to pull the structure apart. Carstensen says that in general, wood is far better than steel in terms of embodied carbon, so “especially if you have a structure that doesn’t have any tension, then you should definitely only use timber” in order to minimize emissions. One tradeoff is that “the weight of the structure is going to be bigger than it would be with steel,” she says.
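
    To make that tradeoff concrete, here is a minimal sketch (in Python, with invented material properties) of the member-by-member logic described above: size each strut for its axial force, then pick whichever material carries that force with less embodied carbon. It is an illustration only; the formulation in the paper is a full topology optimization and is considerably more involved.

    ```python
    # Hedged sketch: per-member material choice for a truss, minimizing embodied
    # carbon under a simple axial-stress check. All numbers are illustrative
    # placeholders, not values from the MIT study.

    MATERIALS = {
        #            allowable stress (MPa), density (kg/m^3), embodied carbon (kgCO2e/kg)
        "timber": {"sigma_allow": 20.0,  "density": 500.0,  "carbon": 0.45},
        "steel":  {"sigma_allow": 250.0, "density": 7850.0, "carbon": 1.85},
    }

    def size_member(force_n: float, length_m: float, material: dict) -> dict:
        """Size a member's cross-section for an axial force and report its embodied carbon."""
        area_m2 = abs(force_n) / (material["sigma_allow"] * 1e6)   # required area
        mass_kg = area_m2 * length_m * material["density"]
        return {"area_m2": area_m2, "mass_kg": mass_kg,
                "carbon_kg": mass_kg * material["carbon"]}

    def pick_material(force_n: float, length_m: float) -> tuple:
        """Choose the material with the lowest embodied carbon for this member.
        Negative force = compression, positive = tension. Buckling, connections,
        and self-weight are ignored in this toy model."""
        options = {name: size_member(force_n, length_m, m) for name, m in MATERIALS.items()}
        best = min(options, key=lambda n: options[n]["carbon_kg"])
        return best, options[best]

    if __name__ == "__main__":
        # A tension member and a compression member of equal magnitude:
        for f in (+50e3, -50e3):                       # axial force in newtons
            mat, result = pick_material(f, length_m=3.0)
            kind = "tension" if f > 0 else "compression"
            print(f"{kind}: choose {mat}, {result['carbon_kg']:.1f} kg CO2e")
    ```

    With these placeholder numbers timber wins in both cases, echoing the point that timber minimizes embodied carbon wherever it is structurally feasible; steel’s advantages only appear once buckling, connection design, and overall structural behavior enter the optimization.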

    The tools they developed, which were the basis for Ching’s master’s thesis, can be applied at different stages, either in the early planning phase of a structure, or later on in the final stages of a design.

    As an exercise, the team developed a proposal for reengineering several trusses using these optimization tools, and demonstrated that a significant savings in embodied greenhouse gas emissions could be achieved with no loss of performance. While they have shown that improvements of at least 10 percent can be achieved, she says those estimates are “not exactly apples to apples” and the likely savings could actually be two to three times that.

    “It’s about choosing materials more smartly,” she says, for the specifics of a given application. Often in existing buildings “you will have timber where there’s compression, and where that makes sense, and then it will have really skinny steel members, in tension, where that makes sense. And that’s also what we see in our design solutions that are suggested, but perhaps we can see it even more clearly.” The tools are not ready for commercial use though, she says, because they haven’t yet added a user interface.

    Carstensen sees a trend toward increasing use of timber in large construction, which represents an important potential for reducing the world’s overall carbon emissions. “There’s a big interest in the construction industry in mass timber structures, and this speaks right into that area. So, the hope is that this would make inroads into the construction business and actually make a dent in that very large contribution to greenhouse gas emissions.”

  • The reasons behind lithium-ion batteries’ rapid cost decline

    Lithium-ion batteries, those marvels of lightweight power that have made possible today’s age of handheld electronics and electric vehicles, have plunged in cost since their introduction three decades ago at a rate similar to the drop in solar panel prices, as documented by a study published last March. But what brought about such an astonishing cost decline, of about 97 percent?

    Some of the researchers behind that earlier study have now analyzed what accounted for the extraordinary savings. They found that by far the biggest factor was work on research and development, particularly in chemistry and materials science. This outweighed the gains achieved through economies of scale, though that turned out to be the second-largest category of reductions.

    The new findings are being published today in the journal Energy and Environmental Science, in a paper by MIT postdoc Micah Ziegler, recent graduate student Juhyun Song PhD ’19, and Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society.

    The findings could be useful for policymakers and planners to help guide spending priorities in order to continue the pathway toward ever-lower costs for this and other crucial energy storage technologies, according to Trancik. Their work suggests that there is still considerable room for further improvement in electrochemical battery technologies, she says.

    The analysis required digging through a variety of sources, since much of the relevant information consists of closely held proprietary business data. “The data collection effort was extensive,” Ziegler says. “We looked at academic articles, industry and government reports, press releases, and specification sheets. We even looked at some legal filings that came out. We had to piece together data from many different sources to get a sense of what was happening.” He says they collected “about 15,000 qualitative and quantitative data points, across 1,000 individual records from approximately 280 references.”

    Data from the earliest times are hardest to access and can have the greatest uncertainties, Trancik says, but by comparing different data sources from the same period they have attempted to account for these uncertainties.

    Overall, she says, “we estimate that the majority of the cost decline, more than 50 percent, came from research-and-development-related activities.” That included both private sector and government-funded research and development, and “the vast majority” of that cost decline within that R&D category came from chemistry and materials research.

    That was an interesting finding, she says, because “there were so many variables that people were working on through very different kinds of efforts,” including the design of the battery cells themselves, their manufacturing systems, supply chains, and so on. “The cost improvement emerged from a diverse set of efforts and many people, and not from the work of only a few individuals.”

    The findings about the importance of investment in R&D were especially significant, Ziegler says, because much of this investment happened after lithium-ion battery technology was commercialized, a stage at which some analysts thought the research contribution would become less significant. Over roughly a 20-year period starting five years after the batteries’ introduction in the early 1990s, he says, “most of the cost reduction still came from R&D. The R&D contribution didn’t end when commercialization began. In fact, it was still the biggest contributor to cost reduction.”

    The study took advantage of an analytical approach that Trancik and her team initially developed to analyze the similarly precipitous drop in costs of silicon solar panels over the last few decades. They also applied the approach to understand the rising costs of nuclear energy. “This is really getting at the fundamental mechanisms of technological change,” she says. “And we can also develop these models looking forward in time, which allows us to uncover the levers that people could use to improve the technology in the future.”

    One advantage of the methodology Trancik and her colleagues have developed, she says, is that it helps to sort out the relative importance of different factors when many variables are changing all at once, which typically happens as a technology improves. “It’s not simply adding up the cost effects of these variables,” she says, “because many of these variables affect many different cost components. There’s this kind of intricate web of dependencies.” But the team’s methodology makes it possible to “look at how that overall cost change can be attributed to those variables, by essentially mapping out that network of dependencies,” she says.
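
    As a rough illustration of that kind of attribution, the sketch below (Python, with an invented two-variable cost model and made-up numbers) changes one variable at a time and records the effect on a toy cell cost. The paper’s method is far more detailed and explicitly handles the interactions among variables that this naive approach leaves out.

    ```python
    # Hedged sketch of attributing an overall cost change to underlying variables.
    # The cost model and numbers here are invented placeholders; the actual study
    # uses a detailed, component-level model of cell cost and many more variables.

    def cell_cost(v: dict) -> float:
        """Toy cost model: $/kWh = materials + non-materials, where materials
        depend on price per kg and the kg of material needed per kWh."""
        return v["material_price"] * v["material_per_kwh"] + v["other_cost"]

    def attribute(start: dict, end: dict) -> dict:
        """One-at-a-time attribution: change each variable from its start to its
        end value (holding the others at their start values) and record the
        resulting cost change. Cruder than the paper's method, which maps the
        full network of dependencies among variables."""
        base = cell_cost(start)
        return {k: cell_cost({**start, k: end[k]}) - base for k in start}

    start = {"material_price": 40.0, "material_per_kwh": 6.0, "other_cost": 400.0}
    end   = {"material_price": 15.0, "material_per_kwh": 2.0, "other_cost": 120.0}

    total = cell_cost(end) - cell_cost(start)
    for var, delta in attribute(start, end).items():
        print(f"{var}: {delta:+.0f} $/kWh")
    print(f"total change: {total:+.0f} $/kWh "
          "(individual effects overlap, so they need not sum exactly to the total)")
    ```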

    This can help provide guidance on public spending, private investments, and other incentives. “What are all the things that different decision makers could do?” she asks. “What decisions do they have agency over so that they could improve the technology, which is important in the case of low-carbon technologies, where we’re looking for solutions to climate change and we have limited time and limited resources? The new approach allows us to potentially be a bit more intentional about where we make those investments of time and money.”

    “This paper collects data available in a systematic way to determine changes in the cost components of lithium-ion batteries between 1990-1995 and 2010-2015,” says Laura Diaz Anadon, a professor of climate change policy at Cambridge University, who was not connected to this research. “This period was an important one in the history of the technology, and understanding the evolution of cost components lays the groundwork for future work on mechanisms and could help inform research efforts in other types of batteries.”

    The research was supported by the Alfred P. Sloan Foundation, the Environmental Defense Fund, and the MIT Technology and Policy Program.

  • Design’s new frontier

    In the 1960s, the advent of computer-aided design (CAD) sparked a revolution in design. For his PhD thesis in 1963, MIT Professor Ivan Sutherland developed Sketchpad, a game-changing software program that enabled users to draw, move, and resize shapes on a computer. Over the course of the next few decades, CAD software reshaped how everything from consumer products to buildings and airplanes was designed.

    “CAD was part of the first wave in computing in design. The ability of researchers and practitioners to represent and model designs using computers was a major breakthrough and still is one of the biggest outcomes of design research, in my opinion,” says Maria Yang, Gail E. Kendall Professor and director of MIT’s Ideation Lab.

    Innovations in 3D printing during the 1980s and 1990s expanded CAD’s capabilities beyond traditional injection molding and casting methods, providing designers even more flexibility. Designers could sketch, ideate, and develop prototypes or models faster and more efficiently. Meanwhile, with the push of a button, software like that developed by Professor Emeritus David Gossard of MIT’s CAD Lab could solve equations simultaneously to produce a new geometry on the fly.

    In recent years, mechanical engineers have expanded the computing tools they use to ideate, design, and prototype. More sophisticated algorithms and the explosion of machine learning and artificial intelligence technologies have sparked a second revolution in design engineering.

    Researchers and faculty at MIT’s Department of Mechanical Engineering are utilizing these technologies to re-imagine how the products, systems, and infrastructures we use are designed. These researchers are at the forefront of the new frontier in design.

    Computational design

    Faez Ahmed wants to reinvent the wheel, or at least the bicycle wheel. He and his team at MIT’s Design Computation & Digital Engineering Lab (DeCoDE) use an artificial intelligence-driven design method that can generate entirely novel and improved designs for a range of products — including the traditional bicycle. They create advanced computational methods to blend human-driven design with simulation-based design.

    “The focus of our DeCoDE lab is computational design. We are looking at how we can create machine learning and AI algorithms to help us discover new designs that are optimized based on specific performance parameters,” says Ahmed, an assistant professor of mechanical engineering at MIT.

    For their work using AI-driven design for bicycles, Ahmed and his collaborator Professor Daniel Frey wanted to make it easier to design customizable bicycles, and by extension, encourage more people to use bicycles over transportation methods that emit greenhouse gases.

    To start, the group gathered a dataset of 4,500 bicycle designs. Using this massive dataset, they tested the limits of what machine learning could do. First, they developed algorithms to group bicycles that looked similar together and explore the design space. They then created machine learning models that could successfully predict what components are key in identifying a bicycle style, such as a road bike versus a mountain bike.

    Once the algorithms were good enough at identifying bicycle designs and parts, the team proposed novel machine learning tools that could use this data to create a unique and creative design for a bicycle based on certain performance parameters and rider dimensions.

    Ahmed used a generative adversarial network — or GAN — as the basis of this model. GAN models utilize neural networks that can create new designs based on vast amounts of data. However, using GAN models alone would result in homogeneous designs that lack novelty and can’t be assessed in terms of performance. To address these issues in design problems, Ahmed has developed a new method which he calls “PaDGAN,” performance augmented diverse GAN.

    “When we apply this type of model, what we see is that we can get large improvements in the diversity, quality, as well as novelty of the designs,” Ahmed explains.
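
    The sketch below (Python/NumPy, simplified and with toy data) illustrates the kind of term such a model can add to a standard GAN objective: a quality-weighted similarity kernel whose log-determinant is large only when a batch of generated designs is both diverse and high-performing. It is meant to convey the idea, not to reproduce the PaDGAN formulation or a trainable GAN.

    ```python
    import numpy as np

    def diversity_quality_bonus(designs: np.ndarray, quality: np.ndarray,
                                length_scale: float = 1.0) -> float:
        """Quality-weighted determinantal-point-process-style score for a batch of
        generated designs: an RBF similarity kernel is scaled by each design's
        predicted performance, and the log-determinant rewards batches that are
        both diverse and high-performing. In a PaDGAN-style setup a term like this
        would be added (with some weight) to the usual generator objective."""
        sq_dists = np.sum((designs[:, None, :] - designs[None, :, :]) ** 2, axis=-1)
        similarity = np.exp(-sq_dists / (2.0 * length_scale ** 2))
        L = quality[:, None] * similarity * quality[None, :]
        # Small jitter keeps the determinant well-defined for near-duplicate designs.
        sign, logdet = np.linalg.slogdet(L + 1e-6 * np.eye(len(designs)))
        return logdet

    # Toy batch: four "designs" in a 3-parameter space with predicted performance scores.
    rng = np.random.default_rng(0)
    diverse = rng.normal(size=(4, 3))
    clumped = np.tile(diverse[:1], (4, 1)) + 0.01 * rng.normal(size=(4, 3))
    quality = np.array([0.9, 0.8, 0.85, 0.95])

    print("diverse batch bonus:", diversity_quality_bonus(diverse, quality))
    print("clumped batch bonus:", diversity_quality_bonus(clumped, quality))  # much lower
    ```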

    Using this approach, Ahmed’s team developed an open-source computational design tool for bicycles freely available on their lab website. They hope to further develop a set of generalizable tools that can be used across industries and products.

    Longer term, Ahmed has his sights set on loftier goals. He hopes the computational design tools he develops could lead to “design democratization,” putting more power in the hands of the end user.

    “With these algorithms, you can have more individualization where the algorithm assists a customer in understanding their needs and helps them create a product that satisfies their exact requirements,” he adds.

    Using algorithms to democratize the design process is a goal shared by Stefanie Mueller, an associate professor in electrical engineering and computer science and mechanical engineering.

    Personal fabrication

    Platforms like Instagram give users the freedom to instantly edit their photographs or videos using filters. In one click, users can alter the palette, tone, and brightness of their content by applying filters that range from bold colors to sepia-toned or black-and-white. Mueller, X-Window Consortium Career Development Professor, wants to bring this concept of the Instagram filter to the physical world.

    “We want to explore how digital capabilities can be applied to tangible objects. Our goal is to bring reprogrammable appearance to the physical world,” explains Mueller, director of the HCI Engineering Group based out of MIT’s Computer Science and Artificial Intelligence Laboratory.

    Mueller’s team utilizes a combination of smart materials, optics, and computation to advance personal fabrication technologies that would allow end users to alter the design and appearance of the products they own. They tested this concept in a project they dubbed “PhotoChromeleon.”

    First, a mix of photochromic cyan, magenta, and yellow dyes is airbrushed onto an object — in this instance, a 3D sculpture of a chameleon. Using software they developed, the team sketches the exact color pattern they want to achieve on the object itself. An ultraviolet light shines on the object to activate the dyes.

    To actually create the physical pattern on the object, Mueller has developed an optimization algorithm to use alongside a normal office projector outfitted with red, green, and blue LED lights. These lights shine on specific pixels on the object for a given period of time to physically change the makeup of the photochromic pigments.

    “This fancy algorithm tells us exactly how long we have to shine the red, green, and blue light on every single pixel of an object to get the exact pattern we’ve programmed in our software,” says Mueller.
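
    A highly simplified version of that computation might look like the following (Python with SciPy; the bleaching-rate matrix and the exponential-decay model are assumptions for illustration, not the measured behavior of the actual dyes or projector):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Hedged sketch of the per-pixel exposure-time computation described above.
    # Assumed toy model: after UV saturates all three photochromic dyes (C, M, Y),
    # each projector channel (R, G, B) bleaches each dye exponentially at a
    # dye-specific rate. The rate constants below are invented placeholders.
    BLEACH_RATES = np.array([
        # R     G     B        (rows: cyan, magenta, yellow dye)
        [0.80, 0.10, 0.05],
        [0.10, 0.70, 0.08],
        [0.05, 0.12, 0.60],
    ])

    def exposure_times(target_saturation: np.ndarray) -> np.ndarray:
        """Solve for nonnegative R/G/B exposure times (per pixel) so that each dye
        decays from 1.0 to its target saturation: exp(-K @ t) = target."""
        rhs = -np.log(np.clip(target_saturation, 1e-6, 1.0))
        times, _residual = nnls(BLEACH_RATES, rhs)   # enforce t >= 0
        return times

    # Example: keep cyan mostly saturated, bleach magenta and yellow for this pixel.
    t = exposure_times(np.array([0.9, 0.3, 0.5]))
    print("R/G/B exposure times (arbitrary units):", np.round(t, 2))
    ```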

    Giving this freedom to the end user enables limitless possibilities. Mueller’s team has applied this technology to iPhone cases, shoes, and even cars. In the case of shoes, Mueller envisions a shoebox embedded with UV and LED light projectors. Users could put their shoes in the box overnight and the next day have a pair of shoes in a completely new pattern.

    Mueller wants to expand her personal fabrication methods to the clothes we wear. Rather than utilize the light projection technique developed in the PhotoChromeleon project, her team is exploring the possibility of weaving LEDs directly into clothing fibers, allowing people to change their shirt’s appearance as they wear it. These personal fabrication technologies could completely alter consumer habits.

    “It’s very interesting for me to think about how these computational techniques will change product design on a high level,” adds Mueller. “In the future, a consumer could buy a blank iPhone case and update the design on a weekly or daily basis.”

    Computational fluid dynamics and participatory design

    Another team of mechanical engineers, including Sili Deng, the Brit (1961) & Alex (1949) d’Arbeloff Career Development Professor, is developing a different kind of design tool that could have a large impact on individuals in low- and middle-income countries across the world.

    As Deng walked down the hallway of Building 1 on MIT’s campus, a monitor playing a video caught her eye. The video featured work done by mechanical engineers and MIT D-Lab on developing cleaner burning briquettes for cookstoves in Uganda. Deng immediately knew she wanted to get involved.

    “As a combustion scientist, I’ve always wanted to work on such a tangible real-world problem, but the field of combustion tends to focus more heavily on the academic side of things,” explains Deng.

    After reaching out to colleagues in MIT D-Lab, Deng joined a collaborative effort to develop a new cookstove design tool for the 3 billion people across the world who burn solid fuels to cook and heat their homes. These stoves often emit soot and carbon monoxide, not only leading to millions of deaths each year but also worsening the world’s greenhouse gas emission problem.

    The team is taking a three-pronged approach to developing this solution, using a combination of participatory design, physical modeling, and experimental validation to create a tool that will lead to the production of high-performing, low-cost energy products.

    Deng and her team in the Deng Energy and Nanotechnology Group use physics-based modeling for the combustion and emission process in cookstoves.

    “My team is focused on computational fluid dynamics. We use computational and numerical studies to understand the flow field where the fuel is burned and releases heat,” says Deng.

    These flow mechanics are crucial to understanding how to minimize heat loss and make cookstoves more efficient, as well as learning how dangerous pollutants are formed and released in the process.

    Using computational methods, Deng’s team performs three-dimensional simulations of the complex chemistry and transport coupling at play in the combustion and emission processes. They then use these simulations to build a combustion model for how fuel is burned and a pollution model that predicts carbon monoxide emissions.

    Deng’s models are used by a group led by Daniel Sweeney in MIT D-Lab to carry out experimental validation on stove prototypes. Finally, Professor Maria Yang uses participatory design methods to integrate user feedback, ensuring the design tool can actually be used by people across the world.

    The end goal for this collaborative team is to not only provide local manufacturers with a prototype they could produce themselves, but to also provide them with a tool that can tweak the design based on local needs and available materials.

    Deng sees wide-ranging applications for the computational fluid dynamics her team is developing.

    “We see an opportunity to use physics-based modeling, augmented with a machine learning approach, to come up with chemical models for practical fuels that help us better understand combustion. Therefore, we can design new methods to minimize carbon emissions,” she adds.

    While Deng is utilizing simulations and machine learning at the molecular level to improve designs, others are taking a more macro approach.

    Designing intelligent systems

    When it comes to intelligent design, Navid Azizan thinks big. He hopes to help create future intelligent systems that are capable of making decisions autonomously by using the enormous amounts of data emerging from the physical world. From smart robots and autonomous vehicles to smart power grids and smart cities, Azizan focuses on the analysis, design, and control of intelligent systems.

    Achieving such massive feats takes a truly interdisciplinary approach that draws upon various fields such as machine learning, dynamical systems, control, optimization, statistics, and network science, among others.

    “Developing intelligent systems is a multifaceted problem, and it really requires a confluence of disciplines,” says Azizan, assistant professor of mechanical engineering with a dual appointment in MIT’s Institute for Data, Systems, and Society (IDSS). “To create such systems, we need to go beyond standard approaches to machine learning, such as those commonly used in computer vision, and devise algorithms that can enable safe, efficient, real-time decision-making for physical systems.”

    For robot control to work in the complex dynamic environments that arise in the real world, real-time adaptation is key. If, for example, an autonomous vehicle is going to drive in icy conditions or a drone is operating in windy conditions, they need to be able to adapt to their new environment quickly.

    To address this challenge, Azizan and his collaborators at MIT and Stanford University have developed a new algorithm that combines adaptive control, a powerful methodology from control theory, with meta learning, a new machine learning paradigm.

    “This ‘control-oriented’ learning approach outperforms the existing ‘regression-oriented’ methods, which are mostly focused on just fitting the data, by a wide margin,” says Azizan.
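
    The sketch below gives a flavor of that combination on a toy one-dimensional system (Python/NumPy; the “meta-learned” features, plant, and gains are all invented). Coefficients on a fixed feature basis are adapted online to cancel an unknown disturbance, which is the adaptive-control half of the recipe; in the real method the basis itself would come from meta-learning across environments, and this is not the authors’ algorithm.

    ```python
    import numpy as np

    def features(x: float) -> np.ndarray:
        """Stand-in for a meta-learned feature basis evaluated at the state."""
        return np.array([1.0, x, np.sin(x)])

    def true_disturbance(x: float) -> float:
        return 0.5 + 0.3 * np.sin(x)          # e.g., an unknown wind force

    def simulate(adapt: bool, steps: int = 500, dt: float = 0.01) -> float:
        x, a = 0.0, np.zeros(3)                # state and adapted coefficients
        gain, k_p, x_ref = 5.0, 4.0, 1.0       # adaptation gain, feedback gain, reference
        errors = []
        for _ in range(steps):
            e = x - x_ref
            phi = features(x)
            u = -k_p * e - (phi @ a if adapt else 0.0)   # cancel the learned disturbance
            if adapt:
                a += dt * gain * e * phi                 # gradient adaptation law
            x += dt * (u + true_disturbance(x))          # simple integrator plant
            errors.append(abs(e))
        return float(np.mean(errors))

    print("mean tracking error, no adaptation:  ", round(simulate(False), 4))
    print("mean tracking error, with adaptation:", round(simulate(True), 4))
    ```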

    Another critical aspect of deploying machine learning algorithms in physical systems that Azizan and his team hope to address is safety. Deep neural networks are a crucial part of autonomous systems. They are used for interpreting complex visual inputs and making data-driven predictions of future behavior in real time. However, Azizan urges caution.

    “These deep neural networks are only as good as their training data, and their predictions can often be untrustworthy in scenarios not covered by their training data,” he says. Making decisions based on such untrustworthy predictions could lead to fatal accidents in autonomous vehicles or other safety-critical systems.

    To avoid these potentially catastrophic events, Azizan proposes that it is imperative to equip neural networks with a measure of their uncertainty. When the uncertainty is high, they can then be switched to a “safe policy.”

    In pursuit of this goal, Azizan and his collaborators have developed a new algorithm known as SCOD — Sketching Curvature for Out-of-Distribution Detection. This framework could be embedded within any deep neural network to equip it with a measure of its uncertainty.

    “This algorithm is model-agnostic and can be applied to neural networks used in various kinds of autonomous systems, whether it’s drones, vehicles, or robots,” says Azizan.
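
    In deployment, the pattern described above can be as simple as wrapping a network’s policy with an uncertainty check, as in this hedged sketch (Python; the uncertainty function is a generic stand-in for a SCOD-style score, and the policies and threshold are placeholders):

    ```python
    from dataclasses import dataclass
    from typing import Callable
    import numpy as np

    @dataclass
    class GatedPolicy:
        """Fall back to a conservative action whenever the uncertainty estimate is high."""
        network_policy: Callable[[np.ndarray], np.ndarray]   # learned controller
        uncertainty: Callable[[np.ndarray], float]            # e.g., a SCOD-style score
        safe_policy: Callable[[np.ndarray], np.ndarray]       # conservative fallback
        threshold: float

        def act(self, observation: np.ndarray) -> np.ndarray:
            if self.uncertainty(observation) > self.threshold:
                return self.safe_policy(observation)          # out-of-distribution: play safe
            return self.network_policy(observation)

    # Toy usage: the "network" is a placeholder linear map, the "uncertainty" is
    # distance from the training region, and the safe action is braking to zero.
    policy = GatedPolicy(
        network_policy=lambda obs: np.clip(obs[:2] * 0.5, -1, 1),
        uncertainty=lambda obs: float(np.linalg.norm(obs) - 1.0),
        safe_policy=lambda obs: np.zeros(2),
        threshold=0.5,
    )
    print(policy.act(np.array([0.4, 0.2, 0.1])))   # in-distribution: use the network
    print(policy.act(np.array([3.0, 2.0, 1.0])))   # high uncertainty: safe fallback
    ```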

    Azizan hopes to continue working on algorithms for even larger-scale systems. He and his team are designing efficient algorithms to better control supply and demand in smart energy grids. According to Azizan, even if we create the most efficient solar panels and batteries, we can never achieve a sustainable grid powered by renewable resources without the right control mechanisms.

    Mechanical engineers like Ahmed, Mueller, Deng, and Azizan are key to realizing the next revolution of computing in design.

    “MechE is in a unique position at the intersection of the computational and physical worlds,” Azizan says. “Mechanical engineers build a bridge between theoretical, algorithmic tools and real, physical world applications.”

    Sophisticated computational tools, coupled with the ground truth mechanical engineers have in the physical world, could unlock limitless possibilities for design engineering, well beyond what could have been imagined in those early days of CAD.

  • At UN climate change conference, trying to “keep 1.5 alive”

    After a one-year delay caused by the Covid-19 pandemic, negotiators from nearly 200 countries met this month in Glasgow, Scotland, at COP26, the United Nations climate change conference, to hammer out a new global agreement to reduce greenhouse gas emissions and prepare for climate impacts. A delegation of approximately 20 faculty, staff, and students from MIT was on hand to observe the negotiations, share and conduct research, and launch new initiatives.

    On Saturday, Nov. 13, following two weeks of negotiations in the cavernous Scottish Events Campus, countries’ representatives agreed to the Glasgow Climate Pact. The pact reaffirms the goal of the 2015 Paris Agreement “to pursue efforts” to limit the global average temperature increase to 1.5 degrees Celsius above preindustrial levels, and recognizes that achieving this goal requires “reducing global carbon dioxide emissions by 45 percent by 2030 relative to the 2010 level and to net zero around mid-century.”

    “On issues like the need to reach net-zero emissions, reduce methane pollution, move beyond coal power, and tighten carbon accounting rules, the Glasgow pact represents some meaningful progress, but we still have so much work to do,” says Maria Zuber, MIT’s vice president for research, who led the Institute’s delegation to COP26. “Glasgow showed, once again, what a wicked complex problem climate change is, technically, economically, and politically. But it also underscored the determination of a global community of people committed to addressing it.”

    An “ambition gap”

    Both within the conference venue and at protests that spilled through the streets of Glasgow, one rallying cry was “keep 1.5 alive.” Alok Sharma, who was appointed by the UK government to preside over COP26, said in announcing the Glasgow pact: “We can now say with credibility that we have kept 1.5 degrees alive. But, its pulse is weak and it will only survive if we keep our promises and translate commitments into rapid action.”

    In remarks delivered during the first week of the conference, Sergey Paltsev, deputy director of MIT’s Joint Program on the Science and Policy of Global Change, presented findings from the latest MIT Global Change Outlook, which showed a wide gap between countries’ nationally determined contributions (NDCs) — the UN’s term for greenhouse gas emissions reduction pledges — and the reductions needed to put the world on track to meet the goals of the Paris Agreement and, now, the Glasgow pact.

    Pointing to this ambition gap, Paltsev called on all countries to do more, faster, to cut emissions. “We could dramatically reduce overall climate risk through more ambitious policy measures and investments,” says Paltsev. “We need to employ an integrated approach of moving to zero emissions in energy and industry, together with sustainable development and nature-based solutions, simultaneously improving human well-being and providing biodiversity benefits.”

    Finalizing the Paris rulebook

    A key outcome of COP26 (COP stands for “conference of the parties” to the UN Framework Convention on Climate Change, held for the 26th time) was the development of a set of rules to implement Article 6 of the Paris Agreement, which provides a mechanism for countries to receive credit for emissions reductions that they finance outside their borders, and to cooperate by buying and selling emissions reductions on international carbon markets.

    An agreement on this part of the Paris “rulebook” had eluded negotiators in the years since the Paris climate conference, in part because negotiators were concerned about how to prevent double-counting, wherein both buyers and sellers would claim credit for the emissions reductions.

    Michael Mehling, the deputy director of MIT’s Center for Energy and Environmental Policy Research (CEEPR) and an expert on international carbon markets, drew on a recent CEEPR working paper to describe critical negotiation issues under Article 6 during an event at the conference on Nov. 10 with climate negotiators and private sector representatives.

    He cited research that finds that Article 6, by leveraging the cost-efficiency of global carbon markets, could cut in half the cost that countries would incur to achieve their nationally determined contributions. “Which, seen from another angle, means you could double the ambition of these NDCs at no additional cost,” Mehling noted in his talk, adding that, given the persistent ambition gap, “any such opportunity is bitterly needed.”

    Andreas Haupt, a graduate student in the Institute for Data, Systems, and Society, joined MIT’s COP26 delegation to follow Article 6 negotiations. Haupt described the final days of negotiations over Article 6 as a “roller coaster.” Once negotiators reached an agreement, he says, “I felt relieved, but also unsure how strong of an effect the new rules, with all their weaknesses, will have. I am curious and hopeful regarding what will happen in the next year until the next large-scale negotiations in 2022.”

    Nature-based climate solutions

    World leaders also announced new agreements on the sidelines of the formal UN negotiations. One such agreement, a declaration on forests signed by more than 100 countries, commits to “working collectively to halt and reverse forest loss and land degradation by 2030.”

    A team from MIT’s Environmental Solutions Initiative (ESI), which has been working with policymakers and other stakeholders on strategies to protect tropical forests and advance other nature-based climate solutions in Latin America, was at COP26 to discuss their work and make plans for expanding it.

    Marcela Angel, a research associate at ESI, moderated a panel discussion featuring John Fernández, professor of architecture and ESI’s director, focused on protecting and enhancing natural carbon sinks, particularly tropical forests such as the Amazon that are at risk of deforestation, forest degradation, and biodiversity loss.

    “Deforestation and associated land use change remain one of the main sources of greenhouse gas emissions in most Amazonian countries, such as Brazil, Peru, and Colombia,” says Angel. “Our aim is to support these countries, whose nationally determined contributions depend on the effectiveness of policies to prevent deforestation and promote conservation, with an approach based on the integration of targeted technology breakthroughs, deep community engagement, and innovative bioeconomic opportunities for local communities that depend on forests for their livelihoods.”

    Energy access and renewable energy

    Worldwide, an estimated 800 million people lack access to electricity, and billions more have only limited or erratic electrical service. Providing universal access to energy is one of the UN’s sustainable development goals, creating a dual challenge: how to boost energy access without driving up greenhouse gas emissions.

    Rob Stoner, deputy director for science and technology of the MIT Energy Initiative (MITEI), and Ignacio Pérez-Arriaga, a visiting professor at the Sloan School of Management, attended COP26 to share their work as members of the Global Commission to End Energy Poverty, a collaboration between MITEI and the Rockefeller Foundation. It brings together global energy leaders from industry, the development finance community, academia, and civil society to identify ways to overcome barriers to investment in the energy sectors of countries with low energy access.

    The commission’s work helped to motivate the formation, announced at COP26 on Nov. 2, of the Global Energy Alliance for People and Planet, a multibillion-dollar commitment by the Rockefeller and IKEA foundations and Bezos Earth Fund to support access to renewable energy around the world.

    Another MITEI member of the COP26 delegation, Martha Broad, the initiative’s executive director, spoke about MIT research to inform the U.S. goal of scaling offshore wind energy capacity from approximately 30 megawatts today to 30 gigawatts by 2030, including significant new capacity off the coast of New England.

    Broad described research, funded by MITEI member companies, on a coating that can be applied to the blades of wind turbines to prevent icing that would require the turbines’ shutdown; the use of machine learning to inform preventative turbine maintenance; and methodologies for incorporating the effects of climate change into projections of future wind conditions to guide wind farm siting decisions today. She also spoke broadly about the need for public and private support to scale promising innovations.

    “Clearly, both the public sector and the private sector have a role to play in getting these technologies to the point where we can use them in New England, and also where we can deploy them affordably for the developing world,” Broad said at an event sponsored by America Is All In, a coalition of nonprofit and business organizations.

    Food and climate alliance

    Food systems around the world are increasingly at risk from the impacts of climate change. At the same time, these systems, which include all activities from food production to consumption and food waste, are responsible for about one-third of the human-caused greenhouse gas emissions warming the planet.

    At COP26, MIT’s Abdul Latif Jameel Water and Food Systems Lab announced the launch of a new alliance to drive research-based innovation that will make food systems more resilient and sustainable, called the Food and Climate Systems Transformation (FACT) Alliance. With 16 member institutions, the FACT Alliance will better connect researchers to farmers, food businesses, policymakers, and other food systems stakeholders around the world.

    Looking ahead

    By the end of 2022, the Glasgow pact asks countries to revisit their nationally determined contributions and strengthen them to bring them in line with the temperature goals of the Paris Agreement. The pact also “notes with deep regret” the failure of wealthier countries to collectively provide poorer countries $100 billion per year in climate financing that they pledged in 2009 to begin in 2020.

    These and other issues will be on the agenda for COP27, to be held in Sharm El-Sheikh, Egypt, next year.

    “Limiting warming to 1.5 degrees is broadly accepted as a critical goal to avoiding worsening climate consequences, but it’s clear that current national commitments will not get us there,” says ESI’s Fernández. “We will need stronger emissions reductions pledges, especially from the largest greenhouse gas emitters. At the same time, expanding creativity, innovation, and determination from every sector of society, including research universities, to get on with real-world solutions is essential. At Glasgow, MIT was front and center in energy systems, cities, nature-based solutions, and more. The year 2030 is right around the corner so we can’t afford to let up for one minute.”

  • MIT makes strides on climate action plan

    Two recent online events related to MIT’s ambitious new climate action plan highlighted several areas of progress, including uses of the campus as a real-life testbed for climate impact research, the creation of new planning bodies with opportunities for input from all parts of the MIT community, and a variety of moves toward reducing the Institute’s own carbon footprint in ways that may also provide a useful model for others.

    On Monday, MIT’s Office of Sustainability held its seventh annual “Sustainability Connect” event, bringing together students, faculty, staff, and alumni to learn about and share ideas for addressing climate change. This year’s virtual event emphasized the work toward carrying out the climate plan, titled “Fast Forward: MIT’s Climate Action Plan for the Decade,” which was announced in May. An earlier event, the “MIT Climate Tune-in” on Nov. 3, provided an overview of the many areas of MIT’s work to tackle climate change and featured a video message from Maria Zuber, MIT’s vice president for research, who was attending the COP26 international climate meeting in Glasgow, Scotland, as part of an 18-member team from MIT.

    Zuber pointed out some significant progress that was made at the conference, including a broad agreement by over 100 nations to end deforestation by the end of the decade; she also noted that the U.S. and E.U. are leading a global coalition of countries committed to curbing methane emissions by 30 percent from 2020 levels by decade’s end. “It’s easy to be pessimistic,” she said, “but being here in Glasgow, I’m actually cautiously optimistic, seeing the thousands and thousands of people here who are working toward meaningful climate action. And I know that same spirit exists on our own campus also.”

    As for MIT’s own climate plan, Zuber emphasized three points: “We’re committed to action; second of all, we’re committed to moving fast; and third, we’ve organized ourselves better for success.” That organization includes the creation of the MIT Climate Steering Committee, to oversee and coordinate MIT’s strategies on climate change; the Climate Nucleus, to oversee the management and implementation of the new plan; and three working groups that are forming now, to involve all parts of the MIT community.

    The “Fast Forward” plan calls for reducing the campus’s net greenhouse gas emissions to zero by 2026 and eliminating all such emissions, including indirect ones, by 2050. At Monday’s event, Director of Sustainability Julie Newman pointed out that the climate plan includes no less than 14 specific commitments related to the campus itself. These can be grouped into five broad areas, she said: mitigation, resiliency, electric vehicle infrastructure, investment portfolio sustainability, and climate leadership. “Each of these commitments has due dates, and they range from the tactical to the strategic,” she said. “We’re in the midst of activating our internal teams” to address these commitments, she added, noting that there are 30 teams that involve 75 faculty and researcher members, plus up to eight student positions.

    One specific project that is well underway involves preparing a detailed map of the flood risks to the campus as sea levels rise and storm surges increase. While previous attempts to map out the campus flooding risks had treated buildings essentially as uniform blocks, the new project has already mapped out in detail the location, elevation, and condition of every access point — doors, windows, and drains — in every building in the main campus, and now plans to extend the work to the residence buildings and outlying parts of campus. The project’s methods for identifying and quantifying the risks to specific parts of the campus, Newman said, represent “part of our mission for leveraging the campus as a test bed” by creating a map that is “true to the nature of the topography and the infrastructure,” in order to be prepared for the effects of climate change.
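
    Once that access-point inventory exists, risk screening can be as direct as comparing surveyed elevations with a projected flood elevation, as in this illustrative sketch (Python; the buildings, openings, and elevations are invented placeholders):

    ```python
    # Hedged sketch of the kind of check the mapping effort enables: compare each
    # surveyed access point's elevation with a projected flood elevation and flag
    # the ones at risk. All data values below are invented.

    access_points = [
        # (building, opening, elevation above a common datum, metres)
        ("Building A", "loading-dock door", 3.1),
        ("Building A", "basement window",   2.4),
        ("Building B", "main entrance",     4.0),
        ("Building B", "storm drain",       2.2),
    ]

    def at_risk(points, flood_elevation_m: float, freeboard_m: float = 0.3):
        """Return openings lying below the projected flood elevation plus a safety margin."""
        limit = flood_elevation_m + freeboard_m
        return [(b, o, e) for b, o, e in points if e < limit]

    for building, opening, elev in at_risk(access_points, flood_elevation_m=2.5):
        print(f"{building}: {opening} at {elev:.1f} m is below the design flood level")
    ```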

    Also speaking at the Sustainability Connect event, Vice President for Campus Services and Stewardship Joe Higgins outlined a variety of measures that are underway to cut the carbon footprint of the campus as much as possible, as quickly as possible. Part of that, he explained, involves using the campus as a testbed for the development of the equivalent of a “smart thermostat” system for campus buildings. While such products exist commercially for homeowners, there is no such system yet for large institutional or commercial buildings.

    There is a team actively developing such a pilot program in some MIT buildings, he said, focusing on some large lab buildings that have especially high energy usage. They are examining the use of artificial intelligence to reduce energy consumption, he noted. By adding systems to monitor energy use, temperatures, occupancy, and so on, and to control heating, lighting and air conditioning systems, Higgins said at least a 3 to 5 percent reduction in energy use can be realized. “It may be well beyond that,” he added. “There’s a huge opportunity here.”

    Higgins also outlined the ongoing plan to convert the existing steam distribution system for campus heating into a hot water system. Though the massive undertaking may take decades to complete, he said that project alone may reduce campus carbon emissions by 10 percent. Other efforts include the installation of an additional 400 kilowatts of rooftop solar installations.

    Jeremy Gregory, executive director of MIT’s climate and sustainability consortium, described efforts to deal with the most far-reaching areas of greenhouse gas emission, the so-called Scope 3 emissions. He explained that Scope 1 is the direct emissions from the campus itself, from buildings and vehicles; Scope 2 includes indirect emissions from the generation of electricity; and Scope 3 is “everything else.” That includes employee travel, buildings that MIT leases from others and to others, and all goods and services, he added, “so it includes a lot of different categories of emissions.” Gregory said his team, including several student fellows, is actively investigating and quantifying these Scope 3 emissions at MIT, along with potential methods of reducing them.

    Professor Noelle Selin, who was recently named as co-chair of the new Climate Nucleus along with Professor Anne White, outlined their plans for the coming year, including the setting up of the three working groups.

    Selin said the nucleus consists of representatives of departments, labs, centers, and institutes that have significant responsibilities under the climate plan. That body will make recommendations to the steering committee, which includes the deans of all five of MIT’s schools and the MIT Schwarzman College of Computing, “about how to amplify MIT’s impact in the climate sphere. We have an implementation role, but we also have an accelerator pedal that can really make MIT’s climate impact more ambitious, and really push the buttons and make sure that the Institute’s commitments are actually borne out in reality.”

    The MIT Climate Tune-In also featured Selin and White, as well as a presentation on MIT’s expanded educational offerings on climate and sustainability, from Sarah Meyers, ESI’s education program manager; students Derek Allmond and Natalie Northrup; and postdoc Peter Godart. Professor Dennis Whyte also spoke about MIT and Commonwealth Fusion Systems’ recent historic advance toward commercial fusion energy. Organizers said that the Climate Tune-In event is the first of what they hope will be many opportunities to hear updates on the wide range of work happening across campus to implement the Fast Forward plan, and to spark conversations within the MIT community.

  • Radio-frequency wave scattering improves fusion simulations

    In the quest for fusion energy, understanding how radio-frequency (RF) waves travel (or “propagate”) in the turbulent interior of a fusion furnace is crucial to maintaining an efficient, continuously operating power plant. Transmitted by an antenna in the doughnut-shaped vacuum chamber common to magnetic confinement fusion devices called tokamaks, RF waves heat the plasma fuel and drive its current around the toroidal interior. The efficiency of this process can be affected by how the wave’s trajectory is altered (or “scattered”) by conditions within the chamber.

    Researchers have tried to study these RF processes using computer simulations to match the experimental conditions. A good match would validate the computer model, and raise confidence in using it to explore new physics and design future RF antennas that perform efficiently. While the simulations can accurately calculate how much total current is driven by RF waves, they do a poor job at predicting where exactly in the plasma this current is produced.

    Now, in a paper published in the Journal of Plasma Physics, MIT researchers suggest that the models for RF wave propagation used for these simulations have not properly taken into account the way these waves are scattered as they encounter dense, turbulent filaments present in the edge of the plasma known as the “scrape-off layer” (SOL).

    Bodhi Biswas, a graduate student at the Plasma Science and Fusion Center (PSFC) working under the direction of Senior Research Scientist Paul Bonoli, School of Engineering Distinguished Professor of Engineering Anne White, and Principal Research Scientist Abhay Ram, is the paper’s lead author. Ram compares the scattering that occurs in this situation to a wave of water hitting a lily pad: “The wave crashing with the lily pad will excite a secondary, scattered wave that makes circular ripples traveling outward from the plant. The incoming wave has transferred energy to the scattered wave. Some of this energy is reflected backwards (in relation to the incoming wave), some travels forwards, and some is deflected to the side. The specifics all depend on the particular attributes of the wave, the water, and the lily pad. In our case, the lily pad is the plasma filament.”

    Until now, researchers have not properly taken these filaments and the scattering they provoke into consideration when modeling the turbulence inside a tokamak, leading to an underestimation of wave scattering. Using data from PSFC tokamak Alcator C-Mod, Biswas shows that using the new method of modeling RF-wave scattering from SOL turbulence provides results considerably different from older models, and a much better match to experiments. Notably, the “lower-hybrid” wave spectrum, crucial to driving plasma current in a steady-state tokamak, appears to scatter asymmetrically, an important effect not accounted for in previous models.

    Biswas’s advisor Paul Bonoli is well acquainted with traditional “ray-tracing” models, which evaluate a wave trajectory by dividing it into a series of rays. He has used this model, with its limitations, for decades in his own research to understand plasma behavior. Bonoli says he is pleased that “the research results in Bodhi’s doctoral thesis have refocused attention on the profound effect that edge turbulence can have on the propagation and absorption of radio-frequency power.”

    Although ray-tracing treatments of scattering do not fully capture all the wave physics, a “full-wave” model that does would be prohibitively expensive. To solve the problem economically, Biswas splits his analysis into two parts: (1) using ray tracing to model the trajectory of the wave in the tokamak assuming no turbulence, while (2) modifying this ray-trajectory with the new scattering model that accounts for the turbulent plasma filaments.

    “This scattering model is a full-wave model, but computed over a small region and in a simplified geometry so that it is very quick to do,” says Biswas. “The result is a ray-tracing model that, for the first time, accounts for full-wave scattering physics.”
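
    In pseudocode form, the two-part workflow resembles the following sketch (Python, with placeholder physics: straight-line rays stand in for the dispersion-based ray equations, and a simple asymmetric deflection kernel stands in for the full-wave filament calculation computed over a small region):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    SOL_DEPTH = 0.05   # thickness of the "scrape-off layer" edge region, metres (placeholder)

    def scattering_angle() -> float:
        """Asymmetric placeholder kernel: forward deflections are more likely than
        backward ones, echoing the asymmetric scattering reported for the
        lower-hybrid spectrum."""
        return rng.normal(loc=0.1, scale=0.2)

    def trace_ray(angle: float, filament_rate: float,
                  step: float = 0.001, n_steps: int = 2000) -> np.ndarray:
        """Step a ray from the edge inward; while it is inside the SOL, each step
        may hit a filament and pick up a deflection from the scattering kernel."""
        x, y, theta = 0.0, 0.0, angle
        path = [(x, y)]
        for _ in range(n_steps):
            if y < SOL_DEPTH and rng.random() < filament_rate * step:
                theta += scattering_angle()            # filament encounter
            x += step * np.cos(theta)
            y += step * np.sin(theta)
            path.append((x, y))
        return np.array(path)

    # Compare the same launch angle with and without edge-turbulence scattering:
    with_scatter = trace_ray(angle=0.3, filament_rate=40.0)   # encounters/metre (placeholder)
    no_scatter   = trace_ray(angle=0.3, filament_rate=0.0)
    print("final position with scattering:   ", np.round(with_scatter[-1], 3))
    print("final position without scattering:", np.round(no_scatter[-1], 3))
    ```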

    Biswas notes that this model bridges the gap between simple scattering models that fail to match experiment and full-wave models that are prohibitively expensive, providing reasonable accuracy at low cost.

    “Our results suggest scattering is an important effect, and that it must be taken into account when designing future RF antennas. The low cost of our scattering model makes this very doable.”

    “This is exciting progress,” says Syun’ichi Shiraiwa, staff research physicist at the Princeton Plasma Physics Laboratory. “I believe that Bodhi’s work provides a clear path to the end of a long tunnel we have been in. His work not only demonstrates that the wave scattering, once accurately accounted for, can explain the experimental results, but also answers a puzzling question: why previous scattering models were incomplete, and their results unsatisfying.”

    Work is now underway to apply this model to more plasmas from Alcator C-Mod and other tokamaks. Biswas believes that this new model will be particularly applicable to high-density tokamak plasmas, for which the standard ray-tracing model has been noticeably inaccurate. He is also excited that the model could be validated at the DIII-D National Fusion Facility, a fusion experiment on which the PSFC collaborates.

    “The DIII-D tokamak will soon be capable of launching lower hybrid waves and measuring its electric field in the scrape-off layer. These measurements could provide direct evidence of the asymmetric scattering effect predicted by our model.”

  • Q&A: Options for the Diablo Canyon nuclear plant

    The Diablo Canyon nuclear plant in California, the only one still operating in the state, is set to close in 2025. A team of researchers at MIT’s Center for Advanced Nuclear Energy Systems, Abdul Latif Jameel Water and Food Systems Lab, and Center for Energy and Environmental Policy Research; Stanford’s Precourt Energy Institute; and energy analysis firm LucidCatalyst LLC have analyzed the potential benefits the plant could provide if its operation were extended to 2030 or 2045.

    They found that this nuclear plant could simultaneously help to stabilize the state’s electric grid, provide desalinated water to supplement the state’s chronic water shortages, and provide carbon-free hydrogen fuel for transportation. MIT News asked report co-authors Jacopo Buongiorno, the TEPCO Professor of Nuclear Science and Engineering, and John Lienhard, the Jameel Professor of Water and Food, to discuss the group’s findings.

    Q: Your report suggests co-locating a major desalination plant alongside the existing Diablo Canyon power plant. What would be the potential benefits from operating a desalination plant in conjunction with the power plant?

    Lienhard: The cost of desalinated water produced at Diablo Canyon would be lower than for a stand-alone plant because the cost of electricity would be significantly lower and you could take advantage of the existing infrastructure for the intake of seawater and the outfall of brine. Electricity would be cheaper because the location takes advantage of Diablo Canyon’s unique capability to provide low cost, zero-carbon baseload power.

    Depending on the scale at which the desalination plant is built, you could make a very significant impact on the water shortfalls of state and federal projects in the area. In fact, one of the numbers that came out of this study was that an intermediate-sized desalination plant there would produce more fresh water than the highest estimate of the net yield from the proposed Delta Conveyance Project on the Sacramento River. You could get that amount of water at Diablo Canyon for an investment cost less than half as large, and without the associated impacts that would come with the Delta Conveyance Project.

    And the technology envisioned for desalination here, reverse osmosis, is available off the shelf. You can buy this equipment today. In fact, it’s already in use in California and thousands of other places around the world.
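    To make the co-location argument concrete, here is a rough back-of-envelope sketch. The numbers below are illustrative assumptions, not figures from the report: seawater reverse osmosis typically needs on the order of 3-4 kWh of electricity per cubic meter of fresh water, so the price of that electricity, plus an assumed fixed charge for capital and operations, largely sets the cost of the water.

```python
# Illustrative back-of-envelope sketch (assumed numbers, not from the report):
# how the electricity price feeds into the cost of reverse-osmosis (RO) desalinated water.

SPECIFIC_ENERGY_KWH_PER_M3 = 3.5   # typical seawater RO energy use (assumption)
FIXED_COST_USD_PER_M3 = 0.45       # amortized capital + O&M per cubic meter (assumption)

def water_cost_usd_per_m3(electricity_price_usd_per_kwh: float) -> float:
    """Rough levelized water cost: fixed charges plus the energy bill per cubic meter."""
    return FIXED_COST_USD_PER_M3 + SPECIFIC_ENERGY_KWH_PER_M3 * electricity_price_usd_per_kwh

# Compare a stand-alone plant buying grid power with one co-located next to
# cheap, zero-carbon baseload generation (both prices are hypothetical).
for label, price in [("stand-alone, grid power", 0.15), ("co-located, baseload power", 0.04)]:
    print(f"{label:28s} -> ${water_cost_usd_per_m3(price):.2f} per m^3")
```

    Under these assumptions the co-located plant’s water comes out roughly 40 percent cheaper, and that is before counting the shared seawater intake and brine outfall infrastructure Lienhard describes.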

    Q: You discuss in the report three potential products from the Diablo Canyon plant: desalinated water, power for the grid, and clean hydrogen. How well can the plant accommodate all of those efforts, and are there advantages to combining them as opposed to doing any one of them separately?

    Buongiorno: California, like many other regions in the world, is facing multiple challenges as it seeks to reduce carbon emissions on a grand scale. First, the wide deployment of intermittent energy sources such as solar and wind creates a great deal of variability on the grid that can be balanced by dispatchable firm power generators like Diablo. So, the first mission for Diablo is to continue to provide reliable, clean electricity to the grid.

    The second challenge is the prolonged drought and water scarcity for the state in general. And one way to address that is water desalination co-located with the nuclear plant at the Diablo site, as John explained.

    The third challenge is decarbonizing the transportation sector. A possible approach is replacing conventional cars and trucks with vehicles powered by fuel cells, which consume hydrogen. Hydrogen has to be produced from a primary energy source. Nuclear power, through a process called electrolysis, can do that quite efficiently and in a manner that is carbon-free.
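    For a sense of scale, the following sketch uses assumed, illustrative figures (roughly 50 kWh of electricity per kilogram of hydrogen, typical of current electrolyzers, and a hypothetical 100 MW slice of the plant’s output); it is not a calculation from the report.

```python
# Rough illustration (assumed figures, not from the report): hydrogen output from
# a hypothetical slice of Diablo Canyon's electricity routed to electrolysis.

KWH_PER_KG_H2 = 50.0        # electricity per kg of H2, typical of current electrolyzers (assumption)
ELECTRIC_POWER_MW = 100.0   # hypothetical share of the plant's output sent to electrolysis

kwh_per_day = ELECTRIC_POWER_MW * 1_000 * 24        # MW -> kWh over one day of steady operation
h2_tonnes_per_day = kwh_per_day / KWH_PER_KG_H2 / 1_000

print(f"{ELECTRIC_POWER_MW:.0f} MW of carbon-free power -> "
      f"about {h2_tonnes_per_day:.0f} tonnes of H2 per day")
# ~48 tonnes/day under these assumptions; a fuel-cell car uses roughly 1 kg of H2 per 100 km.
```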

    Our economic analysis took into account the expected revenue from selling these multiple products — electricity for the grid, hydrogen for the transportation sector, water for farmers or other local users — as well as the costs associated with deploying the new facilities needed to produce desalinated water and hydrogen. We found that, if Diablo’s operating license was extended until 2035, it would cut carbon emissions by an average of 7 million metric tons a year — a more than 11 percent reduction from 2017 levels — and save ratepayers $2.6 billion in power system costs.

    Further delaying the retirement of Diablo to 2045 would spare 90,000 acres of land that would need to be dedicated to renewable energy production to replace the facility’s capacity, and it would save ratepayers up to $21 billion in power system costs.

    Finally, if Diablo was operated as a polygeneration facility that provides electricity, desalinated water, and hydrogen simultaneously, its value, quantified in terms of dollars per unit electricity generated, could increase by 50 percent.
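    As an illustration of how that per-unit-electricity value can be computed (the margins and shares below are hypothetical placeholders, not the report’s data), one can blend the net value of each product stream, weighted by the share of generation routed to it:

```python
# Illustrative sketch (hypothetical numbers, not the report's model): value per MWh of
# generation when output is split among electricity sales, desalination, and hydrogen.
# Each margin is an assumed net value of one MWh routed to that product, after the
# extra capital and operating costs of the new facilities.

products = {
    "grid electricity":  {"margin_usd_per_mwh": 40.0, "share": 0.70},
    "desalinated water": {"margin_usd_per_mwh": 65.0, "share": 0.20},
    "hydrogen":          {"margin_usd_per_mwh": 80.0, "share": 0.10},
}

blended = sum(p["margin_usd_per_mwh"] * p["share"] for p in products.values())
baseline = products["grid electricity"]["margin_usd_per_mwh"]

print(f"Electricity-only value: ${baseline:.0f} per MWh")
print(f"Polygeneration value:   ${blended:.0f} per MWh "
      f"({100 * (blended / baseline - 1):.0f}% higher under these assumptions)")
```

    With the report’s actual price and cost data, the corresponding increase is the roughly 50 percent figure Buongiorno cites.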

    Lienhard: Most of the desalination scenarios that we considered did not consume the full electrical output of that plant, meaning that under most scenarios you would continue to make electricity and do something with it, beyond just desalination. I think it’s also important to remember that this power plant produces 15 percent of California’s carbon-free electricity today and is responsible for 8 percent of the state’s total electrical production. In other words, Diablo Canyon is a very large factor in California’s decarbonization. When or if this plant goes offline, the near-term outcome is likely to be increased reliance on natural gas to produce electricity, meaning a rise in California’s carbon emissions.

    Q: This plant in particular has been highly controversial since its inception. What’s your assessment of the plant’s safety beyond its scheduled shutdown, and how do you see this report as contributing to the decision-making about that shutdown?

    Buongiorno: The Diablo Canyon Nuclear Power Plant has a very strong safety record. The potential safety concern for Diablo is related to its proximity to several fault lines. Being located in California, the plant was designed to withstand large earthquakes to begin with. Following the Fukushima accident in 2011, the Nuclear Regulatory Commission reviewed the plant’s ability to withstand external events (e.g., earthquakes, tsunamis, floods, tornadoes, wildfires, hurricanes) of exceptionally rare and severe magnitude. After nine years of assessment, the NRC’s conclusion is that “existing seismic capacity or effective flood protection [at Diablo Canyon] will address the unbounded reevaluated hazards.” That is, Diablo was designed and built to withstand even the rarest and strongest earthquakes that are physically possible at this site.

    As an additional level of protection, the plant has been retrofitted with special equipment and procedures meant to ensure reliable cooling of the reactor core and spent fuel pool under a hypothetical scenario in which all design-basis safety systems have been disabled by a severe external event.

    Lienhard: As for the potential impact of this report, PG&E [the California utility] has already made the decision to shut down the plant, and we and others hope that decision will be revisited and reversed. We believe that this report gives the relevant stakeholders and policymakers a lot of information about options and value associated with keeping the plant running, and about how California could benefit from clean water and clean power generated at Diablo Canyon. It’s not up to us to make the decision, of course — that is a decision that must be made by the people of California. All we can do is provide information.

    Q: What are the biggest challenges or obstacles to seeing these ideas implemented?

    Lienhard: California has very strict environmental protection regulations, and it’s good that they do. One of the areas of great concern to California is the health of the ocean and protection of the coastal ecosystem. As a result, very strict rules are in place about the intake and outfall of both power plants and desalination plants, to protect marine life. Our analysis suggests that this combined plant can be implemented within the parameters prescribed by the California Ocean Plan and that it can meet the regulatory requirements.

    We believe that deeper analysis would be needed before you could proceed. You would need to do site studies and really get out into the water and look in detail at what’s there. But the preliminary analysis is positive. A second challenge is that the discourse in California around nuclear power has generally not been very supportive, and similarly some groups in California oppose desalination. We expect that both of those points of view would be part of the conversation about whether or not to proceed with this project.

    Q: How particular is this analysis to the specifics of this location? Are there aspects of it that apply to other nuclear plants, domestically or globally?

    Lienhard: Hundreds of nuclear plants around the world are situated along the coast, and many are in water-stressed regions. Although our analysis focused on Diablo Canyon, we believe that the general findings are applicable to many other seaside nuclear plants, so this approach and these conclusions could potentially be applied at hundreds of sites worldwide.