More stories

  • Q&A: Tod Machover on “Overstory Overture,” his new operatic work

    Composers find inspiration from many sources. For renowned MIT Media Lab composer Tod Machover, reading the Richard Powers novel “The Overstory” instantly made him want to adapt it as an operatic composition. This might not seem an obvious choice to some: “The Overstory” is about a group of people, including a wrongly maligned scientist, who band together to save a forest from destruction.

But Machover’s resulting work, “Overstory Overture,” a 35-minute piece commissioned and performed by the chamber ensemble Sejong Soloists, has come to fruition and will have its world premiere on March 7 in Alice Tully Hall at New York’s Lincoln Center. Opera superstar Joyce DiDonato will sing the lead role, with Earl Lee conducting. On March 16, the piece will have its second performance, in Seoul, South Korea. MIT News recently talked to Machover about the new work.

    Q: How did you get the idea for your new work?

    A: I’ve been a fan of Richard Powers’ novels for a long time. He started out as a musician. He’s a cellist like I am, and was a composer before he was a writer, and he’s also been deeply interested in science for his whole career. All of his novels have something to do with people, ideas, music, and science. He’s always been on my radar.

    Q: What’s compelling to you about this particular Powers book?

    A: “The Overstory” is made up of many stories about characters who come together, improbably, because of trees. It starts with short chapters describing characters with relationships to trees. One is about a family that moved to the Midwest and planted a chestnut tree. It grows for 150 years and they take pictures every year, and it’s at the center of the family until it gets cut down in the 1990s. Another guy is in a plane in Vietnam and gets shot down, and his parachute gets caught in a tree right before he hits the ground.

One character is named Patricia Westerford and she’s a scientist. Her life’s work is studying the forest and trees, and she discovers that trees communicate — both underground, through the roots, and through the air, via particles. They’re much more like a network than they are static, isolated objects. Her whole world is discovering the miracle of this network, but nobody believes her and she loses her tenure. And she basically goes and lives in the forest. Eventually all the characters in the book come together to preserve a forest in the Northwest that’s going to be destroyed. They become connected through trees, but in the book, all their lives are basically destroyed. It’s not a happy ending, but you understand how human beings are connected through the natural world, and have to think about this connection in a radically new way.

    Every single character came alive. The book is just a miracle. It’s a great work of art. Immediately, reading it, I thought, this is something I want to work on.

    Q: How did you start turning that into an operatic composition?

    A: I got in touch with Powers soon after that. Richard knew my music and answered immediately, saying, “I’d love to have you do an opera on this, and let’s figure out how.” I started working on it just before the pandemic. Around that time he came to Harvard to give a lecture, so he came here to my office in the Media Lab, and we got to chat.

    Generally novels leave more room for you to decide how to make music out of them; they’re a lot less scripted than a movie or a play, and the many inner thoughts and asides leave room for music to fill in. I asked Richard, “Would you be interested in writing the text for this?” And right away he said, “Look, I’d like to be involved in the process, but I don’t feel equipped to write a libretto.” So, I went to Simon Robson, who worked on “Schoenberg in Hollywood” [another Machover opera], and we started working and checked in with Richard from time to time.

    Just about that time the ensemble Sejong Soloists, who are based in New York and Seoul, offered to have their string orchestra collaborate on a project with a theatrical aspect, which was new for them. I explained I was working on an opera based on “The Overstory,” and I felt we could explore its themes. I could imagine the string instruments being like trees and the orchestra being the forest.

    The next thing I did was contact my favorite singer, Joyce DiDonato. She’s such a beautiful, powerful singer. I did an opera in 1999 for Houston called “Resurrection,” which was based on Tolstoy’s last novel, and we were casting the main female character. We did auditions in New York, Los Angeles, and Europe, couldn’t find the main character, and finally the head of the Houston Grand Opera said, “You know, there’s this young singer in our apprentice program who’s pretty special, and you should hear her.”

    And sure enough, that was Joyce. It was her first major role. We hadn’t done another project together although we remained close over the years, but I called her and said “Joyce, I know how busy you are, but I’ve got this idea, and I’ll send you the book. It’s great and I’d love to focus on this one character, would you consider doing it?” And she said she’d love to, partly because sustainability and the environment is something she really cares about.

    Q: Okay, but how do you get started writing music for a piece when it’s based on a book about trees?

    A: I began with two things. Musically I started with the idea of creating this language for tree communication. I was inspired by this idea that one of the reasons we don’t know about it is it’s underground, it’s low, it’s spreading out. I’m a cellist, and I’ve always loved music that grows from the bottom. When you play the cello, in a lot of the great literature, you’re playing the low part of a quartet or quintet or orchestra, and often people don’t quite hear it as the most prominent thing.

    The second thing I did was start making this text. Which was hard, because it’s a big novel. It’s a 35-minute piece where Joyce is at the center. When she starts, she just talks, for a minute, and then little by little it turns into song. It’s her sharing with everybody what she learned, she brings you into the world of the forest. In time, there’s a crisis, they’re destroying the forest, and as she says, they’re tearing out the lungs — tearing out the mind — of the world. The last part of the piece is a vision of how the trees need us but we need them even more.

    Q: I don’t want to push too hard on this, but the composition sounds parallel with its subject matter. Trees are connected; an orchestra is connected. And then this story is about people building a connection to nature, while you want the audience to feel a connection to the piece. How much did you think about it that way?

A: I was thinking about that pretty consciously, and I really tried to make something that feels very still and simple, but where there’s a lot going on. It feels like it’s living and moving. The piece starts out with solo instruments, so at first everybody’s doing their bit, then they all join in. The strings make a rich ensemble sound, but in the last section every single instrument has its own part — I wrote an individual part for all these string players so they’re kind of weaving in and out. Musically it’s very much constructed to lead people through a forest that is at once diverse and connected.

I also enjoy using electronics to add another dimension. In this piece I’ve tried to create an electronic world that doesn’t necessarily remind you of electronics, except for one part where machines come in, ripping the forest apart. But mostly the electronics are blended with the orchestra in a way you might not always notice. The sound and feel, hopefully, will appear more natural than nature.

    Q: You also seem to have clearly identified a story with real operatic drama here, unusual as it may be.

A: The emotional transition that happens is the awareness of what the forest means, and in your gut what it means to protect it, and what it would mean to lose it, and then a glimpse of what it might feel like to live in a different way. I think the contribution someone like myself might be able to make is to change attitudes, to think about our limits as a species and as individuals. Technical solutions alone aren’t going to solve things; people’s behavior somehow has to change. A piece like this is a way of having the experience of crisis, and a vision of what could be different.

    Q: Here’s something a lot of us want to know: What’s it like working with Joyce DiDonato?

    A: She’s one of those rare people. She’s completely direct and honest and lives life to the fullest. Joyce, I mean, thank God she has the best voice you’ll ever hear and she’s at the top of her game, but she also thinks about the world and ideas, and she did a whole project a few years ago performing a repertoire around the world about war and peace, to jolt people into a new understanding. Every project she’s involved with, she cares about the characters and she’s in it all the way.

    For this piece we did a bunch of Zoom sessions and tried things out. And she’s fantastic at saying, “To make that phrase the best you can for my voice at this point in the piece, would you consider changing that one note?” She has incredibly precise ideas about that. So, we worked musically on every detail and on the whole shape. What a pleasure! She also came here to MIT. She hadn’t been to the Media Lab, so she spent two days here at the beginning of August with her partner. She was so open to all the students and all the ideas and inventions and machines and software, just in the most gracious and truly excited way. You couldn’t have had a better visitor.

    Q: Any last thoughts about this piece you want to share?

A: In my music in general, I’m pretty voracious at combining different things. I think in this project, which involves the natural world and the language of trees, and the language of melodies and instruments and electronic music, there may be more elements I’ve pulled together than ever. The emotional and even musical world here is larger. That’s my story here: These elements require and invite new thinking. And remember: This is just the first part of a larger project. I hope that you can hear the full “Overstory” opera — perhaps with trees growing in a major opera house — in the not-so-distant future!

  • Improving health outcomes by targeting climate and air pollution simultaneously

Climate policies are typically designed to reduce greenhouse gas emissions that result from human activities and drive climate change. The largest source of these emissions is the combustion of fossil fuels, which increases atmospheric concentrations of ozone, fine particulate matter (PM2.5), and other air pollutants that pose public health risks. While climate policies may result in lower concentrations of health-damaging air pollutants as a “co-benefit” of reducing emissions-intensive activities, they are most effective at improving health outcomes when deployed in tandem with geographically targeted air-quality regulations.

Yet the computer models typically used to assess the likely air quality/health impacts of proposed climate/air-quality policy combinations come with drawbacks for decision-makers. Atmospheric chemistry/climate models can produce high-resolution results, but they are expensive and time-consuming to run. Integrated assessment models can produce results in far less time and at far lower cost, but only at global and regional scales, rendering them insufficiently precise for accurate assessments of air quality/health impacts at the subnational level.

    To overcome these drawbacks, a team of researchers at MIT and the University of California at Davis has developed a climate/air-quality policy assessment tool that is both computationally efficient and location-specific. Described in a new study in the journal ACS Environmental Au, the tool could enable users to obtain rapid estimates of combined policy impacts on air quality/health at more than 1,500 locations around the globe — estimates precise enough to reveal the equity implications of proposed policy combinations within a particular region.

    “The modeling approach described in this study may ultimately allow decision-makers to assess the efficacy of multiple combinations of climate and air-quality policies in reducing the health impacts of air pollution, and to design more effective policies,” says Sebastian Eastham, the study’s lead author and a principal research scientist at the MIT Joint Program on the Science and Policy of Global Change. “It may also be used to determine if a given policy combination would result in equitable health outcomes across a geographical area of interest.”

    To demonstrate the efficiency and accuracy of their policy assessment tool, the researchers showed that outcomes projected by the tool within seconds were consistent with region-specific results from detailed chemistry/climate models that took days or even months to run. While continuing to refine and develop their approaches, they are now working to embed the new tool into integrated assessment models for direct use by policymakers.

    “As decision-makers implement climate policies in the context of other sustainability challenges like air pollution, efficient modeling tools are important for assessment — and new computational techniques allow us to build faster and more accurate tools to provide credible, relevant information to a broader range of users,” says Noelle Selin, a professor at MIT’s Institute for Data, Systems and Society and Department of Earth, Atmospheric and Planetary Sciences, and supervising author of the study. “We are looking forward to further developing such approaches, and to working with stakeholders to ensure that they provide timely, targeted and useful assessments.”

The study was funded, in part, by the U.S. Environmental Protection Agency and the Biogen Foundation.

  • Responsive design meets responsibility for the planet’s future

    MIT senior Sylas Horowitz kneeled at the edge of a marsh, tinkering with a blue-and-black robot about the size and shape of a shoe box and studded with lights and mini propellers.

    The robot was a remotely operated vehicle (ROV) — an underwater drone slated to collect water samples from beneath a sheet of Arctic ice. But its pump wasn’t working, and its intake line was clogged with sand and seaweed.

    “Of course, something must always go wrong,” Horowitz, a mechanical engineering major with minors in energy studies and environment and sustainability, later blogged about the Falmouth, Massachusetts, field test. By making some adjustments, Horowitz was able to get the drone functioning on site.

Through a 2020 collaboration between MIT’s Department of Mechanical Engineering and the Woods Hole Oceanographic Institution (WHOI), Horowitz had been assembling and retrofitting the high-performance ROV to measure the greenhouse gases emitted by thawing permafrost.

    The Arctic’s permafrost holds an estimated 1,700 billion metric tons of methane and carbon dioxide — roughly 50 times the amount of carbon tied to fossil fuel emissions in 2019, according to climate research from NASA’s Jet Propulsion Laboratory. WHOI scientists wanted to understand the role the Arctic plays as a greenhouse gas source or sink.

    Horowitz’s ROV would be deployed from a small boat in sub-freezing temperatures to measure carbon dioxide and methane in the water. Meanwhile, a flying drone would sample the air.

    An MIT Student Sustainability Coalition leader and one of the first members of the MIT Environmental Solutions Initiative’s Rapid Response Group, Horowitz has focused on challenges related to clean energy, climate justice, and sustainable development.

    In addition to the ROV, Horowitz has tackled engineering projects through D-Lab, where community partners from around the world work with MIT students on practical approaches to alleviating global poverty. Horowitz worked on fashioning waste bins out of heat-fused recycled plastic for underserved communities in Liberia. Their thesis project, also initiated through D-Lab, is designing and building user-friendly, space- and fuel-efficient firewood cook stoves to improve the lives of women in Santa Catarina Palopó in northern Guatemala.

    Through the Tata-MIT GridEdge Solar Research program, they helped develop flexible, lightweight solar panels to mount on the roofs of street vendors’ e-rickshaws in Bihar, India.

    The thread that runs through Horowitz’s projects is user-centered design that creates a more equitable society. “In the transition to sustainable energy, we want our technology to adapt to the society that we live in,” they say. “Something I’ve learned from the D-Lab projects and also from the ROV project is that when you’re an engineer, you need to understand the societal and political implications of your work, because all of that should get factored into the design.”

    Horowitz describes their personal mission as creating systems and technology that “serve the well-being and longevity of communities and the ecosystems we exist within.

    “I want to relate mechanical engineering to sustainability and environmental justice,” they say. “Engineers need to think about how technology fits into the greater societal context of people in the environment. We want our technology to adapt to the society we live in and for people to be able, based on their needs, to interface with the technology.”

    Imagination and inspiration

    In Dix Hills, New York, a Long Island suburb, Horowitz’s dad is in banking and their mom is a speech therapist. The family hiked together, but Horowitz doesn’t tie their love for the natural world to any one experience. “I like to play in the dirt,” they say. “I’ve always had a connection to nature. It was a kind of childlike wonder.”

    Seeing footage of the massive 2010 oil spill in the Gulf of Mexico caused by an explosion on the Deepwater Horizon oil rig — which occurred when Horowitz was around 10 — was a jarring introduction to how human activity can impact the health of the planet.

    Their first interest was art — painting and drawing portraits, album covers, and more recently, digital images such as a figure watering a houseplant at a window while lightning flashes outside; a neon pink jellyfish in a deep blue sea; and, for an MIT-wide Covid quarantine project, two figures watching the sun set over a Green Line subway platform.

    Art dovetailed into a fascination with architecture, then shifted to engineering. In high school, Horowitz and a friend were co-captains of an all-girls robotics team. “It was just really wonderful, having this community and being able to build stuff,” they say. Horowitz and another friend on the team learned they were accepted to MIT on Pi Day 2018.

    Art, architecture, engineering — “it’s all kind of the same,” Horowitz says. “I like the creative aspect of design, being able to create things out of imagination.”

    Sustaining political awareness

    At MIT, Horowitz connected with a like-minded community of makers. They also launched themself into taking action against environmental injustice.

    In 2022, through the Student Sustainability Coalition (SSC), they encouraged MIT students to get involved in advocating for the Cambridge Green New Deal, legislation aimed at reducing emissions from new large commercial buildings such as those owned by MIT and creating a green jobs training program.

    In February 2022, Horowitz took part in a sit-in in Building 3 as part of MIT Divest, a student-led initiative urging the MIT administration to divest its endowment of fossil fuel companies.

    “I want to see MIT students more locally involved in politics around sustainability, not just the technology side,” Horowitz says. “I think there’s a lot of power from students coming together. They could be really influential.”

    User-oriented design

    The Arctic underwater ROV Horowitz worked on had to be waterproof and withstand water temperatures as low as 5 degrees Fahrenheit. It was tethered to a computer by a 150-meter-long cable that had to spool and unspool without tangling. The pump and tubing that collected water samples had to work without kinking.

    “It was cool, throughout the project, to think, ‘OK, what kind of needs will these scientists have when they’re out in these really harsh conditions in the Arctic? How can I make a machine that will make their field work easier?’

    “I really like being able to design things directly with the users, working within their design constraints,” they say.

    Inevitably, snafus occurred, but in photos and videos taken the day of the Falmouth field tests, Horowitz is smiling. “Here’s a fun unexpected (or maybe quite expected) occurrence!” they reported later. “The plastic mount for the shaft collar [used in the motor’s power transmission] ripped itself apart!” Undaunted, Horowitz jury-rigged a replacement out of sheet metal.

    Horowitz replaced broken wires in the winch-like device that spooled the cable. They added a filter at the intake to prevent sand and plants from clogging the pump.

    With a few more tweaks, the ROV was ready to descend into frigid waters. Last summer, it was successfully deployed on a field run in the Canadian high Arctic. A few months later, Horowitz was slated to attend OCEANS 2022 Hampton Roads, their first professional conference, to present a poster on their contribution to the WHOI permafrost research.

    Ultimately, Horowitz hopes to pursue a career in renewable energy, sustainable design, or sustainable agriculture, or perhaps graduate studies in data science or econometrics to quantify environmental justice issues such as the disproportionate exposure to pollution among certain populations and the effect of systemic changes designed to tackle these issues.

    After completing their degree this month, Horowitz will spend six months with MIT International Science and Technology Initiatives (MISTI), which fosters partnerships with industry leaders and host organizations around the world.

    Horowitz is thinking of working with a renewable energy company in Denmark, one of the countries they toured during a summer 2019 field trip led by the MIT Energy Initiative’s Director of Education Antje Danielson. They were particularly struck by Samsø, the world’s first carbon-neutral island, run entirely on renewable energy. “It inspired me to see what’s out there when I was a sophomore,” Horowitz says. They’re ready to see where inspiration takes them next.

This article appears in the Winter 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Rescuing small plastics from the waste stream

    As plastic pollution continues to mount, with growing risks to ecosystems and wildlife, manufacturers are beginning to make ambitious commitments to keep new plastics out of the environment. A growing number have signed onto the U.S. Plastics Pact, which pledges to make 100 percent of plastic packaging reusable, recyclable, or compostable, and to see 50 percent of it effectively recycled or composted, by 2025.

    But for companies that make large numbers of small, disposable plastics, these pocket-sized objects are a major barrier to realizing their recycling goals.

    “Think about items like your toothbrush, your travel-size toothpaste tubes, your travel-size shampoo bottles,” says Alexis Hocken, a second-year PhD student in the MIT Department of Chemical Engineering. “They end up actually slipping through the cracks of current recycling infrastructure. So you might put them in your recycling bin at home, they might make it all the way to the sorting facility, but when it comes down to actually sorting them, they never make it into a recycled plastic bale at the very end of the line.”

    Now, a group of five consumer products companies is working with MIT to develop a sorting process that can keep their smallest plastic products inside the recycling chain. The companies — Colgate-Palmolive, Procter & Gamble, the Estée Lauder Companies, L’Oreal, and Haleon — all manufacture a large volume of “small format” plastics, or products less than two inches long in at least two dimensions. In a collaboration with Brad Olsen, the Alexander and I. Michael Kasser (1960) Professor of Chemical Engineering; Desiree Plata, an associate professor of civil and environmental engineering; the MIT Environmental Solutions Initiative; and the nonprofit The Sustainability Consortium, these companies are seeking a prototype sorting technology to bring to recycling facilities for large-scale testing and commercial development.

    Working in Olsen’s lab, Hocken is coming to grips with the complexity of the recycling systems involved. Material recovery facilities, or MRFs, are expected to handle products in any number of shapes, sizes, and materials, and sort them into a pure stream of glass, metal, paper, or plastic. Hocken’s first step in taking on the recycling project was to tour one of these MRFs in Portland, Maine, with Olsen and Plata.

    “We could literally see plastics just falling from the conveyor belts,” she says. “Leaving that tour, I thought, my gosh! There’s so much improvement that can be made. There’s so much impact that we can have on this industry.”

    From designing plastics to managing them

    Hocken always knew she wanted to work in engineering. Growing up in Scottsdale, Arizona, she was able to spend time in the workplace with her father, an electrical engineer who designs biomedical devices. “Seeing him working as an engineer, and how he’s solving these really important problems, definitely sparked my interest,” she says. “When it came time to begin my undergraduate degree, it was a really easy decision to choose engineering after seeing the day-to-day that my dad was doing in his career.”

    At Arizona State University, she settled on chemical engineering as a major and began working with polymers, coming up with combinations of additives for 3D plastics printing that could help fine-tune how the final products behaved. But even working with plastics every day, she rarely thought about the implications of her work for the environment.

    “And then in the spring of my final year at ASU, I took a class about polymers through the lens of sustainability, and that really opened my eyes,” Hocken remembers. The class was taught by Professor Timothy Long, director of the Biodesign Center for Sustainable Macromolecular Materials and Manufacturing and a well-known expert in the field of sustainable plastics. “That first session, where he laid out all of the really scary facts surrounding the plastics crisis, got me very motivated to look more into that field.”

    At MIT the next year, Hocken sought out Olsen as her advisor and made plastics sustainability her focus from the start.

    “Coming to MIT was my first time venturing outside of the state of Arizona for more than a three-month period,” she says. “It’s been really fun. I love living in Cambridge and the Boston area. I love my labmates. Everyone is so supportive, whether it’s to give me advice about some science that I’m trying to figure out, or just give me a pep talk if I’m feeling a little discouraged.”

    A challenge to recycle

    A lot of plastics research today is devoted to creating new materials — including biodegradable ones that are easier for natural ecosystems to absorb, and highly recyclable ones that hold their properties better after being melted down and recast.

But Hocken also sees a huge need for better ways to handle the plastics we’re already making. “While biodegradable and sustainable polymers represent a very important route, and I think they should certainly be further pursued, we’re still a ways away from that being a reality universally across all plastic packaging,” she says. As long as large volumes of conventional plastic are coming out of factories, we’ll need innovative ways to stop them from piling onto the mountain of plastic pollution. In one of her projects, Hocken is trying to turn recycled plastic’s loss of strength to advantage, using it to produce a useful, flexible material similar to rubber.

    The small-format recycling project also falls in this category. The companies supporting the project have challenged the MIT team to work with their products exactly as currently manufactured — especially because their competitors use similar packaging materials that will also need to be covered by any solution the MIT team devises.

    The challenge is a large one. To kick the project off, the participating companies sent the MIT team a wide range of small-format products that need to make it through the sorting process. These include containers for lip balm, deodorant, pills, and shampoo, and disposable tools like toothbrushes and flossing picks. “A constraint, or problem I foresee, is just how variable the shapes are,” says Hocken. “A flossing pick versus a toothbrush are very different shapes.”

    Nor are they all made of the same kind of plastic. Many are made of polyethylene terephthalate (PET, type 1 in the recycling label system) or high-density polyethylene (HDPE, type 2), but nearly all of the seven recycling categories are represented among the sample products. The team’s solution will have to handle them all.

    Another obstacle is that the sorting process at a large MRF is already very complex and requires a heavy investment in equipment. The waste stream typically goes through a “glass breaker screen” that shatters glass and collects the shards; a series of rotating rubber stars to pull out two-dimensional objects, collecting paper and cardboard; a system of magnets and eddy currents to attract or repel different metals; and finally, a series of optical sorters that use infrared spectroscopy to identify the various types of plastics, then blow them down different chutes with jets of air. MRFs won’t be interested in adopting additional sorters unless they’re inexpensive and easy to fit into this elaborate stream.

    “We’re interested in creating something that could be retrofitted into current technology and current infrastructure,” Hocken says.

    Shared solutions

“Recycling is a really good example of where pre-competitive collaboration is needed,” says Jennifer Park, collective action manager at The Sustainability Consortium (TSC), who has been working with corporate stakeholders on small-format recyclability and helped convene the sponsors of this project and organize their contributions. “Companies manufacturing these products recognize that they cannot shift entire systems on their own. Consistency around what is and is not recyclable is the only way to avoid confusion and drive impact at scale.

    “Additionally, it is interesting that consumer packaged goods companies are sponsoring this research at MIT which is focused on MRF-level innovations. They’re investing in innovations that they hope will be adopted by the recycling industry to make progress on their own sustainability goals.”

    Hocken believes that, despite the challenges, it’s well worth pursuing a technology that can keep small-format plastics from slipping through MRFs’ fingers.

    “These are products that would be more recyclable if they were easier to sort,” she says. “The only thing that’s different is the size. So you can recycle both your large shampoo bottle and the small travel-size one at home, but the small one isn’t guaranteed to make it into a plastic bale at the end. If we can come up with a solution that specifically targets those while they’re still on the sorting line, they’re more likely to end up in those plastic bales at the end of the line, which can be sold to plastic reclaimers who can then use that material in new products.”

    “TSC is really excited about this project and our collaboration with MIT,” adds Park. “Our project stakeholders are very dedicated to finding a solution.”

    To learn more about this project, contact Christopher Noble, director of corporate engagement at the MIT Environmental Solutions Initiative.

  • in

    To decarbonize the chemical industry, electrify it

    The chemical industry is the world’s largest industrial energy consumer and the third-largest source of industrial emissions, according to the International Energy Agency. In 2019, the industrial sector as a whole was responsible for 24 percent of global greenhouse gas emissions. And yet, as the world races to find pathways to decarbonization, the chemical industry has been largely untouched.

    “When it comes to climate action and dealing with the emissions that come from the chemical sector, the slow pace of progress is partly technical and partly driven by the hesitation on behalf of policymakers to overly impact the economic competitiveness of the sector,” says Dharik Mallapragada, a principal research scientist at the MIT Energy Initiative.

    With so many of the items we interact with in our daily lives — from soap to baking soda to fertilizer — deriving from products of the chemical industry, the sector has become a major source of economic activity and employment for many nations, including the United States and China. But as the global demand for chemical products continues to grow, so do the industry’s emissions.

    New sustainable chemical production methods need to be developed and deployed and current emission-intensive chemical production technologies need to be reconsidered, urge the authors of a new paper published in Joule. Researchers from DC-MUSE, a multi-institution research initiative, argue that electrification powered by low-carbon sources should be viewed more broadly as a viable decarbonization pathway for the chemical industry. In this paper, they shine a light on different potential methods to do just that.

    “Generally, the perception is that electrification can play a role in this sector — in a very narrow sense — in that it can replace fossil fuel combustion by providing the heat that the combustion is providing,” says Mallapragada, a member of DC-MUSE. “What we argue is that electrification could be much more than that.”

    The researchers outline four technological pathways — ranging from more mature, near-term options to less technologically mature options in need of research investment — and present the opportunities and challenges associated with each.

    The first two pathways directly replace fossil fuel-produced heat (which facilitates the reactions inherent in chemical production) with electricity or electrochemically generated hydrogen. The researchers suggest that both options could be deployed now and potentially be used to retrofit existing facilities. Electrolytic hydrogen is also highlighted as an opportunity to replace fossil fuel-produced hydrogen (a process that emits carbon dioxide) as a critical chemical feedstock. In 2020, fossil-based hydrogen supplied nearly all hydrogen demand (90 megatons) in the chemical and refining industries — hydrogen’s largest consumers.

    The researchers note that increasing the role of electricity in decarbonizing the chemical industry will directly affect the decarbonization of the power grid. They stress that to successfully implement these technologies, their operation must coordinate with the power grid in a mutually beneficial manner to avoid overburdening it. “If we’re going to be serious about decarbonizing the sector and relying on electricity for that, we have to be creative in how we use it,” says Mallapragada. “Otherwise we run the risk of having addressed one problem, while creating a massive problem for the grid in the process.”

    Electrified processes have the potential to be much more flexible than conventional fossil fuel-driven processes. This can reduce the cost of chemical production by allowing producers to shift electricity consumption to times when the cost of electricity is low. “Process flexibility is particularly impactful during stressed power grid conditions and can help better accommodate renewable generation resources, which are intermittent and are often poorly correlated with daily power grid cycles,” says Yury Dvorkin, an associate research professor at the Johns Hopkins Ralph O’Connor Sustainable Energy Institute. “It’s beneficial for potential adopters because it can help them avoid consuming electricity during high-price periods.”
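    The load-shifting idea Dvorkin describes can be illustrated with a minimal scheduling sketch. The hourly prices, the eight-hour run requirement, and the midday baseline below are all hypothetical, chosen only to show how a flexible electrified process would concentrate its run time in the cheapest hours of a day:

    ```python
    # Illustrative sketch (not from the paper): schedule a flexible electrified
    # process into the cheapest hours of a day, given day-ahead hourly prices.

    def schedule_flexible_load(prices, hours_needed):
        """Pick the cheapest `hours_needed` hours in which to run the process."""
        ranked = sorted(range(len(prices)), key=lambda h: prices[h])
        run_hours = sorted(ranked[:hours_needed])
        cost = sum(prices[h] for h in run_hours)
        return run_hours, cost

    # Hypothetical day-ahead electricity prices ($/MWh) for 24 hours
    prices = [30, 28, 25, 24, 26, 35, 50, 70, 80, 75, 60, 45,
              40, 38, 42, 55, 75, 90, 85, 65, 50, 40, 35, 32]

    hours, cost = schedule_flexible_load(prices, hours_needed=8)
    baseline = sum(prices[8:16])  # cost of running through a fixed daytime shift
    print(hours, cost, baseline)
    ```

    A real scheduler would also respect process constraints such as ramp rates and minimum continuous run times, which this sketch omits.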

    Dvorkin adds that some intermediate energy carriers, such as hydrogen, can potentially be used as highly efficient energy storage for day-to-day operations and as long-term energy storage. This would help support the power grid during extreme events when traditional and renewable generators may be unavailable. “The application of long-duration storage is of particular interest as this is a key enabler of a low-emissions society, yet not widespread beyond pumped hydro units,” he says. “However, as we envision electrified chemical manufacturing, it is important to ensure that the supplied electricity is sourced from low-emission generators to prevent emissions leakages from the chemical to power sector.” 

    The next two pathways introduced — utilizing electrochemistry and plasma — are less technologically mature but have the potential to replace energy- and carbon-intensive thermochemical processes currently used in the industry. By adopting electrochemical processes or plasma-driven reactions instead, chemical transformations can occur at lower temperatures and pressures, potentially enhancing efficiency. “These reaction pathways also have the potential to enable more flexible, grid-responsive plants and the deployment of modular manufacturing plants that leverage distributed chemical feedstocks such as biomass waste — further enhancing sustainability in chemical manufacturing,” says Miguel Modestino, the director of the Sustainable Engineering Initiative at the New York University Tandon School of Engineering.

    A large barrier to deep decarbonization of chemical manufacturing relates to its complex, multi-product nature. But, according to the researchers, each of these electricity-driven pathways supports chemical industry decarbonization for various feedstock choices and end-of-life disposal decisions. Each should be evaluated in comprehensive techno-economic and environmental life cycle assessments to weigh trade-offs and establish suitable cost and performance metrics.

    Regardless of the pathway chosen, the researchers stress the need for active research and development and deployment of these technologies. They also emphasize the importance of workforce training and development running in parallel to technology development. As André Taylor, the director of DC-MUSE, explains, “There is a healthy skepticism in the industry regarding electrification and adoption of these technologies, as it involves processing chemicals in a new way.” The workforce at different levels of the industry hasn’t necessarily been exposed to ideas related to the grid, electrochemistry, or plasma. The researchers say that workforce training at all levels will help build greater confidence in these different solutions and support customer-driven industry adoption.

    “There’s no silver bullet, which is kind of the standard line with all climate change solutions,” says Mallapragada. “Each option has pros and cons, as well as unique advantages. But being aware of the portfolio of options in which you can use electricity allows us to have a better chance of success and of reducing emissions — and doing so in a way that supports grid decarbonization.”

    This work was supported, in part, by the Alfred P. Sloan Foundation.

  • in

    Chess players face a tough foe: air pollution

    Here’s something else chess players need to keep in check: air pollution.

    That’s the bottom line of a newly published study co-authored by an MIT researcher, showing that chess players perform objectively worse and make more suboptimal moves, as measured by a computerized analysis of their games, when there is more fine particulate matter in the air.

    More specifically, given a modest increase in fine particulate matter, the probability that chess players will make an error increases by 2.1 percentage points, and the magnitude of those errors increases by 10.8 percent. In this setting, at least, cleaner air leads to clearer heads and sharper thinking.

    “We find that when individuals are exposed to higher levels of air pollution, they make more mistakes, and they make larger mistakes,” says Juan Palacios, an economist in MIT’s Sustainable Urbanization Lab, and co-author of a newly published paper detailing the study’s findings.

    The paper, “Indoor Air Quality and Strategic Decision-Making,” appears today in advance online form in the journal Management Science. The authors are Steffen Künn, an associate professor in the School of Business and Economics at Maastricht University, the Netherlands; Palacios, who is head of research in the Sustainable Urbanization Lab, in MIT’s Department of Urban Studies and Planning (DUSP); and Nico Pestel, an associate professor in the School of Business and Economics at Maastricht University.

    The toughest foe yet?

    Fine particulate matter refers to tiny particles 2.5 microns or less in diameter, notated as PM2.5. They are often associated with burning matter — whether through internal combustion engines in autos, coal-fired power plants, forest fires, indoor cooking through open fires, and more. The World Health Organization estimates that air pollution leads to over 4 million premature deaths worldwide every year, due to cancer, cardiovascular problems, and other illnesses.

    Scholars have produced many studies exploring the effects of air pollution on cognition. The current study adds to that literature by analyzing the subject in a particularly controlled setting. The researchers studied the performance of 121 chess players in three seven-round tournaments in Germany in 2017, 2018, and 2019, comprising more than 30,000 chess moves. The scholars used three web-connected sensors inside the tournament venue to measure carbon dioxide, PM2.5 concentrations, and temperature, all of which can be affected by external conditions, even in an indoor setting. Because each tournament lasted eight weeks, it was possible to examine how air-quality changes related to changes in player performance.

    In a replication exercise, the authors found the same impacts of air pollution on some of the strongest players in the history of chess using data from 20 years of games from the first division of the German chess league. 

    To evaluate player performance, meanwhile, the scholars used software programs that assess each move made in each chess match, identify optimal decisions, and flag significant errors.
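    While the authors’ exact software pipeline isn’t described here, engine-based error flagging of this kind typically compares the engine’s evaluation of the move actually played against its evaluation of the best available move. The following sketch uses hypothetical centipawn evaluations and a hypothetical blunder threshold, not the study’s actual methodology:

    ```python
    # Hedged sketch of engine-based move assessment: the "centipawn loss" of a
    # played move is how much worse the engine rates it than the best move.
    # Evaluations and the 100-centipawn threshold below are hypothetical.

    def classify_move(best_eval_cp, played_eval_cp, blunder_threshold_cp=100):
        """Return the centipawn loss and whether it counts as a significant error."""
        loss = best_eval_cp - played_eval_cp
        return loss, loss >= blunder_threshold_cp

    loss, is_error = classify_move(best_eval_cp=35, played_eval_cp=-80)
    print(loss, is_error)  # a 115-centipawn loss is flagged as a significant error
    ```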

    During the tournaments, PM2.5 concentrations ranged from 14 to 70 micrograms per cubic meter of air, levels of exposure commonly found in cities in the U.S. and elsewhere. The researchers examined and ruled out alternate potential explanations for the dip in player performance, such as increased noise. They also found that carbon dioxide and temperature changes did not correspond to performance changes. Using the standardized ratings chess players earn, the scholars also accounted for the quality of opponents each player faced. Ultimately, by exploiting plausibly random variation in pollution driven by changes in wind direction, the analysis confirms that the findings are driven by direct exposure to airborne particles.

    “It’s pure random exposure to air pollution that is driving these people’s performance,” Palacios says. “Against comparable opponents in the same tournament round, being exposed to different levels of air quality makes a difference for move quality and decision quality.”

    The researchers also found that when air pollution was worse, the chess players performed even more poorly when under time constraints. The tournament rules mandated that 40 moves had to be made within 110 minutes; for moves 31-40 in all the matches, an air pollution increase of 10 micrograms per cubic meter increased the probability of error by 3.2 percent, with the magnitude of those errors increasing by 17.3 percent.

    “We find it interesting that those mistakes especially occur in the phase of the game where players are facing time pressure,” Palacios says. “When these players do not have the ability to compensate [for] lower cognitive performance with greater deliberation, [that] is where we are observing the largest impacts.”

    “You can live miles away and be affected”

    Palacios emphasizes that, as the study indicates, air pollution may affect people in settings where they might not think it makes a difference.

    “It’s not like you have to live next to a power plant,” Palacios says. “You can live miles away and be affected.”

    And while this particular study focuses tightly on chess players, the authors write in the paper that the findings have “strong implications for high-skilled office workers,” who might also be faced with tricky cognitive tasks in conditions of variable air pollution. In this sense, Palacios says, “The idea is to provide accurate estimates to policymakers who are making difficult decisions about cleaning up the environment.”

    Indeed, Palacios observes, the fact that even chess players — who spend untold hours preparing themselves for all kinds of scenarios they may face in matches — can perform worse when air pollution rises suggests that a similar problem could affect people cognitively in many other settings.

    “There are more and more papers showing that there is a cost with air pollution, and there is a cost for more and more people,” Palacios says. “And this is just one example showing that even for these very [excellent] chess players, who think they can beat everything — well, it seems that with air pollution, they have an enemy who harms them.”

    Support for the study was provided, in part, by the Graduate School of Business and Economics at Maastricht, and the Institute for Labor Economics in Bonn, Germany.

  • in

    Computers that power self-driving cars could be a huge driver of global carbon emissions

    In the future, the energy needed to run the powerful computers on board a global fleet of autonomous vehicles could generate as many greenhouse gas emissions as all the data centers in the world today.

    That is one key finding of a new study from MIT researchers that explored the potential energy consumption and related carbon emissions if autonomous vehicles are widely adopted.

    The data centers that house the physical computing infrastructure used for running applications are widely known for their large carbon footprint: They currently account for about 0.3 percent of global greenhouse gas emissions, or about as much carbon as the country of Argentina produces annually, according to the International Energy Agency. Realizing that less attention has been paid to the potential footprint of autonomous vehicles, the MIT researchers built a statistical model to study the problem. They determined that 1 billion autonomous vehicles, each driving for one hour per day with a computer consuming 840 watts, would consume enough energy to generate about the same amount of emissions as data centers currently do.
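    The scale of that scenario can be checked with simple arithmetic using the figures in the article; translating the energy into emissions would additionally depend on the carbon intensity of the electricity, which varies by grid:

    ```python
    # Back-of-envelope check of the scenario in the text, using the article's
    # numbers: 1 billion vehicles, 1 hour of driving per day, 840 W of computing.
    vehicles = 1_000_000_000
    hours_per_day = 1
    computer_watts = 840

    daily_energy_wh = vehicles * hours_per_day * computer_watts   # Wh per day
    annual_energy_twh = daily_energy_wh * 365 / 1e12              # Wh -> TWh/year
    print(round(annual_energy_twh, 1))  # ~306.6 TWh/year of computing energy
    ```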

    The researchers also found that in over 90 percent of modeled scenarios, to keep autonomous vehicle emissions from zooming past current data center emissions, each vehicle must use less than 1.2 kilowatts of power for computing, which would require more efficient hardware. In one scenario — where 95 percent of the global fleet of vehicles is autonomous in 2050, computational workloads double every three years, and the world continues to decarbonize at the current rate — they found that hardware efficiency would need to double faster than every 1.1 years to keep emissions under those levels.

    “If we just keep the business-as-usual trends in decarbonization and the current rate of hardware efficiency improvements, it doesn’t seem like it is going to be enough to constrain the emissions from computing onboard autonomous vehicles. This has the potential to become an enormous problem. But if we get ahead of it, we could design more efficient autonomous vehicles that have a smaller carbon footprint from the start,” says first author Soumya Sudhakar, a graduate student in aeronautics and astronautics.

    Sudhakar wrote the paper with her co-advisors Vivienne Sze, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE); and Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems (LIDS). The research appears today in the January-February issue of IEEE Micro.

    Modeling emissions

    The researchers built a framework to explore the operational emissions from computers on board a global fleet of electric vehicles that are fully autonomous, meaning they don’t require a back-up human driver.

    The model is a function of the number of vehicles in the global fleet, the power of each computer on each vehicle, the hours driven by each vehicle, and the carbon intensity of the electricity powering each computer.

    “On its own, that looks like a deceptively simple equation. But each of those variables contains a lot of uncertainty because we are considering an emerging application that is not here yet,” Sudhakar says.

    For instance, some research suggests that the amount of time driven in autonomous vehicles might increase because people can multitask while driving and the young and the elderly could drive more. But other research suggests that time spent driving might decrease because algorithms could find optimal routes that get people to their destinations faster.

    In addition to considering these uncertainties, the researchers also needed to model advanced computing hardware and software that doesn’t exist yet.

    To accomplish that, they modeled the workload of a popular algorithm for autonomous vehicles, known as a multitask deep neural network because it can perform many tasks at once. They explored how much energy this deep neural network would consume if it were processing many high-resolution inputs from many cameras with high frame rates, simultaneously.

    When they used the probabilistic model to explore different scenarios, Sudhakar was surprised by how quickly the algorithms’ workload added up.

    For example, if an autonomous vehicle has 10 deep neural networks processing images from 10 cameras, and that vehicle drives for one hour a day, it will make 21.6 million inferences each day. One billion vehicles would make 21.6 quadrillion inferences. To put that into perspective, all of Facebook’s data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion).
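    The article’s inference totals follow from straightforward multiplication; a 60-frames-per-second camera rate is the assumption that makes the numbers come out exactly:

    ```python
    # Reproducing the inference arithmetic from the text. The 60 fps frame rate
    # is an assumption; the other figures come from the article's example.
    fps = 60                 # assumed camera frame rate
    seconds_driven = 3600    # one hour of driving per day
    cameras = 10
    networks = 10            # each deep neural network processes every camera stream

    per_vehicle_daily = fps * seconds_driven * cameras * networks
    fleet_daily = per_vehicle_daily * 1_000_000_000

    print(per_vehicle_daily)  # 21,600,000 inferences per vehicle per day
    print(fleet_daily)        # 21.6 quadrillion for a billion-vehicle fleet
    ```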

    “After seeing the results, this makes a lot of sense, but it is not something that is on a lot of people’s radar. These vehicles could actually be using a ton of computer power. They have a 360-degree view of the world, so while we have two eyes, they may have 20 eyes, looking all over the place and trying to understand all the things that are happening at the same time,” Karaman says.

    Autonomous vehicles would be used for moving goods, as well as people, so there could be a massive amount of computing power distributed along global supply chains, he says. And their model only considers computing — it doesn’t take into account the energy consumed by vehicle sensors or the emissions generated during manufacturing.

    Keeping emissions in check

    To keep emissions from spiraling out of control, the researchers found that each autonomous vehicle needs to consume less than 1.2 kilowatts of power for computing. For that to be possible, computing hardware must become more efficient at a significantly faster pace, doubling in efficiency about every 1.1 years.

    One way to boost that efficiency could be to use more specialized hardware, which is designed to run specific driving algorithms. Because researchers know the navigation and perception tasks required for autonomous driving, it could be easier to design specialized hardware for those tasks, Sudhakar says. But vehicles tend to have 10- or 20-year lifespans, so one challenge in developing specialized hardware would be to “future-proof” it so it can run new algorithms.

    In the future, researchers could also make the algorithms more efficient, so they would need less computing power. However, this is also challenging because trading off some accuracy for more efficiency could hamper vehicle safety.

    Now that they have demonstrated this framework, the researchers want to continue exploring hardware efficiency and algorithm improvements. In addition, they say their model can be enhanced by characterizing embodied carbon from autonomous vehicles — the carbon emissions generated when a car is manufactured — and emissions from a vehicle’s sensors.

    While there are still many scenarios to explore, the researchers hope that this work sheds light on a potential problem people may not have considered.

    “We are hoping that people will think of emissions and carbon efficiency as important metrics to consider in their designs. The energy consumption of an autonomous vehicle is really critical, not just for extending the battery life, but also for sustainability,” says Sze.

    This research was funded, in part, by the National Science Foundation and the MIT-Accenture Fellowship.

  • in

    Moving water and earth

    As a river cuts through a landscape, it can operate like a conveyor belt, moving truckloads of sediment over time. Knowing how quickly or slowly this sediment flows can help engineers plan for the downstream impact of restoring a river or removing a dam. But the models currently used to estimate sediment flow can be off by a wide margin.

    An MIT team has come up with a better formula to calculate how much sediment a fluid can push across a granular bed — a process known as bed load transport. The key to the new formula comes down to the shape of the sediment grains.

    It may seem intuitive: A smooth, round stone should skip across a river bed faster than an angular pebble. But flowing water also pushes harder on the angular pebble, which could erase the round stone’s advantage. Which effect wins? Existing sediment transport models surprisingly don’t offer an answer, mainly because the problem of measuring grain shape is too unwieldy: How do you quantify a pebble’s contours?

    The MIT researchers found that instead of considering a grain’s exact shape, they could boil the concept of shape down to two related properties: friction and drag. A grain’s drag, or resistance to fluid flow, relative to its internal friction, the resistance to sliding past other grains, can provide an easy way to gauge the effects of a grain’s shape.

    When they incorporated this new mathematical measure of grain shape into a standard model for bed load transport, the new formula made predictions that matched experiments that the team performed in the lab.

    “Sediment transport is a part of life on Earth’s surface, from the impact of storms on beaches to the gravel nests in mountain streams where salmon lay their eggs,” the team writes of their new study, appearing today in Nature. “Damming and sea level rise have already impacted many such terrains and pose ongoing threats. A good understanding of bed load transport is crucial to our ability to maintain these landscapes or restore them to their natural states.”

    The study’s authors are Eric Deal, Santiago Benavides, Qiong Zhang, Ken Kamrin, and Taylor Perron of MIT, and Jeremy Venditti and Ryan Bradley of Simon Fraser University in Canada.

    Figuring flow

    Video of glass spheres (top) and natural river gravel (bottom) undergoing bed load transport in a laboratory flume, slowed down 17x relative to real time. Average grain diameter is about 5 mm. This video shows how rolling and tumbling natural grains interact with one another in a way that is not possible for spheres. What can’t be seen so easily is that natural grains also experience higher drag forces from the flowing water than spheres do.

    Credit: Courtesy of the researchers

    Bed load transport is the process by which a fluid such as air or water drags grains across a bed of sediment, causing the grains to hop, skip, and roll along the surface as the fluid flows over it. This movement of sediment in a current is what drives rocks to migrate down a river and sand grains to skip across a desert.

    Being able to estimate bed load transport can help scientists prepare for situations such as urban flooding and coastal erosion. Since the 1930s, one formula has been the go-to model for calculating bed load transport; it’s based on a quantity known as the Shields parameter, after the American engineer who originally derived it. This formula sets a relationship between the force of a fluid pushing on a bed of sediment, and how fast the sediment moves in response. Albert Shields incorporated certain variables into this formula, including the average size and density of a sediment’s grains — but not their shape.
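    For reference, the classical Shields parameter is the dimensionless ratio of the fluid’s shear stress on the bed to the submerged weight of a grain. A minimal calculation with illustrative values (the numbers below are not from the study):

    ```python
    # The classical Shields parameter, a standard quantity in sediment transport:
    # tau* = tau_b / ((rho_s - rho_f) * g * d). All numerical inputs below are
    # illustrative, not taken from the MIT study.

    def shields_parameter(tau_b, rho_s, rho_f, g, d):
        """Ratio of bed shear stress to the submerged weight of a grain."""
        return tau_b / ((rho_s - rho_f) * g * d)

    tau_star = shields_parameter(
        tau_b=2.0,     # bed shear stress, Pa (illustrative)
        rho_s=2650.0,  # quartz grain density, kg/m^3
        rho_f=1000.0,  # water density, kg/m^3
        g=9.81,        # gravitational acceleration, m/s^2
        d=0.005,       # grain diameter, 5 mm (as in the flume video)
    )
    print(round(tau_star, 4))
    ```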

    “People may have backed away from accounting for shape because it’s one of these very scary degrees of freedom,” says Kamrin, a professor of mechanical engineering at MIT. “Shape is not a single number.”

    And yet, the existing model has been known to be off by a factor of 10 in its predictions of sediment flow. The team wondered whether grain shape could be a missing ingredient, and if so, how the nebulous property could be mathematically represented.

    “The trick was to focus on characterizing the effect that shape has on sediment transport dynamics, rather than on characterizing the shape itself,” says Deal.

    “It took some thinking to figure that out,” says Perron, a professor of geology in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “But we went back to derive the Shields parameter, and when you do the math, this ratio of drag to friction falls out.”

    Drag and drop

    Their work showed that the Shields parameter — which predicts how much sediment is transported — can be modified to include not just size and density, but also grain shape, and furthermore, that a grain’s shape can be simply represented by a measure of the grain’s drag and its internal friction. The math seemed to make sense. But could the new formula predict how sediment actually flows?

    To answer this, the researchers ran a series of flume experiments, in which they pumped a current of water through an inclined tank with a floor covered in sediment. They ran tests with sediment of various grain shapes, including beds of round glass beads, smooth glass chips, rectangular prisms, and natural gravel. They measured the amount of sediment that was transported through the tank in a fixed amount of time. They then determined the effect of each sediment type’s grain shape by measuring the grains’ drag and friction.

    For drag, the researchers simply dropped individual grains down through a tank of water and gathered statistics for the time it took the grains of each sediment type to reach the bottom. For instance, a flatter grain type takes a longer time on average, and therefore has greater drag, than a round grain type of the same size and density.

    To measure friction, the team poured grains through a funnel and onto a circular tray, then measured the resulting pile’s angle, or slope — an indication of the grains’ friction, or ability to grip onto each other.
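    These two measurements map onto standard physical relations: a sphere-equivalent drag coefficient can be recovered from the settling (terminal) velocity, and the internal friction coefficient is the tangent of the angle of repose. A hedged sketch with illustrative numbers (the study’s exact data-reduction steps may differ):

    ```python
    import math

    # Hedged sketch of the two shape measurements described above, using standard
    # relations; all numerical inputs are illustrative, not the study's data.

    def drag_coefficient(v_terminal, d, rho_s, rho_f=1000.0, g=9.81):
        """Sphere-equivalent drag coefficient from a settling test: at terminal
        velocity, the drag force balances the grain's submerged weight."""
        return 4 * g * d * (rho_s - rho_f) / (3 * rho_f * v_terminal**2)

    def friction_coefficient(angle_of_repose_deg):
        """Internal friction from the slope of a poured pile: mu = tan(theta)."""
        return math.tan(math.radians(angle_of_repose_deg))

    cd = drag_coefficient(v_terminal=0.3, d=0.005, rho_s=2650.0)
    mu = friction_coefficient(35.0)  # angular gravel piles steeper than glass beads
    print(round(cd, 2), round(mu, 2))
    ```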

    For each sediment type, they then worked the corresponding shape’s drag and friction into the new formula, and found that it could indeed predict the bed load transport, or the amount of moving sediment that the researchers measured in their experiments.

    The team says the new model more accurately represents sediment flow. Going forward, scientists and engineers can use the model to better gauge how a river bed will respond to scenarios such as sudden flooding from severe weather or the removal of a dam.

    “If you were trying to make a prediction of how fast all that sediment will get evacuated after taking a dam out, and you’re wrong by a factor of three or five, that’s pretty bad,” Perron says. “Now we can do a lot better.”

    This research was supported, in part, by the U.S. Army Research Laboratory.