More stories

  • Q&A: A high-tech take on Wagner’s “Parsifal” opera

    The world-famous Bayreuth Festival in Germany, centered annually on the works of composer Richard Wagner, opened this summer on July 25 with a production that has been making headlines. Director Jay Scheib, an MIT faculty member, has created a version of Wagner’s celebrated opera “Parsifal” that is set in an apocalyptic future (rather than the original medieval past) and uses augmented reality headset technology for a portion of the audience, among other visual effects. Audience members wearing the headsets see hundreds of additional visuals, from fast-moving clouds to arrows being shot at them. The AR portion of the production was developed by a team led by designer and MIT technical instructor Joshua Higgason.

    The new “Parsifal” has engendered extensive media attention and discussion among opera followers and the viewing public. Five years in the making, it was developed with the encouragement of Bayreuth Festival general manager Katharina Wagner, Richard Wagner’s great-granddaughter. The production runs until Aug. 27, and can also be streamed on Stage+. Scheib, the Class of 1949 Professor in MIT’s Music and Theater Arts program, recently talked to MIT News about the project from Bayreuth.

    Q: Your production of “Parsifal” led off this year’s entire Bayreuth festival. How’s it going?

    A: From my point of view it’s going quite swimmingly. The leading German opera critics and the audiences have been super-supportive and Bayreuth makes it possible for a work to evolve … Given the complexity of the technical challenge of making an AR project function in an opera house, the bar was so high, it was a difficult challenge, and we’re really happy we found a way forward, a way to make it work, and a way to make it fit into an artistic process. I feel great.

    Q: You offer a new interpretation of “Parsifal,” and a new setting for it. What is it, and why did you choose to interpret it this way?

    A: One of the main themes in “Parsifal” is that the long-time king of this holy grail cult is wounded, and his wound will not heal. [With that in mind], we looked at what the world was like when the opera premiered in the late 19th century, around the time of what was known as the Great African Scramble, when Europe re-drew the map of Africa, largely based on resources, including mineral resources.

    Cobalt remains [the focus of] dirty mining practices in the Democratic Republic of Congo, and is a requirement for a lot of our electronic objects, in particular batteries. There are also these massive copper deposits discovered under a Buddhist temple in Afghanistan, and lithium under a sacred site in Nevada. We face an intense challenge in climate change, and the predictions are not good. Some of our solutions like electric cars require these materials, so they’re only solutions for some people, while others suffer [where minerals are being mined]. We started thinking about how wounds never heal, and when the prospect of creating a better world opens new wounds in other communities. … That became a theme. It also comes out of the time when we were making it, when Covid happened and George Floyd was murdered, which created an opportunity in the U.S. to start speaking very openly about wounds that have not healed.

    We set it in a largely post-human environment, where we didn’t succeed, and everything has collapsed. In the third act, there’s derelict mining equipment, and the holy water is this energy-giving force, but in fact it’s this lithium-ion pool, which gives us energy and then poisons us. That’s the theme we created.

    Q: What were your goals about integrating the AR technology into the opera, and how did you achieve that?

    A: First, I was working with my collaborator Joshua Higgason. No one had ever really done this before, so we just started researching whether it was possible. And most of the people we talked to said, “Don’t do it. It’s just not going to work.” Having always been a daredevil at heart, I was like, “Oh, come on, we can figure this out.”

    We were diligent in exploring the possibilities. We made multiple trips to Bayreuth and made these millimeter-accurate laser scans of the auditorium and the stage. We built a variety of models to see how to make AR work in a large environment, where 2,000 headsets could respond simultaneously. We built a team of animators and developers and programmers and designers, from Portugal to Cambridge to New York to Hungary, the UK, and a group in Germany. Josh led this team, and they got after it, but it took us the better part of two years to make it possible for an audience, some of whom don’t really use smartphones, to put on an AR headset and have it just work.

    I can’t even believe we did this. But it’s working.

    Q: In opera there’s hopefully a productive tension between tradition and innovation. How do you think about that when it comes to Wagner at Bayreuth?

    A: Innovation is the tradition at Bayreuth. Musically and scenographically. “Parsifal” was composed for this particular opera house, and I’m incredibly respectful of what this event is made for. We are trying to create a balanced and unified experience, between the scenic design and the AR and the lighting and the costume design, and create perfect moments of convergence where you really lose yourself in the environment. I believe wholly in the production and the performers are extraordinary. Truly, truly, truly extraordinary.

    Q: People have been focused on the issue of bringing AR to Bayreuth, but what has Bayreuth brought to you as a director?

    A: Working in Bayreuth has been an incredible experience. The level of intellectual integrity among the technicians is extraordinary. The amount of care and patience and curiosity and expertise in Bayreuth is off the charts. This community of artists is the greatest. … People come here because it’s an incredible meeting of the minds, and for that I’m immensely filled with gratitude every day I come into the rehearsal room. The conductor, Pablo Heras-Casado, and I have been working on this for several years. And the music is still first. We’re setting up technology not to overtake the music, but to support it, and visually amplify it.

    It must be said that Katharina Wagner has been one of the most powerfully supportive artistic directors I have ever worked with. I find it inspiring to witness her tenacity and vision in seeing all of this through, despite the hurdles. It’s been a great collaboration. That’s the essence: great collaboration.

  • How forests can cut carbon, restore ecosystems, and create jobs

    To limit the frequency and severity of droughts, wildfires, flooding, and other adverse consequences of climate change, nearly 200 countries committed to the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius. According to the latest United Nations Intergovernmental Panel on Climate Change (IPCC) Report, achieving that goal will require both large-scale greenhouse gas (GHG) emissions reduction and removal of GHGs from the atmosphere.

    At present, the most efficient and scalable GHG-removal strategy is the massive planting of trees through reforestation or afforestation — a “natural climate solution” (NCS) that extracts atmospheric carbon dioxide through photosynthesis and soil carbon sequestration.

    Despite the potential of forestry-based NCS projects to address climate change, biodiversity loss, unemployment, and other societal needs — and their appeal to policymakers, funders, and citizens — they have yet to achieve critical mass, and often underperform due to a mix of interacting ecological, social, and financial constraints. To better understand these challenges and identify opportunities to overcome them, a team of researchers at Imperial College London and the MIT Joint Program on the Science and Policy of Global Change recently studied how environmental scientists, local stakeholders, and project funders perceive the risks and benefits of NCS projects, and how these perceptions impact project goals and performance. To that end, they surveyed and consulted with dozens of recognized experts and organizations spanning the fields of ecology, finance, climate policy, and social science.

    The team’s analysis, which appears in the journal Frontiers in Climate, found two main factors that have hindered the success of forestry-based NCS projects.

    First, the ambition — levels of carbon removal, ecosystem restoration, job creation, and other environmental and social targets — of selected NCS projects is limited by funders’ perceptions of their overall risk. Among other things, funders aim to minimize operational risk (e.g., Will newly planted trees survive and grow?); political risk (e.g., Just how secure is their access to the land where trees will be planted?); and reputational risk (e.g., Will the project be perceived as an exercise in “greenwashing,” or fall far short of its promised environmental and social benefits?). Funders seeking a financial return on their initial investment are also concerned about the dependability of complex monitoring, reporting, and verification methods used to quantify atmospheric carbon removal, biodiversity gains, and other metrics of project performance.

    Second, the environmental and social benefits of NCS projects are unlikely to be realized unless the local communities impacted by these projects are granted ownership over their implementation and outcomes. But while engaging with local communities is critical to project performance, it can be challenging both legally and financially to set up incentives (e.g., payment and other forms of compensation) to mobilize such engagement.

    “Many carbon offset projects raise legitimate concerns about their effectiveness,” says study lead author Bonnie Waring, a senior lecturer at the Grantham Institute on Climate Change and the Environment, Imperial College London. “However, if natural climate solution projects are done properly, they can help with sustainable development and empower local communities.”

    Drawing on surveys and consultations with NCS experts, stakeholders, and funders, the research team highlighted several recommendations on how to overcome key challenges faced by forestry-based NCS projects and boost their environmental and social performance.

    These recommendations include encouraging funders to evaluate projects based on robust internal governance, support from regional and national governments, secure land tenure, material benefits for local communities, and full participation of community members from across a spectrum of socioeconomic groups; improving the credibility and verifiability of project emissions reductions and related co-benefits; and maintaining an open dialogue and shared costs and benefits among those who fund, implement, and benefit from these projects.

    “Addressing climate change requires approaches that include emissions mitigation from economic activities paired with greenhouse gas reductions by natural ecosystems,” says Sergey Paltsev, a co-author of the study and deputy director of the MIT Joint Program. “Guided by these recommendations, we advocate for a proper scaling-up of NCS activities from project levels to help assure integrity of emissions reductions across entire countries.”

  • A new dataset of Arctic images will spur artificial intelligence research

    As the U.S. Coast Guard (USCG) icebreaker Healy takes part in a voyage across the North Pole this summer, it is capturing images of the Arctic to further the study of this rapidly changing region. Lincoln Laboratory researchers installed a camera system aboard the Healy while the ship was in port in Seattle, before it embarked on a three-month science mission on July 11. The resulting dataset, which will be one of the first of its kind, will be used to develop artificial intelligence tools that can analyze Arctic imagery.

    “This dataset not only can help mariners navigate more safely and operate more efficiently, but also help protect our nation by providing critical maritime domain awareness and an improved understanding of how AI analysis can be brought to bear in this challenging and unique environment,” says Jo Kurucar, a researcher in Lincoln Laboratory’s AI Software Architectures and Algorithms Group, which led this project.

    As the planet warms and sea ice melts, Arctic passages are opening up to more traffic, from military vessels to ships conducting illegal fishing. These movements may pose national security challenges to the United States. The opening Arctic also raises questions about how its climate, wildlife, and geography are changing.

    Today, very few imagery datasets of the Arctic exist to study these changes. Overhead images from satellites or aircraft can only provide limited information about the environment. An outward-looking camera attached to a ship can capture more details of the setting and different angles of objects, such as other ships, in the scene. These types of images can then be used to train AI computer-vision tools, which can help the USCG plan naval missions and automate analysis. According to Kurucar, USCG assets in the Arctic are spread thin and can benefit greatly from AI tools, which can act as a force multiplier.

    The Healy is the USCG’s largest and most technologically advanced icebreaker. Given its current mission, it was a fitting candidate to be equipped with a new sensor to gather this dataset. The laboratory research team collaborated with the USCG Research and Development Center to determine the sensor requirements. Together, they developed the Cold Region Imaging and Surveillance Platform (CRISP).

    “Lincoln Laboratory has an excellent relationship with the Coast Guard, especially with the Research and Development Center. Over a decade, we’ve established ties that enabled the deployment of the CRISP system,” says Amna Greaves, the CRISP project lead and an assistant leader in the AI Software Architectures and Algorithms Group. “We have strong ties not only because of the USCG veterans working at the laboratory and in our group, but also because our technology missions are complementary. Today it was deploying infrared sensing in the Arctic; tomorrow it could be operating quadruped robot dogs on a fast-response cutter.”

    The CRISP system comprises a long-wave infrared camera, manufactured by Teledyne FLIR (for forward-looking infrared), that is designed for harsh maritime environments. The camera can stabilize itself during rough seas and image in complete darkness, fog, and glare. It is paired with a GPS-enabled time-synchronized clock and a network video recorder to record both video and still imagery along with GPS-positional data.  

    The camera is mounted at the front of the ship’s fly bridge, and the electronics are housed in a ruggedized rack on the bridge. The system can be operated manually from the bridge or be placed into an autonomous surveillance mode, in which it slowly pans back and forth, recording 15 minutes of video every three hours and a still image once every 15 seconds.
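The autonomous mode described above amounts to a simple capture duty cycle. As a rough sketch of that cadence (only the intervals come from the article; the data model and code here are illustrative, not the actual CRISP software):

```python
from dataclasses import dataclass

# Intervals from the article; everything else is a hypothetical sketch.
VIDEO_PERIOD_S = 3 * 60 * 60   # start a 15-minute video clip every 3 hours
STILL_PERIOD_S = 15            # capture one still image every 15 seconds

@dataclass
class Action:
    t: int      # seconds since autonomous mode was engaged
    kind: str   # "video_start" or "still"

def schedule(duration_s: int) -> list[Action]:
    """Enumerate capture events over a mission window, in time order."""
    actions = [Action(t, "video_start") for t in range(0, duration_s, VIDEO_PERIOD_S)]
    actions += [Action(t, "still") for t in range(0, duration_s, STILL_PERIOD_S)]
    return sorted(actions, key=lambda a: a.t)

# One day of autonomous operation.
day = schedule(24 * 60 * 60)
```

Over a 24-hour window this cadence yields 8 video clips and 5,760 still images, which gives a sense of why the full dataset is expected to reach several terabytes.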

    “The installation of the equipment was a unique and fun experience. As with any good project, our expectations going into the install did not meet reality,” says Michael Emily, the project’s IT systems administrator who traveled to Seattle for the install. Working with the ship’s crew, the laboratory team had to quickly adjust their route for running cables from the camera to the observation station after they discovered that the expected access points weren’t in fact accessible. “We had 100-foot cables made for this project just in case of this type of scenario, which was a good thing because we only had a few inches to spare,” Emily says.

    The CRISP project team plans to publicly release the dataset, anticipated to be about 4 terabytes in size, once the USCG science mission concludes in the fall.

    The goal in releasing the dataset is to enable the wider research community to develop better tools for those operating in the Arctic, especially as this region becomes more navigable. “Collecting and publishing the data allows for faster and greater progress than what we could accomplish on our own,” Kurucar adds. “It also enables the laboratory to engage in more advanced AI applications while others make more incremental advances using the dataset.”

    On top of providing the dataset, the laboratory team plans to provide a baseline object-detection model, from which others can make progress on their own models. More advanced AI applications planned for development are classifiers for specific objects in the scene and the ability to identify and track objects across images.

    Beyond assisting with USCG missions, this project could create an influential dataset for researchers looking to apply AI to data from the Arctic to help combat climate change, says Paul Metzger, who leads the AI Software Architectures and Algorithms Group.

    Metzger adds that the group was honored to be a part of this project and is excited to see the advances that come from applying AI to novel challenges facing the United States: “I’m extremely proud of how our group applies AI to the highest-priority challenges in our nation, from predicting outbreaks of Covid-19 and assisting the U.S. European Command in their support of Ukraine to now employing AI in the Arctic for maritime awareness.”

    Once the dataset is available, it will be free to download on the Lincoln Laboratory dataset website.

  • Helping the transportation sector adapt to a changing world

    After graduating from college, Nick Caros took a job as an engineer with a construction company, helping to manage the building of a new highway bridge right near where he grew up outside of Vancouver, British Columbia.  

    “I had a lot of friends that would use that new bridge to get to work,” Caros recalls. “They’d say, ‘You saved me like 20 minutes!’ That’s when I first realized that transportation could be a huge benefit to people’s lives.”

    Now a PhD candidate in the Urban Mobility Lab and the lead researcher for the MIT Transit Research Consortium, Caros works with seven transit agencies across the country to understand how workers’ transportation needs have changed as companies have adopted remote work policies.

    “Another cool thing about working on transportation is that everybody, even if they don’t engage with it on an academic level, has an opinion or wants to talk about it,” says Caros. “As soon as I mention I’ve worked with the T, they have something they want to talk about.”

    Caros is drawn to projects with social impact beyond saving his friends a few minutes during their commutes. He sees public transportation as a crucial component in combating climate change and is passionate about identifying and lowering the psychological barriers that prevent people around the world from taking advantage of their local transit systems.

    “The more I’ve learned about public transportation, the more I’ve come to realize it will play an essential part in decarbonizing urban transportation,” says Caros. “I want to continue working on these kinds of issues, like how we can make transportation more sustainable or promoting public transportation in places where it doesn’t exist or can be improved.”

    Caros says he doesn’t have a “transportation origin story,” like some of his peers who grew up in urban centers with robust public transit systems. As a child growing up in the Vancouver suburbs, he always enjoyed the outdoors, which were as close as his backyard. He chose to study engineering as an undergraduate at the University of British Columbia, fascinated by the hydroelectric dams that supply Vancouver with most of its power. But after two projects with the construction company, the second of which took him to Maryland to work on a fossil fuel project, he decided he needed a change.

    Not quite sure what he wanted to do next, Caros sought out the shortest master’s program he could find that interested him. That ended up being an 18-month master’s program in transportation planning and engineering at New York University. Initially intending to pursue the course-based program, Caros was soon offered the chance to be a research assistant in NYU’s Behavioral Urban Informatics, Logistics, and Transport Laboratory with Professor Joseph Chow. There, he worked to model an experimental transportation system of modular self-driving cars that could link and unlink with each other while in motion.

    “It was this really futuristic stuff,” says Caros. “It turned out to be a really cool project to work on because it’s kind of rare to have a blank-slate problem to try and solve. A lot of transportation engineering problems have largely been solved. We know how to make efficient and sustainable transportation systems; it’s just finding the political support and encouraging behavioral change that remains a challenge.”

    At NYU, Caros fell in love with research and the field of transportation. Later, he was drawn to MIT by its interdisciplinary PhD program, which spans urban studies and planning as well as civil engineering, and by the opportunity to work with Professor Jinhua Zhao.

    His research focuses on identifying “third places,” locations where some people go if their job gives them the flexibility to work remotely. Previously, transportation needs revolved around office spaces, typically located in city centers. With more people working from home, the first assumption is that transportation needs would decrease. But that’s not what Caros has found.

    “One major finding from our research is that people have changed where they’re going when they go to work,” says Caros. “A lot of people are working from home, but some are also working in other places, like coffee shops or co-working spaces. And these third places are not evenly distributed in Boston.”

    Identifying the concentration of these third places and which locations would benefit from them is the core of Caros’ dissertation. He’s building an algorithm that identifies ideal locations to build more shared workplaces based on both economic and social factors. Caros seeks to answer how you can minimize travel time across the board while leaving room for the spontaneous social interactions that drive a city’s productivity. His research is sponsored by seven of the largest transit agencies in the United States, which are members of the MIT Transit Research Consortium. Rather than a single agency sponsoring a single specific project, funding is pooled to tackle projects that address general topics that can apply to multiple cities.
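The objective Caros describes resembles a classic facility-location problem: choose sites so that total weighted travel time is minimized. A minimal sketch with invented zones, travel times, and worker counts (this is not his algorithm, just an illustration of the objective function):

```python
import itertools

# Hypothetical travel times (minutes) from each home zone to each candidate site.
travel_min = {
    ("Dorchester", "Downtown"): 25, ("Dorchester", "Quincy"): 12,
    ("Somerville", "Downtown"): 15, ("Somerville", "Quincy"): 40,
    ("Allston",    "Downtown"): 20, ("Allston",    "Quincy"): 45,
}
workers = {"Dorchester": 900, "Somerville": 700, "Allston": 500}  # remote-eligible workers
sites = ["Downtown", "Quincy"]

def total_cost(chosen: list[str]) -> int:
    # Each worker travels to the nearest chosen site; cost is worker-minutes.
    return sum(w * min(travel_min[(zone, s)] for s in chosen)
               for zone, w in workers.items())

def best_sites(k: int) -> list[str]:
    """Exhaustively pick the k candidate sites minimizing total travel."""
    return min((list(c) for c in itertools.combinations(sites, k)), key=total_cost)
```

With these invented numbers a single site would go Downtown, since it minimizes total worker-minutes; a real model would layer in the economic and social factors the article mentions.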

    These kinds of problems require a multidisciplinary approach that appeals to Caros. Even when diving into the technical details of a solution, he always keeps the bigger picture in mind. He is certain that changing people’s views of public transportation will be crucial in the fight against climate change.

    “A lot of it is not necessarily engineering, but understanding what the motivations of people are,” says Caros. “Transportation is a leading sector for carbon emissions in the U.S., and so figuring out what makes people tick and how you can get them to ride public transit more, for example, would help to reduce the overall carbon cost.”

    Following the completion of his degree, Caros will join the Organization for Economic Cooperation and Development. He already spent six months at its Paris headquarters as an intern during a leave from MIT, something his lab encourages all of its students to do. Last fall, he worked on drafting policy guidelines for new mobility services such as vehicle-share scooters, and addressing transportation equity issues in Ghana. Plus, living in Paris gave him the opportunity to practice his French. Growing up in Canada, he attended a French immersion school, and his internship offered his first opportunity to use the language outside of an academic context.

    Looking forward, Caros hopes to keep tackling projects that promote sustainable public transportation. There is an urgency in getting ahead of the curve, especially in cities experiencing rapid growth.

    “You kind of get locked in,” says Caros. “It becomes much harder to build sustainable transportation systems after the fact. But it’s really just a geometry problem. Trains and buses are a way more efficient way to move people using the same amount of space as private cars.”

  • Harnessing synthetic biology to make sustainable alternatives to petroleum products

    Reducing our reliance on fossil fuels is going to require a transformation in the way we make things. That’s because the hydrocarbons found in fuels like crude oil, natural gas, and coal are also in everyday items like plastics, clothing, and cosmetics.

    Now Visolis, founded by Deepak Dugar SM ’11, MBA ’13, PhD ’13, is combining synthetic biology with chemical catalysis to reinvent the way the world makes things — and reducing gigatons of greenhouse gas emissions in the process.

    The company — which uses a microbe to ferment biomass waste like wood chips and create a molecular building block called mevalonic acid — is more sustainably producing everything from car tires and cosmetics to aviation fuels by tweaking the chemical processes involved to make different byproducts.

    “We started with [the rubber component] isoprene as the main molecule we produce [from mevalonic acid], but we’ve expanded our platform with this unique combination of chemistry and biology that allows us to decarbonize multiple supply chains very rapidly and efficiently,” Dugar explains. “Imagine carbon-negative yoga pants. We can make that happen. Tires can be carbon-negative, personal care can lower its footprint — and we’re already selling into personal care. So in everything from personal care to apparel to industrial goods, our platform is enabling decarbonization of manufacturing.”

    “Carbon-negative” is a term Dugar uses a lot. Visolis has already partnered with some of the world’s largest consumers of isoprene, a precursor to rubber, and now Dugar wants to prove out the company’s process in other emissions-intensive industries.

    “Our process is carbon-negative because plants are taking CO2 from the air, and we take that plant matter and process it into something structural, like synthetic rubber, which is used for things like roofing, tires, and other applications,” Dugar explains. “Generally speaking, most of that material at the end of its life gets recycled, for example to tarmac or road, or, worst-case scenario, it ends up in a landfill, so the CO2 that was captured by the plant matter stays captured in the materials. That means our production can be carbon-negative depending on the emissions of the production process. That allows us to not only reduce climate change but start reversing it. That was an insight I had about 10 years ago at MIT.”

    Finding a path

    For his PhD, Dugar explored the economics of using microbes to make high-octane gas additives. He also took classes at the MIT Sloan School of Management on sustainability and entrepreneurship, including the particularly influential course 15.366 (Climate and Energy Ventures). The experience inspired him to start a company.

    “I wanted to work on something that could have the largest climate impact, and that was replacing petroleum,” Dugar says. “It was about replacing petroleum not just as a fuel but as a material as well. Everything from the clothes we wear to the furniture we sit on is often made using petroleum.”

    By analyzing recent advances in synthetic biology and making some calculations from first principles, Dugar decided that a microbial approach to cleaning up the production of rubber was viable. He participated in the MIT Clean Energy Prize and worked with others at MIT to prove out the idea. But it was still just an idea. After graduation, he took a consulting job at a large company, spending his nights and weekends renting lab space to continue trying to make his sustainable rubber a reality.

    After 18 months, by applying engineering concepts like design-for-scale to synthetic biology, Dugar was able to develop a microbe that met 80 percent of his criteria for making an intermediate molecule called mevalonic acid. From there, he developed a chemical catalysis process that converted mevalonic acid to isoprene, the main component of natural rubber. Visolis has since patented other chemical conversion processes that turn mevalonic acid to aviation fuel, polymers, and fabrics.

    Dugar left his consulting job in 2014 and was awarded a fellowship to work on Visolis full-time at the Lawrence Berkeley National Lab via Activate, an incubator empowering scientists to reinvent the world.

    From rubber to jet fuels

    Today, in addition to isoprene, Visolis is selling skin care products through the brand Ameva Bio, which produces mevalonic acid-based creams by recycling plant byproducts created in other processes. The company offers refillable bottles and even offsets emissions from the shipping of its products.

    “We are working throughout the supply chain,” Dugar says. “It made sense to clean up the isoprene part of the rubber supply chain rather than the entire supply chain. But we’re also producing molecules for skin that are better for you, so you can put something much more sustainable and healthier on your body instead of petrochemicals. We launched Ameva to demonstrate that brands can leverage synthetic biology to turn carbon-negative ingredients into high-performing products.”

    Visolis is also starting the process of gaining regulatory approval for its sustainable aviation fuel, which Dugar believes could have the biggest climate impact of any of the company’s products by cleaning up the production of fuels for commercial flight.

    “We’re working with leading companies to help them decarbonize aviation,” Dugar says. “If you look at the lifecycle of fuel, the current petroleum-based approach is we dig out hydrocarbons from the ground and burn them, emitting CO2 into the air. In our process, we take plant matter, which fixes CO2 and captures renewable energy in those bonds, and then we transfer that into aviation fuel plus things like synthetic rubber, yoga pants, and other things that continue to hold the carbon. So, our factories can still operate at net zero carbon emissions.”

    Visolis is already generating millions of dollars in revenue, and Dugar says his goal is to scale the company rapidly now that its platform molecule has been validated.

    “We have been scaling our technology by 10 times every two to three years and are now looking to increase deployment of our technology at the same pace, which is very exciting,” Dugar says. “If you extrapolate that, very quickly you get to massive impact. That’s our goal.”

  • System tracks movement of food through global humanitarian supply chain

    Although more than enough food is produced to feed everyone in the world, as many as 828 million people face hunger today. Poverty, social inequity, climate change, natural disasters, and political conflicts all contribute to inhibiting access to food. For decades, the U.S. Agency for International Development (USAID) Bureau for Humanitarian Assistance (BHA) has been a leader in global food assistance, supplying millions of metric tons of food to recipients worldwide. Alleviating hunger — and the conflict and instability hunger causes — is critical to U.S. national security.

    But BHA is only one player within a large, complex supply chain in which food gets handed off between more than 100 partner organizations before reaching its final destination. Traditionally, the movement of food through the supply chain has been a black-box operation, with stakeholders largely out of the loop about what happens to the food once it leaves their custody. This lack of direct visibility into operations is due to siloed data repositories, insufficient data sharing among stakeholders, and different data formats that operators must manually sort through and standardize. As a result, accurate, real-time information — such as where food shipments are at any given time, which shipments are affected by delays or food recalls, and when shipments have arrived at their final destination — is lacking. A centralized system capable of tracing food along its entire journey, from manufacture through delivery, would enable a more effective humanitarian response to food-aid needs.

    In 2020, a team from MIT Lincoln Laboratory began engaging with BHA to create an intelligent dashboard for their supply-chain operations. This dashboard brings together the expansive food-aid datasets from BHA’s existing systems into a single platform, with tools for visualizing and analyzing the data. When the team started developing the dashboard, they quickly realized the need for considerably more data than BHA had access to.

    “That’s where traceability comes in, with each handoff partner contributing key pieces of information as food moves through the supply chain,” explains Megan Richardson, a researcher in the laboratory’s Humanitarian Assistance and Disaster Relief Systems Group.

    Richardson and the rest of the team have been working with BHA and their partners to scope, build, and implement such an end-to-end traceability system. This system consists of serialized, unique identifiers (IDs) — akin to fingerprints — that are assigned to individual food items at the time they are produced. These individual IDs remain linked to items as they are aggregated along the supply chain, first domestically and then internationally. For example, individually tagged cans of vegetable oil get packaged into cartons; cartons are placed onto pallets and transported via railway and truck to warehouses; pallets are loaded onto shipping containers at U.S. ports; and pallets are unloaded and cartons are unpackaged overseas.
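The aggregation described above can be sketched as a tree of uniquely identified units, where a scan recorded against a pallet or container propagates to everything packed inside it. This is a minimal illustration only; the class names, ID formats, and scan fields are hypothetical, not the actual BHA or Lincoln Laboratory design.

```python
# Hypothetical sketch of item-level traceability through aggregation.
# Names, ID formats, and fields are illustrative, not the real system.

class TraceNode:
    """A uniquely identified unit: item, carton, pallet, or container."""
    def __init__(self, uid, kind):
        self.uid = uid
        self.kind = kind
        self.children = []
        self.history = []  # list of (location, status) scan events

    def pack_into(self, parent):
        """Aggregate this unit into a larger one (can -> carton -> pallet)."""
        parent.children.append(self)

    def record_scan(self, location, status):
        """A scan on a container applies to everything nested inside it."""
        self.history.append((location, status))
        for child in self.children:
            child.record_scan(location, status)

# Cans packed into a carton, carton placed onto a pallet
cans = [TraceNode(f"CAN-{i:04d}", "item") for i in range(12)]
carton = TraceNode("CTN-0001", "carton")
pallet = TraceNode("PLT-0001", "pallet")
for can in cans:
    can.pack_into(carton)
carton.pack_into(pallet)

# One scan at a receiving point updates every nested item's history
pallet.record_scan("Houston warehouse", "received")
print(cans[0].history)
```

Because each item keeps its own history, an individual can’s journey could in principle be reconstructed even after its carton is mixed onto pallets with other lots.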

    With a trace

    Today, visibility at the single-item level doesn’t exist. Most suppliers mark pallets with a lot number (a lot is a batch of items produced in the same run), but this is for internal purposes (i.e., to track issues stemming back to their production supply, like over-enriched ingredients or machinery malfunction), not data sharing. So, organizations know which supplier lot a pallet and carton are associated with, but they can’t track the unique history of an individual carton or item within that pallet. As the lots move further downstream toward their final destination, they are often mixed with lots from other productions, and possibly other commodity types altogether, because of space constraints. On the international side, such mixing and the lack of granularity make it difficult to quickly pull commodities out of the supply chain if food safety concerns arise. Current response times can span several months.

    “Commodities are grouped differently at different stages of the supply chain, so it is logical to track them in those groupings where needed,” Richardson says. “Our item-level granularity serves as a form of Rosetta Stone to enable stakeholders to efficiently communicate throughout these stages. We’re trying to enable a way to track not only the movement of commodities, including through their lot information, but also any problems arising independent of lot, like exposure to high humidity levels in a warehouse. Right now, we have no way to associate commodities with histories that may have resulted in an issue.”

    “You can now track your checked luggage across the world and the fish on your dinner plate,” adds Brice MacLaren, also a researcher in the laboratory’s Humanitarian Assistance and Disaster Relief Systems Group. “So, this technology isn’t new, but it’s new to BHA as they evolve their methodology for commodity tracing. The traceability system needs to be versatile, working across a wide variety of operators who take custody of the commodity along the supply chain and fitting into their existing best practices.”

    As food products make their way through the supply chain, operators at each receiving point would be able to scan these IDs via a Lincoln Laboratory-developed mobile application (app) to indicate a product’s current location and transaction status — for example, that it is en route on a particular shipping container or stored in a certain warehouse. This information would get uploaded to a secure traceability server. By scanning a product, operators would also see its history up until that point.   

    Hitting the mark

    At the laboratory, the team tested the feasibility of their traceability technology, exploring different ways to mark and scan items. In their testing, they considered barcodes and radio-frequency identification (RFID) tags and handheld and fixed scanners. Their analysis revealed 2D barcodes (specifically data matrices) and smartphone-based scanners were the most feasible options in terms of how the technology works and how it fits into existing operations and infrastructure.

    “We needed to come up with a solution that would be practical and sustainable in the field,” MacLaren says. “While scanners can automatically read any RFID tags in close proximity as someone is walking by, they can’t discriminate exactly where the tags are coming from. RFID is expensive, and it’s hard to read commodities in bulk. On the other hand, a phone can scan a barcode on a particular box and tell you that code goes with that box. The challenge then becomes figuring out how to present the codes for people to easily scan without significantly interrupting their usual processes for handling and moving commodities.” 

    As the team learned from partner representatives in Kenya and Djibouti, offloading at the ports is a chaotic, fast operation. At manual warehouses, porters fling bags over their shoulders or stack cartons atop their heads any which way they can and run them to a drop point; at bagging terminals, commodities come down a conveyor belt and land this way or that way. With this variability comes several questions: How many barcodes do you need on an item? Where should they be placed? What size should they be? What will they cost? The laboratory team is considering these questions, keeping in mind that the answers will vary depending on the type of commodity; vegetable oil cartons will have different specifications than, say, 50-kilogram bags of wheat or peas.

    Leaving a mark

    Leveraging results from their testing and insights from international partners, the team has been running a traceability pilot evaluating how their proposed system meshes with real-world domestic and international operations. The current pilot features a domestic component in Houston, Texas, and an international component in Ethiopia, and focuses on tracking individual cartons of vegetable oil and identifying damaged cans. The Ethiopian team with Catholic Relief Services recently received a container filled with pallets of uniquely barcoded cartons of vegetable oil cans (in the next pilot, the cans will be barcoded, too). They are now scanning items and collecting data on product damage by using smartphones with the laboratory-developed mobile traceability app on which they were trained. 

    “The partners in Ethiopia are comparing a couple lid types to determine whether some are more resilient than others,” Richardson says. “With the app — which is designed to scan commodities, collect transaction data, and keep history — the partners can take pictures of damaged cans and see if a trend with the lid type emerges.”

    Next, the team will run a series of pilots with the World Food Program (WFP), the world’s largest humanitarian organization. The first pilot will focus on data connectivity and interoperability, and the team will engage with suppliers to directly print barcodes on individual commodities instead of applying barcode labels to packaging, as they did in the initial feasibility testing. The WFP will provide input on which of their operations are best suited for testing the traceability system, considering factors like the network bandwidth of WFP staff and local partners, the commodity types being distributed, and the country context for scanning. The BHA will likely also prioritize locations for system testing.

    “Our goal is to provide an infrastructure to enable as close to real-time data exchange as possible between all parties, given intermittent power and connectivity in these environments,” MacLaren says.

In subsequent pilots, the team will try to integrate their approach with existing systems that partners rely on for tracking procurements, inventory, and movement of commodities under their custody so that this information is automatically pushed to the traceability server. The team also hopes to add a capability for real-time alerting of statuses, like the departure and arrival of commodities at a port or the exposure of unclaimed commodities to the elements. Real-time alerts would enable stakeholders to more efficiently respond to food-safety events. Currently, partners are forced to take a conservative approach, pulling out more commodities from the supply chain than are actually suspect, to reduce risk of harm. Both BHA and WFP are interested in testing out a food-safety event during one of the pilots to see how the traceability system works in enabling rapid communication and response.

    To implement this technology at scale will require some standardization for marking different commodity types as well as give and take among the partners on best practices for handling commodities. It will also require an understanding of country regulations and partner interactions with subcontractors, government entities, and other stakeholders.

    “Within several years, I think it’s possible for BHA to use our system to mark and trace all their food procured in the United States and sent internationally,” MacLaren says.

    Once collected, the trove of traceability data could be harnessed for other purposes, among them analyzing historical trends, predicting future demand, and assessing the carbon footprint of commodity transport. In the future, a similar traceability system could scale for nonfood items, including medical supplies distributed to disaster victims, resources like generators and water trucks localized in emergency-response scenarios, and vaccines administered during pandemics. Several groups at the laboratory are also interested in such a system to track items such as tools deployed in space or equipment people carry through different operational environments.

    “When we first started this program, colleagues were asking why the laboratory was involved in simple tasks like making a dashboard, marking items with barcodes, and using hand scanners,” MacLaren says. “Our impact here isn’t about the technology; it’s about providing a strategy for coordinated food-aid response and successfully implementing that strategy. Most importantly, it’s about people getting fed.” More

  • in

    Study: The ocean’s color is changing as a consequence of climate change

    The ocean’s color has changed significantly over the last 20 years, and the global trend is likely a consequence of human-induced climate change, report scientists at MIT, the National Oceanography Center in the U.K., and elsewhere.  

    In a study appearing today in Nature, the team writes that they have detected changes in ocean color over the past two decades that cannot be explained by natural, year-to-year variability alone. These color shifts, though subtle to the human eye, have occurred over 56 percent of the world’s oceans — an expanse that is larger than the total land area on Earth.

    In particular, the researchers found that tropical ocean regions near the equator have become steadily greener over time. The shift in ocean color indicates that ecosystems within the surface ocean must also be changing, as the color of the ocean is a literal reflection of the organisms and materials in its waters.

    At this point, the researchers cannot say how exactly marine ecosystems are changing to reflect the shifting color. But they are pretty sure of one thing: Human-induced climate change is likely the driver.

    “I’ve been running simulations that have been telling me for years that these changes in ocean color are going to happen,” says study co-author Stephanie Dutkiewicz, senior research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences and the Center for Global Change Science. “To actually see it happening for real is not surprising, but frightening. And these changes are consistent with man-induced changes to our climate.”

    “This gives additional evidence of how human activities are affecting life on Earth over a huge spatial extent,” adds lead author B. B. Cael PhD ’19 of the National Oceanography Center in Southampton, U.K. “It’s another way that humans are affecting the biosphere.”

    The study’s co-authors also include Stephanie Henson of the National Oceanography Center, Kelsey Bisson at Oregon State University, and Emmanuel Boss of the University of Maine.

    Above the noise

The ocean’s color is a visual product of whatever lies within its upper layers. Generally, waters that are deep blue reflect very little life, whereas greener waters indicate the presence of ecosystems, mainly phytoplankton: plant-like microbes that are abundant in the upper ocean and that contain the green pigment chlorophyll. The pigment helps plankton harvest sunlight, which they use to capture carbon dioxide from the atmosphere and convert it into sugars.

Phytoplankton are the foundation of the marine food web that sustains progressively more complex organisms, on up to krill, fish, seabirds, and marine mammals. Phytoplankton are also a powerful muscle in the ocean’s ability to capture and store carbon dioxide. Scientists are therefore keen to monitor phytoplankton across the surface oceans and to see how these essential communities might respond to climate change. To do so, scientists have tracked changes in chlorophyll, based on the ratio of how much blue versus green light is reflected from the ocean surface, which can be monitored from space.

But around a decade ago, Henson, who is a co-author of the current study, published a paper with colleagues showing that, if scientists were tracking chlorophyll alone, it would take at least 30 years of continuous monitoring to detect any trend driven specifically by climate change. The reason, the team argued, was that the large natural variations in chlorophyll from year to year would overwhelm any anthropogenic influence on chlorophyll concentrations. It would therefore take several decades to pick out a meaningful, climate-change-driven signal amid the normal noise.

    In 2019, Dutkiewicz and her colleagues published a separate paper, showing through a new model that the natural variation in other ocean colors is much smaller compared to that of chlorophyll. Therefore, any signal of climate-change-driven changes should be easier to detect over the smaller, normal variations of other ocean colors. They predicted that such changes should be apparent within 20, rather than 30 years of monitoring.

    “So I thought, doesn’t it make sense to look for a trend in all these other colors, rather than in chlorophyll alone?” Cael says. “It’s worth looking at the whole spectrum, rather than just trying to estimate one number from bits of the spectrum.”

The power of seven

    In the current study, Cael and the team analyzed measurements of ocean color taken by the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua satellite, which has been monitoring ocean color for 21 years. MODIS takes measurements in seven visible wavelengths, including the two colors researchers traditionally use to estimate chlorophyll.

    The differences in color that the satellite picks up are too subtle for human eyes to differentiate. Much of the ocean appears blue to our eye, whereas the true color may contain a mix of subtler wavelengths, from blue to green and even red.

    Cael carried out a statistical analysis using all seven ocean colors measured by the satellite from 2002 to 2022 together. He first looked at how much the seven colors changed from region to region during a given year, which gave him an idea of their natural variations. He then zoomed out to see how these annual variations in ocean color changed over a longer stretch of two decades. This analysis turned up a clear trend, above the normal year-to-year variability.
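As a rough illustration of this style of analysis, one can fit a linear trend to a reflectance time series at a single wavelength and compare the change accumulated over the record against the residual year-to-year variability. This sketch uses synthetic data and ordinary least squares; the study’s actual method works on all seven MODIS wavelengths together and is more sophisticated.

```python
# Illustrative sketch only (not the study's actual method): detecting a
# long-term trend that stands above year-to-year variability.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2002, 2023)

# Synthetic reflectance at one wavelength: small secular drift plus
# natural interannual noise (both magnitudes are made up for illustration)
drift = 0.002 * (years - years[0])
noise = rng.normal(0.0, 0.01, size=years.size)
reflectance = 0.5 + drift + noise

# Least-squares linear trend over the 21-year record
slope, intercept = np.polyfit(years, reflectance, 1)

# Compare the trend accumulated over the record to interannual variability
residual_std = np.std(reflectance - (slope * years + intercept))
signal = slope * (years[-1] - years[0])
print(f"trend over record: {signal:.4f}, interannual std: {residual_std:.4f}")
```

When the accumulated trend clearly exceeds the residual scatter, the signal stands above the noise; colors with smaller natural variability than chlorophyll reach that threshold sooner, which is the intuition behind the 20-year versus 30-year estimates.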

    To see whether this trend is related to climate change, he then looked to Dutkiewicz’s model from 2019. This model simulated the Earth’s oceans under two scenarios: one with the addition of greenhouse gases, and the other without it. The greenhouse-gas model predicted that a significant trend should show up within 20 years and that this trend should cause changes to ocean color in about 50 percent of the world’s surface oceans — almost exactly what Cael found in his analysis of real-world satellite data.

    “This suggests that the trends we observe are not a random variation in the Earth system,” Cael says. “This is consistent with anthropogenic climate change.”

    The team’s results show that monitoring ocean colors beyond chlorophyll could give scientists a clearer, faster way to detect climate-change-driven changes to marine ecosystems.

    “The color of the oceans has changed,” Dutkiewicz says. “And we can’t say how. But we can say that changes in color reflect changes in plankton communities, that will impact everything that feeds on plankton. It will also change how much the ocean will take up carbon, because different types of plankton have different abilities to do that. So, we hope people take this seriously. It’s not only models that are predicting these changes will happen. We can now see it happening, and the ocean is changing.”

    This research was supported, in part, by NASA. More

  • in

    Studying rivers from worlds away

    Rivers have flowed on two other worlds in the solar system besides Earth: Mars, where dry tracks and craters are all that’s left of ancient rivers and lakes, and Titan, Saturn’s largest moon, where rivers of liquid methane still flow today.

    A new technique developed by MIT geologists allows scientists to see how intensely rivers used to flow on Mars, and how they currently flow on Titan. The method uses satellite observations to estimate the rate at which rivers move fluid and sediment downstream.

    Applying their new technique, the MIT team calculated how fast and deep rivers were in certain regions on Mars more than 1 billion years ago. They also made similar estimates for currently active rivers on Titan, even though the moon’s thick atmosphere and distance from Earth make it harder to explore, with far fewer available images of its surface than those of Mars.

    “What’s exciting about Titan is that it’s active. With this technique, we have a method to make real predictions for a place where we won’t get more data for a long time,” says Taylor Perron, the Cecil and Ida Green Professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “And on Mars, it gives us a time machine, to take the rivers that are dead now and get a sense of what they were like when they were actively flowing.”

    Perron and his colleagues have published their results today in the Proceedings of the National Academy of Sciences. Perron’s MIT co-authors are first author Samuel Birch, Paul Corlies, and Jason Soderblom, with Rose Palermo and Andrew Ashton of the Woods Hole Oceanographic Institution (WHOI), Gary Parker of the University of Illinois at Urbana-Champaign, and collaborators from the University of California at Los Angeles, Yale University, and Cornell University.

    River math

The team’s study grew out of Perron and Birch’s puzzlement over Titan’s rivers. Images taken by NASA’s Cassini spacecraft show a curious lack of fan-shaped deltas at the mouths of most of the moon’s rivers, unlike many rivers on Earth. Could it be that Titan’s rivers don’t carry enough flow or sediment to build deltas?

    The group built on the work of co-author Gary Parker, who in the 2000s developed a series of mathematical equations to describe river flow on Earth. Parker had studied measurements of rivers taken directly in the field by others. From these data, he found there were certain universal relationships between a river’s physical dimensions — its width, depth, and slope — and the rate at which it flowed. He drew up equations to describe these relationships mathematically, accounting for other variables such as the gravitational field acting on the river, and the size and density of the sediment being pushed along a river’s bed.

    “This means that rivers with different gravity and materials should follow similar relationships,” Perron says. “That opened up a possibility to apply this to other planets too.”

    Getting a glimpse

    On Earth, geologists can make field measurements of a river’s width, slope, and average sediment size, all of which can be fed into Parker’s equations to accurately predict a river’s flow rate, or how much water and sediment it can move downstream. But for rivers on other planets, measurements are more limited, and largely based on images and elevation measurements collected by remote satellites. For Mars, multiple orbiters have taken high-resolution images of the planet. For Titan, views are few and far between.

    Birch realized that any estimate of river flow on Mars or Titan would have to be based on the few characteristics that can be measured from remote images and topography — namely, a river’s width and slope. With some algebraic tinkering, he adapted Parker’s equations to work only with width and slope inputs. He then assembled data from 491 rivers on Earth, tested the modified equations on these rivers, and found that the predictions based solely on each river’s width and slope were accurate.
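To give a rough sense of how discharge can be estimated from remotely measurable quantities, the sketch below uses Manning’s equation for a wide rectangular channel with an assumed width-to-depth ratio and roughness. This is a textbook stand-in, not the adapted Parker relations the team actually used, and every parameter here is an assumption for illustration.

```python
# Illustrative only: back-of-the-envelope discharge from width and slope
# via Manning's equation. NOT the study's adapted Parker relations; the
# roughness n and width-to-depth ratio are assumed values.
import math

def estimate_discharge(width_m, slope, n=0.03, width_depth_ratio=20.0):
    """Rough discharge (m^3/s) for a wide rectangular channel on Earth."""
    depth = width_m / width_depth_ratio            # assumed geometry
    # For a wide channel, hydraulic radius is approximately the depth
    velocity = (1.0 / n) * depth ** (2.0 / 3.0) * math.sqrt(slope)
    return width_m * depth * velocity

# A hypothetical 200 m-wide river on a gentle slope of 1e-4
q = estimate_discharge(200.0, 1e-4)
print(f"~{q:.0f} m^3/s")
```

With these assumed numbers the estimate comes out on the order of a few thousand cubic meters per second, comparable to a large terrestrial river; the appeal of the approach is that width and slope are exactly the quantities recoverable from orbital images and topography.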

    Then, he applied the equations to Mars, and specifically, to the ancient rivers leading into Gale and Jezero Craters, both of which are thought to have been water-filled lakes billions of years ago. To predict the flow rate of each river, he plugged into the equations Mars’ gravity, and estimates of each river’s width and slope, based on images and elevation measurements taken by orbiting satellites.

    From their predictions of flow rate, the team found that rivers likely flowed for at least 100,000 years at Gale Crater and at least 1 million years at Jezero Crater — long enough to have possibly supported life. They were also able to compare their predictions of the average size of sediment on each river’s bed with actual field measurements of Martian grains near each river, taken by NASA’s Curiosity and Perseverance rovers. These few field measurements allowed the team to check that their equations, applied on Mars, were accurate.

    The team then took their approach to Titan. They zeroed in on two locations where river slopes can be measured, including a river that flows into a lake the size of Lake Ontario. This river appears to form a delta as it feeds into the lake. However, the delta is one of only a few thought to exist on the moon — nearly every viewable river flowing into a lake mysteriously lacks a delta. The team also applied their method to one of these other delta-less rivers.

They calculated both rivers’ flow and found that they may be comparable to some of the biggest rivers on Earth, with estimated flow rates as large as the Mississippi’s. Both rivers should move enough sediment to build up deltas. Yet, most rivers on Titan lack the fan-shaped deposits. Something else must be at work to explain this lack of river deposits.

In another finding, the team calculated that rivers on Titan should be wider and have a gentler slope than rivers carrying the same flow on Earth or Mars. “Titan is the most Earth-like place,” Birch says. “We’ve only gotten a glimpse of it. There’s so much more that we know is down there, and this remote technique is pushing us a little closer.”

    This research was supported, in part, by NASA and the Heising-Simons Foundation. More