More stories

  • A new dataset of Arctic images will spur artificial intelligence research

    As the U.S. Coast Guard (USCG) icebreaker Healy takes part in a voyage across the North Pole this summer, it is capturing images of the Arctic to further the study of this rapidly changing region. Lincoln Laboratory researchers installed a camera system aboard the Healy while the ship was in port in Seattle, before it embarked on a three-month science mission on July 11. The resulting dataset, which will be one of the first of its kind, will be used to develop artificial intelligence tools that can analyze Arctic imagery.

    “This dataset not only can help mariners navigate more safely and operate more efficiently, but also help protect our nation by providing critical maritime domain awareness and an improved understanding of how AI analysis can be brought to bear in this challenging and unique environment,” says Jo Kurucar, a researcher in Lincoln Laboratory’s AI Software Architectures and Algorithms Group, which led this project.

    As the planet warms and sea ice melts, Arctic passages are opening up to more traffic, from military vessels to ships conducting illegal fishing. These movements may pose national security challenges to the United States. The opening Arctic also raises questions about how the region’s climate, wildlife, and geography are changing.

    Today, very few imagery datasets of the Arctic exist to study these changes. Overhead images from satellites or aircraft can only provide limited information about the environment. An outward-looking camera attached to a ship can capture more details of the setting and different angles of objects, such as other ships, in the scene. These types of images can then be used to train AI computer-vision tools, which can help the USCG plan naval missions and automate analysis. According to Kurucar, USCG assets in the Arctic are spread thin and can benefit greatly from AI tools, which can act as a force multiplier.

    The Healy is the USCG’s largest and most technologically advanced icebreaker. Given its current mission, it was a fitting candidate to be equipped with a new sensor to gather this dataset. The laboratory research team collaborated with the USCG Research and Development Center to determine the sensor requirements. Together, they developed the Cold Region Imaging and Surveillance Platform (CRISP).

    “Lincoln Laboratory has an excellent relationship with the Coast Guard, especially with the Research and Development Center. Over a decade, we’ve established ties that enabled the deployment of the CRISP system,” says Amna Greaves, the CRISP project lead and an assistant leader in the AI Software Architectures and Algorithms Group. “We have strong ties not only because of the USCG veterans working at the laboratory and in our group, but also because our technology missions are complementary. Today it was deploying infrared sensing in the Arctic; tomorrow it could be operating quadruped robot dogs on a fast-response cutter.”

    The CRISP system comprises a long-wave infrared camera, manufactured by Teledyne FLIR (for forward-looking infrared), that is designed for harsh maritime environments. The camera can stabilize itself during rough seas and image in complete darkness, fog, and glare. It is paired with a GPS-enabled time-synchronized clock and a network video recorder to record both video and still imagery along with GPS-positional data.  

    The camera is mounted at the front of the ship’s fly bridge, and the electronics are housed in a ruggedized rack on the bridge. The system can be operated manually from the bridge or be placed into an autonomous surveillance mode, in which it slowly pans back and forth, recording 15 minutes of video every three hours and a still image once every 15 seconds.
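
    For readers curious how such a duty cycle might look in software, here is a minimal sketch in Python. The timing values come from the description above; the camera and GPS-clock interfaces (`pan_step`, `capture_still`, `record_video`) are hypothetical stand-ins, not the actual CRISP control software.

    ```python
    import time

    # Duty cycle described above: a still image every 15 seconds, plus
    # 15 minutes of video every three hours, while the camera slowly pans.
    STILL_INTERVAL_S = 15
    VIDEO_INTERVAL_S = 3 * 60 * 60
    VIDEO_DURATION_S = 15 * 60

    def autonomous_surveillance(camera, gps_clock):
        """Hypothetical control loop for an autonomous surveillance mode."""
        last_video = None
        while True:
            now = gps_clock.now()        # GPS-disciplined timestamp (assumed API)
            camera.pan_step()            # advance the slow back-and-forth pan (assumed API)
            camera.capture_still(timestamp=now, position=gps_clock.position())
            if last_video is None or now - last_video >= VIDEO_INTERVAL_S:
                camera.record_video(duration_s=VIDEO_DURATION_S, timestamp=now)
                last_video = now
            time.sleep(STILL_INTERVAL_S)
    ```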

    “The installation of the equipment was a unique and fun experience. As with any good project, our expectations going into the install did not meet reality,” says Michael Emily, the project’s IT systems administrator who traveled to Seattle for the install. Working with the ship’s crew, the laboratory team had to quickly adjust their route for running cables from the camera to the observation station after they discovered that the expected access points weren’t in fact accessible. “We had 100-foot cables made for this project just in case of this type of scenario, which was a good thing because we only had a few inches to spare,” Emily says.

    The CRISP project team plans to publicly release the dataset, anticipated to be about 4 terabytes in size, once the USCG science mission concludes in the fall.

    The goal in releasing the dataset is to enable the wider research community to develop better tools for those operating in the Arctic, especially as this region becomes more navigable. “Collecting and publishing the data allows for faster and greater progress than what we could accomplish on our own,” Kurucar adds. “It also enables the laboratory to engage in more advanced AI applications while others make more incremental advances using the dataset.”

    On top of providing the dataset, the laboratory team plans to provide a baseline object-detection model, from which others can make progress on their own models. More advanced AI applications planned for development are classifiers for specific objects in the scene and the ability to identify and track objects across images.
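
    As a rough illustration of what a baseline object-detection model could look like, the sketch below fine-tunes a COCO-pretrained Faster R-CNN from torchvision on a new label set. The class list is hypothetical, and the laboratory’s actual baseline model and training pipeline are not described in the article.

    ```python
    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    # Hypothetical label set for maritime Arctic scenes; "background" is class 0.
    CLASSES = ["background", "vessel", "ice_floe", "buoy"]

    def build_baseline_detector(num_classes: int = len(CLASSES)):
        """Start from a COCO-pretrained detector and swap in a new prediction head."""
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
        return model

    model = build_baseline_detector()
    model.eval()
    with torch.no_grad():
        # One dummy long-wave infrared frame, replicated to three channels.
        frame = torch.rand(3, 512, 640)
        detections = model([frame])[0]   # dict with "boxes", "labels", "scores"
    ```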

    Beyond assisting with USCG missions, this project could create an influential dataset for researchers looking to apply AI to data from the Arctic to help combat climate change, says Paul Metzger, who leads the AI Software Architectures and Algorithms Group.

    Metzger adds that the group was honored to be a part of this project and is excited to see the advances that come from applying AI to novel challenges facing the United States: “I’m extremely proud of how our group applies AI to the highest-priority challenges in our nation, from predicting outbreaks of Covid-19 and assisting the U.S. European Command in their support of Ukraine to now employing AI in the Arctic for maritime awareness.”

    Once the dataset is available, it will be free to download on the Lincoln Laboratory dataset website.

  • System tracks movement of food through global humanitarian supply chain

    Although more than enough food is produced to feed everyone in the world, as many as 828 million people face hunger today. Poverty, social inequity, climate change, natural disasters, and political conflicts all contribute to inhibiting access to food. For decades, the U.S. Agency for International Development (USAID) Bureau for Humanitarian Assistance (BHA) has been a leader in global food assistance, supplying millions of metric tons of food to recipients worldwide. Alleviating hunger — and the conflict and instability hunger causes — is critical to U.S. national security.

    But BHA is only one player within a large, complex supply chain in which food gets handed off between more than 100 partner organizations before reaching its final destination. Traditionally, the movement of food through the supply chain has been a black-box operation, with stakeholders largely out of the loop about what happens to the food once it leaves their custody. This lack of direct visibility into operations is due to siloed data repositories, insufficient data sharing among stakeholders, and different data formats that operators must manually sort through and standardize. As a result, accurate, real-time information — such as where food shipments are at any given time, which shipments are affected by delays or food recalls, and when shipments have arrived at their final destination — is lacking. A centralized system capable of tracing food along its entire journey, from manufacture through delivery, would enable a more effective humanitarian response to food-aid needs.

    In 2020, a team from MIT Lincoln Laboratory began engaging with BHA to create an intelligent dashboard for their supply-chain operations. This dashboard brings together the expansive food-aid datasets from BHA’s existing systems into a single platform, with tools for visualizing and analyzing the data. When the team started developing the dashboard, they quickly realized the need for considerably more data than BHA had access to.

    “That’s where traceability comes in, with each handoff partner contributing key pieces of information as food moves through the supply chain,” explains Megan Richardson, a researcher in the laboratory’s Humanitarian Assistance and Disaster Relief Systems Group.

    Richardson and the rest of the team have been working with BHA and their partners to scope, build, and implement such an end-to-end traceability system. This system consists of serialized, unique identifiers (IDs) — akin to fingerprints — that are assigned to individual food items at the time they are produced. These individual IDs remain linked to items as they are aggregated along the supply chain, first domestically and then internationally. For example, individually tagged cans of vegetable oil get packaged into cartons; cartons are placed onto pallets and transported via railway and truck to warehouses; pallets are loaded onto shipping containers at U.S. ports; and pallets are unloaded and cartons are unpackaged overseas.
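
    One simple way to picture the aggregation described above is as a tree of serialized IDs, where each item keeps a link to the unit that currently contains it. The sketch below is illustrative only; the ID formats and field names are hypothetical, not BHA’s or the laboratory’s actual data model.

    ```python
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class TraceableUnit:
        """An item, carton, pallet, or container carrying a serialized unique ID."""
        uid: str                                   # the "fingerprint" assigned at production
        level: str                                 # "item", "carton", "pallet", "container"
        parent: Optional["TraceableUnit"] = None
        children: list = field(default_factory=list)

        def aggregate(self, child: "TraceableUnit") -> None:
            """Record that a unit was packed into this one (e.g., can -> carton)."""
            child.parent = self
            self.children.append(child)

        def lineage(self):
            """Walk upward to reconstruct where an individual item currently sits."""
            unit, chain = self, []
            while unit is not None:
                chain.append((unit.level, unit.uid))
                unit = unit.parent
            return chain

    # Example: a tagged can of vegetable oil packed into a carton on a pallet.
    can = TraceableUnit("ITEM-000123", "item")
    carton = TraceableUnit("CTN-4711", "carton")
    pallet = TraceableUnit("PLT-88", "pallet")
    carton.aggregate(can)
    pallet.aggregate(carton)
    print(can.lineage())   # [('item', 'ITEM-000123'), ('carton', 'CTN-4711'), ('pallet', 'PLT-88')]
    ```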

    With a trace

    Today, visibility at the single-item level doesn’t exist. Most suppliers mark pallets with a lot number (a lot is a batch of items produced in the same run), but this is for internal purposes (i.e., to track issues stemming back to their production supply, like over-enriched ingredients or machinery malfunction), not data sharing. So, organizations know which supplier lot a pallet and carton are associated with, but they can’t track the unique history of an individual carton or item within that pallet. As the lots move further downstream toward their final destination, they are often mixed with lots from other productions, and possibly other commodity types altogether, because of space constraints. On the international side, such mixing and the lack of granularity make it difficult to quickly pull commodities out of the supply chain if food safety concerns arise. Current response times can span several months.

    “Commodities are grouped differently at different stages of the supply chain, so it is logical to track them in those groupings where needed,” Richardson says. “Our item-level granularity serves as a form of Rosetta Stone to enable stakeholders to efficiently communicate throughout these stages. We’re trying to enable a way to track not only the movement of commodities, including through their lot information, but also any problems arising independent of lot, like exposure to high humidity levels in a warehouse. Right now, we have no way to associate commodities with histories that may have resulted in an issue.”

    “You can now track your checked luggage across the world and the fish on your dinner plate,” adds Brice MacLaren, also a researcher in the laboratory’s Humanitarian Assistance and Disaster Relief Systems Group. “So, this technology isn’t new, but it’s new to BHA as they evolve their methodology for commodity tracing. The traceability system needs to be versatile, working across a wide variety of operators who take custody of the commodity along the supply chain and fitting into their existing best practices.”

    As food products make their way through the supply chain, operators at each receiving point would be able to scan these IDs via a Lincoln Laboratory-developed mobile application (app) to indicate a product’s current location and transaction status — for example, that it is en route on a particular shipping container or stored in a certain warehouse. This information would get uploaded to a secure traceability server. By scanning a product, operators would also see its history up until that point.   
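
    Here is a minimal sketch of the kind of scan event such an app might send, assuming a simple REST-style server. The endpoint, fields, and statuses are hypothetical; the actual traceability server interface is not described in the article.

    ```python
    import datetime
    import requests

    # Hypothetical endpoint standing in for the secure traceability server.
    TRACEABILITY_SERVER = "https://traceability.example.org/api/scans"

    def report_scan(uid: str, location: str, status: str) -> dict:
        """Upload one scan event, then fetch the item's history up to this point."""
        event = {
            "uid": uid,                                   # serialized ID read from the barcode
            "location": location,                         # e.g., "Port of Houston, TX"
            "status": status,                             # e.g., "loaded", "in_warehouse"
            "scanned_at": datetime.datetime.utcnow().isoformat() + "Z",
        }
        requests.post(TRACEABILITY_SERVER, json=event, timeout=10).raise_for_status()
        history = requests.get(f"{TRACEABILITY_SERVER}/{uid}", timeout=10)
        history.raise_for_status()
        return history.json()                             # prior custody and transaction records
    ```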

    Hitting the mark

    At the laboratory, the team tested the feasibility of their traceability technology, exploring different ways to mark and scan items. In their testing, they considered barcodes and radio-frequency identification (RFID) tags, as well as handheld and fixed scanners. Their analysis revealed that 2D barcodes (specifically data matrices) and smartphone-based scanners were the most feasible options in terms of how the technology works and how it fits into existing operations and infrastructure.

    “We needed to come up with a solution that would be practical and sustainable in the field,” MacLaren says. “While scanners can automatically read any RFID tags in close proximity as someone is walking by, they can’t discriminate exactly where the tags are coming from. RFID is expensive, and it’s hard to read commodities in bulk. On the other hand, a phone can scan a barcode on a particular box and tell you that code goes with that box. The challenge then becomes figuring out how to present the codes for people to easily scan without significantly interrupting their usual processes for handling and moving commodities.” 

    As the team learned from partner representatives in Kenya and Djibouti, offloading at the ports is a chaotic, fast operation. At manual warehouses, porters fling bags over their shoulders or stack cartons atop their heads any which way they can and run them to a drop point; at bagging terminals, commodities come down a conveyor belt and land this way or that way. With this variability comes several questions: How many barcodes do you need on an item? Where should they be placed? What size should they be? What will they cost? The laboratory team is considering these questions, keeping in mind that the answers will vary depending on the type of commodity; vegetable oil cartons will have different specifications than, say, 50-kilogram bags of wheat or peas.

    Leaving a mark

    Leveraging results from their testing and insights from international partners, the team has been running a traceability pilot evaluating how their proposed system meshes with real-world domestic and international operations. The current pilot features a domestic component in Houston, Texas, and an international component in Ethiopia, and focuses on tracking individual cartons of vegetable oil and identifying damaged cans. The Ethiopian team with Catholic Relief Services recently received a container filled with pallets of uniquely barcoded cartons of vegetable oil cans (in the next pilot, the cans will be barcoded, too). They are now scanning items and collecting data on product damage by using smartphones with the laboratory-developed mobile traceability app on which they were trained. 

    “The partners in Ethiopia are comparing a couple lid types to determine whether some are more resilient than others,” Richardson says. “With the app — which is designed to scan commodities, collect transaction data, and keep history — the partners can take pictures of damaged cans and see if a trend with the lid type emerges.”

    Next, the team will run a series of pilots with the World Food Program (WFP), the world’s largest humanitarian organization. The first pilot will focus on data connectivity and interoperability, and the team will engage with suppliers to directly print barcodes on individual commodities instead of applying barcode labels to packaging, as they did in the initial feasibility testing. The WFP will provide input on which of their operations are best suited for testing the traceability system, considering factors like the network bandwidth of WFP staff and local partners, the commodity types being distributed, and the country context for scanning. The BHA will likely also prioritize locations for system testing.

    “Our goal is to provide an infrastructure to enable as close to real-time data exchange as possible between all parties, given intermittent power and connectivity in these environments,” MacLaren says.

    In subsequent pilots, the team will try to integrate their approach with existing systems that partners rely on for tracking procurements, inventory, and movement of commodities under their custody so that this information is automatically pushed to the traceability server. The team also hopes to add a capability for real-time alerting of statuses, like the departure and arrival of commodities at a port or the exposure of unclaimed commodities to the elements. Real-time alerts would enable stakeholders to respond more efficiently to food-safety events. Currently, partners are forced to take a conservative approach, pulling more commodities out of the supply chain than are actually suspect, to reduce the risk of harm. Both BHA and WFP are interested in testing out a food-safety event during one of the pilots to see how the traceability system enables rapid communication and response.

    To implement this technology at scale will require some standardization for marking different commodity types as well as give and take among the partners on best practices for handling commodities. It will also require an understanding of country regulations and partner interactions with subcontractors, government entities, and other stakeholders.

    “Within several years, I think it’s possible for BHA to use our system to mark and trace all their food procured in the United States and sent internationally,” MacLaren says.

    Once collected, the trove of traceability data could be harnessed for other purposes, among them analyzing historical trends, predicting future demand, and assessing the carbon footprint of commodity transport. In the future, a similar traceability system could scale for nonfood items, including medical supplies distributed to disaster victims, resources like generators and water trucks localized in emergency-response scenarios, and vaccines administered during pandemics. Several groups at the laboratory are also interested in such a system to track items such as tools deployed in space or equipment people carry through different operational environments.

    “When we first started this program, colleagues were asking why the laboratory was involved in simple tasks like making a dashboard, marking items with barcodes, and using hand scanners,” MacLaren says. “Our impact here isn’t about the technology; it’s about providing a strategy for coordinated food-aid response and successfully implementing that strategy. Most importantly, it’s about people getting fed.”

  • Cutting urban carbon emissions by retrofitting buildings

    To support the worldwide struggle to reduce carbon emissions, many cities have made public pledges to cut their carbon emissions in half by 2030, and some have promised to be carbon neutral by 2050. Buildings can be responsible for more than half of a municipality’s carbon emissions. Today, new buildings are typically designed in ways that minimize energy use and carbon emissions, so attention is turning to cleaning up existing buildings.

    A decade ago, leaders in some cities took the first step in that process: They quantified their problem. Based on data from their utilities on natural gas and electricity consumption and standard pollutant-emission rates, they calculated how much carbon came from their buildings. They then adopted policies to encourage retrofits, such as adding insulation, switching to double-glazed windows, or installing rooftop solar panels. But will those steps be enough to meet their pledges?

    “In nearly all cases, cities have no clear plan for how they’re going to reach their goal,” says Christoph Reinhart, a professor in the Department of Architecture and director of the Building Technology Program. “That’s where our work comes in. We aim to help them perform analyses so they can say, ‘If we, as a community, do A, B, and C to buildings of a certain type within our jurisdiction, then we are going to get there.’”

    To support those analyses, Reinhart and a team in the MIT Sustainable Design Lab (SDL) — PhD candidate Zachary M. Berzolla SM ’21; former doctoral student Yu Qian Ang PhD ’22, now a research collaborator at the SDL; and former postdoc Samuel Letellier-Duchesne, now a senior building performance analyst at the international building engineering and consulting firm Introba — launched a publicly accessible website providing a series of simulation tools and a process for using them to determine the impacts of planned steps on a specific building stock. Says Reinhart: “The takeaway can be a clear technology pathway — a combination of building upgrades, renewable energy deployments, and other measures that will enable a community to reach its carbon-reduction goals for their built environment.”

    Analyses performed in collaboration with policymakers from selected cities around the world yielded insights demonstrating that reaching current goals will require more effort than city representatives and — in a few cases — even the research team had anticipated.

    Exploring carbon-reduction pathways

    The researchers’ approach builds on a physics-based “building energy model,” or BEM, akin to those that architects use to design high-performance green buildings. In 2013, Reinhart and his team developed a method of extending that concept to analyze a cluster of buildings. Based on publicly available geographic information system (GIS) data, including each building’s type, footprint, and year of construction, the method defines the neighborhood — including trees, parks, and so on — and then, using meteorological data, models how the buildings will interact, the airflows among them, and their energy use. The result is an “urban building energy model,” or UBEM, for a neighborhood or a whole city.

    The website developed by the MIT team enables neighborhoods and cities to develop their own UBEM and to use it to calculate their current building energy use and resulting carbon emissions, and then how those outcomes would change assuming different retrofit programs or other measures being implemented or considered. “The website — UBEM.io — provides step-by-step instructions and all the simulation tools that a team will need to perform an analysis,” says Reinhart.

    The website starts by describing three roles required to perform an analysis: a local sustainability champion who is familiar with the municipality’s carbon-reduction efforts; a GIS manager who has access to the municipality’s urban datasets and maintains a digital model of the built environment; and an energy modeler — typically a hired consultant — who has a background in green building consulting and individual building energy modeling.

    The team begins by defining “shallow” and “deep” building retrofit scenarios. To explain, Reinhart offers some examples: “‘Shallow’ refers to things that just happen, like when you replace your old, failing appliances with new, energy-efficient ones, or you install LED light bulbs and weatherstripping everywhere,” he says. “‘Deep’ adds to that list things you might do only every 20 years, such as ripping out walls and putting in insulation or replacing your gas furnace with an electric heat pump.”

    Once those scenarios are defined, the GIS manager uploads to UBEM.io a dataset of information about the city’s buildings, including their locations and attributes such as geometry, height, age, and use (e.g., commercial, retail, residential). The energy modeler then builds a UBEM to calculate the energy use and carbon emissions of the existing building stock. Once that baseline is established, the energy modeler can calculate how specific retrofit measures will change the outcomes.
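
    The arithmetic behind those comparisons is straightforward once the simulations have run: each scenario yields an energy use intensity per building archetype, which is multiplied by floor area and by emission factors for electricity and gas. The sketch below uses made-up placeholder numbers, not UBEM.io outputs.

    ```python
    # Placeholder energy use intensities (kWh per m2 per year) that a UBEM
    # simulation might produce for one building archetype under each scenario.
    EUI = {"baseline": 210.0, "shallow_retrofit": 175.0, "deep_retrofit": 110.0}

    # Placeholder emission factors (kg CO2e per kWh delivered).
    GRID_FACTOR = {"today": 0.35, "clean_grid": 0.02}
    GAS_FACTOR = 0.18
    ELECTRIC_SHARE = 0.6     # assumed fraction of energy delivered as electricity

    def emissions_per_m2(eui_kwh_m2, grid_factor):
        """kg CO2e per m2 of floor area per year, split between electricity and gas."""
        electric = eui_kwh_m2 * ELECTRIC_SHARE * grid_factor
        gas = eui_kwh_m2 * (1.0 - ELECTRIC_SHARE) * GAS_FACTOR
        return electric + gas

    for scenario, eui in EUI.items():
        for grid, factor in GRID_FACTOR.items():
            kg = emissions_per_m2(eui, factor)
            print(f"{scenario:16s} + {grid:10s}: {kg:6.1f} kg CO2e per m2 per year")
    ```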

    Workshop to test-drive the method

    Two years ago, the MIT team set up a three-day workshop to test the website with sample users. Participants included policymakers from eight cities and municipalities around the world: namely, Braga (Portugal), Cairo (Egypt), Dublin (Ireland), Florianopolis (Brazil), Kiel (Germany), Middlebury (Vermont, United States), Montreal (Canada), and Singapore. Taken together, the cities represent a wide range of climates, socioeconomic demographics, cultures, governing structures, and sizes.

    Working with the MIT team, the participants presented their goals, defined shallow- and deep-retrofit scenarios for their city, and selected a limited but representative area for analysis — an approach that would speed up analyses of different options while also generating results valid for the city as a whole.

    They then performed analyses to quantify the impacts of their retrofit scenarios. Finally, they learned how best to present their findings — a critical part of the exercise. “When you do this analysis and bring it back to the people, you can say, ‘This is our homework over the next 30 years. If we do this, we’re going to get there,’” says Reinhart. “That makes you part of the community, so it’s a joint goal.”

    Sample results

    After the close of the workshop, Reinhart and his team confirmed their findings for each city and then added one more factor to the analyses: the state of the city’s electric grid. Several cities in the study had pledged to make their grid carbon-neutral by 2050. Including the grid in the analysis was therefore critical: If a building becomes all-electric and purchases its electricity from a carbon-free grid, then that building will be carbon neutral — even with no on-site energy-saving retrofits.

    The final analysis for each city therefore calculated the total kilograms of carbon dioxide equivalent emitted per square meter of floor space assuming the following scenarios: the baseline; shallow retrofit only; shallow retrofit plus a clean electricity grid; deep retrofit only; deep retrofit plus rooftop photovoltaic solar panels; and deep retrofit plus a clean electricity grid. (Note that “clean electricity grid” is based on the area’s most ambitious decarbonization target for its power grid.)

    The following paragraphs provide highlights of the analyses for three of the eight cities. Included are the city’s setting, emission-reduction goals, current and proposed measures, and calculations of how implementation of those measures would affect their energy use and carbon emissions.

    Singapore

    Singapore is generally hot and humid, and its building energy use is largely in the form of electricity for cooling. The city is dominated by high-rise buildings, so there’s not much space for rooftop solar installations to generate the needed electricity. Therefore, plans for decarbonizing the current building stock must involve retrofits. The shallow-retrofit scenario focuses on installing energy-efficient lighting and appliances. To those steps, the deep-retrofit scenario adds adopting a district cooling system. Singapore’s stated goals are to cut the baseline carbon emissions by about a third by 2030 and to cut them in half by 2050.

    The analysis shows that, with just the shallow retrofits, Singapore won’t achieve its 2030 goal. But with the deep retrofits, it should come close. Notably, decarbonizing the electric grid would enable Singapore to meet and substantially exceed its 2050 target assuming either retrofit scenario.

    Dublin

    Dublin has a mild climate with relatively comfortable summers but cold, humid winters. As a result, the city’s energy use is dominated by fossil fuels, in particular, natural gas for space heating and domestic hot water. The city presented just one target — a 40 percent reduction by 2030.

    Dublin has many neighborhoods made up of Georgian row houses, and, at the time of the workshop, the city already had a program in place encouraging groups of owners to insulate their walls. The shallow-retrofit scenario therefore focuses on weatherization upgrades (adding weatherstripping to windows and doors, insulating crawlspaces, and so on). To that list, the deep-retrofit scenario adds insulating walls and installing upgraded windows. The participants didn’t include electric heat pumps, as the city was then assessing the feasibility of expanding the existing district heating system.

    Results of the analyses show that implementing the shallow-retrofit scenario won’t enable Dublin to meet its 2030 target. But the deep-retrofit scenario will. However, like Singapore, Dublin could make major gains by decarbonizing its electric grid. The analysis shows that a decarbonized grid — with or without the addition of rooftop solar panels where possible — could more than halve the carbon emissions that remain in the deep-retrofit scenario. Indeed, a decarbonized grid plus electrification of the heating system by incorporating heat pumps could enable Dublin to meet a future net-zero target.

    Middlebury

    Middlebury, Vermont, has warm, wet summers and frigid winters. Like Dublin, its energy demand is dominated by natural gas for heating. But unlike Dublin, it already has a largely decarbonized electric grid with a high penetration of renewables.

    For the analysis, the Middlebury team chose to focus on an aging residential neighborhood similar to many that surround the city core. The shallow-retrofit scenario calls for installing heat pumps for space heating, and the deep-retrofit scenario adds improvements in building envelopes (the façade, roof, and windows). The town’s targets are a 40 percent reduction from the baseline by 2030 and net-zero carbon by 2050.

    Results of the analyses showed that implementing the shallow-retrofit scenario won’t achieve the 2030 target. The deep-retrofit scenario would get the city to the 2030 target but not to the 2050 target. Indeed, even with the deep retrofits, fossil fuel use remains high. The explanation? While both retrofit scenarios call for installing heat pumps for space heating, the city would continue to use natural gas to heat its hot water.

    Lessons learned

    For several policymakers, seeing the results of their analyses was a wake-up call. They learned that the strategies they had planned might not be sufficient to meet their stated goals — an outcome that could prove publicly embarrassing for them in the future.

    Like the policymakers, the researchers learned from the experience. Reinhart notes three main takeaways.

    First, he and his team were surprised to find how much of a building’s energy use and carbon emissions can be traced to domestic hot water. With Middlebury, for example, even switching from natural gas to heat pumps for space heating didn’t yield the expected effect: On the bar graphs generated by their analyses, the gray bars indicating carbon from fossil fuel use remained. As Reinhart recalls, “I kept saying, ‘What’s all this gray?’” While the policymakers talked about using heat pumps, they were still going to use natural gas to heat their hot water. “It’s just stunning that hot water is such a big-ticket item. It’s huge,” says Reinhart.

    Second, the results demonstrate the importance of including the state of the local electric grid in this type of analysis. “Looking at the results, it’s clear that if we want to have a successful energy transition, the building sector and the electric grid sector both have to do their homework,” notes Reinhart. Moreover, in many cases, reaching carbon neutrality by 2050 would require not only a carbon-free grid but also all-electric buildings.

    Third, Reinhart was struck by how different the bar graphs presenting results for the eight cities look. “This really celebrates the uniqueness of different parts of the world,” he says. “The physics used in the analysis is the same everywhere, but differences in the climate, the building stock, construction practices, electric grids, and other factors make the consequences of making the same change vary widely.”

    In addition, says Reinhart, “there are sometimes deeply ingrained conflicts of interest and cultural norms, which is why you cannot just say everybody should do this and do this.” For instance, in one case, the city owned both the utility and the natural gas it burned. As a result, the policymakers didn’t consider putting in heat pumps because “the natural gas was a significant source of municipal income, and they didn’t want to give that up,” explains Reinhart.

    Finally, the analyses quantified two other important measures: energy use and “peak load,” which is the maximum electricity demanded from the grid over a specific time period. Reinhart says that energy use “is probably mostly a plausibility check. Does this make sense?” And peak load is important because the utilities need to keep a stable grid.

    Middlebury’s analysis provides an interesting look at how certain measures could influence peak electricity demand. There, the introduction of electric heat pumps for space heating more than doubles the peak demand from buildings, suggesting that substantial additional capacity would have to be added to the grid in that region. But when heat pumps are combined with other retrofitting measures, the peak demand drops to levels lower than the starting baseline.

    The aftermath: An update

    Reinhart stresses that the specific results from the workshop provide just a snapshot in time; that is, where the cities were at the time of the workshop. “This is not the fate of the city,” he says. “If we were to do the same exercise today, we’d no doubt see a change in thinking, and the outcomes would be different.”

    For example, heat pumps are now familiar technology and have demonstrated their ability to handle even bitterly cold climates. And in some regions, they’ve become economically attractive, as the war in Ukraine has made natural gas both scarce and expensive. Also, there’s now awareness of the need to deal with hot water production.

    Reinhart notes that performing the analyses at the workshop did have the intended impact: It brought about change. Two years after the project had ended, most of the cities reported that they had implemented new policy measures or had expanded their analysis across their entire building stock. “That’s exactly what we want,” comments Reinhart. “This is not an academic exercise. It’s meant to change what people focus on and what they do.”

    Designing policies with socioeconomics in mind

    Reinhart notes a key limitation of the UBEM.io approach: It looks only at technical feasibility. But will the building owners be willing and able to make the energy-saving retrofits? Data show that — even with today’s incentive programs and subsidies — current adoption rates are only about 1 percent. “That’s way too low to enable a city to achieve its emission-reduction goals in 30 years,” says Reinhart. “We need to take into account the socioeconomic realities of the residents to design policies that are both effective and equitable.”

    To that end, the MIT team extended their UBEM.io approach to create a socio-techno-economic analysis framework that can predict the rate of retrofit adoption throughout a city. Based on census data, the framework creates a UBEM that includes demographics for the specific types of buildings in a city. Accounting for the cost of making a specific retrofit plus financial benefits from policy incentives and future energy savings, the model determines the economic viability of the retrofit package for representative households.
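
    At the household level, the economic-viability test amounts to weighing the upfront retrofit cost, minus incentives, against discounted future energy savings. Here is a minimal sketch of that calculation with placeholder numbers; it is not the team’s framework, which also layers in census demographics and building archetypes.

    ```python
    def retrofit_npv(upfront_cost, incentive, annual_savings, years=20, discount_rate=0.05):
        """Net present value of a retrofit package for one household (placeholder inputs)."""
        net_cost = upfront_cost - incentive
        pv_savings = sum(annual_savings / (1 + discount_rate) ** t for t in range(1, years + 1))
        return pv_savings - net_cost

    # Illustrative household: an $18,000 deep retrofit, a $5,000 incentive,
    # and $900 per year in energy savings over a 20-year horizon.
    npv = retrofit_npv(18_000, 5_000, 900)
    simple_payback = (18_000 - 5_000) / 900
    print(f"NPV: ${npv:,.0f}; simple payback: {simple_payback:.1f} years")
    ```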

    Sample analyses for two Boston neighborhoods suggest that high-income households are largely ineligible for need-based incentives or the incentives are insufficient to prompt action. Lower-income households are eligible and could benefit financially over time, but they don’t act, perhaps due to limited access to information, a lack of time or capital, or a variety of other reasons.

    Reinhart notes that their work thus far “is mainly looking at technical feasibility. Next steps are to better understand occupants’ willingness to pay, and then to determine what set of federal and local incentive programs will trigger households across the demographic spectrum to retrofit their apartments and houses, helping the worldwide effort to reduce carbon emissions.”

    This work was supported by Shell through the MIT Energy Initiative. Zachary Berzolla was supported by the U.S. National Science Foundation Graduate Research Fellowship. Samuel Letellier-Duchesne was supported by the postdoctoral fellowship of the Natural Sciences and Engineering Research Council of Canada.

    This article appears in the Spring 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Study: The ocean’s color is changing as a consequence of climate change

    The ocean’s color has changed significantly over the last 20 years, and the global trend is likely a consequence of human-induced climate change, report scientists at MIT, the National Oceanography Center in the U.K., and elsewhere.  

    In a study appearing today in Nature, the team writes that they have detected changes in ocean color over the past two decades that cannot be explained by natural, year-to-year variability alone. These color shifts, though subtle to the human eye, have occurred over 56 percent of the world’s oceans — an expanse that is larger than the total land area on Earth.

    In particular, the researchers found that tropical ocean regions near the equator have become steadily greener over time. The shift in ocean color indicates that ecosystems within the surface ocean must also be changing, as the color of the ocean is a literal reflection of the organisms and materials in its waters.

    At this point, the researchers cannot say how exactly marine ecosystems are changing to reflect the shifting color. But they are pretty sure of one thing: Human-induced climate change is likely the driver.

    “I’ve been running simulations that have been telling me for years that these changes in ocean color are going to happen,” says study co-author Stephanie Dutkiewicz, senior research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences and the Center for Global Change Science. “To actually see it happening for real is not surprising, but frightening. And these changes are consistent with man-induced changes to our climate.”

    “This gives additional evidence of how human activities are affecting life on Earth over a huge spatial extent,” adds lead author B. B. Cael PhD ’19 of the National Oceanography Center in Southampton, U.K. “It’s another way that humans are affecting the biosphere.”

    The study’s co-authors also include Stephanie Henson of the National Oceanography Center, Kelsey Bisson at Oregon State University, and Emmanuel Boss of the University of Maine.

    Above the noise

    The ocean’s color is a visual product of whatever lies within its upper layers. Generally, waters that are deep blue reflect very little life, whereas greener waters indicate the presence of ecosystems, and mainly phytoplankton — plant-like microbes that are abundant in the upper ocean and that contain the green pigment chlorophyll. The pigment helps plankton harvest sunlight, which they use to capture carbon dioxide from the atmosphere and convert it into sugars.

    Phytoplankton are the foundation of the marine food web that sustains progressively more complex organisms, on up to krill, fish, and seabirds and marine mammals. Phytoplankton are also a powerful muscle in the ocean’s ability to capture and store carbon dioxide. Scientists are therefore keen to monitor phytoplankton across the surface oceans and to see how these essential communities might respond to climate change. To do so, scientists have tracked changes in chlorophyll, based on the ratio of how much blue versus green light is reflected from the ocean surface, which can be monitored from space.
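
    Those chlorophyll retrievals typically take the form of an empirical polynomial in the logarithm of a blue-to-green reflectance ratio. The sketch below shows that general form with placeholder coefficients; it is not the operational algorithm for any particular sensor.

    ```python
    import math

    def chlorophyll_from_reflectance(rrs_blue, rrs_green, coeffs=(0.3, -2.9, 1.7, -0.6, -1.0)):
        """Band-ratio chlorophyll estimate (mg per m^3).

        General form of empirical ocean-color algorithms: a polynomial in
        log10(blue/green reflectance). The coefficients here are placeholders,
        not the operational values used for any particular satellite sensor.
        """
        x = math.log10(rrs_blue / rrs_green)
        log_chl = sum(a * x ** i for i, a in enumerate(coeffs))
        return 10 ** log_chl

    # Greener water (lower blue-to-green ratio) yields a higher chlorophyll estimate.
    print(chlorophyll_from_reflectance(0.008, 0.004))   # bluer water
    print(chlorophyll_from_reflectance(0.004, 0.006))   # greener water
    ```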

    But around a decade ago, Henson, who is a co-author of the current study, published a paper with others, which showed that, if scientists were tracking chlorophyll alone, it would take at least 30 years of continuous monitoring to detect any trend that was driven specifically by climate change. The reason, the team argued, was that the large, natural variations in chlorophyll from year to year would overwhelm any anthropogenic influence on chlorophyll concentrations. It would therefore take several decades to pick out a meaningful, climate-change-driven signal amid the normal noise.

    In 2019, Dutkiewicz and her colleagues published a separate paper, showing through a new model that the natural variation in other ocean colors is much smaller than that of chlorophyll. Therefore, any signal of climate-change-driven changes should be easier to detect over the smaller, normal variations of other ocean colors. They predicted that such changes should be apparent within 20, rather than 30, years of monitoring.

    “So I thought, doesn’t it make sense to look for a trend in all these other colors, rather than in chlorophyll alone?” Cael says. “It’s worth looking at the whole spectrum, rather than just trying to estimate one number from bits of the spectrum.”

    The power of seven

    In the current study, Cael and the team analyzed measurements of ocean color taken by the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua satellite, which has been monitoring ocean color for 21 years. MODIS takes measurements in seven visible wavelengths, including the two colors researchers traditionally use to estimate chlorophyll.

    The differences in color that the satellite picks up are too subtle for human eyes to differentiate. Much of the ocean appears blue to our eye, whereas the true color may contain a mix of subtler wavelengths, from blue to green and even red.

    Cael carried out a statistical analysis using all seven ocean colors measured by the satellite from 2002 to 2022 together. He first looked at how much the seven colors changed from region to region during a given year, which gave him an idea of their natural variations. He then zoomed out to see how these annual variations in ocean color changed over a longer stretch of two decades. This analysis turned up a clear trend, above the normal year-to-year variability.
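
    Schematically, that kind of test fits a long-term trend to each wavelength’s yearly values and asks whether the 20-year change stands out against the interannual spread. The sketch below runs the logic on synthetic data; it is not the study’s actual statistical method, and the band list is approximate.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(2002, 2023)                           # 21 years of MODIS ocean color
    wavelengths_nm = [412, 443, 488, 531, 547, 667, 678]    # approximate MODIS visible bands

    def detect_trend(series, years):
        """Linear trend over the record, compared against interannual variability."""
        slope, intercept = np.polyfit(years, series, 1)
        residual_sd = np.std(series - (slope * years + intercept))
        total_change = slope * (years[-1] - years[0])
        return total_change, residual_sd, abs(total_change) > 2 * residual_sd

    for wl in wavelengths_nm:
        # Synthetic reflectance anomalies: interannual noise plus a small drift.
        series = 0.0005 * (years - years[0]) + rng.normal(0.0, 0.002, size=years.size)
        change, noise, stands_out = detect_trend(series, years)
        print(f"{wl} nm: 20-yr change {change:+.4f}, interannual sd {noise:.4f}, "
              f"exceeds 2x noise: {stands_out}")
    ```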

    To see whether this trend is related to climate change, he then looked to Dutkiewicz’s model from 2019. This model simulated the Earth’s oceans under two scenarios: one with the addition of greenhouse gases, and the other without it. The greenhouse-gas model predicted that a significant trend should show up within 20 years and that this trend should cause changes to ocean color in about 50 percent of the world’s surface oceans — almost exactly what Cael found in his analysis of real-world satellite data.

    “This suggests that the trends we observe are not a random variation in the Earth system,” Cael says. “This is consistent with anthropogenic climate change.”

    The team’s results show that monitoring ocean colors beyond chlorophyll could give scientists a clearer, faster way to detect climate-change-driven changes to marine ecosystems.

    “The color of the oceans has changed,” Dutkiewicz says. “And we can’t say how. But we can say that changes in color reflect changes in plankton communities, that will impact everything that feeds on plankton. It will also change how much the ocean will take up carbon, because different types of plankton have different abilities to do that. So, we hope people take this seriously. It’s not only models that are predicting these changes will happen. We can now see it happening, and the ocean is changing.”

    This research was supported, in part, by NASA.

  • Studying rivers from worlds away

    Rivers have flowed on two other worlds in the solar system besides Earth: Mars, where dry tracks and craters are all that’s left of ancient rivers and lakes, and Titan, Saturn’s largest moon, where rivers of liquid methane still flow today.

    A new technique developed by MIT geologists allows scientists to see how intensely rivers used to flow on Mars, and how they currently flow on Titan. The method uses satellite observations to estimate the rate at which rivers move fluid and sediment downstream.

    Applying their new technique, the MIT team calculated how fast and deep rivers were in certain regions on Mars more than 1 billion years ago. They also made similar estimates for currently active rivers on Titan, even though the moon’s thick atmosphere and distance from Earth make it harder to explore, with far fewer available images of its surface than those of Mars.

    “What’s exciting about Titan is that it’s active. With this technique, we have a method to make real predictions for a place where we won’t get more data for a long time,” says Taylor Perron, the Cecil and Ida Green Professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “And on Mars, it gives us a time machine, to take the rivers that are dead now and get a sense of what they were like when they were actively flowing.”

    Perron and his colleagues have published their results today in the Proceedings of the National Academy of Sciences. Perron’s MIT co-authors are first author Samuel Birch, Paul Corlies, and Jason Soderblom, with Rose Palermo and Andrew Ashton of the Woods Hole Oceanographic Institution (WHOI), Gary Parker of the University of Illinois at Urbana-Champaign, and collaborators from the University of California at Los Angeles, Yale University, and Cornell University.

    River math

    The team’s study grew out of Perron and Birch’s puzzlement over Titan’s rivers. The images taken by NASA’s Cassini spacecraft have shown a curious lack of fan-shaped deltas at the mouths of most of the moon’s rivers, unlike many rivers on Earth. Could it be that Titan’s rivers don’t carry enough flow or sediment to build deltas?

    The group built on the work of co-author Gary Parker, who in the 2000s developed a series of mathematical equations to describe river flow on Earth. Parker had studied measurements of rivers taken directly in the field by others. From these data, he found there were certain universal relationships between a river’s physical dimensions — its width, depth, and slope — and the rate at which it flowed. He drew up equations to describe these relationships mathematically, accounting for other variables such as the gravitational field acting on the river, and the size and density of the sediment being pushed along a river’s bed.

    “This means that rivers with different gravity and materials should follow similar relationships,” Perron says. “That opened up a possibility to apply this to other planets too.”

    Getting a glimpse

    On Earth, geologists can make field measurements of a river’s width, slope, and average sediment size, all of which can be fed into Parker’s equations to accurately predict a river’s flow rate, or how much water and sediment it can move downstream. But for rivers on other planets, measurements are more limited, and largely based on images and elevation measurements collected by remote satellites. For Mars, multiple orbiters have taken high-resolution images of the planet. For Titan, views are few and far between.

    Birch realized that any estimate of river flow on Mars or Titan would have to be based on the few characteristics that can be measured from remote images and topography — namely, a river’s width and slope. With some algebraic tinkering, he adapted Parker’s equations to work only with width and slope inputs. He then assembled data from 491 rivers on Earth, tested the modified equations on these rivers, and found that the predictions based solely on each river’s width and slope were accurate.
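
    The adapted equations themselves are not reproduced in the article. As a rough illustration of the idea, the sketch below estimates discharge from width and slope using a generic open-channel relation in which gravity appears explicitly; the assumed closure values (a width-to-depth ratio and a friction factor) stand in for the empirical relationships fit to Earth rivers.

    ```python
    import math

    def estimate_discharge(width_m, slope, gravity_ms2,
                           aspect_ratio=20.0, friction_factor=0.05):
        """Rough discharge estimate (m^3/s) from channel width and slope.

        Illustrative only: depth is assumed to scale with width through a fixed
        width-to-depth ratio, and velocity follows a Darcy-Weisbach-type balance,
        U = sqrt(8 g H S / f). The real method replaces these assumed closures
        with empirical relationships derived from hundreds of measured rivers.
        """
        depth_m = width_m / aspect_ratio
        velocity_ms = math.sqrt(8.0 * gravity_ms2 * depth_m * slope / friction_factor)
        return velocity_ms * width_m * depth_m

    # The same channel geometry under Earth, Mars, and Titan gravity.
    for body, g in [("Earth", 9.81), ("Mars", 3.71), ("Titan", 1.35)]:
        q = estimate_discharge(width_m=100.0, slope=1e-3, gravity_ms2=g)
        print(f"{body}: ~{q:,.0f} m^3/s")
    ```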

    Then, he applied the equations to Mars, and specifically, to the ancient rivers leading into Gale and Jezero Craters, both of which are thought to have been water-filled lakes billions of years ago. To predict the flow rate of each river, he plugged into the equations Mars’ gravity, and estimates of each river’s width and slope, based on images and elevation measurements taken by orbiting satellites.

    From their predictions of flow rate, the team found that rivers likely flowed for at least 100,000 years at Gale Crater and at least 1 million years at Jezero Crater — long enough to have possibly supported life. They were also able to compare their predictions of the average size of sediment on each river’s bed with actual field measurements of Martian grains near each river, taken by NASA’s Curiosity and Perseverance rovers. These few field measurements allowed the team to check that their equations, applied on Mars, were accurate.

    The team then took their approach to Titan. They zeroed in on two locations where river slopes can be measured, including a river that flows into a lake the size of Lake Ontario. This river appears to form a delta as it feeds into the lake. However, the delta is one of only a few thought to exist on the moon — nearly every viewable river flowing into a lake mysteriously lacks a delta. The team also applied their method to one of these other delta-less rivers.

    They calculated both rivers’ flow and found that they may be comparable to some of the biggest rivers on Earth, with estimated flow rates as large as that of the Mississippi. Both rivers should move enough sediment to build up deltas. Yet most rivers on Titan lack the fan-shaped deposits, so something else must be at work to explain their absence.

    In another finding, the team calculated that rivers on Titan should be wider and have a gentler slope than rivers carrying the same flow on Earth or Mars. “Titan is the most Earth-like place,” Birch says. “We’ve only gotten a glimpse of it. There’s so much more that we know is down there, and this remote technique is pushing us a little closer.”

    This research was supported, in part, by NASA and the Heising-Simons Foundation.

  • Chemists discover why photosynthetic light-harvesting is so efficient

    When photosynthetic cells absorb light from the sun, packets of energy called photons leap between a series of light-harvesting proteins until they reach the photosynthetic reaction center. There, cells convert the energy into electrons, which eventually power the production of sugar molecules.

    This transfer of energy through the light-harvesting complex occurs with extremely high efficiency: Nearly every photon of light absorbed generates an electron, a phenomenon known as near-unity quantum efficiency.

    A new study from MIT chemists offers a potential explanation for how proteins of the light-harvesting complex, also called the antenna, achieve that high efficiency. For the first time, the researchers were able to measure the energy transfer between light-harvesting proteins, allowing them to discover that the disorganized arrangement of these proteins boosts the efficiency of the energy transduction.

    “In order for that antenna to work, you need long-distance energy transduction. Our key finding is that the disordered organization of the light-harvesting proteins enhances the efficiency of that long-distance energy transduction,” says Gabriela Schlau-Cohen, an associate professor of chemistry at MIT and the senior author of the new study.

    MIT postdocs Dihao Wang and Dvir Harris and former MIT graduate student Olivia Fiebig PhD ’22 are the lead authors of the paper, which appears this week in the Proceedings of the National Academy of Sciences. Jianshu Cao, an MIT professor of chemistry, is also an author of the paper.

    Energy capture

    For this study, the MIT team focused on purple bacteria, which are often found in oxygen-poor aquatic environments and are commonly used as a model for studies of photosynthetic light-harvesting.

    Within these cells, captured photons travel through light-harvesting complexes consisting of proteins and light-absorbing pigments such as chlorophyll. Using ultrafast spectroscopy, a technique that uses extremely short laser pulses to study events that happen on timescales of femtoseconds to nanoseconds, scientists have been able to study how energy moves within a single one of these proteins. However, studying how energy travels between these proteins has proven much more challenging because it requires positioning multiple proteins in a controlled way.

    To create an experimental setup where they could measure how energy travels between two proteins, the MIT team designed synthetic nanoscale membranes with a composition similar to that of naturally occurring cell membranes. By controlling the size of these membranes, known as nanodiscs, they were able to control the distance between two proteins embedded within the discs.

    For this study, the researchers embedded two versions of the primary light-harvesting protein found in purple bacteria, known as LH2 and LH3, into their nanodiscs. LH2 is the protein that is present during normal light conditions, and LH3 is a variant that is usually expressed only during low light conditions.

    Using the cryo-electron microscope at the MIT.nano facility, the researchers could image their membrane-embedded proteins and show that they were positioned at distances similar to those seen in the native membrane. They were also able to measure the distances between the light-harvesting proteins, which were on the scale of 2.5 to 3 nanometers.

    Disordered is better

    Because LH2 and LH3 absorb slightly different wavelengths of light, it is possible to use ultrafast spectroscopy to observe the energy transfer between them. For proteins spaced closely together, the researchers found that it takes about 6 picoseconds for a photon of energy to travel between them. For proteins farther apart, the transfer takes up to 15 picoseconds.

    Faster travel translates to more efficient energy transfer, because the longer the journey takes, the more energy is lost during the transfer.

    “When a photon gets absorbed, you only have so long before that energy gets lost through unwanted processes such as nonradiative decay, so the faster it can get converted, the more efficient it will be,” Schlau-Cohen says.
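
    The competition Schlau-Cohen describes can be captured with a simple competing-rates picture: the per-hop efficiency is the transfer rate divided by the sum of the transfer and loss rates. In the sketch below, the 6- and 15-picosecond transfer times come from the study, while the roughly 1-nanosecond loss timescale is an assumed placeholder, not a measured value.

    ```python
    def transfer_efficiency(transfer_time_ps, loss_time_ps=1000.0):
        """Fraction of excitations that hop to the next protein before being lost.

        Competing-rates picture: eta = k_transfer / (k_transfer + k_loss).
        The ~1 ns loss timescale is an assumed placeholder for nonradiative
        decay and other loss channels, not a value reported in the paper.
        """
        k_transfer = 1.0 / transfer_time_ps
        k_loss = 1.0 / loss_time_ps
        return k_transfer / (k_transfer + k_loss)

    # Transfer times reported for closely and more widely spaced proteins.
    print(f"6 ps hop:  {transfer_efficiency(6):.3f}")    # ~0.994 per hop
    print(f"15 ps hop: {transfer_efficiency(15):.3f}")   # ~0.985 per hop
    ```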

    The researchers also found that proteins arranged in a lattice structure showed less efficient energy transfer than proteins that were arranged in randomly organized structures, as they usually are in living cells.

    “Ordered organization is actually less efficient than the disordered organization of biology, which we think is really interesting because biology tends to be disordered. This finding tells us that that may not just be an inevitable downside of biology, but organisms may have evolved to take advantage of it,” Schlau-Cohen says.

    Now that they have established the ability to measure inter-protein energy transfer, the researchers plan to explore energy transfer between other proteins, such as the transfer between proteins of the antenna to proteins of the reaction center. They also plan to study energy transfer between antenna proteins found in organisms other than purple bacteria, such as green plants.

    The research was funded primarily by the U.S. Department of Energy.

  • MIT engineering students take on the heat of Miami

    Think back to the last time you had to wait for a bus. How miserable were you? If you were in Boston, your experience might have included punishing wind and icy sleet — or, more recently, a punch of pollen straight to the sinuses. But in Florida’s Miami-Dade County, where the effects of climate change are both drastic and intensifying, commuters have to contend with an entirely different set of challenges: blistering temperatures and scorching humidity, making long stints waiting in the sun nearly unbearable.

    One of Miami’s most urgent transportation needs is shared by car-clogged Boston: coaxing citizens to use the municipal bus network, rather than the emissions-heavy individual vehicles currently contributing to climate change. But buses can be a tough sell in a sunny city where humidity hovers between 60 and 80 percent year-round. 

    Enter MIT’s Department of Electrical Engineering and Computer Science (EECS) and the MIT Priscilla King Gray (PKG) Public Service Center. The result of close collaboration between the two organizations, class 6.900 (Engineering For Impact) challenges EECS students to apply their engineering savvy to real-world problems beyond the MIT campus.

    This spring semester, the real-world problem was heat. 

    Miami-Dade County Department of Transportation and Public Works Chief Innovation Officer Carlos Cruz-Casas explains: “We often talk about the city we want to live in, about how the proper mix of public transportation, on-demand transit, and other mobility solutions, such as e-bikes and e-scooters, could help our community live a car-light life. However, none of this will be achievable if the riders are not comfortable when doing so.” 

    “When people think of South Florida and climate change, they often think of sea level rise,” says Juan Felipe Visser, deputy director of equity and engagement within the Office of the Mayor in Miami-Dade. “But heat really is the silent killer. So the focus of this class, on heat at bus stops, is very apt.” With little tree cover to give relief at some of the hottest stops, Miami-Dade commuters cluster in tiny patches of shade behind bus stops, sometimes giving up when the heat becomes unbearable. 

    A more conventional electrical engineering course might use temperature monitoring as an abstract example, building sample monitors in isolation and grading them as a merely academic exercise. But Professor Joel Voldman, EECS faculty head of electrical engineering, and Joe Steinmeyer, senior lecturer in EECS, had something more impactful in mind.

    “Miami-Dade has a large population of people who are living in poverty, undocumented, or who are otherwise marginalized,” says Voldman. “Waiting, sometimes for a very long time, in scorching heat for the bus is just one aspect of how a city population can be underserved, but by measuring patterns in how many people are waiting for a bus, how long they wait, and in what conditions, we can begin to see where services are not keeping up with demand.”

    Only after that gap is quantified can the work of city and transportation planners begin, Cruz-Casas explains: “We needed to quantify the time riders are exposed to extreme heat and prioritize improvements, including on-time performance improvements, increasing service frequency, or looking to enhance the tree canopy near the bus stop.” 

    Quantifying that time — and the subjective experience of the wait — proved tricky, however. With over 7,500 bus stops along 101 bus routes, Miami-Dade’s transportation network presents a considerable data-collection challenge. A network of physical temperature monitors could be useful, but only if it were carefully calibrated to meet the budgetary, environmental, privacy, and implementation requirements of the city. But how do you work with city officials — not to mention all of bus-riding Miami — from more than 1,200 miles away? 

    This is where the PKG Center comes in. “We are a hub and a connector and facilitator of best practices,” explains Jill Bassett, associate dean and director of the center, who worked with Voldman and Steinmeyer to find a municipal partner organization for the course. “We bring knowledge of current pedagogy around community-engaged learning, which includes: help with framing a partnership that centers community-identified concerns and is mutually beneficial; identifying and learning from a community partner; talking through ways to build in opportunities for student learners to reflect on power dynamics, reciprocity, systems thinking, long-term planning, continuity, ethics, all the types of things that come up with this kind of shared project.”

    Through a series of brainstorming conversations, Bassett helped Voldman and Steinmeyer structure a well-defined project plan, as Cruz-Casas weighed in on the county’s needed technical specifications (including affordability, privacy protection, and implementability).

    “This course brings together a lot of subject area experts,” says Voldman. “We brought in guest lecturers, including Abby Berenson from the Sloan Leadership Center, to talk about working in teams; engineers from BOSE to talk about product design, certification, and environmental resistance; the co-founder and head of engineering from MIT spinout Butlr to talk about their low-power occupancy sensor; Tony Hu from MIT IDM [Integrated Design and Management] to talk about industrial design; and Katrina LaCurts from EECS to talk about communications and networking.”

    With the support of two generous donations and a gift of software from Altium, 6.900 developed into a hands-on exercise in hardware/software product development with a tangible goal in sight: build a better bus monitor.

    The challenges involved in this undertaking became apparent as soon as the 6.900 students began designing their monitors. “The most challenging requirement to meet was that the monitor be able to count how many people were waiting — and for how long they’d been standing there — while still maintaining privacy,” says Fabian Velazquez ’23, a recent EECS graduate. The task was complicated by commuters’ natural tendency to stand wherever the shade falls — whether beneath a tree or awning or snaking against a nearby wall in a line — rather than directly next to the bus sign or inside the bus shelter. “Accurately measuring people count with a camera — the most straightforward choice — is already quite difficult since you have to incorporate machine learning to identify which objects in frame are people. Maintaining privacy added an extra layer of constraint … since there is no guarantee the collected data wouldn’t be vulnerable.”

    As the groups weighed various privacy-preserving options, including lidar, radar, and thermal imaging, the class realized that Wi-Fi “sniffers,” which count the signals broadcast by Wi-Fi-enabled devices in the immediate area, were their best option to count waiting passengers. “We were all excited and ready for this amazing, answer-to-all-our-problems radar sensor to count people,” says Velazquez. “That component was extremely complex, however, and the complexity would have ultimately made my team use a lot of time and resources to integrate with our system. We also had a short time-to-market for this system we developed. We made the trade-off of complexity for robustness.” 
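    To give a sense of how such a sniffer can work in software, here is a minimal sketch that counts nearby devices by listening for Wi-Fi probe requests. The interface name, the scapy library, and the hash-for-privacy scheme are illustrative assumptions, not details of the students' actual devices:

        import hashlib
        import time

        from scapy.all import sniff
        from scapy.layers.dot11 import Dot11ProbeReq

        seen = {}  # hashed MAC address -> first time the device was heard

        def handle(pkt):
            # Phones searching for networks broadcast probe requests;
            # the transmitter address (addr2) identifies the radio.
            if pkt.haslayer(Dot11ProbeReq) and pkt.addr2:
                # Hash the MAC right away so no raw identifier is ever stored.
                digest = hashlib.sha256(pkt.addr2.encode()).hexdigest()
                seen.setdefault(digest, time.time())

        # Assumes a Wi-Fi adapter already placed in monitor mode as "wlan0mon".
        sniff(iface="wlan0mon", prn=handle, store=False, timeout=60)

        print(f"Heard roughly {len(seen)} distinct devices in the last minute")

    Counts from probe requests are approximate, since modern phones randomize their MAC addresses, but an approximate headcount is enough for estimating crowding, and hashing keeps the stored data non-identifying.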

    The weather also posed its own set of challenges. “Environmental conditions were big factors in the structure and design of our devices,” says Yong Yan (Crystal) Liang, a rising junior majoring in EECS. “We incorporated humidity and temperature sensors into our data to show the weather at individual stops. Additionally, we also considered how our enclosure may be affected by extreme heat or potential hurricanes.”
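    Capturing that weather data can be as simple as polling a commodity sensor alongside the people counter. The sketch below assumes a DHT22 humidity-and-temperature sensor wired to a Raspberry Pi and the Adafruit_DHT library; that is one common hobbyist setup, not necessarily the hardware the teams chose:

        import Adafruit_DHT

        SENSOR = Adafruit_DHT.DHT22
        GPIO_PIN = 4  # GPIO pin the sensor's data line is wired to (assumed)

        # read_retry re-polls the sensor a few times, since single DHT reads often fail
        humidity, temperature_c = Adafruit_DHT.read_retry(SENSOR, GPIO_PIN)

        if humidity is not None and temperature_c is not None:
            print(f"{temperature_c:.1f} C at {humidity:.0f}% relative humidity")
        else:
            print("Sensor read failed")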

    The heat variable proved problematic in multiple ways. “People detection was especially difficult, for in the Miami heat, thermal cameras may not be able to distinguish human body temperature from the surrounding air temperature, and the glare of the sun off of other surfaces in the area makes most forms of imaging very buggy,” says Katherine Mohr ’23. “My team had considered using mmWave sensors to get around these constraints, but we found the processing to be too difficult, and (like the rest of the class), we decided to only move forward with Wi-Fi/BLE [Bluetooth Low Energy] sniffers.”

    The most valuable component of the new class may well have been the students’ exposure to real-world hardware/software engineering product development, where limitations on time and budget always exist, and where client requests must be carefully considered.  “Having an actual client to work with forced us to learn how to turn their wants into more specific technical specifications,” says Mohr. “We chose deliverables each week to complete by Friday, prioritizing tasks which would get us to a minimum viable product, as well as tasks that would require extra manufacturing time, like designing the printed-circuit board and enclosure.”


    Joel Voldman, who co-designed 6.900 (Engineering For Impact) with Joe Steinmeyer and MIT’s Priscilla King Gray (PKG) Public Service Center, describes how the course allowed students to help develop systems for the public good. Voldman is the winner of the 2023 Teaching with Digital Technology Award, which is co-sponsored by MIT Open Learning and the Office of the Vice Chancellor. Video: MIT Open Learning

    Crystal Liang counted her conversations with city representatives as among her most valuable 6.900 experiences. “We generated a lot of questions and were able to communicate with the community leaders of this project from Miami-Dade, who made time to answer all of them and gave us ideas from the goals they were trying to achieve,” she reports. “This project gave me a new perspective on problem-solving because it taught me to see things from the community members’ point of view.” Some of those community leaders, including Marta Viciedo, co-founder of Transit Alliance Miami, joined the class’s final session on May 16 to review the students’ proposed solutions. 

    The students’ thoughtful approach paid off when it was time to present the heat monitors to the class’s client. In a group conference call with Miami-Dade officials toward the end of the semester, the student teams shared their findings and the prototypes they’d created, along with videos of the devices at work. Juan Felipe Visser was among those in attendance. “This is a lot of work,” he told the students following their presentation. “So first of all, thank you for doing that, and for presenting to us. I love the concept. I took the bus this morning, as I do every morning, and was battered by the sun and the heat. So I personally appreciated the focus.” 

    Cruz-Casas agreed: “I am pleasantly surprised by the diverse approach the students are taking. We presented a challenge, and they have responded to it and managed to think beyond the problem at hand. I’m very optimistic about how the outcomes of this project will have a long-lasting impact for our community. At a minimum, I’m thinking that the more awareness we raise about this topic, the more opportunities we have to have the brightest minds seeking a solution.”

    The creators of 6.900 agree, and hope that their class helps more MIT engineers to broaden their perspective on the meaning and application of their work. 

    “We are really excited about students applying their skills within a real-world, complex environment that will impact real people,” says Bassett. “We are excited that they are learning that it’s not just the design of technology that matters, but that climate; environment and built environment; and issues around socioeconomics, race, and equity, all come into play. There are layers and layers to the creation and deployment of technology in a demographically diverse multilingual community that is at the epicenter of climate change.”

  • in

    A new mathematical “blueprint” is accelerating fusion device development

    Developing commercial fusion energy requires scientists to understand sustained processes that have never before existed on Earth. But with so many unknowns, how do we make sure we’re designing a device that can successfully harness fusion power?

    We can fill gaps in our understanding using computational tools like algorithms and data simulations to knit together experimental data and theory, which allows us to optimize fusion device designs before they’re built, saving considerable time and resources.

    Currently, classical supercomputers are used to run simulations of plasma physics and fusion energy scenarios, but to address the many design and operating challenges that still remain, more powerful computers are a necessity, and of great interest to plasma researchers and physicists.

    The exponential speedups that quantum computers promise for certain classes of problems have offered plasma and fusion scientists the tantalizing possibility of vastly accelerated fusion device development. Quantum computers could reconcile a fusion device’s many design parameters — for example, vessel shape, magnet spacing, and component placement — at a greater level of detail, while also completing the tasks faster. However, upgrading to a quantum computer is no simple task.

    In a paper, “Dyson maps and unitary evolution for Maxwell equations in tensor dielectric media,” recently published in Physical Review A, Abhay K. Ram, a research scientist at the MIT Plasma Science and Fusion Center (PSFC), and his co-authors Efstratios Koukoutsis, Kyriakos Hizanidis, and George Vahala present a framework that would facilitate the use of quantum computers to study electromagnetic waves in plasma and its manipulation in magnetic confinement fusion devices.

    Quantum computers excel at simulating quantum physics phenomena, but many topics in plasma physics are predicated on the classical physics model. A plasma (which is the “dielectric media” referenced in the paper’s title) consists of many particles — electrons and ions — the collective behaviors of which are effectively described using classical statistical physics. In contrast, quantum effects that influence atomic and subatomic scales are averaged out in classical plasma physics.  

    Furthermore, the descriptive framework of quantum mechanics isn’t naturally suited to plasmas. In a fusion device, plasmas are heated and manipulated using electromagnetic waves, which are among the most important and ubiquitous phenomena in the universe. The behaviors of electromagnetic waves, including how waves are formed and interact with their surroundings, are described by Maxwell’s equations — a foundational component of classical plasma physics, and of general physics as well. The standard form of Maxwell’s equations is not expressed in “quantum terms,” however, so implementing the equations on a quantum computer is like fitting a square peg in a round hole: it doesn’t work.

    Consequently, for plasma physicists to take advantage of quantum computing’s power for solving problems, classical physics must be translated into the language of quantum mechanics. The researchers tackled this translational challenge, and in their paper, they reveal that a Dyson map can bridge the translational divide between classical physics and quantum mechanics. Maps are mathematical functions that demonstrate how to take an input from one kind of space and transform it to an output that is meaningful in a different kind of space. In the case of Maxwell’s equations, a Dyson map allows classical electromagnetic waves to be studied in the space utilized by quantum computers. In essence, it reconfigures the square peg so it will fit into the round hole without compromising any physics.
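    A stripped-down example shows the flavor of the construction. For a homogeneous, lossless medium (the paper handles the much more general tensor dielectric case), the source-free Maxwell curl equations

        \epsilon\,\partial_t \mathbf{E} = \nabla \times \mathbf{H}, \qquad
        \mu\,\partial_t \mathbf{H} = -\nabla \times \mathbf{E}

    can be rewritten in terms of the rescaled fields \psi = (\sqrt{\epsilon}\,\mathbf{E},\ \sqrt{\mu}\,\mathbf{H}) as a Schrödinger-like equation

        i\,\partial_t \psi = \hat{H}\psi, \qquad
        \hat{H} = \frac{1}{\sqrt{\epsilon\mu}}
        \begin{pmatrix} 0 & i\,\nabla\times \\ -i\,\nabla\times & 0 \end{pmatrix},

    where \hat{H} is Hermitian, so the evolution of \psi is unitary, which is precisely the structure quantum hardware is built to implement. In this simplified setting, the rescaling by \sqrt{\epsilon} and \sqrt{\mu} plays the role of the Dyson map.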

    The work also gives a blueprint of a quantum circuit encoded with equations expressed in quantum bits (“qubits”) rather than classical bits so the equations may be used on quantum computers. Most importantly, these blueprints can be coded and tested on classical computers.
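    That last point is easy to demonstrate in miniature. The snippet below is not the authors’ circuit; it simply shows, for a toy Hermitian generator, the kind of unitarity check that can be run on an ordinary laptop before any quantum hardware is involved:

        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(0)

        # Toy Hermitian generator standing in for the mapped Maxwell "Hamiltonian"
        A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
        H = (A + A.conj().T) / 2

        # Normalized state vector standing in for the rescaled field values
        psi0 = rng.normal(size=8) + 1j * rng.normal(size=8)
        psi0 /= np.linalg.norm(psi0)

        # One small time step of unitary (Schrodinger-like) evolution
        U = expm(-1j * H * 0.1)
        psi1 = U @ psi0

        # Unitarity check: the norm of the state is conserved
        print(np.linalg.norm(psi1))  # prints a value very close to 1.0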

    “For years we have been studying wave phenomena in plasma physics and fusion energy science using classical techniques. Quantum computing and quantum information science is challenging us to step out of our comfort zone, thereby ensuring that I have not ‘become comfortably numb,’” says Ram, quoting a Pink Floyd song.

    The paper’s Dyson map and circuits have put quantum computing power within reach, fast-tracking an improved understanding of plasmas and electromagnetic waves, and putting us that much closer to the ideal fusion device design.