More stories

  • AI pilot programs look to reduce energy use and emissions on MIT campus

    Smart thermostats have changed the way many people heat and cool their homes by using machine learning to respond to occupancy patterns and preferences, resulting in a lower energy draw. This technology — which can collect and synthesize data — generally focuses on single-dwelling use, but what if this type of artificial intelligence could dynamically manage the heating and cooling of an entire campus? That’s the idea behind a cross-departmental effort working to reduce campus energy use through AI building controls that respond in real time to internal and external factors.

    Understanding the challenge

    Heating and cooling can be an energy challenge for campuses like MIT, where existing building management systems (BMS) can’t respond quickly to internal factors like occupancy fluctuations or external factors such as forecast weather or the carbon intensity of the grid. This results in using more energy than needed to heat and cool spaces, often to sub-optimal levels. By engaging AI, researchers have begun to establish a framework to understand and predict optimal temperature set points (the temperature a thermostat is set to maintain) at the individual room level, taking into consideration a host of factors and allowing the existing systems to heat and cool more efficiently, all without manual intervention.

    “It’s not that different from what folks are doing in houses,” explains Les Norford, a professor of architecture at MIT, whose work in energy studies, controls, and ventilation connected him with the effort. “Except we have to think about things like how long a classroom may be used in a day, weather predictions, time needed to heat and cool a room, the effect of the heat from the sun coming in the window, and how the classroom next door might impact all of this.” These factors are at the crux of the research and pilots that Norford and a team are focused on. That team includes Jeremy Gregory, executive director of the MIT Climate and Sustainability Consortium; Audun Botterud, principal research scientist for the Laboratory for Information and Decision Systems; Steve Lanou, project manager in the MIT Office of Sustainability (MITOS); Fran Selvaggio, Department of Facilities Senior Building Management Systems engineer; and Daisy Green and You Lin, both postdocs.

    The group is organized around the call to action to “explore possibilities to employ artificial intelligence to reduce on-campus energy consumption” outlined in Fast Forward: MIT’s Climate Action Plan for the Decade, but efforts extend back to 2019. “As we work to decarbonize our campus, we’re exploring all avenues,” says Vice President for Campus Services and Stewardship Joe Higgins, who originally pitched the idea to students at the 2019 MIT Energy Hack. “To me, it was a great opportunity to utilize MIT expertise and see how we can apply it to our campus and share what we learn with the building industry.” Research into the concept kicked off at the event and continued with undergraduate and graduate student researchers running differential equations and managing pilots to test the bounds of the idea. Soon, Gregory, who is also a MITOS faculty fellow, joined the project and helped identify other individuals to join the team. “My role as a faculty fellow is to find opportunities to connect the research community at MIT with challenges MIT itself is facing — so this was a perfect fit for that,” Gregory says. 

    Early pilots of the project focused on testing thermostat set points in NW23, home to the Department of Facilities and Office of Campus Planning, but Norford quickly realized that classrooms provide many more variables to test, and the pilot was expanded to Building 66, a mixed-use building that is home to classrooms, offices, and lab spaces. “We shifted our attention to study classrooms in part because of their complexity, but also the sheer scale — there are hundreds of them on campus, so [they offer] more opportunities to gather data and determine parameters of what we are testing,” says Norford. 

    Developing the technology

    The work to develop smarter building controls starts with a physics-based model using differential equations to understand how objects can heat up or cool down, store heat, and how the heat may flow across a building façade. External data like weather, carbon intensity of the power grid, and classroom schedules are also inputs, with the AI responding to these conditions to deliver an optimal thermostat set point each hour — one that provides the best trade-off between the two objectives of thermal comfort of occupants and energy use. That set point then tells the existing BMS how much to heat up or cool down a space. Real-life testing follows, surveying building occupants about their comfort. Botterud, whose research focuses on the interactions between engineering, economics, and policy in electricity markets, works to ensure that the AI algorithms can then translate this learning into energy and carbon emission savings. 
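
    As a rough illustration of the hourly decision described above, here is a minimal sketch of a set point chooser that trades off occupant comfort against energy use and grid carbon intensity. The toy thermal model, comfort penalty, and every number in it are illustrative assumptions, not the models or parameters used in the MIT pilots.

      # Toy hourly set point chooser: weigh energy (scaled by grid carbon
      # intensity) against occupant comfort. All values are illustrative.

      def energy_needed(setpoint_c, outdoor_c, occupancy, ua=0.8, cop=3.0):
          """Rough energy (kWh) to hold a room at the set point for one hour."""
          load = abs(setpoint_c - outdoor_c) * ua      # envelope losses/gains
          load += 0.1 * occupancy                      # internal gains handled by HVAC
          return load / cop

      def discomfort(setpoint_c, occupancy, preferred_c=21.0):
          """Comfort penalty: deviations only matter when the room is occupied."""
          return occupancy * (setpoint_c - preferred_c) ** 2

      def choose_setpoint(outdoor_c, occupancy, carbon_intensity, weight=0.05):
          """Pick the set point minimizing carbon-weighted energy plus discomfort."""
          candidates = [x * 0.5 for x in range(36, 53)]    # 18.0 C to 26.0 C
          def cost(sp):
              return (energy_needed(sp, outdoor_c, occupancy) * carbon_intensity
                      + weight * discomfort(sp, occupancy))
          return min(candidates, key=cost)

      # An empty classroom on a cold morning is allowed to drift low;
      # an occupied one is held near the preferred temperature.
      print(choose_setpoint(outdoor_c=-2.0, occupancy=0, carbon_intensity=0.4))    # 18.0
      print(choose_setpoint(outdoor_c=-2.0, occupancy=30, carbon_intensity=0.4))   # 21.0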

    Currently the pilots are focused on six classrooms within Building 66, with the intent to move onto lab spaces before expanding to the entire building. “The goal here is energy savings, but that’s not something we can fully assess until we complete a whole building,” explains Norford. “We have to work classroom by classroom to gather the data, but are looking at a much bigger picture.” The research team used its data-driven simulations to estimate significant energy savings while maintaining thermal comfort in the six classrooms over two days, but further work is needed to implement the controls and measure savings across an entire year. 

    With significant savings estimated across individual classrooms, the energy savings derived from an entire building could be substantial, and AI can help meet that goal, explains Botterud: “This whole concept of scalability is really at the heart of what we are doing. We’re spending a lot of time in Building 66 to figure out how it works and hoping that these algorithms can be scaled up with much less effort to other rooms and buildings so solutions we are developing can make a big impact at MIT,” he says.

    Part of that big impact involves operational staff, like Selvaggio, who are essential in connecting the research to current operations and putting them into practice across campus. “Much of the BMS team’s work is done in the pilot stage for a project like this,” he says. “We were able to get these AI systems up and running with our existing BMS within a matter of weeks, allowing the pilots to get off the ground quickly.” Selvaggio says in preparation for the completion of the pilots, the BMS team has identified an additional 50 buildings on campus where the technology can easily be installed in the future to start energy savings. The BMS team also collaborates with the building automation company, Schneider Electric, that has implemented the new control algorithms in Building 66 classrooms and is ready to expand to new pilot locations. 

    Expanding impact

    The successful completion of these programs will also open the possibility for even greater energy savings — bringing MIT closer to its decarbonization goals. “Beyond just energy savings, we can eventually turn our campus buildings into a virtual energy network, where thousands of thermostats are aggregated and coordinated to function as a unified virtual entity,” explains Higgins. These types of energy networks can accelerate power sector decarbonization by decreasing the need for carbon-intensive power plants at peak times and allowing for more efficient power grid energy use.

    As pilots continue, they fulfill another call to action in Fast Forward — for campus to be a “test bed for change.” Says Gregory: “This project is a great example of using our campus as a test bed — it brings in cutting-edge research to apply to decarbonizing our own campus. It’s a great project for its specific focus, but also for serving as a model for how to utilize the campus as a living lab.”

  • Harnessing hydrogen’s potential to address long-haul trucking emissions

    The transportation of goods forms the basis of today’s globally distributed supply chains, and long-haul trucking is a central and critical link in this complex system. To meet climate goals around the world, it is necessary to develop decarbonized solutions to replace diesel powertrains, but given trucking’s indispensable and vast role, these solutions must be both economically viable and practical to implement. While hydrogen-based options, as an alternative to diesel, have the potential to become a promising decarbonization strategy, hydrogen has significant limitations when it comes to delivery and refueling.

    These roadblocks, combined with hydrogen’s compelling decarbonization potential, are what motivated a team of MIT researchers led by William H. Green, the Hoyt Hottel Professor in Chemical Engineering, to explore a cost-effective way to transport and store hydrogen using liquid organic hydrogen carriers (LOHCs). The team is developing a disruptive technology that allows LOHCs to not only deliver the hydrogen to the trucks, but also store the hydrogen onboard.

    Their findings were recently published in Energy and Fuels, a peer-reviewed journal of the American Chemical Society, in a paper titled “Perspective on Decarbonizing Long-Haul Trucks Using Onboard Dehydrogenation of Liquid Organic Hydrogen Carriers.” The MIT team is led by Green, and includes graduate students Sayandeep Biswas and Kariana Moreno Sader. Their research is supported by the MIT Climate and Sustainability Consortium (MCSC) through its Seed Awards program and MathWorks, and ties into the work within the MCSC’s Tough Transportation Modes focus area.

    An “onboard” approach

    Currently, LOHCs, which work within existing retail fuel distribution infrastructure, are used to deliver hydrogen gas to refueling stations, where it is then compressed and delivered onto trucks equipped with hydrogen fuel cell or combustion engines.

    “This current approach incurs significant energy loss due to endothermic hydrogen release and compression at the retail station,” says Green. “To address this, our work is exploring a more efficient application, with LOHC-powered trucks featuring onboard dehydrogenation.”

    To implement such a design, the team aims to modify the truck’s powertrain (the system inside a vehicle that produces the energy to propel it forward) to allow onboard hydrogen release from the LOHCs, using waste heat from the engine exhaust to power the “dehydrogenation” process.

    Proposed process flow diagram for onboard dehydrogenation. Component sizes are not to scale and have been enlarged for illustrative purposes.

    Image courtesy of the Green Group.


    The dehydrogenation process happens within a high-temperature reactor, which continually receives hydrogen-rich LOHCs from the fuel storage tank. Hydrogen released from the reactor is fed to the engine, after passing through a separator to remove any lingering LOHC. On its way to the engine, some of the hydrogen gets diverted to a burner to heat the reactor, which helps to augment the reactor heating provided by the engine exhaust gases.

    Acknowledging and addressing hydrogen’s drawbacks

    The team’s paper underscores that current uses of hydrogen, including LOHC systems, to decarbonize the trucking sector have drawbacks. Regardless of technical improvements, these existing options remain prohibitively expensive due to the high cost of retail hydrogen delivery.

    “We present an alternative option that addresses a lot of the challenges and seems to be a viable way in which hydrogen can be used in this transportation context,” says Biswas, who was recently elected to the MIT Martin Family Society of Fellows for Sustainability for his work in this area. “Hydrogen, when used through LOHCs, has clear benefits for long-hauling, such as scalability and fast refueling time. There is also an enormous potential to improve delivery and refueling to further reduce cost, and our system is working to do that.”

    “Utilizing hydrogen is an option that is globally accessible, and could be extended to countries like the one where I am from,” says Moreno Sader, who is originally from Colombia. “Since it synergizes with existing infrastructure, large upfront investments are not necessary. The global applicability is huge.”

    Moreno Sader is a MathWorks Fellow, and, along with the rest of the team, has been using MATLAB tools to develop models and simulations for this work.

    Different sectors coming together

    Decarbonizing transportation modes, including long-haul trucking, requires expertise and perspectives from different industries — an approach that resonates with the MCSC’s mission.

    The team’s groundbreaking research into LOHC-powered trucking is among several projects supported by the MCSC within its Tough Transportation Modes focus area, led by postdoc Impact Fellow Danika MacDonell. The MCSC-supported projects were chosen to tackle a complementary set of societally important and industry-relevant challenges to decarbonizing heavy-duty transportation, which span a range of sectors and solution pathways. Other projects focus, for example, on logistics optimization for electrified trucking fleets, or air quality and climate impacts of ammonia-powered shipping.

    The MCSC works to support and amplify the impact of these projects by engaging the research teams with industry partners from a variety of sectors. In addition, the MCSC pursues a collective multisectoral approach to decarbonizing transportation by facilitating shared learning across the different projects through regular cross-team discussion.

    The research led by Green celebrates this cross-sector theme by integrating industry-leading computing tools provided by MathWorks with cutting-edge developments in chemical engineering, as well as commercial LOHC reactor demonstrations, to build a compelling vision for cost-effective LOHC-powered trucking.

    The review and research conducted in the Energy and Fuels article lays the groundwork for further investigations into LOHC-powered truck design. The development of such a vehicle — with a power-dense, efficient, and robust onboard hydrogen release system — requires dedicated investigations and further optimization of core components geared specifically toward the trucking application.
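
    As a rough, back-of-the-envelope check on the waste-heat idea described above, the sketch below compares an assumed dehydrogenation heat demand with an assumed engine exhaust heat flow. Every figure is an illustrative assumption, not a value from the Energy and Fuels paper.

      # Illustrative energy balance: can exhaust waste heat cover onboard
      # LOHC dehydrogenation? All inputs below are rough assumed values.

      H2_LHV_MJ_PER_KG = 120.0               # lower heating value of hydrogen
      DEHYDROGENATION_KJ_PER_MOL_H2 = 68.0   # typical of toluene-type carriers (approx.)
      H2_MOLAR_MASS_G = 2.016

      h2_flow_g_per_s = 2.0                  # assumed hydrogen demand of the engine
      fuel_power_kw = h2_flow_g_per_s * H2_LHV_MJ_PER_KG   # g/s * kJ/g = kW
      dehydrogenation_kw = (h2_flow_g_per_s / H2_MOLAR_MASS_G) * DEHYDROGENATION_KJ_PER_MOL_H2

      exhaust_fraction = 0.30                # assumed share of fuel energy leaving as exhaust heat
      exhaust_heat_kw = exhaust_fraction * fuel_power_kw

      print(f"Fuel (H2) power:        {fuel_power_kw:.0f} kW")       # ~240 kW
      print(f"Dehydrogenation demand: {dehydrogenation_kw:.0f} kW")  # ~67 kW
      print(f"Available exhaust heat: {exhaust_heat_kw:.0f} kW")     # ~72 kW

    On these assumed numbers, the exhaust heat and the reactor’s demand are of the same order, which is consistent with the article’s point that a hydrogen burner is still used to top up the reactor heating.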

  • Technologies for water conservation and treatment move closer to commercialization

    The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) provides Solutions Grants to help MIT researchers launch startup companies or products to commercialize breakthrough technologies in water and food systems. The Solutions Grant Program began in 2015 and is supported by Community Jameel. In addition to one-year, renewable grants of up to $150,000, the program also matches grantees with industry mentors and facilitates introductions to potential investors. Since its inception, the J-WAFS Solutions Program has awarded over $3 million in funding to the MIT community. Numerous startups and products, including a portable desalination device and a company commercializing a novel food safety sensor, have spun out of this support.

    The 2023 J-WAFS Solutions Grantees are Professor C. Cem Tasan of the Department of Materials Science and Engineering and Professor Andrew Whittle of the Department of Civil and Environmental Engineering. Tasan’s project involves reducing water use in steel manufacturing and Whittle’s project tackles harmful algal blooms in water. Project work commences this September.

    “This year’s Solutions Grants are being awarded to professors Tasan and Whittle to help commercialize technologies they have been developing at MIT,” says J-WAFS executive director Renee J. Robins. “With J-WAFS’ support, we hope to see the teams move their technologies from the lab to the market, so they can have a beneficial impact on water use and water quality challenges,” Robins adds.

    Reducing water consumption by solid-state steelmaking

    Water is a major requirement for steel production. The steel industry ranks fourth in industrial freshwater consumption worldwide, since large amounts of water are needed mainly for cooling purposes in the process. Unfortunately, a strong correlation has also been shown to exist between freshwater use in steelmaking and water contamination. As the global demand for steel increases and freshwater availability decreases due to climate change, improved methods for more sustainable steel production are needed.

    A strategy to reduce the water footprint of steelmaking is to explore steel recycling processes that avoid liquid metal processing. With this motivation, Cem Tasan, the Thomas B. King Associate Professor of Metallurgy in the Department of Materials Science and Engineering, and postdoc Onur Guvenc PhD created a new process called Scrap Metal Consolidation (SMC). SMC is based on a well-established metal forming process known as roll bonding. Conventionally, roll bonding requires intensive prior surface treatment of the raw material, specific atmospheric conditions, and high deformation levels. Tasan and Guvenc’s research revealed that SMC can overcome these restrictions by enabling the solid-state bonding of scrap into a sheet metal form, even when the surface quality, atmospheric conditions, and deformation levels are suboptimal. Through lab-scale proof-of-principle investigations, they have already identified SMC process conditions and validated the mechanical formability of resulting steel sheets, focusing on mild steel, the most common sheet metal scrap.

    The J-WAFS Solutions Grant will help the team to build customer product prototypes, design the processing unit, and develop a scale-up strategy and business model. By simultaneously decreasing water usage, energy demand, contamination risk, and carbon dioxide burden, SMC has the potential to decrease the energy need for steel recycling by up to 86 percent, as well as reduce the linked carbon dioxide emissions and safeguard the freshwater resources that would otherwise be directed to industrial consumption. 

    Detecting harmful algal blooms in water before it’s too late

    Harmful algal blooms (HABs) are a growing problem in both freshwater and saltwater environments worldwide, causing an estimated $13 billion in annual damage to drinking water, water for recreational use, commercial fishing areas, and desalination activities. HABs pose a threat to both human health and aquaculture, thereby threatening the food supply. Toxins in HABs are produced by some cyanobacteria, or blue-green algae, whose communities change in composition in response to eutrophication from agricultural runoff, sewer overflows, or other events. Mitigation of risks from HABs is most effective when there is advance warning of these changes in algal communities.

    Most in situ measurements of algae are based on fluorescence spectroscopy that is conducted with LED-induced fluorescence (LEDIF) devices, or probes that induce fluorescence of specific algal pigments using LED light sources. While LEDIFs provide reasonable estimates of concentrations of individual pigments, they lack resolution to discriminate algal classes within complex mixtures found in natural water bodies. In prior research, Andrew Whittle, the Edmund K. Turner Professor of Civil and Environmental Engineering, worked with colleagues to design REMORA, a low-cost, field-deployable prototype spectrofluorometer for measuring induced fluorescence. This research was part of a collaboration between MIT and the AMS Institute. Whittle and the team successfully trained a machine learning model to discriminate and quantify cell concentrations for mixtures of different algal groups in water samples through an extensive laboratory calibration program using various algae cultures. The group demonstrated these capabilities in a series of field measurements at locations in Boston and Amsterdam. 
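
    A minimal sketch of the calibrate-then-classify step described above, using synthetic stand-in spectra rather than REMORA data; the actual instrument, training set, and model are not described in this article, so everything below is an illustrative assumption.

      # Toy version of training a classifier on induced-fluorescence spectra
      # from known algae cultures, then classifying a new water sample.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      wavelengths = np.linspace(400, 750, 50)        # emission bands (nm), illustrative

      def fake_spectrum(peak_nm):
          """Gaussian emission peak plus noise, standing in for a measurement."""
          return np.exp(-((wavelengths - peak_nm) / 30.0) ** 2) + 0.05 * rng.standard_normal(50)

      # Lab calibration: cultures of two algal groups with different pigment peaks.
      X_train = np.array([fake_spectrum(680) for _ in range(40)]     # chlorophyll-like peak
                         + [fake_spectrum(620) for _ in range(40)])  # phycocyanin-like peak
      y_train = np.array([0] * 40 + [1] * 40)

      model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

      # Field measurement: which group dominates this new sample?
      print(model.predict([fake_spectrum(625)]))      # expected: [1]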

    Whittle will work with Fábio Duarte of the Department of Urban Studies and Planning, the Senseable City Lab, and MIT’s Center for Real Estate to refine the design of REMORA. They will develop software for autonomous operation of the sensor that can be deployed remotely on mobile vessels or platforms to enable high-resolution spatiotemporal monitoring for harmful algae. Sensor commercialization will hopefully be able to exploit the unique capabilities of REMORA for long-term monitoring applications by water utilities, environmental regulatory agencies, and water-intensive industries.

  • Study suggests energy-efficient route to capturing and converting CO2

    In the race to draw down greenhouse gas emissions around the world, scientists at MIT are looking to carbon-capture technologies to decarbonize the most stubborn industrial emitters.

    Steel, cement, and chemical manufacturing are especially difficult industries to decarbonize, as carbon and fossil fuels are inherent ingredients in their production. Technologies that can capture carbon emissions and convert them into forms that feed back into the production process could help to reduce the overall emissions from these “hard-to-abate” sectors.

    But thus far, experimental technologies that capture and convert carbon dioxide do so as two separate processes that themselves require a huge amount of energy to run. The MIT team is looking to combine the two processes into one integrated and far more energy-efficient system that could potentially run on renewable energy to both capture and convert carbon dioxide from concentrated, industrial sources.

    In a study appearing today in ACS Catalysis, the researchers reveal how carbon dioxide can be both captured and converted through a single electrochemical process. The process involves using an electrode to attract carbon dioxide released from a sorbent, and to convert it into a reduced, reusable form.

    Others have reported similar demonstrations, but the mechanisms driving the electrochemical reaction have remained unclear. The MIT team carried out extensive experiments to determine that driver, and found that, in the end, it came down to the partial pressure of carbon dioxide. In other words, the more pure carbon dioxide that makes contact with the electrode, the more efficiently the electrode can capture and convert the molecule.

    Knowledge of this main driver, or “active species,” can help scientists tune and optimize similar electrochemical systems to efficiently capture and convert carbon dioxide in an integrated process.

    The study’s results imply that, while these electrochemical systems would probably not work for very dilute environments (for instance, to capture and convert carbon emissions directly from the air), they would be well-suited to the highly concentrated emissions generated by industrial processes, particularly those that have no obvious renewable alternative.

    “We can and should switch to renewables for electricity production. But deeply decarbonizing industries like cement or steel production is challenging and will take a longer time,” says study author Betar Gallant, the Class of 1922 Career Development Associate Professor at MIT. “Even if we get rid of all our power plants, we need some solutions to deal with the emissions from other industries in the shorter term, before we can fully decarbonize them. That’s where we see a sweet spot, where something like this system could fit.”

    The study’s MIT co-authors are lead author and postdoc Graham Leverick and graduate student Elizabeth Bernhardt, along with Aisyah Illyani Ismail, Jun Hui Law, Arif Arifutzzaman, and Mohamed Kheireddine Aroua of Sunway University in Malaysia.

    Breaking bonds

    Carbon-capture technologies are designed to capture emissions, or “flue gas,” from the smokestacks of power plants and manufacturing facilities. This is done primarily using large retrofits to funnel emissions into chambers filled with a “capture” solution — a mix of amines, or ammonia-based compounds, that chemically bind with carbon dioxide, producing a stable form that can be separated out from the rest of the flue gas.

    High temperatures are then applied, typically in the form of fossil-fuel-generated steam, to release the captured carbon dioxide from its amine bond. In its pure form, the gas can then be pumped into storage tanks or underground, mineralized, or further converted into chemicals or fuels.

    “Carbon capture is a mature technology, in that the chemistry has been known for about 100 years, but it requires really large installations, and is quite expensive and energy-intensive to run,” Gallant notes. “What we want are technologies that are more modular and flexible and can be adapted to more diverse sources of carbon dioxide. Electrochemical systems can help to address that.”

    Her group at MIT is developing an electrochemical system that both recovers the captured carbon dioxide and converts it into a reduced, usable product. Such an integrated system, rather than a decoupled one, she says, could be entirely powered with renewable electricity rather than fossil-fuel-derived steam.

    Their concept centers on an electrode that would fit into existing chambers of carbon-capture solutions. When a voltage is applied to the electrode, electrons flow onto the reactive form of carbon dioxide and convert it to a product using protons supplied from water. This makes the sorbent available to bind more carbon dioxide, rather than using steam to do the same.

    Gallant previously demonstrated this electrochemical process could work to capture and convert carbon dioxide into a solid carbonate form.

    “We showed that this electrochemical process was feasible in very early concepts,” she says. “Since then, there have been other studies focused on using this process to attempt to produce useful chemicals and fuels. But there’s been inconsistent explanations of how these reactions work, under the hood.”

    Solo CO2

    In the new study, the MIT team took a magnifying glass under the hood to tease out the specific reactions driving the electrochemical process. In the lab, they generated amine solutions that resemble the industrial capture solutions used to extract carbon dioxide from flue gas. They methodically altered various properties of each solution, such as the pH, concentration, and type of amine, then ran each solution past an electrode made from silver — a metal that is widely used in electrolysis studies and known to efficiently convert carbon dioxide to carbon monoxide. They then measured the concentration of carbon monoxide that was converted at the end of the reaction, and compared this number against that of every other solution they tested, to see which parameter had the most influence on how much carbon monoxide was produced.
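
    For reference, the conversion that silver is known to catalyze in studies like this is the two-electron reduction of carbon dioxide to carbon monoxide, with the protons supplied by water as described earlier; this is a textbook half-reaction rather than a result of the paper:

      CO2 + 2H+ + 2e-  ->  CO + H2O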

    In the end, they found that what mattered most was not the type of amine used to initially capture carbon dioxide, as many have suspected. Instead, it was the concentration of solo, free-floating carbon dioxide molecules, which avoided bonding with amines but were nevertheless present in the solution. This “solo-CO2” determined the concentration of carbon monoxide that was ultimately produced.
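
    One way to see why this free CO2 tracks the gas feed is Henry’s law: at equilibrium, the concentration of dissolved but unreacted CO2 is roughly proportional to the CO2 partial pressure above the solution. As an illustrative number (for plain water at room temperature, not the amine solutions studied here), a Henry’s constant of about 0.034 mol/(L·atm) means 1 atm of CO2 sustains roughly 0.034 mol/L of free CO2, while a dilute stream at 0.1 atm sustains only about a tenth of that for the electrode to react with.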

    “We found that it’s easier to react this ‘solo’ CO2, as compared to CO2 that has been captured by the amine,” Leverick offers. “This tells future researchers that this process could be feasible for industrial streams, where high concentrations of carbon dioxide could efficiently be captured and converted into useful chemicals and fuels.”

    “This is not a removal technology, and it’s important to state that,” Gallant stresses. “The value that it does bring is that it allows us to recycle carbon dioxide some number of times while sustaining existing industrial processes, for fewer associated emissions. Ultimately, my dream is that electrochemical systems can be used to facilitate mineralization, and permanent storage of CO2 — a true removal technology. That’s a longer-term vision. And a lot of the science we’re starting to understand is a first step toward designing those processes.”

    This research is supported by Sunway University in Malaysia.

  • Device offers long-distance, low-power underwater communication

    MIT researchers have demonstrated the first system for ultra-low-power underwater networking and communication, which can transmit signals across kilometer-scale distances.

    This technique, which the researchers began developing several years ago, uses about one-millionth the power that existing underwater communication methods use. By expanding their battery-free system’s communication range, the researchers have made the technology more feasible for applications such as aquaculture, coastal hurricane prediction, and climate change modeling.

    “What started as a very exciting intellectual idea a few years ago — underwater communication with a million times lower power — is now practical and realistic. There are still a few interesting technical challenges to address, but there is a clear path from where we are now to deployment,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the MIT Media Lab.

    Underwater backscatter enables low-power communication by encoding data in sound waves that a device reflects, or scatters, back toward a receiver. The team’s new innovations enable these reflected signals to be more precisely directed at their source.

    Due to this “retrodirectivity,” less signal scatters in the wrong directions, allowing for more efficient and longer-range communication.

    When tested in a river and an ocean, the retrodirective device exhibited a communication range that was more than 15 times farther than previous devices. However, the experiments were limited by the length of the docks available to the researchers.

    To better understand the limits of underwater backscatter, the team also developed an analytical model to predict the technology’s maximum range. The model, which they validated using experimental data, showed that their retrodirective system could communicate across kilometer-scale distances.

    The researchers shared these findings in two papers which will be presented at this year’s ACM SIGCOMM and MobiCom conferences. Adib, senior author on both papers, is joined on the SIGCOMM paper by co-lead authors Aline Eid, a former postdoc who is now an assistant professor at the University of Michigan, and Jack Rademacher, a research assistant; as well as research assistants Waleed Akbar and Purui Wang, and postdoc Ahmed Allam. The MobiCom paper is also written by co-lead authors Akbar and Allam.

    Communicating with sound waves

    Underwater backscatter communication devices utilize an array of nodes made from “piezoelectric” materials to receive and reflect sound waves. These materials produce an electric signal when mechanical force is applied to them.

    When sound waves strike the nodes, they vibrate and convert the mechanical energy to an electric charge. The nodes use that charge to scatter some of the acoustic energy back to the source, transmitting data that a receiver decodes based on the sequence of reflections.

    But because the backscattered signal travels in all directions, only a small fraction reaches the source, reducing the signal strength and limiting the communication range.

    To overcome this challenge, the researchers leveraged a 70-year-old radio device called a Van Atta array, in which symmetric pairs of antennas are connected in such a way that the array reflects energy back in the direction it came from.

    But connecting piezoelectric nodes to make a Van Atta array reduces their efficiency. The researchers avoided this problem by placing a transformer between pairs of connected nodes. The transformer, which transfers electric energy from one circuit to another, allows the nodes to reflect the maximum amount of energy back to the source.

    “Both nodes are receiving and both nodes are reflecting, so it is a very interesting system. As you increase the number of elements in that system, you build an array that allows you to achieve much longer communication ranges,” Eid explains.

    In addition, they used a technique called cross-polarity switching to encode binary data in the reflected signal. Each node has a positive and a negative terminal (like a car battery), so when the positive terminals of two nodes are connected and the negative terminals of two nodes are connected, that reflected signal is a “bit one.”

    But if the researchers switch the polarity, and the negative and positive terminals are connected to each other instead, then the reflection is a “bit zero.”

    “Just connecting the piezoelectric nodes together is not enough. By alternating the polarities between the two nodes, we are able to transmit data back to the remote receiver,” Rademacher explains.
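
    A minimal sketch of the bit-to-polarity mapping described above, with the drive electronics, timing, and acoustics abstracted away; this is only a conceptual illustration, not the team’s implementation.

      # Cross-polarity switching, conceptually: each bit selects how the two
      # piezoelectric nodes' terminals are connected for that symbol interval.

      def polarity_schedule(bits):
          """Map a bit string to per-symbol connections between nodes A and B."""
          schedule = []
          for b in bits:
              if b == "1":
                  schedule.append(("A+ to B+", "A- to B-"))   # in-phase reflection
              else:
                  schedule.append(("A+ to B-", "A- to B+"))   # inverted reflection
          return schedule

      def decode(reflected_phases):
          """Receiver side: an in-phase echo reads as 1, an inverted echo as 0."""
          return "".join("1" if p > 0 else "0" for p in reflected_phases)

      print(polarity_schedule("1011"))
      print(decode([+1, -1, +1, +1]))   # -> "1011"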

    When building the Van Atta array, the researchers found that if the connected nodes were too close, they would block each other’s signals. They devised a new design with staggered nodes that enables signals to reach the array from any direction. With this scalable design, the more nodes an array has, the greater its communication range.

    They tested the array in more than 1,500 experimental trials in the Charles River in Cambridge, Massachusetts, and in the Atlantic Ocean, off the coast of Falmouth, Massachusetts, in collaboration with the Woods Hole Oceanographic Institution. The device achieved communication ranges of 300 meters, more than 15 times longer than they previously demonstrated.

    However, they had to cut the experiments short because they ran out of space on the dock.

    Modeling the maximum

    That inspired the researchers to build an analytical model to determine the theoretical and practical communication limits of this new underwater backscatter technology.

    Building off their group’s work on RFIDs, the team carefully crafted a model that captured the impact of system parameters, like the size of the piezoelectric nodes and the input power of the signal, on the underwater operation range of the device.

    “It is not a traditional communication technology, so you need to understand how you can quantify the reflection. What are the roles of the different components in that process?” Akbar says.

    For instance, the researchers needed to derive a function that captures the amount of signal reflected out of an underwater piezoelectric node with a specific size, which was among the biggest challenges of developing the model, he adds.

    They used these insights to create a plug-and-play model into which a user could enter information like input power and piezoelectric node dimensions and receive an output that shows the expected range of the system.
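
    The published model itself is not reproduced in this article; the sketch below only illustrates the plug-and-play shape of such a tool, using a generic sonar-style link budget with placeholder constants in place of the team’s derived functions.

      # Illustrative range estimator: increase distance until the backscattered
      # echo falls below an assumed receiver floor. The loss and gain terms are
      # generic placeholders, not the published model.
      import math

      def echo_level_db(range_m, source_level_db, node_gain_db, absorption_db_per_m=0.02):
          spreading = 20 * math.log10(max(range_m, 1.0))   # one-way spreading loss
          one_way_loss = spreading + absorption_db_per_m * range_m
          return source_level_db - 2 * one_way_loss + node_gain_db

      def max_range_m(input_power_db, node_size_cm, receiver_floor_db=-150):
          node_gain_db = 10 * math.log10(node_size_cm)     # placeholder size scaling
          r = 1.0
          while echo_level_db(r, input_power_db, node_gain_db) > receiver_floor_db:
              r += 1.0
          return r

      print(max_range_m(input_power_db=180, node_size_cm=5))   # on these assumptions, a few km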

    They evaluated the model on data from their experimental trials and found that it could accurately predict the range of retrodirected acoustic signals with an average error of less than one decibel.

    Using this model, they showed that an underwater backscatter array can potentially achieve kilometer-long communication ranges.

    “We are creating a new ocean technology and propelling it into the realm of the things we have been doing for 6G cellular networks. For us, it is very rewarding because we are starting to see this now very close to reality,” Adib says.

    The researchers plan to continue studying underwater backscatter Van Atta arrays, perhaps using boats so they could evaluate longer communication ranges. Along the way, they intend to release tools and datasets so other researchers can build on their work. At the same time, they are beginning to move toward commercialization of this technology.

    “Limited range has been an open problem in underwater backscatter networks, preventing them from being used in real-world applications. This paper takes a significant step forward in the future of underwater communication, by enabling them to operate on minimum energy while achieving long range,” says Omid Abari, assistant professor of computer science at the University of California at Los Angeles, who was not involved with this work. “The paper is the first to bring Van Atta Reflector array technique into underwater backscatter settings and demonstrate its benefits in improving the communication range by orders of magnitude. This can take battery-free underwater communication one step closer to reality, enabling applications such as underwater climate change monitoring and coastal monitoring.”

    This research was funded, in part, by the Office of Naval Research, the Sloan Research Fellowship, the National Science Foundation, the MIT Media Lab, and the Doherty Chair in Ocean Utilization.

  • Fast-tracking fusion energy’s arrival with AI and accessibility

    As the impacts of climate change continue to grow, so does interest in fusion’s potential as a clean energy source. While fusion reactions have been studied in laboratories since the 1930s, there are still many critical questions scientists must answer to make fusion power a reality, and time is of the essence. As part of its strategy to accelerate fusion energy’s arrival and reach carbon neutrality by 2050, the U.S. Department of Energy (DoE) has announced new funding for a project led by researchers at MIT’s Plasma Science and Fusion Center (PSFC) and four collaborating institutions.

    Cristina Rea, a research scientist and group leader at the PSFC, will serve as the primary investigator for the newly funded three-year collaboration to pilot the integration of fusion data into a system that can be read by AI-powered tools. The PSFC, together with scientists from the College of William and Mary, the University of Wisconsin at Madison, Auburn University, and the nonprofit HDF Group, plan to create a holistic fusion data platform, the elements of which could offer unprecedented access for researchers, especially underrepresented students. The project aims to encourage diverse participation in fusion and data science, both in academia and the workforce, through outreach programs led by the group’s co-investigators, of whom four out of five are women. 

    The DoE’s award, part of a $29 million funding package for seven projects across 19 institutions, will support the group’s efforts to distribute data produced by fusion devices like the PSFC’s Alcator C-Mod, a donut-shaped “tokamak” that utilized powerful magnets to control and confine fusion reactions. Alcator C-Mod operated from 1991 to 2016 and its data are still being studied, thanks in part to the PSFC’s commitment to the free exchange of knowledge.

    Currently, there are nearly 50 public experimental magnetic confinement-type fusion devices; however, both historical and current data from these devices can be difficult to access. Some fusion databases require signing user agreements, and not all data are catalogued and organized the same way. Moreover, it can be difficult to leverage machine learning, a class of AI tools, for data analysis and to enable scientific discovery without time-consuming data reorganization. The result is fewer scientists working on fusion, greater barriers to discovery, and a bottleneck in harnessing AI to accelerate progress.

    The project’s proposed data platform addresses technical barriers by being FAIR — Findable, Accessible, Interoperable, Reusable — and by adhering to UNESCO’s Open Science (OS) recommendations to improve the transparency and inclusivity of science; all of the researchers’ deliverables will adhere to FAIR and OS principles, as required by the DoE. The platform’s databases will be built using MDSplusML, an upgraded version of the MDSplus open-source software developed by PSFC researchers in the 1980s to catalogue the results of Alcator C-Mod’s experiments. Today, nearly 40 fusion research institutes use MDSplus to store and provide external access to their fusion data. The release of MDSplusML aims to continue that legacy of open collaboration.
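
    For readers unfamiliar with MDSplus, remote access from Python looks roughly like the sketch below. The server address, tree name, shot number, and signal path are placeholders; the MDSplusML interfaces themselves are not described in this article.

      # Fetching a signal from an MDSplus server with the MDSplus Python package.
      # Host, tree, shot, and node path below are placeholders for illustration.
      from MDSplus import Connection

      conn = Connection("mdsplus.example.edu")    # placeholder server
      conn.openTree("cmod", 1160930033)           # placeholder tree name and shot number
      current = conn.get(r"\ip")                  # placeholder node: plasma current
      time = conn.get(r"dim_of(\ip)")             # its time base

      print(current.data()[:5], time.data()[:5])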

    The researchers intend to address barriers to participation for women and disadvantaged groups not only by improving general access to fusion data, but also through a subsidized summer school that will focus on topics at the intersection of fusion and machine learning, which will be held at William and Mary for the next three years.

    Of the importance of their research, Rea says, “This project is about responding to the fusion community’s needs and setting ourselves up for success. Scientific advancements in fusion are enabled via multidisciplinary collaboration and cross-pollination, so accessibility is absolutely essential. I think we all understand now that diverse communities have more diverse ideas, and they allow faster problem-solving.”

    The collaboration’s work also aligns with vital areas of research identified in the International Atomic Energy Agency’s “AI for Fusion” Coordinated Research Project (CRP). Rea was selected as the technical coordinator for the IAEA’s CRP emphasizing community engagement and knowledge access to accelerate fusion research and development. In a letter of support written for the group’s proposed project, the IAEA stated that, “the work [the researchers] will carry out […] will be beneficial not only to our CRP but also to the international fusion community in large.”

    PSFC Director and Hitachi America Professor of Engineering Dennis Whyte adds, “I am thrilled to see PSFC and our collaborators be at the forefront of applying new AI tools while simultaneously encouraging and enabling extraction of critical data from our experiments.”

    “Having the opportunity to lead such an important project is extremely meaningful, and I feel a responsibility to show that women are leaders in STEM,” says Rea. “We have an incredible team, strongly motivated to improve our fusion ecosystem and to contribute to making fusion energy a reality.”

  • New clean air and water labs to bring together researchers, policymakers to find climate solutions

    MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) is launching the Clean Air and Water Labs, with support from Community Jameel, to generate evidence-based solutions aimed at increasing access to clean air and water.

    Led by J-PAL’s Africa, Middle East and North Africa (MENA), and South Asia regional offices, the labs will partner with government agencies to bring together researchers and policymakers in areas where impactful clean air and water solutions are most urgently needed.

    Together, the labs aim to improve clean air and water access by informing the scaling of evidence-based policies and decisions of city, state, and national governments that serve nearly 260 million people combined.

    The Clean Air and Water Labs expand the work of J-PAL’s King Climate Action Initiative, building on the foundational support of King Philanthropies, which significantly expanded J-PAL’s work at the nexus of climate change and poverty alleviation worldwide. 

    Air pollution, water scarcity and the need for evidence 

    Africa, MENA, and South Asia are on the front lines of global air and water crises. 

    “There is no time to waste investing in solutions that do not achieve their desired effects,” says Iqbal Dhaliwal, global executive director of J-PAL. “By co-generating rigorous real-world evidence with researchers, policymakers can have the information they need to dedicate resources to scaling up solutions that have been shown to be effective.”

    In India, about 75 percent of households did not have drinking water on premises in 2018. In MENA, nearly 90 percent of children live in areas facing high or extreme water stress. Across Africa, almost 400 million people lack access to safe drinking water. 

    Simultaneously, air pollution is one of the greatest threats to human health globally. In India, extraordinary levels of air pollution are shortening the average life expectancy by five years. In Africa, rising indoor and ambient air pollution contributed to 1.1 million premature deaths in 2019. 

    There is increasing urgency to find high-impact and cost-effective solutions to the worsening threats to human health and resources caused by climate change. However, data and evidence on potential solutions are limited.

    Fostering collaboration to generate policy-relevant evidence 

    The Clean Air and Water Labs will foster deep collaboration between government stakeholders, J-PAL regional offices, and researchers in the J-PAL network. 

    Through the labs, J-PAL will work with policymakers to:

    co-diagnose the most pressing air and water challenges and opportunities for policy innovation;
    expand policymakers’ access to and use of high-quality air and water data;
    co-design potential solutions informed by existing evidence;
    co-generate evidence on promising solutions through rigorous evaluation, leveraging existing and new data sources; and
    support scaling of air and water policies and programs that are found to be effective through evaluation. 

    A research and scaling fund for each lab will prioritize resources for co-generated pilot studies, randomized evaluations, and scaling projects.

    The labs will also collaborate with C40 Cities, a global network of mayors of the world’s leading cities that are united in action to confront the climate crisis, to share policy-relevant evidence and identify opportunities for potential new connections and research opportunities within India and across Africa.

    This model aims to strengthen the use of evidence in decision-making to ensure solutions are highly effective and to guide research to answer policymakers’ most urgent questions. J-PAL Africa, MENA, and South Asia’s strong on-the-ground presence will further bridge research and policy work by anchoring activities within local contexts. 

    “Communities across the world continue to face challenges in accessing clean air and water, a threat to human safety that has only been exacerbated by the climate crisis, along with rising temperatures and other hazards,” says George Richards, director of Community Jameel. “Through our collaboration with J-PAL and C40 in creating climate policy labs embedded in city, state, and national governments in Africa and South Asia, we are committed to innovative and science-based approaches that can help hundreds of millions of people enjoy healthier lives.”

    J-PAL Africa, MENA, and South Asia will formally launch Clean Air and Water Labs with government partners over the coming months. J-PAL is housed in the MIT Department of Economics, within the School of Humanities, Arts, and Social Sciences.

  • Tiny magnetic beads produce an optical signal that could be used to quickly detect pathogens

    Getting results from a blood test can take anywhere from one day to a week, depending on what a test is targeting. The same goes for tests of water pollution and food contamination. And in most cases, the wait time has to do with time-consuming steps in sample processing and analysis.

    Now, MIT engineers have identified a new optical signature in a widely used class of magnetic beads, which could be used to quickly detect contaminants in a variety of diagnostic tests. For example, the team showed the signature could be used to detect signs of the food contaminant Salmonella.

    The so-called Dynabeads are microscopic magnetic beads that can be coated with antibodies that bind to target molecules, such as a specific pathogen. Dynabeads are typically used in experiments in which they are mixed into solutions to capture molecules of interest. But from there, scientists have to take additional, time-consuming steps to confirm that the molecules are indeed present and bound to the beads.

    The MIT team found a faster way to confirm the presence of Dynabead-bound pathogens, using optics, specifically, Raman spectroscopy. This optical technique identifies specific molecules based on their “Raman signature,” or the unique way in which a molecule scatters light.

    The researchers found that Dynabeads have an unusually strong Raman signature that can be easily detected, much like a fluorescent tag. This signature, they found, can act as a “reporter.” If detected, the signal can serve as a quick confirmation, within less than one second, that a target pathogen is indeed present in a given sample. The team is currently working to develop a portable device for quickly detecting a range of bacterial pathogens, and their results will appear in an Emerging Investigators special issue of the Journal of Raman Spectroscopy.
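
    A minimal sketch of using a stored reference signature as that kind of reporter, scoring a measured spectrum against it by normalized correlation. The spectra below are synthetic stand-ins; the team’s instrument, reference library, and decision threshold are not given in the article.

      # Toy presence/absence call: correlate a measured Raman spectrum against a
      # stored Dynabead reference signature. All spectra are synthetic placeholders.
      import numpy as np

      rng = np.random.default_rng(1)
      shifts = np.linspace(400, 1800, 300)               # Raman shift axis (cm^-1)

      def peak(center, width=15.0):
          return np.exp(-((shifts - center) / width) ** 2)

      reference = peak(1000) + 0.8 * peak(1450)          # stand-in Dynabead signature

      def beads_present(spectrum, threshold=0.7):
          """Normalized correlation with the reference; above threshold = positive."""
          a = (spectrum - spectrum.mean()) / spectrum.std()
          b = (reference - reference.mean()) / reference.std()
          return float(np.dot(a, b) / len(a)) > threshold

      with_beads = reference * 1.2 + 0.1 * rng.standard_normal(300)
      without_beads = peak(1250) + 0.1 * rng.standard_normal(300)

      print(beads_present(with_beads), beads_present(without_beads))   # True False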

    “This technique would be useful in a situation where a doctor is trying to narrow down the source of an infection in order to better inform antibiotic prescription, as well as for the detection of known pathogens in food and water,” says study co-author Marissa McDonald, a graduate student in the Harvard-MIT Program in Health Sciences and Technology. “Additionally, we hope this approach will eventually lead to expanded access to advanced diagnostics in resource-limited environments.”

    Study co-authors at MIT include Postdoctoral Associate Jongwan Lee; Visiting Scholar Nikiwe Mhlanga; Research Scientist Jeon Woong Kang; Tata Professor Rohit Karnik, who is also the associate director of the Abdul Latif Jameel Water and Food Systems Lab; and Assistant Professor Loza Tadesse of the Department of Mechanical Engineering.

    Oil and water

    Looking for diseased cells and pathogens in fluid samples is an exercise in patience.

    “It’s kind of a needle-in-a-haystack problem,” Tadesse says.

    The cells of interest are present in such small numbers that they must be grown in controlled environments until they reach sufficient numbers, their cultures stained, and then studied under a microscope. The entire process can take several days to a week to yield a confident positive or negative result.

    Both Karnik and Tadesse’s labs have independently been developing techniques to speed up various parts of the pathogen testing process and make the process portable, using Dynabeads.

    Dynabeads are commercially available microscopic beads made from a magnetic iron core and a polymer shell that can be coated with antibodies. The surface antibodies act as hooks to bind specific target molecules. When mixed with a fluid, such as a vial of blood or water, any target molecules present will glom onto the Dynabeads. Using a magnet, scientists can gently coax the beads to the bottom of a vial and filter them out of a solution. Karnik’s lab is investigating ways to then further separate the beads into those that are bound to a target molecule, and those that are not. “Still, the challenge is, how do we know that we have what we’re looking for?” Tadesse says.

    The beads themselves are not visible by eye. That’s where Tadesse’s work comes in. Her lab uses Raman spectroscopy as a way to “fingerprint” pathogens. She has found that different cell types scatter light in unique ways that can be used as a signature to identify them.

    In the team’s new work, she and her colleagues found that Dynabeads also have a unique and strong Raman signature that can act as a surprisingly clear beacon.

    “We were initially seeking to identify the signatures of bacteria, but the signature of the Dynabeads was actually very strong,” Tadesse says. “We realized this signal could be a means of reporting to you whether you have that bacteria or not.”

    Testing beacon

    As a practical demonstration, the researchers mixed Dynabeads into vials of water contaminated with Salmonella. They then magnetically isolated these beads onto microscope slides and measured the way light scattered through the fluid when exposed to laser light. Within half a second, they quickly detected the Dynabeads’ Raman signature — a confirmation that bound Dynabeads, and by inference, Salmonella, were present in the fluid.

    “This is something that can be used to rapidly give a positive or negative answer: Is there a contaminant or not?” Tadesse says. “Because even a handful of pathogens can cause clinical symptoms.”

    The team’s new technique is significantly faster than conventional methods and uses elements that could be adapted into smaller, more portable forms — a goal that the researchers are currently working toward. The approach is also highly versatile.

    “Salmonella is the proof of concept,” Tadesse says. “You could purchase Dynabeads with E. coli antibodies, and the same thing would happen: It would bind to the bacteria, and we’d be able to detect the Dynabead signature because the signal is super strong.”

    The team is particularly keen to apply the test to conditions such as sepsis, where time is of the essence, and where pathogens that trigger the condition are not rapidly detected using conventional lab tests.

    “There are a lot of cases, like in sepsis, where pathogenic cells cannot always be grown on a plate,” says Lee, a member of Karnik’s lab. “In that case, our technique could rapidly detect these pathogens.”

    This research was supported, in part, by the MIT Laser Biomedical Research Center, the National Cancer Institute, and the Abdul Latif Jameel Water and Food Systems Lab at MIT.