More stories

  • Imaging technique removes the effect of water in underwater scenes

    The ocean is teeming with life. But unless you get up close, much of the marine world can easily remain unseen. That’s because water itself can act as an effective cloak: Light that shines through the ocean can bend, scatter, and quickly fade as it travels through the dense medium of water and reflects off the persistent haze of ocean particles. This makes it extremely challenging to capture the true color of objects in the ocean without imaging them at close range.

    Now a team from MIT and the Woods Hole Oceanographic Institution (WHOI) has developed an image-analysis tool that cuts through the ocean’s optical effects and generates images of underwater environments that look as if the water had been drained away, revealing an ocean scene’s true colors. The team paired the color-correcting tool with a computational model that converts images of a scene into a three-dimensional underwater “world” that can then be explored virtually.

    The researchers have dubbed the new tool “SeaSplat,” in reference to both its underwater application and a method known as 3D Gaussian splatting (3DGS), which takes images of a scene and stitches them together to generate a complete, three-dimensional representation that can be viewed in detail, from any perspective.

    “With SeaSplat, it can model explicitly what the water is doing, and as a result it can in some ways remove the water, and produces better 3D models of an underwater scene,” says MIT graduate student Daniel Yang.

    The researchers applied SeaSplat to images of the sea floor taken by divers and underwater vehicles, in various locations including the U.S. Virgin Islands. The method generated 3D “worlds” from the images that were truer and more vivid and varied in color, compared to previous methods.

    The team says SeaSplat could help marine biologists monitor the health of certain ocean communities. For instance, as an underwater robot explores and takes pictures of a coral reef, SeaSplat would simultaneously process the images and render a true-color, 3D representation that scientists could then virtually “fly” through, at their own pace and path, to inspect the underwater scene, for instance for signs of coral bleaching.

    “Bleaching looks white from close up, but could appear blue and hazy from far away, and you might not be able to detect it,” says Yogesh Girdhar, an associate scientist at WHOI. “Coral bleaching, and different coral species, could be easier to detect with SeaSplat imagery, to get the true colors in the ocean.”

    Girdhar and Yang will present a paper detailing SeaSplat at the IEEE International Conference on Robotics and Automation (ICRA). Their study co-author is John Leonard, professor of mechanical engineering at MIT.

    Aquatic optics

    In the ocean, the color and clarity of objects is distorted by the effects of light traveling through water. In recent years, researchers have developed color-correcting tools that aim to reproduce the true colors in the ocean. These efforts involved adapting tools that were developed originally for environments out of water, for instance to reveal the true color of features in foggy conditions.
    One recent work accurately reproduces true colors in the ocean, with an algorithm named “Sea-Thru,” though this method requires a huge amount of computational power, which makes its use in producing 3D scene models challenging.

    In parallel, others have made advances in 3D Gaussian splatting, with tools that seamlessly stitch images of a scene together, and intelligently fill in any gaps to create a whole, 3D version of the scene. These 3D worlds enable “novel view synthesis,” meaning that someone can view the generated 3D scene, not just from the perspective of the original images, but from any angle and distance.

    But 3DGS has only successfully been applied to environments out of water. Efforts to adapt 3D reconstruction to underwater imagery have been hampered, mainly by two optical underwater effects: backscatter and attenuation. Backscatter occurs when light reflects off of tiny particles in the ocean, creating a veil-like haze. Attenuation is the phenomenon by which light of certain wavelengths fades with distance. In the ocean, for instance, red objects appear to fade more than blue objects when viewed from farther away.

    Out of water, the color of objects appears more or less the same regardless of the angle or distance from which they are viewed. In water, however, color can quickly change and fade depending on one’s perspective. When 3DGS methods attempt to stitch underwater images into a cohesive 3D whole, they are unable to resolve objects due to aquatic backscatter and attenuation effects that distort the color of objects at different angles.

    “One dream of underwater robotic vision that we have is: Imagine if you could remove all the water in the ocean. What would you see?” Leonard says.

    A model swim

    In their new work, Yang and his colleagues developed a color-correcting algorithm that accounts for the optical effects of backscatter and attenuation. The algorithm determines the degree to which every pixel in an image must have been distorted by backscatter and attenuation effects, and then essentially takes away those aquatic effects and computes what the pixel’s true color must be.

    Yang then worked the color-correcting algorithm into a 3D Gaussian splatting model to create SeaSplat, which can quickly analyze underwater images of a scene and generate a true-color, 3D virtual version of the same scene that can be explored in detail from any angle and distance.

    The team applied SeaSplat to multiple underwater scenes, including images taken in the Red Sea, in the Caribbean off the coast of Curaçao, and in the Pacific Ocean near Panama. These images, which the team took from a pre-existing dataset, represent a range of ocean locations and water conditions. They also tested SeaSplat on images taken by a remote-controlled underwater robot in the U.S. Virgin Islands.

    From the images of each ocean scene, SeaSplat generated a true-color 3D world that the researchers were able to virtually explore, for instance zooming in and out of a scene and viewing certain features from different perspectives.
    Even when viewing from different angles and distances, they found objects in every scene retained their true color, rather than fading as they would if viewed through the actual ocean.

    “Once it generates a 3D model, a scientist can just ‘swim’ through the model as though they are scuba-diving, and look at things in high detail, with real color,” Yang says.

    For now, the method requires hefty computing resources in the form of a desktop computer that would be too bulky to carry aboard an underwater robot. Still, SeaSplat could work for tethered operations, where a vehicle, tied to a ship, can explore and take images that can be sent up to a ship’s computer.

    “This is the first approach that can very quickly build high-quality 3D models with accurate colors, underwater, and it can create them and render them fast,” Girdhar says. “That will help to quantify biodiversity, and assess the health of coral reef and other marine communities.”

    This work was supported, in part, by the Investment in Science Fund at WHOI, and by the U.S. National Science Foundation.
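    The article does not spell out SeaSplat’s equations, but the backscatter-and-attenuation picture it describes corresponds to a standard, simplified underwater image-formation model. The sketch below inverts that textbook model per pixel; the function name, the coefficient values, and the assumption that per-pixel range is known are illustrative choices, not details of SeaSplat itself, which folds the water effects into the 3D Gaussian splatting reconstruction rather than applying a hand-tuned per-image correction.

```python
import numpy as np

def restore_color(image, depth, beta_attn, backscatter_inf, beta_back):
    """Invert a simplified underwater image-formation model, per pixel.

    image           : (H, W, 3) observed RGB, values in [0, 1]
    depth           : (H, W) camera-to-scene range, in meters
    beta_attn       : (3,) per-channel attenuation coefficients [1/m]
    backscatter_inf : (3,) veiling-light color at infinite range
    beta_back       : (3,) per-channel backscatter coefficients [1/m]
    """
    z = depth[..., None]                                  # (H, W, 1)
    backscatter = backscatter_inf * (1.0 - np.exp(-beta_back * z))
    direct = np.clip(image - backscatter, 0.0, None)      # remove the haze veil
    restored = direct * np.exp(beta_attn * z)             # undo range-dependent fading
    return np.clip(restored, 0.0, 1.0)

# Illustrative values only: red light fades fastest underwater.
rgb = np.random.rand(480, 640, 3)
z = np.full((480, 640), 3.0)                              # pretend the whole scene is 3 m away
out = restore_color(rgb, z,
                    beta_attn=np.array([0.6, 0.25, 0.15]),
                    backscatter_inf=np.array([0.1, 0.25, 0.35]),
                    beta_back=np.array([0.5, 0.4, 0.3]))
```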

  • High-speed videos show what happens when a droplet splashes into a pool

    Rain can freefall at speeds of up to 25 miles per hour. If the droplets land in a puddle or pond, they can form a crown-like splash that, with enough force, can dislodge any surface particles and launch them into the air.

    Now MIT scientists have taken high-speed videos of droplets splashing into a deep pool, to track how the fluid evolves, above and below the water line, frame by millisecond frame. Their work could help to predict how splashing droplets, such as from rainstorms and irrigation systems, may impact watery surfaces and aerosolize surface particles, such as pollen on puddles or pesticides in agricultural runoff.

    The team carried out experiments in which they dispensed water droplets of various sizes and from various heights into a pool of water. Using high-speed imaging, they measured how the liquid pool deformed as the impacting droplet hit the pool’s surface.

    Across all their experiments, they observed a common splash evolution: As a droplet hit the pool, it pushed down below the surface to form a “crater,” or cavity. At nearly the same time, a wall of liquid rose above the surface, forming a crown. Interestingly, the team observed that small, secondary droplets were ejected from the crown before the crown reached its maximum height. This entire evolution happens in a fraction of a second.

    “This cylinder-like wall of rising liquid, and how it evolves in time and space, is at the heart of everything,” Lydia Bourouiba says. GIF has been edited down to 5 frames per second.

    Image: Courtesy of the researchers; edited by MIT News


    Scientists have caught snapshots of droplet splashes in the past, such as the famous “Milk Drop Coronet” — a photo of a drop of milk in mid-splash, taken by the late MIT professor Harold “Doc” Edgerton, who invented a photographic technique to capture quickly moving objects. The new work represents the first time scientists have used such high-speed images to model the entire splash dynamics of a droplet in a deep pool, combining what happens both above and below the surface. The team has used the imaging to gather new data central to building a mathematical model that predicts how a droplet’s shape will morph and merge as it hits a pool’s surface. They plan to use the model as a baseline to explore to what extent a splashing droplet might drag up and launch particles from the water pool.

    “Impacts of drops on liquid layers are ubiquitous,” says study author Lydia Bourouiba, a professor in the MIT departments of Civil and Environmental Engineering and Mechanical Engineering, and a core member of the Institute for Medical Engineering and Science (IMES). “Such impacts can produce myriads of secondary droplets that could act as carriers for pathogens, particles, or microbes that are on the surface of impacted pools or contaminated water bodies. This work is key in enabling prediction of droplet size distributions, and potentially also what such drops can carry with them.”

    Bourouiba and her mentees have published their results in the Journal of Fluid Mechanics. MIT co-authors include former graduate student Raj Dandekar PhD ’22, postdoc (Eric) Naijian Shen, and student mentee Boris Naar.

    Above and below

    At MIT, Bourouiba heads up the Fluid Dynamics of Disease Transmission Laboratory, part of the Fluids and Health Network, where she and her team explore the fundamental physics of fluids and droplets in a range of environmental, energy, and health contexts, including disease transmission. For their new study, the team looked to better understand how droplets impact a deep pool — a seemingly simple phenomenon that nevertheless has been tricky to precisely capture and characterize.

    Bourouiba notes that there have been recent breakthroughs in modeling the evolution of a splashing droplet below a pool’s surface. As a droplet hits a pool of water, it breaks through the surface and drags air down through the pool to create a short-lived crater. Until now, scientists have focused on the evolution of this underwater cavity, mainly for applications in energy harvesting. What happens above the water, and how a droplet’s crown-like shape evolves with the cavity below, remained less understood.

    “The descriptions and understanding of what happens below the surface, and above, have remained very much divorced,” says Bourouiba, who believes such an understanding can help to predict how droplets launch and spread chemicals, particles, and microbes into the air.

    Splash in 3D

    To study the coupled dynamics between a droplet’s cavity and crown, the team set up an experiment to dispense water droplets into a deep pool. For the purposes of their study, the researchers considered a deep pool to be a body of water that is deep enough that a splashing droplet would remain far away from the pool’s bottom. In these terms, they found that a pool with a depth of at least 20 centimeters was sufficient for their experiments.

    They varied each droplet’s size, with an average diameter of about 5 millimeters.
    They also dispensed droplets from various heights, causing the droplets to hit the pool’s surface at different speeds, which on average was about 5 meters per second. The overall dynamics, Bourouiba says, should be similar to what occurs on the surface of a puddle or pond during an average rainstorm.

    “This is capturing the speed at which raindrops fall,” she says. “These wouldn’t be very small, misty drops. This would be rainstorm drops for which one needs an umbrella.”

    Using high-speed imaging techniques inspired by Edgerton’s pioneering photography, the team captured videos of pool-splashing droplets, at rates of up to 12,500 frames per second. They then applied in-house image-processing methods to extract key measurements from the image sequences, such as the changing width and depth of the underwater cavity, and the evolving diameter and height of the rising crown. The researchers also captured especially tricky measurements, of the crown’s wall thickness profile and inner flow — the cylinder that rises out of the pool, just before it forms a rim and points that are characteristic of a crown.

    “This cylinder-like wall of rising liquid, and how it evolves in time and space, is at the heart of everything,” Bourouiba says. “It’s what connects the fluid from the pool to what will go into the rim and then be ejected into the air through smaller, secondary droplets.”

    The researchers worked the image data into a set of “evolution equations,” or a mathematical model that relates the various properties of an impacting droplet, such as the width of its cavity and the thickness and speed profiles of its crown wall, and how these properties change over time, given a droplet’s starting size and impact speed.

    “We now have a closed-form mathematical expression that people can use to see how all these quantities of a splashing droplet change over space and time,” says co-author Shen, who plans, with Bourouiba, to apply the new model to the behavior of secondary droplets and to understand how a splash ends up dispersing particles such as pathogens and pesticides. “This opens up the possibility to study all these problems of splash in 3D, with self-contained closed-form equations, which was not possible before.”

    This research was supported, in part, by the Department of Agriculture-National Institute of Food and Agriculture Specialty Crop Research Initiative; the Richard and Susan Smith Family Foundation; the National Science Foundation; the Centers for Disease Control and Prevention-National Institute for Occupational Safety and Health; Inditex; and the National Institute of Allergy and Infectious Diseases of the National Institutes of Health.
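    As a rough sanity check on the impact speeds quoted above, a droplet released from rest and falling without air drag reaches a speed of sqrt(2gh) after a drop of height h. The release heights below are back-of-the-envelope assumptions, not the team’s experimental protocol, and real raindrops are slowed by drag, so the 25 mph figure maps to a larger drag-free height than any real cloud-to-ground fall would need.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_speed(height_m: float) -> float:
    """Free-fall impact speed from a release height, ignoring air drag."""
    return math.sqrt(2 * G * height_m)

def height_for_speed(speed_m_s: float) -> float:
    """Release height needed to reach a target impact speed, ignoring drag."""
    return speed_m_s ** 2 / (2 * G)

print(f"~{height_for_speed(5.0):.2f} m of free fall gives the ~5 m/s average impact speed")
mph25 = 25 * 0.44704  # 25 mph in m/s
print(f"25 mph (~{mph25:.1f} m/s) corresponds to ~{height_for_speed(mph25):.1f} m without drag")
```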

  • Chip-based system for terahertz waves could enable more efficient, sensitive electronics

    The use of terahertz waves, which have shorter wavelengths and higher frequencies than radio waves, could enable faster data transmission, more precise medical imaging, and higher-resolution radar.

    But effectively generating terahertz waves using a semiconductor chip, which is essential for incorporation into electronic devices, is notoriously difficult.

    Many current techniques can’t generate waves with enough radiating power for useful applications unless they utilize bulky and expensive silicon lenses. Higher radiating power allows terahertz signals to travel farther. Such lenses, which are often larger than the chip itself, make it hard to integrate the terahertz source into an electronic device.

    To overcome these limitations, MIT researchers developed a terahertz amplifier-multiplier system that achieves higher radiating power than existing devices without the need for silicon lenses.

    By affixing a thin, patterned sheet of material to the back of the chip and utilizing higher-power Intel transistors, the researchers produced a more efficient, yet scalable, chip-based terahertz wave generator.

    This compact chip could be used to make terahertz arrays for applications like improved security scanners for detecting hidden objects or environmental monitors for pinpointing airborne pollutants.

    “To take full advantage of a terahertz wave source, we need it to be scalable. A terahertz array might have hundreds of chips, and there is no place to put silicon lenses because the chips are combined with such high density. We need a different package, and here we’ve demonstrated a promising approach that can be used for scalable, low-cost terahertz arrays,” says Jinchen Wang, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and lead author of a paper on the terahertz radiator.

    He is joined on the paper by EECS graduate students Daniel Sheen and Xibi Chen; Steven F. Nagel, managing director of the T.J. Rodgers RLE Laboratory; and senior author Ruonan Han, an associate professor in EECS, who leads the Terahertz Integrated Electronics Group. The research will be presented at the IEEE International Solid-State Circuits Conference.

    Making waves

    Terahertz waves sit on the electromagnetic spectrum between radio waves and infrared light. Their higher frequencies enable them to carry more information per second than radio waves, while they can safely penetrate a wider range of materials than infrared light.

    One way to generate terahertz waves is with a CMOS chip-based amplifier-multiplier chain that increases the frequency of radio waves until they reach the terahertz range. To achieve the best performance, waves go through the silicon chip and are eventually emitted out the back into the open air.

    But a property known as the dielectric constant gets in the way of a smooth transmission.

    The dielectric constant influences how electromagnetic waves interact with a material. It affects the amount of radiation that is absorbed, reflected, or transmitted. Because the dielectric constant of silicon is much higher than that of air, most terahertz waves are reflected at the silicon-air boundary rather than being cleanly transmitted out the back.

    Since most signal strength is lost at this boundary, current approaches often use silicon lenses to boost the power of the remaining signal. The MIT researchers approached this problem differently.

    They drew on an electromagnetic principle known as matching.
    With matching, they seek to even out the difference between the dielectric constants of silicon and air, which minimizes the amount of signal that is reflected at the boundary.

    They accomplish this by sticking a thin sheet of material, which has a dielectric constant between those of silicon and air, to the back of the chip. With this matching sheet in place, most waves will be transmitted out the back rather than being reflected.

    A scalable approach

    They chose a low-cost, commercially available substrate material with a dielectric constant very close to what they needed for matching. To improve performance, they used a laser cutter to punch tiny holes into the sheet until its dielectric constant was exactly right.

    “Since the dielectric constant of air is 1, if you just cut some subwavelength holes in the sheet, it is equivalent to injecting some air, which lowers the overall dielectric constant of the matching sheet,” Wang explains.

    In addition, they designed their chip with special transistors developed by Intel that have a higher maximum frequency and breakdown voltage than traditional CMOS transistors.

    “These two things taken together, the more powerful transistors and the dielectric sheet, plus a few other small innovations, enabled us to outperform several other devices,” he says.

    Their chip generated terahertz signals with a peak radiation power of 11.1 decibel-milliwatts, the best among state-of-the-art techniques. Moreover, since the low-cost chip can be fabricated at scale, it could be integrated into real-world electronic devices more readily.

    One of the biggest challenges of developing a scalable chip was determining how to manage the power and temperature when generating terahertz waves.

    “Because the frequency and the power are so high, many of the standard ways to design a CMOS chip are not applicable here,” Wang says.

    The researchers also needed to devise a technique for installing the matching sheet that could be scaled up in a manufacturing facility.

    Moving forward, they want to demonstrate this scalability by fabricating a phased array of CMOS terahertz sources, enabling them to steer and focus a powerful terahertz beam with a low-cost, compact device.

    This research is supported, in part, by NASA’s Jet Propulsion Laboratory and Strategic University Research Partnerships Program, as well as the MIT Center for Integrated Circuits and Systems. The chip was fabricated through the Intel University Shuttle Program.
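    A rough way to see why the matching sheet helps: at normal incidence between two lossless media, the fraction of power reflected depends only on their dielectric constants, and an intermediate layer near the geometric mean of the two constants suppresses the reflection. The sketch below uses textbook formulas with illustrative numbers only (silicon at about 11.7, an assumed substrate near 4.4, an assumed 30 percent air fraction from the laser-cut holes); it is a simplified picture, not the group’s design procedure. The crude volume-average estimate of the perforated sheet’s permittivity mirrors Wang’s point that cutting subwavelength holes “injects air” and lowers the effective dielectric constant.

```python
import math

def reflectance(eps1: float, eps2: float) -> float:
    """Power reflectance at a planar boundary, normal incidence, lossless media."""
    n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
    gamma = (n1 - n2) / (n1 + n2)
    return gamma ** 2

def perforated_eps(eps_sheet: float, air_fraction: float) -> float:
    """Crude volume-average estimate of a hole-patterned sheet's effective permittivity."""
    return air_fraction * 1.0 + (1.0 - air_fraction) * eps_sheet

EPS_SI, EPS_AIR = 11.7, 1.0
print(f"Bare silicon-air boundary reflects ~{reflectance(EPS_SI, EPS_AIR):.0%} of the power")

# An ideal single matching layer sits near the geometric mean of the two permittivities.
eps_ideal = math.sqrt(EPS_SI * EPS_AIR)                 # ~3.4
eps_sheet = perforated_eps(4.4, air_fraction=0.30)      # assumed substrate with 30% of its volume cut away
print(f"Target eps ~{eps_ideal:.2f}; perforated sheet eps ~{eps_sheet:.2f}")
print(f"Each interface then reflects ~{reflectance(EPS_SI, eps_sheet):.0%} and ~{reflectance(eps_sheet, EPS_AIR):.0%}")
```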

  • Surface-based sonar system could rapidly map the ocean floor at high resolution

    On June 18, 2023, the Titan submersible was about an hour-and-a-half into its two-hour descent to the Titanic wreckage at the bottom of the Atlantic Ocean when it lost contact with its support ship. This loss of communication set off a frantic search for the tourist submersible and five passengers onboard, located about two miles below the ocean’s surface.

    Deep-ocean search and recovery is one of the many missions of military services like the U.S. Coast Guard Office of Search and Rescue and the U.S. Navy Supervisor of Salvage and Diving. For this mission, the longest delays come from transporting search-and-rescue equipment via ship to the area of interest and comprehensively surveying that area. A search operation on the scale of that for Titan — which was conducted 420 nautical miles from the nearest port and covered 13,000 square kilometers, an area roughly twice the size of Connecticut — could take weeks to complete. The search area for Titan is considered relatively small, focused on the immediate vicinity of the Titanic. When the area is less known, operations could take months. (A remotely operated underwater vehicle deployed by a Canadian vessel ended up finding the debris field of Titan on the seafloor, four days after the submersible had gone missing.)

    A research team from MIT Lincoln Laboratory and the MIT Department of Mechanical Engineering’s Ocean Science and Engineering lab is developing a surface-based sonar system that could accelerate the timeline for small- and large-scale search operations to days. Called the Autonomous Sparse-Aperture Multibeam Echo Sounder, the system scans at surface-ship rates while providing sufficient resolution to find objects and features in the deep ocean, without the time and expense of deploying underwater vehicles. The echo sounder — which features a large sonar array using a small set of autonomous surface vehicles (ASVs) that can be deployed via aircraft into the ocean — holds the potential to map the seabed at 50 times the coverage rate of an underwater vehicle and 100 times the resolution of a surface vessel.


    Autonomous Sparse-Aperture Multibeam Echo Sounder (Video: MIT Lincoln Laboratory)

    “Our array provides the best of both worlds: the high resolution of underwater vehicles and the high coverage rate of surface ships,” says co–principal investigator Andrew March, assistant leader of the laboratory’s Advanced Undersea Systems and Technology Group. “Though large surface-based sonar systems at low frequency have the potential to determine the materials and profiles of the seabed, they typically do so at the expense of resolution, particularly with increasing ocean depth. Our array can likely determine this information, too, but at significantly enhanced resolution in the deep ocean.”

    Underwater unknown

    Oceans cover 71 percent of Earth’s surface, yet more than 80 percent of this underwater realm remains undiscovered and unexplored. Humans know more about the surface of other planets and the moon than the bottom of our oceans. High-resolution seabed maps would not only be useful to find missing objects like ships or aircraft, but also to support a host of other scientific applications: understanding Earth’s geology, improving forecasting of ocean currents and corresponding weather and climate impacts, uncovering archaeological sites, monitoring marine ecosystems and habitats, and identifying locations containing natural resources such as mineral and oil deposits.

    Scientists and governments worldwide recognize the importance of creating a high-resolution global map of the seafloor; the problem is that no existing technology can achieve meter-scale resolution from the ocean surface. The average depth of our oceans is approximately 3,700 meters. However, today’s technologies capable of finding human-made objects on the seabed or identifying person-sized natural features — these technologies include sonar, lidar, cameras, and gravitational field mapping — have a maximum range of less than 1,000 meters through water.

    Ships with large sonar arrays mounted on their hull map the deep ocean by emitting low-frequency sound waves that bounce off the seafloor and return as echoes to the surface. Operation at low frequencies is necessary because water readily absorbs high-frequency sound waves, especially with increasing depth; however, such operation yields low-resolution images, with each image pixel representing an area the size of a football field. Resolution is also restricted because sonar arrays installed on large mapping ships are already using all of the available hull space, thereby capping the sonar beam’s aperture size. By contrast, sonars on autonomous underwater vehicles (AUVs) that operate at higher frequencies within a few hundred meters of the seafloor generate maps with each pixel representing one square meter or less, resulting in 10,000 times more pixels in that same football field–sized area. However, this higher resolution comes with trade-offs: AUVs are time-consuming and expensive to deploy in the deep ocean, limiting the amount of seafloor that can be mapped; they have a maximum range of about 1,000 meters before their high-frequency sound gets absorbed; and they move at slow speeds to conserve power. The area-coverage rate of AUVs performing high-resolution mapping is about 8 square kilometers per hour; surface vessels map the deep ocean at more than 50 times that rate.

    A solution surfaces

    The Autonomous Sparse-Aperture Multibeam Echo Sounder could offer a cost-effective approach to high-resolution, rapid mapping of the deep seafloor from the ocean’s surface.
    A collaborative fleet of about 20 ASVs, each hosting a small sonar array, effectively forms a single sonar array 100 times the size of a large sonar array installed on a ship. The large aperture achieved by the array (hundreds of meters) produces a narrow beam, which enables sound to be precisely steered to generate high-resolution maps at low frequency. Because very few sonars are installed relative to the array’s overall size (i.e., a sparse aperture), the cost is tractable.

    However, this collaborative and sparse setup introduces some operational challenges. First, for coherent 3D imaging, the relative position of each ASV’s sonar subarray must be accurately tracked through dynamic ocean-induced motions. Second, because sonar elements are not placed directly next to each other without any gaps, the array suffers from a lower signal-to-noise ratio and is less able to reject noise coming from unintended or undesired directions. To mitigate these challenges, the team has been developing a low-cost precision-relative navigation system and leveraging acoustic signal processing tools and new ocean-field estimation algorithms. The MIT campus collaborators are developing algorithms for data processing and image formation, especially to estimate depth-integrated water-column parameters. These enabling technologies will help account for complex ocean physics, spanning physical properties like temperature, dynamic processes like currents and waves, and acoustic propagation factors like sound speed.

    Processing for all required control and calculations could be completed either remotely or onboard the ASVs. For example, ASVs deployed from a ship or flying boat could be controlled and guided remotely from land via a satellite link or from a nearby support ship (with direct communications or a satellite link), and left to map the seabed for weeks or months at a time until maintenance is needed. Sonar-return health checks and coarse seabed mapping would be conducted on board, while full, high-resolution reconstruction of the seabed would require a supercomputing infrastructure on land or on a support ship.

    “Deploying vehicles in an area and letting them map for extended periods of time without the need for a ship to return home to replenish supplies and rotate crews would significantly simplify logistics and operating costs,” says co–principal investigator Paul Ryu, a researcher in the Advanced Undersea Systems and Technology Group.

    Since beginning their research in 2018, the team has turned their concept into a prototype. Initially, the scientists built a scale model of a sparse-aperture sonar array and tested it in a water tank at the laboratory’s Autonomous Systems Development Facility. Then, they prototyped an ASV-sized sonar subarray and demonstrated its functionality in Gloucester, Massachusetts. In follow-on sea tests in Boston Harbor, they deployed an 8-meter array containing multiple subarrays equivalent to 25 ASVs locked together; with this array, they generated 3D reconstructions of the seafloor and a shipwreck. Most recently, the team fabricated, in collaboration with Woods Hole Oceanographic Institution, a first-generation, 12-foot-long, all-electric ASV prototype carrying a sonar array underneath. With this prototype, they conducted preliminary relative navigation testing in Woods Hole, Massachusetts, and Newport, Rhode Island.
    Their full deep-ocean concept calls for approximately 20 such ASVs of a similar size, likely powered by wave or solar energy.

    This work was funded through Lincoln Laboratory’s internally administered R&D portfolio on autonomous systems. The team is now seeking external sponsorship to continue development of their ocean floor–mapping technology, which was recognized with a 2024 R&D 100 Award.
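    The resolution numbers quoted above follow from the diffraction limit: a sonar’s angular resolution scales roughly as wavelength divided by aperture, so the beam footprint on the seafloor scales with depth. The sketch below assumes a 12-kilohertz signal, an 8-meter hull-mounted array, and a 300-meter sparse aperture purely for illustration; only the 3,700-meter average depth and the “hundreds of meters” aperture figure come from the article.

```python
SOUND_SPEED = 1500.0   # m/s, nominal speed of sound in seawater
DEPTH = 3700.0         # m, approximate average ocean depth (from the article)

def footprint(frequency_hz: float, aperture_m: float, depth_m: float = DEPTH) -> float:
    """Diffraction-limited beam footprint on the seafloor: depth * (wavelength / aperture)."""
    wavelength = SOUND_SPEED / frequency_hz
    return depth_m * wavelength / aperture_m

# Illustrative numbers: a 12 kHz mapping sonar on an 8 m hull vs. a 300 m sparse aperture.
print(f"Hull-mounted array:  ~{footprint(12e3, 8.0):.0f} m pixels at full ocean depth")
print(f"Sparse ASV aperture: ~{footprint(12e3, 300.0):.1f} m pixels at full ocean depth")
```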

  • New AI tool generates realistic satellite images of future flooding

    Visualizing the potential impacts of a hurricane on people’s homes before it hits can help residents prepare and decide whether to evacuate.

    MIT scientists have developed a method that generates satellite imagery from the future to depict how a region would look after a potential flooding event. The method combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird’s-eye-view images of a region, showing where flooding is likely to occur given the strength of an oncoming storm.

    As a test case, the team applied the method to Houston and generated satellite images depicting what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit. They also compared these with AI-generated images that did not incorporate a physics-based flood model.

    The team’s physics-reinforced method generated satellite images of future flooding that were more realistic and accurate. The AI-only method, in contrast, generated images of flooding in places where flooding is not physically possible.

    The team’s method is a proof-of-concept, meant to demonstrate a case in which generative AI models can generate realistic, trustworthy content when paired with a physics-based model. In order to apply the method to other regions to depict flooding from future storms, it will need to be trained on many more satellite images to learn how flooding would look in other regions.

    “The idea is: One day, we could use this before a hurricane, where it provides an additional visualization layer for the public,” says Björn Lütjens, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a doctoral student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness.”

    To illustrate the potential of the new method, which they have dubbed the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.

    The researchers report their results today in the journal IEEE Transactions on Geoscience and Remote Sensing. The study’s MIT co-authors include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.

    Generative adversarial images

    The new study is an extension of the team’s efforts to apply generative AI tools to visualize future climate scenarios.

    “Providing a hyper-local perspective of climate seems to be the most effective way to communicate our scientific results,” says Newman, the study’s senior author. “People relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable.”

    For this study, the authors use a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or “adversarial,” neural networks. The first “generator” network is trained on pairs of real data, such as satellite images before and after a hurricane.
    The second “discriminator” network is then trained to distinguish between the real satellite imagery and the imagery synthesized by the first network.

    Each network automatically improves its performance based on feedback from the other network. The idea, then, is that such an adversarial push and pull should ultimately produce synthetic images that are indistinguishable from the real thing. Nevertheless, GANs can still produce “hallucinations,” or factually incorrect features in an otherwise realistic image that shouldn’t be there.

    “Hallucinations can mislead viewers,” says Lütjens, who began to wonder whether such hallucinations could be avoided, such that generative AI tools can be trusted to help inform people, particularly in risk-sensitive scenarios. “We were thinking: How can we use these generative AI models in a climate-impact setting, where having trusted data sources is so important?”

    Flood hallucinations

    In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future flooding that could be trustworthy enough to inform decisions of how to prepare and potentially evacuate people out of harm’s way.

    Typically, policymakers can get an idea of where flooding might occur based on visualizations in the form of color-coded maps. These maps are the final product of a pipeline of physical models that usually begins with a hurricane track model, which then feeds into a wind model that simulates the pattern and strength of winds over a local region. This is combined with a flood or storm surge model that forecasts how wind might push any nearby body of water onto land. A hydraulic model then maps out where flooding will occur based on the local flood infrastructure and generates a visual, color-coded map of flood elevations over a particular region.

    “The question is: Can visualizations of satellite imagery add another level to this, that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?” Lütjens says.

    The team first tested how generative AI alone would produce satellite images of future flooding. They trained a GAN on actual satellite images taken by satellites as they passed over Houston before and after Hurricane Harvey. When they tasked the generator to produce new flood images of the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinations in some images, in the form of floods where flooding should not be possible (for instance, in locations at higher elevation).

    To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as an approaching hurricane’s trajectory, storm surge, and flood patterns. With this physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as forecasted by the flood model.

    “We show a tangible way to combine machine learning with physics for a use case that’s risk-sensitive, which requires us to analyze the complexity of Earth’s systems and project future actions and possible scenarios to keep people out of harm’s way,” Newman says.
    “We can’t wait to get our generative AI tools into the hands of decision-makers at the local community level, which could make a significant difference and perhaps save lives.”

    The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.
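    The pairing described above can be pictured, in a very reduced form, as constraining where generated flood pixels are allowed to appear. The toy sketch below composites a generated flooded scene with a pre-storm image using a flood-extent mask from a physics model. The study’s physics-reinforced method couples the two models more tightly than an after-the-fact mask, so treat this only as an illustration of the “same flood extent, pixel by pixel” idea; the array shapes and colors are invented for the example.

```python
import numpy as np

def physics_constrained_composite(pre_storm, generated_flooded, flood_extent):
    """Keep generated flood texture only where a physics-based model says water can reach.

    pre_storm         : (H, W, 3) satellite image before the storm
    generated_flooded : (H, W, 3) GAN-style rendering of the flooded scene
    flood_extent      : (H, W) boolean mask from a hydraulic / storm-surge model
    """
    mask = flood_extent[..., None].astype(pre_storm.dtype)
    return mask * generated_flooded + (1.0 - mask) * pre_storm

# Toy data: a 4x4 "scene" where the physics model floods only the lower half.
pre = np.zeros((4, 4, 3)); pre[..., 1] = 0.6          # greenish dry land
gen = np.zeros((4, 4, 3)); gen[..., 2] = 0.7          # bluish generated flood water
extent = np.zeros((4, 4), dtype=bool); extent[2:, :] = True

composite = physics_constrained_composite(pre, gen, extent)
print(composite[0, 0], composite[3, 3])               # dry pixel stays green, flooded pixel turns blue
```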

  • Angela Belcher delivers 2023 Dresselhaus Lecture on evolving organisms for new nanomaterials

    “How do we get to making nanomaterials that haven’t been evolved before?” asked Angela Belcher at the 2023 Mildred S. Dresselhaus Lecture at MIT on Nov. 20. “We can use elements that biology has already given us.”

    The combined in-person and virtual audience of over 300 was treated to a light-up, 3D model of M13 bacteriophage, a virus that only infects bacteria, complete with a pull-out strand of DNA. Belcher used the feather-boa-like model to show how her research group modifies the M13’s genes to add new DNA and peptide sequences to template inorganic materials.

    “I love controlling materials at the nanoscale using biology,” said Belcher, the James Mason Crafts Professor of Biological Engineering and Materials Science and a member of the Koch Institute for Integrative Cancer Research at MIT. “We all know if you control materials at the nanoscale and you can start to tune them, then you can have all kinds of different applications.” And the opportunities are indeed vast — from building batteries, fuel cells, and solar cells to carbon sequestration and storage, environmental remediation, catalysis, and medical diagnostics and imaging.

    Belcher sprinkled her talk with models and props, lined up on a table at the front of the 10-250 lecture hall, to demonstrate a wide variety of concepts and projects made possible by the intersection of biology and nanotechnology.


    2023 Mildred S. Dresselhaus Lecture: Angela Belcher (Video: MIT.nano)

    Energy storage and environment

    “How do you go from a DNA sequence to a functioning battery?” posed Belcher. Grabbing a model of a large carbon nanotube, she explained how her group engineered a phage to pick up carbon nanotubes that would wind all the way around the virus and then fill in with different cathode or anode materials to make nanowires for battery electrodes.

    How about using the M13 bacteriophage to improve the environment? Belcher referred to a project by former student Geran Zhang PhD ’19 that proved the virus can be modified for this context, too. He used the phage to template high-surface-area, carbon-based materials that can grab small molecules and break them down, Belcher said, opening a realm of possibilities from cleaning up rivers to breaking down chemical warfare agents to combating smog.

    Belcher’s lab worked with the U.S. Army to produce protective clothing and masks made of these carbon-based virus nanofibers. “We went from five liters in our lab to a thousand liters, then 10,000 liters in the army labs where we’re able to make kilograms of the material,” Belcher said, stressing the importance of being able to test and prototype at scale.

    Imaging tools and therapeutics in cancer

    In the area of biomedical imaging, Belcher explained, a lot less is known about near-infrared imaging — imaging at wavelengths above 1,000 nanometers — than about other imaging techniques, yet near-infrared light lets scientists see much deeper inside the body. Belcher’s lab built their own systems to image at these wavelengths. The third generation of this system provides real-time, sub-millimeter optical imaging for guided surgery.

    Working with Sangeeta Bhatia, the John J. and Dorothy Wilson Professor of Engineering, Belcher used carbon nanotubes to build imaging tools that find tiny tumors during surgery that doctors otherwise would not be able to see. The tool is actually a virus engineered to carry with it a fluorescent, single-walled carbon nanotube as it seeks out the tumors.

    Nearing the end of her talk, Belcher presented a goal: to develop an accessible detection and diagnostic technology for ovarian cancer in five to 10 years.

    “We think that we can do it,” Belcher said. She described her students’ work developing a way to scan an entire fallopian tube, as opposed to just one small portion, to find pre-cancer lesions, and talked about the team of MIT faculty, doctors, and researchers working collectively toward this goal.

    “Part of the secret of life and the meaning of life is helping other people enjoy the passage of time,” said Belcher in her closing remarks. “I think that we can all do that by working to solve some of the biggest issues on the planet, including helping to diagnose and treat ovarian cancer early so people have more time to spend with their family.”

    Honoring Mildred S. Dresselhaus

    Belcher was the fifth speaker to deliver the Dresselhaus Lecture, an annual event organized by MIT.nano to honor the late MIT physics and electrical engineering Institute Professor Mildred Dresselhaus. The lecture features a speaker from anywhere in the world whose leadership and impact echo Dresselhaus’s life, accomplishments, and values.

    “Millie was and is a huge hero of mine,” said Belcher. “Giving a lecture in Millie’s name is just the greatest honor.”

    Belcher dedicated the talk to Dresselhaus, whom she described with an array of accolades — a trailblazer, a genius, an amazing mentor, teacher, and inventor. “Just knowing her was such a privilege,” she said.

    Belcher also dedicated her talk to her own grandmother and mother, both of whom passed away from cancer, as well as late MIT professors Susan Lindquist and Angelika Amon, who both died of ovarian cancer.

    “I’ve been so fortunate to work with just the most talented and dedicated graduate students, undergraduate students, postdocs, and researchers,” concluded Belcher. “It has been a pure joy to be in partnership with all of you to solve these very daunting problems.”

  • Pixel-by-pixel analysis yields insights into lithium-ion batteries

    By mining data from X-ray images, researchers at MIT, Stanford University, SLAC National Accelerator Laboratory, and the Toyota Research Institute have made significant new discoveries about the reactivity of lithium iron phosphate, a material used in batteries for electric cars and in other rechargeable batteries.

    The new technique has revealed several phenomena that were previously impossible to see, including variations in the rate of lithium intercalation reactions in different regions of a lithium iron phosphate nanoparticle.

    The paper’s most significant practical finding — that these variations in reaction rate are correlated with differences in the thickness of the carbon coating on the surface of the particles — could lead to improvements in the efficiency of charging and discharging such batteries.

    “What we learned from this study is that it’s the interfaces that really control the dynamics of the battery, especially in today’s modern batteries made from nanoparticles of the active material. That means that our focus should really be on engineering that interface,” says Martin Bazant, the E.G. Roos Professor of Chemical Engineering and a professor of mathematics at MIT, who is the senior author of the study.

    This approach to discovering the physics behind complex patterns in images could also be used to gain insights into many other materials, not only other types of batteries but also biological systems, such as dividing cells in a developing embryo.

    “What I find most exciting about this work is the ability to take images of a system that’s undergoing the formation of some pattern, and learning the principles that govern that,” Bazant says.

    Hongbo Zhao PhD ’21, a former MIT graduate student who is now a postdoc at Princeton University, is the lead author of the new study, which appears today in Nature. Other authors include Richard Braatz, the Edwin R. Gilliland Professor of Chemical Engineering at MIT; William Chueh, an associate professor of materials science and engineering at Stanford and director of the SLAC-Stanford Battery Center; and Brian Storey, senior director of Energy and Materials at the Toyota Research Institute.

    “Until now, we could make these beautiful X-ray movies of battery nanoparticles at work, but it was challenging to measure and understand subtle details of how they function because the movies were so information-rich,” Chueh says. “By applying image learning to these nanoscale movies, we can extract insights that were not previously possible.”

    Modeling reaction rates

    Lithium iron phosphate battery electrodes are made of many tiny particles of lithium iron phosphate, surrounded by an electrolyte solution. A typical particle is about 1 micron in diameter and about 100 nanometers thick. When the battery discharges, lithium ions flow from the electrolyte solution into the material by an electrochemical reaction known as ion intercalation. When the battery charges, the intercalation reaction is reversed, and ions flow in the opposite direction.

    “Lithium iron phosphate (LFP) is an important battery material due to low cost, a good safety record, and its use of abundant elements,” Storey says. “We are seeing an increased use of LFP in the EV market, so the timing of this study could not be better.”

    Before the current study, Bazant had done a great deal of theoretical modeling of patterns formed by lithium-ion intercalation. Lithium iron phosphate prefers to exist in one of two stable phases: either full of lithium ions or empty. Since 2005, Bazant has been working on mathematical models of this phenomenon, known as phase separation, which generates distinctive patterns of lithium-ion flow driven by intercalation reactions. In 2015, while on sabbatical at Stanford, he began working with Chueh to try to interpret images of lithium iron phosphate particles from scanning transmission X-ray microscopy.

    Using this type of microscopy, the researchers can obtain images that reveal the concentration of lithium ions, pixel-by-pixel, at every point in the particle. They can scan the particles several times as the particles charge or discharge, allowing them to create movies of how lithium ions flow in and out of the particles.

    In 2017, Bazant and his colleagues at SLAC received funding from the Toyota Research Institute to pursue further studies using this approach, along with other battery-related research projects.

    By analyzing X-ray images of 63 lithium iron phosphate particles as they charged and discharged, the researchers found that the movement of lithium ions within the material was nearly identical to the computer simulations that Bazant had created earlier. Using all 180,000 pixels as measurements, the researchers trained the computational model to produce equations that accurately describe the nonequilibrium thermodynamics and reaction kinetics of the battery material.
    By analyzing X-ray images of lithium iron phosphate particles as they charged and discharged, researchers have shown that the movement of lithium ions within the material was nearly identical to computer simulations they had created earlier. In each pair, the actual particles are on the left and the simulations are on the right. (Image: Courtesy of the researchers)

    “Every little pixel in there is jumping from full to empty, full to empty. And we’re mapping that whole process, using our equations to understand how that’s happening,” Bazant says.

    The researchers also found that the patterns of lithium-ion flow that they observed could reveal spatial variations in the rate at which lithium ions are absorbed at each location on the particle surface.

    “It was a real surprise to us that we could learn the heterogeneities in the system — in this case, the variations in surface reaction rate — simply by looking at the images,” Bazant says. “There are regions that seem to be fast and others that seem to be slow.”

    Furthermore, the researchers showed that these differences in reaction rate were correlated with the thickness of the carbon coating on the surface of the lithium iron phosphate particles. That carbon coating is applied to lithium iron phosphate to help it conduct electricity — otherwise the material would conduct too slowly to be useful as a battery.

    “We discovered at the nano scale that variation of the carbon coating thickness directly controls the rate, which is something you could never figure out if you didn’t have all of this modeling and image analysis,” Bazant says.

    The findings also offer quantitative support for a hypothesis Bazant formulated several years ago: that the performance of lithium iron phosphate electrodes is limited primarily by the rate of coupled ion-electron transfer at the interface between the solid particle and the carbon coating, rather than the rate of lithium-ion diffusion in the solid.
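    The article gives the direction of this relationship but not its functional form. The sketch below shows the kind of per-region correlation analysis the finding implies, run on synthetic data in which thinner coatings transfer charge faster; the arrays, the inverse-thickness trend, and the noise level are all made up for illustration and are not the study’s measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-region measurements: carbon-coating thickness (nm) and the local
# surface reaction rate extracted from image analysis (arbitrary units).
coating_nm = rng.uniform(1.0, 6.0, size=200)
rate = 1.0 / coating_nm + rng.normal(0.0, 0.02, size=200)   # assume thinner coating -> faster transfer

r = np.corrcoef(coating_nm, rate)[0, 1]
print(f"Correlation between coating thickness and local reaction rate: {r:.2f}")
```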

    Optimized materials

    The results from this study suggest that optimizing the thickness of the carbon layer on the electrode surface could help researchers to design batteries that would work more efficiently, the researchers say.

    “This is the first study that’s been able to directly attribute a property of the battery material with a physical property of the coating,” Bazant says. “The focus for optimizing and designing batteries should be on controlling reaction kinetics at the interface of the electrolyte and electrode.”

    “This publication is the culmination of six years of dedication and collaboration,” Storey says. “This technique allows us to unlock the inner workings of the battery in a way not previously possible. Our next goal is to improve battery design by applying this new understanding.”  

    In addition to using this type of analysis on other battery materials, Bazant anticipates that it could be useful for studying pattern formation in other chemical and biological systems.

    This work was supported by the Toyota Research Institute through the Accelerated Materials Design and Discovery program.

  • A new dataset of Arctic images will spur artificial intelligence research

    As the U.S. Coast Guard (USCG) icebreaker Healy takes part in a voyage across the North Pole this summer, it is capturing images of the Arctic to further the study of this rapidly changing region. Lincoln Laboratory researchers installed a camera system aboard the Healy while the ship was in port in Seattle, before it embarked on a three-month science mission on July 11. The resulting dataset, which will be one of the first of its kind, will be used to develop artificial intelligence tools that can analyze Arctic imagery.

    “This dataset not only can help mariners navigate more safely and operate more efficiently, but also help protect our nation by providing critical maritime domain awareness and an improved understanding of how AI analysis can be brought to bear in this challenging and unique environment,” says Jo Kurucar, a researcher in Lincoln Laboratory’s AI Software Architectures and Algorithms Group, which led this project.

    As the planet warms and sea ice melts, Arctic passages are opening up to more traffic, both to military vessels and ships conducting illegal fishing. These movements may pose national security challenges to the United States. The opening Arctic also leaves questions about how its climate, wildlife, and geography are changing.

    Today, very few imagery datasets of the Arctic exist to study these changes. Overhead images from satellites or aircraft can only provide limited information about the environment. An outward-looking camera attached to a ship can capture more details of the setting and different angles of objects, such as other ships, in the scene. These types of images can then be used to train AI computer-vision tools, which can help the USCG plan naval missions and automate analysis. According to Kurucar, USCG assets in the Arctic are spread thin and can benefit greatly from AI tools, which can act as a force multiplier.

    The Healy is the USCG’s largest and most technologically advanced icebreaker. Given its current mission, it was a fitting candidate to be equipped with a new sensor to gather this dataset. The laboratory research team collaborated with the USCG Research and Development Center to determine the sensor requirements. Together, they developed the Cold Region Imaging and Surveillance Platform (CRISP).

    “Lincoln Laboratory has an excellent relationship with the Coast Guard, especially with the Research and Development Center. Over a decade, we’ve established ties that enabled the deployment of the CRISP system,” says Amna Greaves, the CRISP project lead and an assistant leader in the AI Software Architectures and Algorithms Group. “We have strong ties not only because of the USCG veterans working at the laboratory and in our group, but also because our technology missions are complementary. Today it was deploying infrared sensing in the Arctic; tomorrow it could be operating quadruped robot dogs on a fast-response cutter.”

    The CRISP system comprises a long-wave infrared camera, manufactured by Teledyne FLIR (for forward-looking infrared), that is designed for harsh maritime environments. The camera can stabilize itself during rough seas and image in complete darkness, fog, and glare. It is paired with a GPS-enabled time-synchronized clock and a network video recorder to record both video and still imagery along with GPS-positional data.  

    The camera is mounted at the front of the ship’s fly bridge, and the electronics are housed in a ruggedized rack on the bridge. The system can be operated manually from the bridge or be placed into an autonomous surveillance mode, in which it slowly pans back and forth, recording 15 minutes of video every three hours and a still image once every 15 seconds.
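    Taken at face value, that autonomous schedule implies a sizeable collection over the roughly three-month mission. The arithmetic below assumes the surveillance mode runs continuously for 90 days, which the article does not state, so treat the totals only as a rough upper bound on the image counts.

```python
MISSION_DAYS = 90                      # roughly a three-month science mission (assumed continuous operation)

stills_per_day = 24 * 3600 // 15       # one still image every 15 seconds
video_min_per_day = (24 // 3) * 15     # 15 minutes of video every three hours

print(f"~{MISSION_DAYS * stills_per_day:,} still images")
print(f"~{MISSION_DAYS * video_min_per_day / 60:.0f} hours of video")
```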

    “The installation of the equipment was a unique and fun experience. As with any good project, our expectations going into the install did not meet reality,” says Michael Emily, the project’s IT systems administrator who traveled to Seattle for the install. Working with the ship’s crew, the laboratory team had to quickly adjust their route for running cables from the camera to the observation station after they discovered that the expected access points weren’t in fact accessible. “We had 100-foot cables made for this project just in case of this type of scenario, which was a good thing because we only had a few inches to spare,” Emily says.

    The CRISP project team plans to publicly release the dataset, anticipated to be about 4 terabytes in size, once the USCG science mission concludes in the fall.

    The goal in releasing the dataset is to enable the wider research community to develop better tools for those operating in the Arctic, especially as this region becomes more navigable. “Collecting and publishing the data allows for faster and greater progress than what we could accomplish on our own,” Kurucar adds. “It also enables the laboratory to engage in more advanced AI applications while others make more incremental advances using the dataset.”

    On top of providing the dataset, the laboratory team plans to provide a baseline object-detection model, from which others can make progress on their own models. More advanced AI applications planned for development are classifiers for specific objects in the scene and the ability to identify and track objects across images.
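    The laboratory has not released its baseline model, so the following is only a generic illustration of the kind of starting point such a baseline could provide: a COCO-pretrained detector from torchvision run on a stand-in infrared frame. A real baseline would be fine-tuned on the released Arctic imagery and on the object classes relevant to maritime awareness.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights)
model.eval()

# Stand-in for one long-wave IR still image; a single-channel frame is replicated to
# three channels because the pretrained detector expects RGB-like input in [0, 1].
frame = torch.rand(1, 512, 640)
rgb_like = frame.repeat(3, 1, 1)

with torch.no_grad():
    detections = model([rgb_like])[0]

categories = weights.meta["categories"]
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.5:
        print(categories[int(label)], [round(v, 1) for v in box.tolist()], float(score))
```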

    Beyond assisting with USCG missions, this project could create an influential dataset for researchers looking to apply AI to data from the Arctic to help combat climate change, says Paul Metzger, who leads the AI Software Architectures and Algorithms Group.

    Metzger adds that the group was honored to be a part of this project and is excited to see the advances that come from applying AI to novel challenges facing the United States: “I’m extremely proud of how our group applies AI to the highest-priority challenges in our nation, from predicting outbreaks of Covid-19 and assisting the U.S. European Command in their support of Ukraine to now employing AI in the Arctic for maritime awareness.”

    Once the dataset is available, it will be free to download on the Lincoln Laboratory dataset website.