More stories

  • New AI tool generates realistic satellite images of future flooding

    Visualizing the potential impacts of a hurricane on people’s homes before it hits can help residents prepare and decide whether to evacuate.

    MIT scientists have developed a method that generates satellite imagery from the future to depict how a region would look after a potential flooding event. The method combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird’s-eye-view images of a region, showing where flooding is likely to occur given the strength of an oncoming storm.

    As a test case, the team applied the method to Houston and generated satellite images depicting what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit, as well as with AI-generated images that did not incorporate the physics-based flood model.

    The team’s physics-reinforced method generated satellite images of future flooding that were more realistic and accurate. The AI-only method, in contrast, generated images of flooding in places where flooding is not physically possible.

    The team’s method is a proof of concept, meant to demonstrate a case in which generative AI models can generate realistic, trustworthy content when paired with a physics-based model. In order to apply the method to other regions to depict flooding from future storms, it will need to be trained on many more satellite images to learn how flooding would look in those regions.

    “The idea is: One day, we could use this before a hurricane, where it provides an additional visualization layer for the public,” says Björn Lütjens, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a doctoral student in MIT’s Department of Aeronautics and Astronautics (AeroAstro).
    “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness.”

    To illustrate the potential of the new method, which they have dubbed the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.

    The researchers report their results today in the journal IEEE Transactions on Geoscience and Remote Sensing. The study’s MIT co-authors include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.

    Generative adversarial images

    The new study is an extension of the team’s efforts to apply generative AI tools to visualize future climate scenarios.

    “Providing a hyper-local perspective of climate seems to be the most effective way to communicate our scientific results,” says Newman, the study’s senior author. “People relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable.”

    For this study, the authors use a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or “adversarial,” neural networks. The first “generator” network is trained on pairs of real data, such as satellite images taken before and after a hurricane. The second “discriminator” network is then trained to distinguish between real satellite imagery and imagery synthesized by the first network.

    Each network automatically improves its performance based on feedback from the other. The idea, then, is that such an adversarial push and pull should ultimately produce synthetic images that are indistinguishable from the real thing.
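    The adversarial push and pull can be sketched with a deliberately tiny, hand-rolled example (all samples, parameters, and learning rates below are invented for illustration, not taken from the study): a one-layer logistic discriminator takes a single gradient step against fixed "real" and "fake" samples, and its classification loss drops.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: "real" samples near +1, "fake" samples near -1 (illustrative only).
real = np.array([1.0, 0.8, 1.2])
fake = np.array([-1.0, -0.9, -1.1])

# Discriminator D(x) = sigmoid(w*x + b), initialized to be maximally unsure.
w, b = 0.0, 0.0

def d_loss(w, b):
    # Binary cross-entropy: real samples should score 1, fake samples 0.
    return -(np.log(sigmoid(w * real + b)).mean()
             + np.log(1.0 - sigmoid(w * fake + b)).mean())

loss_before = d_loss(w, b)

# One gradient-descent step on the discriminator's parameters.
lr = 0.1
grad_w = ((sigmoid(w * real + b) - 1.0) * real).mean() + (sigmoid(w * fake + b) * fake).mean()
grad_b = (sigmoid(w * real + b) - 1.0).mean() + sigmoid(w * fake + b).mean()
w -= lr * grad_w
b -= lr * grad_b

loss_after = d_loss(w, b)
print(loss_before, loss_after)  # the discriminator's loss decreases

# A generator update would then push D's scores on fake samples back up;
# alternating the two updates is the adversarial "push and pull."
```

    In the full conditional GAN, both networks are deep and the generator is additionally conditioned on an input image, but the alternating gradient updates follow this same pattern.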
    Nevertheless, GANs can still produce “hallucinations”: factually incorrect features that shouldn’t be there in an otherwise realistic image.

    “Hallucinations can mislead viewers,” says Lütjens, who began to wonder whether such hallucinations could be avoided, so that generative AI tools could be trusted to help inform people, particularly in risk-sensitive scenarios. “We were thinking: How can we use these generative AI models in a climate-impact setting, where having trusted data sources is so important?”

    Flood hallucinations

    In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future flooding that could be trustworthy enough to inform decisions about how to prepare and potentially evacuate people out of harm’s way.

    Typically, policymakers can get an idea of where flooding might occur based on visualizations in the form of color-coded maps. These maps are the final product of a pipeline of physical models that usually begins with a hurricane track model, which feeds into a wind model that simulates the pattern and strength of winds over a local region. This is combined with a flood or storm surge model that forecasts how the wind might push any nearby body of water onto land. A hydraulic model then maps out where flooding will occur based on the local flood infrastructure, and generates a visual, color-coded map of flood elevations over a particular region.

    “The question is: Can visualizations of satellite imagery add another level to this, that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?” Lütjens says.

    The team first tested how generative AI alone would produce satellite images of future flooding. They trained a GAN on actual images taken by satellites as they passed over Houston before and after Hurricane Harvey.
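    The chain of physical models described above, from storm parameters to a flood map, can be caricatured with a simple "bathtub" calculation, a common first-order approximation rather than the hydraulic model the researchers used: any grid cell whose ground elevation sits below the simulated storm-surge water level is marked as flooded. The surge coefficient here is made up purely for illustration.

```python
import numpy as np

def surge_height(wind_speed_ms):
    """Toy surge model: stronger winds push more water ashore.
    The linear coefficient is invented for illustration."""
    return 0.05 * wind_speed_ms  # meters of surge per (m/s) of wind

def flood_mask(elevation_m, wind_speed_ms):
    """Bathtub model: flooded wherever the ground is below the water level."""
    return elevation_m < surge_height(wind_speed_ms)

# A tiny synthetic elevation grid (meters above sea level).
elevation = np.array([[0.5, 1.0, 4.0],
                      [0.2, 2.5, 6.0],
                      [0.1, 0.8, 9.0]])

mask = flood_mask(elevation, wind_speed_ms=40.0)  # 40 m/s -> 2.0 m surge
print(mask)  # low-lying cells (< 2.0 m) flood; the 4/6/9 m cells stay dry
```

    Real pipelines replace each toy function with a calibrated physical model, but the composition, storm in, flood extent out, is the same.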
    When they tasked the generator with producing new flood images of the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinations in some of them, in the form of floods where flooding should not be possible (for instance, in locations at higher elevation).

    To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as an approaching hurricane’s trajectory, storm surge, and flood patterns. With this physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as forecasted by the flood model.

    “We show a tangible way to combine machine learning with physics for a use case that’s risk-sensitive, which requires us to analyze the complexity of Earth’s systems and project future actions and possible scenarios to keep people out of harm’s way,” Newman says. “We can’t wait to get our generative AI tools into the hands of decision-makers at the local community level, which could make a significant difference and perhaps save lives.”

    The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.
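    One minimal way to picture the physics-reinforced constraint described in this story, hypothetical and far simpler than the conditioning actually used in the paper, is to intersect the generator's flood pixels with the physics model's flood mask, so that hallucinated water at high elevations is simply removed:

```python
import numpy as np

# Hypothetical outputs on a 3x3 patch (True = flooded pixel, False = dry).
gan_flood = np.array([[1, 1, 1],
                      [1, 0, 1],
                      [1, 1, 0]], dtype=bool)      # water predicted on a hilltop
physics_flood = np.array([[1, 1, 0],
                          [1, 0, 0],
                          [1, 1, 0]], dtype=bool)  # flood model: hilltop cannot flood

# Keep a pixel flooded only if the physics model agrees it can flood.
constrained = gan_flood & physics_flood

hallucinated = gan_flood & ~physics_flood  # pixels the constraint removed
print(constrained)
print(int(hallucinated.sum()), "hallucinated flood pixels removed")
```

    The published method conditions the generator on the flood model's output rather than masking after the fact, but the goal is the same: no generated water where the physics says water cannot reach.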

  • Angela Belcher delivers 2023 Dresselhaus Lecture on evolving organisms for new nanomaterials

    “How do we get to making nanomaterials that haven’t been evolved before?” asked Angela Belcher at the 2023 Mildred S. Dresselhaus Lecture at MIT on Nov. 20. “We can use elements that biology has already given us.”

    The combined in-person and virtual audience of over 300 was treated to a light-up, 3D model of M13 bacteriophage, a virus that only infects bacteria, complete with a pull-out strand of DNA. Belcher used the feather-boa-like model to show how her research group modifies the M13’s genes to add new DNA and peptide sequences to template inorganic materials.

    “I love controlling materials at the nanoscale using biology,” said Belcher, the James Mason Crafts Professor of Biological Engineering and Materials Science and a member of the Koch Institute for Integrative Cancer Research at MIT. “We all know if you control materials at the nanoscale and you can start to tune them, then you can have all kinds of different applications.” And the opportunities are indeed vast — from building batteries, fuel cells, and solar cells to carbon sequestration and storage, environmental remediation, catalysis, and medical diagnostics and imaging.

    Belcher sprinkled her talk with models and props, lined up on a table at the front of the 10-250 lecture hall, to demonstrate a wide variety of concepts and projects made possible by the intersection of biology and nanotechnology.

    2023 Mildred S. Dresselhaus Lecture: Angela Belcher. Video: MIT.nano

    Energy storage and environment

    “How do you go from a DNA sequence to a functioning battery?” posed Belcher. Grabbing a model of a large carbon nanotube, she explained how her group engineered a phage to pick up carbon nanotubes that would wind all the way around the virus and then fill in with different cathode or anode materials to make nanowires for battery electrodes.

    How about using the M13 bacteriophage to improve the environment? Belcher referred to a project by former student Geran Zhang PhD ’19 that proved the virus can be modified for this context, too. He used the phage to template high-surface-area, carbon-based materials that can grab small molecules and break them down, Belcher said, opening a realm of possibilities from cleaning up rivers to breaking down chemical warfare agents to combating smog.

    Belcher’s lab worked with the U.S. Army to produce protective clothing and masks made of these carbon-based virus nanofibers. “We went from five liters in our lab to a thousand liters, then 10,000 liters in the army labs where we’re able to make kilograms of the material,” Belcher said, stressing the importance of being able to test and prototype at scale.

    Imaging tools and therapeutics in cancer

    In the area of biomedical imaging, Belcher explained, much less is known about near-infrared imaging — imaging at wavelengths above 1,000 nanometers — than about other imaging techniques, yet at near-infrared wavelengths scientists can see much deeper inside the body. Belcher’s lab built its own systems to image at these wavelengths. The third generation of this system provides real-time, sub-millimeter optical imaging for guided surgery.

    Working with Sangeeta Bhatia, the John J. and Dorothy Wilson Professor of Engineering, Belcher used carbon nanotubes to build imaging tools that find tiny tumors during surgery that doctors otherwise would not be able to see. The tool is actually a virus engineered to carry with it a fluorescent, single-walled carbon nanotube as it seeks out the tumors.

    Nearing the end of her talk, Belcher presented a goal: to develop an accessible detection and diagnostic technology for ovarian cancer in five to 10 years.

    “We think that we can do it,” Belcher said. She described her students’ work developing a way to scan an entire fallopian tube, as opposed to just one small portion, to find pre-cancer lesions, and talked about the team of MIT faculty, doctors, and researchers working collectively toward this goal.

    “Part of the secret of life and the meaning of life is helping other people enjoy the passage of time,” said Belcher in her closing remarks. “I think that we can all do that by working to solve some of the biggest issues on the planet, including helping to diagnose and treat ovarian cancer early so people have more time to spend with their family.”

    Honoring Mildred S. Dresselhaus

    Belcher was the fifth speaker to deliver the Dresselhaus Lecture, an annual event organized by MIT.nano to honor the late MIT physics and electrical engineering Institute Professor Mildred Dresselhaus. The lecture features a speaker from anywhere in the world whose leadership and impact echo Dresselhaus’s life, accomplishments, and values.

    “Millie was and is a huge hero of mine,” said Belcher. “Giving a lecture in Millie’s name is just the greatest honor.”

    Belcher dedicated the talk to Dresselhaus, whom she described with an array of accolades — a trailblazer, a genius, an amazing mentor, teacher, and inventor. “Just knowing her was such a privilege,” she said.

    Belcher also dedicated her talk to her own grandmother and mother, both of whom passed away from cancer, as well as late MIT professors Susan Lindquist and Angelika Amon, who both died of ovarian cancer.

    “I’ve been so fortunate to work with just the most talented and dedicated graduate students, undergraduate students, postdocs, and researchers,” concluded Belcher. “It has been a pure joy to be in partnership with all of you to solve these very daunting problems.”

  • Pixel-by-pixel analysis yields insights into lithium-ion batteries

    By mining data from X-ray images, researchers at MIT, Stanford University, the SLAC National Accelerator Laboratory, and the Toyota Research Institute have made significant new discoveries about the reactivity of lithium iron phosphate, a material used in batteries for electric cars and in other rechargeable batteries.

    The new technique has revealed several phenomena that were previously impossible to see, including variations in the rate of lithium intercalation reactions in different regions of a lithium iron phosphate nanoparticle.

    The paper’s most significant practical finding — that these variations in reaction rate are correlated with differences in the thickness of the carbon coating on the surface of the particles — could lead to improvements in the efficiency of charging and discharging such batteries.

    “What we learned from this study is that it’s the interfaces that really control the dynamics of the battery, especially in today’s modern batteries made from nanoparticles of the active material. That means that our focus should really be on engineering that interface,” says Martin Bazant, the E.G. Roos Professor of Chemical Engineering and a professor of mathematics at MIT, who is the senior author of the study.

    This approach to discovering the physics behind complex patterns in images could also be used to gain insights into many other materials, not only other types of batteries but also biological systems, such as dividing cells in a developing embryo.

    “What I find most exciting about this work is the ability to take images of a system that’s undergoing the formation of some pattern, and learning the principles that govern that,” Bazant says.

    Hongbo Zhao PhD ’21, a former MIT graduate student who is now a postdoc at Princeton University, is the lead author of the new study, which appears today in Nature. Other authors include Richard Braatz, the Edwin R. Gilliland Professor of Chemical Engineering at MIT; William Chueh, an associate professor of materials science and engineering at Stanford and director of the SLAC-Stanford Battery Center; and Brian Storey, senior director of Energy and Materials at the Toyota Research Institute.

    “Until now, we could make these beautiful X-ray movies of battery nanoparticles at work, but it was challenging to measure and understand subtle details of how they function because the movies were so information-rich,” Chueh says. “By applying image learning to these nanoscale movies, we can extract insights that were not previously possible.”

    Modeling reaction rates

    Lithium iron phosphate battery electrodes are made of many tiny particles of lithium iron phosphate, surrounded by an electrolyte solution. A typical particle is about 1 micron in diameter and about 100 nanometers thick. When the battery discharges, lithium ions flow from the electrolyte solution into the material by an electrochemical reaction known as ion intercalation. When the battery charges, the intercalation reaction is reversed, and ions flow in the opposite direction.

    “Lithium iron phosphate (LFP) is an important battery material due to low cost, a good safety record, and its use of abundant elements,” Storey says. “We are seeing an increased use of LFP in the EV market, so the timing of this study could not be better.”

    Before the current study, Bazant had done a great deal of theoretical modeling of patterns formed by lithium-ion intercalation. Lithium iron phosphate prefers to exist in one of two stable phases: either full of lithium ions or empty. Since 2005, Bazant has been working on mathematical models of this phenomenon, known as phase separation, which generates distinctive patterns of lithium-ion flow driven by intercalation reactions. In 2015, while on sabbatical at Stanford, he began working with Chueh to try to interpret images of lithium iron phosphate particles from scanning transmission X-ray microscopy.
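    Phase-separation models of this kind are typically built on equations of the Cahn-Hilliard type. A bare-bones one-dimensional explicit sketch (with illustrative parameter values, not the paper's actual model) shows the two key ingredients: a double-well free energy that prefers "full" or "empty" states, and a concentration field whose total is conserved as it evolves.

```python
import numpy as np

def laplacian(c, dx=1.0):
    # Periodic 1D Laplacian via array rolls.
    return (np.roll(c, 1) + np.roll(c, -1) - 2 * c) / dx**2

# Concentration field scaled so c = -1 is lithium-poor and c = +1 lithium-rich.
rng = np.random.default_rng(0)
c = 0.01 * rng.standard_normal(64)   # near-uniform start with small noise
mass0 = c.sum()

M, kappa, dt = 1.0, 1.0, 0.01        # mobility, gradient penalty, time step
for _ in range(200):
    # Chemical potential from a double-well free energy f(c) = (c^2 - 1)^2 / 4.
    mu = c**3 - c - kappa * laplacian(c)
    # Cahn-Hilliard dynamics: divergence of a flux, so total c is conserved.
    c = c + dt * M * laplacian(mu)

# Total lithium is conserved even as the field begins to separate into phases.
print(abs(c.sum() - mass0))
```

    The published models add intercalation reaction kinetics at the particle surface on top of this conserved bulk dynamics; the toy above only illustrates the phase-separation core.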

    Using this type of microscopy, the researchers can obtain images that reveal the concentration of lithium ions, pixel by pixel, at every point in the particle. They can scan the particles several times as the particles charge or discharge, allowing them to create movies of how lithium ions flow in and out of the particles.

    In 2017, Bazant and his colleagues at SLAC received funding from the Toyota Research Institute to pursue further studies using this approach, along with other battery-related research projects.

    By analyzing X-ray images of 63 lithium iron phosphate particles as they charged and discharged, the researchers found that the movement of lithium ions within the material was nearly identical to the computer simulations that Bazant had created earlier. Using all 180,000 pixels as measurements, the researchers trained the computational model to produce equations that accurately describe the nonequilibrium thermodynamics and reaction kinetics of the battery material.
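    The fitting step can be illustrated with a toy version, not the paper's actual learning framework: suppose a single pixel fills according to first-order kinetics, c(t) = 1 - exp(-k t). A least-squares line through -log(1 - c) versus t then recovers that pixel's rate constant k; doing this for every pixel yields a map of local reaction rates.

```python
import numpy as np

k_true = 0.7                       # hypothetical per-pixel reaction rate
t = np.linspace(0.1, 3.0, 20)      # observation times
c = 1.0 - np.exp(-k_true * t)      # simulated lithium filling fraction

# Linearize: -log(1 - c) = k * t, then least-squares for the slope k.
y = -np.log(1.0 - c)
k_est = (t @ y) / (t @ t)

print(k_est)  # recovers ~0.7 from the noiseless synthetic series
```

    The real analysis fits far richer nonequilibrium equations to noisy data across all pixels simultaneously, but the principle, inferring local kinetics from each pixel's time series, is the same.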
    In each pair of images, the actual particles are on the left and the simulations are on the right. Courtesy of the researchers

    “Every little pixel in there is jumping from full to empty, full to empty. And we’re mapping that whole process, using our equations to understand how that’s happening,” Bazant says.

    The researchers also found that the patterns of lithium-ion flow that they observed could reveal spatial variations in the rate at which lithium ions are absorbed at each location on the particle surface.

    “It was a real surprise to us that we could learn the heterogeneities in the system — in this case, the variations in surface reaction rate — simply by looking at the images,” Bazant says. “There are regions that seem to be fast and others that seem to be slow.”

    Furthermore, the researchers showed that these differences in reaction rate were correlated with the thickness of the carbon coating on the surface of the lithium iron phosphate particles. That carbon coating is applied to lithium iron phosphate to help it conduct electricity — otherwise the material would conduct too slowly to be useful as a battery.

    “We discovered at the nano scale that variation of the carbon coating thickness directly controls the rate, which is something you could never figure out if you didn’t have all of this modeling and image analysis,” Bazant says.
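    As a hedged illustration of the kind of correlation reported, with entirely made-up numbers: if the interfacial reaction is throttled by transport through the coating, fitted rates should fall as coating thickness grows, which a Pearson correlation makes visible.

```python
import numpy as np

# Hypothetical measurements across several particles (illustrative values only).
coating_nm = np.array([2.0, 3.5, 5.0, 6.5, 8.0, 10.0])
# Toy assumption: reaction rate limited by the coating, roughly ~ 1/thickness.
rate = 1.0 / coating_nm

r = np.corrcoef(coating_nm, rate)[0, 1]
print(r)  # strongly negative: thicker coating, slower surface reaction
```

    The study establishes this relationship from imaging data rather than assuming it; the snippet only shows how such a correlation would be quantified.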

    The findings also offer quantitative support for a hypothesis Bazant formulated several years ago: that the performance of lithium iron phosphate electrodes is limited primarily by the rate of coupled ion-electron transfer at the interface between the solid particle and the carbon coating, rather than the rate of lithium-ion diffusion in the solid.

    Optimized materials

    The results from this study suggest that optimizing the thickness of the carbon layer on the electrode surface could help researchers to design batteries that would work more efficiently, the researchers say.

    “This is the first study that’s been able to directly associate a property of the battery material with a physical property of the coating,” Bazant says. “The focus for optimizing and designing batteries should be on controlling reaction kinetics at the interface of the electrolyte and electrode.”

    “This publication is the culmination of six years of dedication and collaboration,” Storey says. “This technique allows us to unlock the inner workings of the battery in a way not previously possible. Our next goal is to improve battery design by applying this new understanding.”  

    In addition to using this type of analysis on other battery materials, Bazant anticipates that it could be useful for studying pattern formation in other chemical and biological systems.

    This work was supported by the Toyota Research Institute through the Accelerated Materials Design and Discovery program.

  • A new dataset of Arctic images will spur artificial intelligence research

    As the U.S. Coast Guard (USCG) icebreaker Healy takes part in a voyage across the North Pole this summer, it is capturing images of the Arctic to further the study of this rapidly changing region. Lincoln Laboratory researchers installed a camera system aboard the Healy while the ship was in port in Seattle, before it embarked on a three-month science mission on July 11. The resulting dataset, which will be one of the first of its kind, will be used to develop artificial intelligence tools that can analyze Arctic imagery.

    “This dataset not only can help mariners navigate more safely and operate more efficiently, but also help protect our nation by providing critical maritime domain awareness and an improved understanding of how AI analysis can be brought to bear in this challenging and unique environment,” says Jo Kurucar, a researcher in Lincoln Laboratory’s AI Software Architectures and Algorithms Group, which led this project.

    As the planet warms and sea ice melts, Arctic passages are opening up to more traffic, from military vessels to ships conducting illegal fishing. These movements may pose national security challenges to the United States. The opening Arctic also leaves questions about how its climate, wildlife, and geography are changing.

    Today, very few imagery datasets of the Arctic exist to study these changes. Overhead images from satellites or aircraft can only provide limited information about the environment. An outward-looking camera attached to a ship can capture more details of the setting and different angles of objects, such as other ships, in the scene. These types of images can then be used to train AI computer-vision tools, which can help the USCG plan naval missions and automate analysis. According to Kurucar, USCG assets in the Arctic are spread thin and can benefit greatly from AI tools, which can act as a force multiplier.

    The Healy is the USCG’s largest and most technologically advanced icebreaker. Given its current mission, it was a fitting candidate to be equipped with a new sensor to gather this dataset. The laboratory research team collaborated with the USCG Research and Development Center to determine the sensor requirements. Together, they developed the Cold Region Imaging and Surveillance Platform (CRISP).

    “Lincoln Laboratory has an excellent relationship with the Coast Guard, especially with the Research and Development Center. Over a decade, we’ve established ties that enabled the deployment of the CRISP system,” says Amna Greaves, the CRISP project lead and an assistant leader in the AI Software Architectures and Algorithms Group. “We have strong ties not only because of the USCG veterans working at the laboratory and in our group, but also because our technology missions are complementary. Today it was deploying infrared sensing in the Arctic; tomorrow it could be operating quadruped robot dogs on a fast-response cutter.”

    The CRISP system comprises a long-wave infrared camera, manufactured by Teledyne FLIR (for forward-looking infrared), that is designed for harsh maritime environments. The camera can stabilize itself during rough seas and image in complete darkness, fog, and glare. It is paired with a GPS-enabled time-synchronized clock and a network video recorder to record both video and still imagery along with GPS-positional data.  

    The camera is mounted at the front of the ship’s fly bridge, and the electronics are housed in a ruggedized rack on the bridge. The system can be operated manually from the bridge or be placed into an autonomous surveillance mode, in which it slowly pans back and forth, recording 15 minutes of video every three hours and a still image once every 15 seconds.
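    The autonomous schedule quoted above implies an easy back-of-envelope data budget. The per-item byte sizes below are assumptions for illustration; only the counts follow from the article's schedule.

```python
# Counts implied by the article's schedule.
stills_per_day = 24 * 3600 // 15          # one still every 15 seconds
video_minutes_per_day = (24 // 3) * 15    # 15 minutes of video every 3 hours

# Assumed media sizes (illustrative, not from the article).
MB_PER_STILL = 1.0
MB_PER_VIDEO_MINUTE = 60.0

daily_gb = (stills_per_day * MB_PER_STILL
            + video_minutes_per_day * MB_PER_VIDEO_MINUTE) / 1024

print(stills_per_day, video_minutes_per_day, round(daily_gb, 1))
```

    Under these assumed sizes, the schedule yields 5,760 stills and two hours of video per day; actual file sizes, extra manual recordings, and metadata would push the three-month total toward the multi-terabyte dataset the team anticipates.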

    “The installation of the equipment was a unique and fun experience. As with any good project, our expectations going into the install did not match reality,” says Michael Emily, the project’s IT systems administrator, who traveled to Seattle for the install. Working with the ship’s crew, the laboratory team had to quickly adjust their route for running cables from the camera to the observation station after they discovered that the expected access points weren’t in fact accessible. “We had 100-foot cables made for this project just in case of this type of scenario, which was a good thing because we only had a few inches to spare,” Emily says.

    The CRISP project team plans to publicly release the dataset, anticipated to be about 4 terabytes in size, once the USCG science mission concludes in the fall.

    The goal in releasing the dataset is to enable the wider research community to develop better tools for those operating in the Arctic, especially as this region becomes more navigable. “Collecting and publishing the data allows for faster and greater progress than what we could accomplish on our own,” Kurucar adds. “It also enables the laboratory to engage in more advanced AI applications while others make more incremental advances using the dataset.”

    On top of providing the dataset, the laboratory team plans to provide a baseline object-detection model, from which others can make progress on their own models. More advanced AI applications planned for development are classifiers for specific objects in the scene and the ability to identify and track objects across images.

    Beyond assisting with USCG missions, this project could create an influential dataset for researchers looking to apply AI to data from the Arctic to help combat climate change, says Paul Metzger, who leads the AI Software Architectures and Algorithms Group.

    Metzger adds that the group was honored to be a part of this project and is excited to see the advances that come from applying AI to novel challenges facing the United States: “I’m extremely proud of how our group applies AI to the highest-priority challenges in our nation, from predicting outbreaks of Covid-19 and assisting the U.S. European Command in their support of Ukraine to now employing AI in the Arctic for maritime awareness.”

    Once the dataset is available, it will be free to download on the Lincoln Laboratory dataset website.

  • Exploring the nanoworld of biogenic gems

    A new research collaboration with The Bahrain Institute for Pearls and Gemstones (DANAT) will seek to develop advanced characterization tools for the analysis of the properties of pearls and to explore technologies to assign unique identifiers to individual pearls.

    The three-year project will be led by Admir Mašić, associate professor of civil and environmental engineering, in collaboration with Vladimir Bulović, the Fariborz Maseeh Chair in Emerging Technology and professor of electrical engineering and computer science.

    “Pearls are extremely complex and fascinating hierarchically ordered biological materials that are formed by a wide range of different species,” says Mašić. “Working with DANAT provides us a unique opportunity to apply our lab’s multi-scale materials characterization tools to identify potentially species-specific pearl fingerprints, while simultaneously addressing scientific research questions regarding the underlying biomineralization processes that could inform advances in sustainable building materials.”

    DANAT is a gemological laboratory specializing in the testing and study of natural pearls as a reflection of Bahrain’s pearling history and desire to protect and advance Bahrain’s pearling heritage. DANAT’s gemologists support clients and students through pearl, gemstone, and diamond identification services, as well as educational courses.

    Like many other precious gemstones, pearls have been re-created by humans through scientific experimentation, says Noora Jamsheer, chief executive officer at DANAT. Over a century ago, cultured pearls entered the market as a competitive product to natural pearls, similar in appearance but different in value.

    “Gemological labs have been innovating scientific testing methods to differentiate between natural pearls and all other pearls that exist because of direct or indirect human intervention. Today the world knows natural pearls and cultured pearls. However, there are also pearls that fall in between these two categories,” says Jamsheer. “DANAT has the responsibility, as the leading gemological laboratory for pearl testing, to take the initiative necessary to ensure that testing methods keep pace with advances in the science of pearl cultivation.”

    Titled “Exploring the Nanoworld of Biogenic Gems,” the project will aim to improve the process of testing and identifying pearls by determining morphological, micro-structural, optical, and chemical features sufficient to distinguish a pearl’s area of origin, method of growth, or both. MIT.nano, MIT’s open-access center for nanoscience and nanoengineering, will be the organizational home for the project, where Mašić and his team will utilize the facility’s state-of-the-art characterization tools.

    In addition to discovering new methodologies for establishing a pearl’s origin, the project aims to utilize machine learning to automate pearl classification. Furthermore, researchers will investigate techniques to create a unique identifier associated with an individual pearl.
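    A machine-learning pearl classifier could take many forms; as a deliberately minimal sketch with invented feature values, a nearest-centroid rule over measured features (say, mean nacre-layer thickness and a surface-texture score, both hypothetical here) already captures the idea of automated classification:

```python
import numpy as np

# Invented training features: [nacre-layer thickness (um), texture score].
natural = np.array([[0.40, 0.90], [0.45, 0.80], [0.38, 0.85]])
cultured = np.array([[0.60, 0.40], [0.65, 0.50], [0.58, 0.45]])

centroids = {"natural": natural.mean(axis=0), "cultured": cultured.mean(axis=0)}

def classify(features):
    """Assign the label of the nearest class centroid (Euclidean distance)."""
    f = np.asarray(features, dtype=float)
    return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))

label = classify([0.42, 0.88])  # close to the natural cluster
print(label)
```

    A production system would learn from many more features extracted by the characterization tools, and with far more capable models, but the pipeline shape, measured features in, origin label out, is the same.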

    The initial sponsored research project is expected to last three years, with the potential for continued collaboration building on key findings and opening new avenues for research into the structure, properties, and growth of pearls.