More stories

  • Scientists build new atlas of ocean’s oxygen-starved waters

    Life is teeming nearly everywhere in the oceans, except in certain pockets where oxygen naturally plummets and waters become unlivable for most aerobic organisms. These desolate pools are “oxygen-deficient zones,” or ODZs. And though they make up less than 1 percent of the ocean’s total volume, they are a significant source of nitrous oxide, a potent greenhouse gas. Their boundaries can also limit the extent of fisheries and marine ecosystems.

    Now MIT scientists have generated the most detailed, three-dimensional “atlas” of the largest ODZs in the world. The new atlas provides high-resolution maps of the two major, oxygen-starved bodies of water in the tropical Pacific. These maps reveal the volume, extent, and varying depths of each ODZ, along with fine-scale features, such as ribbons of oxygenated water that intrude into otherwise depleted zones.

    The team used a new method to process over 40 years’ worth of ocean data, comprising nearly 15 million measurements taken by many research cruises and autonomous robots deployed across the tropical Pacific. The researchers compiled and then analyzed this vast and fine-grained dataset to generate maps of oxygen-deficient zones at various depths, similar to the many slices of a three-dimensional scan.

    From these maps, the researchers estimated the total volume of the two major ODZs in the tropical Pacific, more precisely than previous efforts. The first zone, which stretches out from the coast of South America, measures about 600,000 cubic kilometers — roughly the volume of water that would fill 240 billion Olympic-sized pools. The second zone, off the coast of Central America, is roughly three times larger.
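The pool comparison is straightforward unit arithmetic; a quick check (the 2,500-cubic-meter pool volume is a nominal assumption, 50 m × 25 m × 2 m, not a figure from the study):

```python
# Sanity check on the Olympic-pool comparison.
odz_volume_m3 = 600_000 * 1e9          # 600,000 km^3; 1 km^3 = 1e9 m^3
olympic_pool_m3 = 2_500                # nominal pool: 50 m x 25 m x 2 m
pools = odz_volume_m3 / olympic_pool_m3
print(f"{pools:.2e}")                  # 2.40e+11, i.e. roughly 240 billion
```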

    The atlas serves as a reference for where ODZs lie today. The team hopes scientists can add to this atlas with continued measurements, to better track changes in these zones and predict how they may shift as the climate warms.

    “It’s broadly expected that the oceans will lose oxygen as the climate gets warmer. But the situation is more complicated in the tropics where there are large oxygen-deficient zones,” says Jarek Kwiecinski ’21, who developed the atlas along with Andrew Babbin, the Cecil and Ida Green Career Development Professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “It’s important to create a detailed map of these zones so we have a point of comparison for future change.”

    The team’s study appears today in the journal Global Biogeochemical Cycles.

    Airing out artifacts

    Oxygen-deficient zones are large, persistent regions of the ocean that occur naturally, as a consequence of marine microbes gobbling up sinking phytoplankton along with all the available oxygen in the surroundings. These zones happen to lie in regions missed by passing ocean currents, which would normally replenish them with oxygenated water. As a result, ODZs are locations of relatively permanent, oxygen-depleted waters, and can exist at mid-ocean depths between roughly 35 and 1,000 meters below the surface. For some perspective, the oceans on average run about 4,000 meters deep.

    Over the last 40 years, research cruises have explored these regions by dropping bottles down to various depths and hauling up seawater that scientists then measure for oxygen.

    “But there are a lot of artifacts that come from a bottle measurement when you’re trying to measure truly zero oxygen,” Babbin says. “All the plastic that we deploy at depth is full of oxygen that can leach out into the sample. When all is said and done, that artificial oxygen inflates the ocean’s true value.”

    Rather than rely on measurements from bottle samples, the team looked at data from sensors attached to the outside of the bottles or integrated with robotic platforms that can change their buoyancy to measure water at different depths. These sensors measure a variety of signals, including changes in electrical currents or the intensity of light emitted by a photosensitive dye to estimate the amount of oxygen dissolved in water. In contrast to seawater samples that represent a single discrete depth, the sensors record signals continuously as they descend through the water column.

    Scientists have attempted to use these sensor data to estimate the true value of oxygen concentrations in ODZs, but have found it incredibly tricky to convert these signals accurately, particularly at concentrations approaching zero.

    “We took a very different approach, using measurements not to look at their true value, but rather how that value changes within the water column,” Kwiecinski says. “That way we can identify anoxic waters, regardless of what a specific sensor says.”

    Bottoming out

    The team reasoned that, if sensors showed a constant, unchanging value of oxygen in a continuous, vertical section of the ocean, regardless of the true value, then it would likely be a sign that oxygen had bottomed out, and that the section was part of an oxygen-deficient zone.

    The researchers brought together nearly 15 million sensor measurements collected over 40 years by various research cruises and robotic floats, and mapped the regions where oxygen did not change with depth.
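A minimal sketch of that logic, not the authors’ actual pipeline: scan a vertical profile for the longest run of depths over which the sensor reading stops changing, whatever its absolute value. The profile values below are hypothetical.

```python
def find_anoxic_span(oxygen, tol=0.01, min_points=3):
    """Return (start, end) indices of the longest run where consecutive
    sensor readings change by less than `tol`, or None if no run has at
    least `min_points` points. The absolute reading is ignored: only the
    lack of change with depth marks the span as likely anoxic."""
    best = None
    run_start = 0
    for i in range(1, len(oxygen)):
        if abs(oxygen[i] - oxygen[i - 1]) > tol:
            run_start = i                      # change detected: restart run
        elif i - run_start + 1 >= min_points:  # run long enough to count
            if best is None or (i - run_start) > (best[1] - best[0]):
                best = (run_start, i)
    return best

# Hypothetical profile: oxygen falls, bottoms out between 150 and 600 m
# (sensor reads a constant offset, not a true value), then recovers.
depths = [0, 50, 100, 150, 200, 300, 400, 500, 600, 700, 800]
oxygen = [210, 150, 60, 5.004, 5.001, 5.003, 5.002, 5.004, 5.003, 40, 90]
span = find_anoxic_span(oxygen)
print(depths[span[0]], depths[span[1]])  # 150 600
```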

    “We can now see how the distribution of anoxic water in the Pacific changes in three dimensions,” Babbin says. 

    The team mapped the boundaries, volume, and shape of two major ODZs in the tropical Pacific, one in the Northern Hemisphere, and the other in the Southern Hemisphere. They were also able to see fine details within each zone. For instance, oxygen-depleted waters are “thicker,” or more concentrated towards the middle, and appear to thin out toward the edges of each zone.

    “We could also see gaps, where it looks like big bites were taken out of anoxic waters at shallow depths,” Babbin says. “There’s some mechanism bringing oxygen into this region, making it oxygenated compared to the water around it.”

    Such observations of the tropical Pacific’s oxygen-deficient zones are more detailed than what’s been measured to date.

    “How the borders of these ODZs are shaped, and how far they extend, could not be previously resolved,” Babbin says. “Now we have a better idea of how these two zones compare in terms of areal extent and depth.”

    “This gives you a sketch of what could be happening,” Kwiecinski says. “There’s a lot more one can do with this data compilation to understand how the ocean’s oxygen supply is controlled.”

    This research is supported, in part, by the Simons Foundation.

  • Q&A: Options for the Diablo Canyon nuclear plant

    The Diablo Canyon nuclear plant in California, the only one still operating in the state, is set to close in 2025. A team of researchers at MIT’s Center for Advanced Nuclear Energy Systems, Abdul Latif Jameel Water and Food Systems Lab, and Center for Energy and Environmental Policy Research; Stanford’s Precourt Energy Institute; and energy analysis firm LucidCatalyst LLC have analyzed the potential benefits the plant could provide if its operation were extended to 2030 or 2045.

    They found that this nuclear plant could simultaneously help to stabilize the state’s electric grid, provide desalinated water to supplement the state’s chronic water shortages, and provide carbon-free hydrogen fuel for transportation. MIT News asked report co-authors Jacopo Buongiorno, the TEPCO Professor of Nuclear Science and Engineering, and John Lienhard, the Jameel Professor of Water and Food, to discuss the group’s findings.

    Q: Your report suggests co-locating a major desalination plant alongside the existing Diablo Canyon power plant. What would be the potential benefits from operating a desalination plant in conjunction with the power plant?

    Lienhard: The cost of desalinated water produced at Diablo Canyon would be lower than for a stand-alone plant because the cost of electricity would be significantly lower and you could take advantage of the existing infrastructure for the intake of seawater and the outfall of brine. Electricity would be cheaper because the location takes advantage of Diablo Canyon’s unique capability to provide low cost, zero-carbon baseload power.

    Depending on the scale at which the desalination plant is built, you could make a very significant impact on the water shortfalls of state and federal projects in the area. In fact, one of the numbers that came out of this study was that an intermediate-sized desalination plant there would produce more fresh water than the highest estimate of the net yield from the proposed Delta Conveyance Project on the Sacramento River. You could get that amount of water at Diablo Canyon for an investment cost less than half as large, and without the associated impacts that would come with the Delta Conveyance Project.

    And the technology envisioned for desalination here, reverse osmosis, is available off the shelf. You can buy this equipment today. In fact, it’s already in use in California and thousands of other places around the world.

    Q: You discuss in the report three potential products from the Diablo Canyon plant: desalinated water, power for the grid, and clean hydrogen. How well can the plant accommodate all of those efforts, and are there advantages to combining them as opposed to doing any one of them separately?

    Buongiorno: California, like many other regions in the world, is facing multiple challenges as it seeks to reduce carbon emissions on a grand scale. First, the wide deployment of intermittent energy sources such as solar and wind creates a great deal of variability on the grid that can be balanced by dispatchable firm power generators like Diablo. So, the first mission for Diablo is to continue to provide reliable, clean electricity to the grid.

    The second challenge is the prolonged drought and water scarcity for the state in general. And one way to address that is water desalination co-located with the nuclear plant at the Diablo site, as John explained.

    The third challenge is related to decarbonizing the transportation sector. A possible approach is replacing conventional cars and trucks with vehicles powered by fuel cells which consume hydrogen. Hydrogen has to be produced from a primary energy source. Nuclear power, through a process called electrolysis, can do that quite efficiently and in a manner that is carbon-free.
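To give a rough sense of scale, here is illustrative back-of-envelope arithmetic for hydrogen from electrolysis. Both numbers are assumptions, not figures from the report: the 100 MW share of plant output is hypothetical, and roughly 50 kWh of electricity per kilogram of hydrogen is a commonly cited energy intensity for commercial electrolyzers.

```python
# Illustrative only: hydrogen output from a hypothetical slice of the
# plant's electrical capacity, at an assumed ~50 kWh per kg of H2.
electrolyzer_power_mw = 100      # hypothetical share of plant output
energy_per_kg_kwh = 50           # assumed electrolyzer energy intensity
kg_per_hour = electrolyzer_power_mw * 1_000 / energy_per_kg_kwh
print(kg_per_hour)               # 2000.0 kg of hydrogen per hour
```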

    Our economic analysis took into account the expected revenue from selling these multiple products — electricity for the grid, hydrogen for the transportation sector, water for farmers or other local users — as well as the costs associated with deploying the new facilities needed to produce desalinated water and hydrogen. We found that, if Diablo’s operating license were extended until 2035, it would cut carbon emissions by an average of 7 million metric tons a year — a more than 11 percent reduction from 2017 levels — and save ratepayers $2.6 billion in power system costs.

    Further delaying the retirement of Diablo to 2045 would spare 90,000 acres of land that would need to be dedicated to renewable energy production to replace the facility’s capacity, and it would save ratepayers up to $21 billion in power system costs.

    Finally, if Diablo were operated as a polygeneration facility that provides electricity, desalinated water, and hydrogen simultaneously, its value, quantified in terms of dollars per unit electricity generated, could increase by 50 percent.

    Lienhard: Most of the desalination scenarios that we considered did not consume the full electrical output of that plant, meaning that under most scenarios you would continue to make electricity and do something with it, beyond just desalination. I think it’s also important to remember that this power plant produces 15 percent of California’s carbon-free electricity today and is responsible for 8 percent of the state’s total electrical production. In other words, Diablo Canyon is a very large factor in California’s decarbonization. When or if this plant goes offline, the near-term outcome is likely to be increased reliance on natural gas to produce electricity, meaning a rise in California’s carbon emissions.

    Q: This plant in particular has been highly controversial since its inception. What’s your assessment of the plant’s safety beyond its scheduled shutdown, and how do you see this report as contributing to the decision-making about that shutdown?

    Buongiorno: The Diablo Canyon Nuclear Power Plant has a very strong safety record. The potential safety concern for Diablo is related to its proximity to several fault lines. Being located in California, the plant was designed to withstand large earthquakes to begin with. Following the Fukushima accident in 2011, the Nuclear Regulatory Commission reviewed the plant’s ability to withstand external events (e.g., earthquakes, tsunamis, floods, tornadoes, wildfires, hurricanes) of exceptionally rare and severe magnitude. After nine years of assessment the NRC’s conclusion is that “existing seismic capacity or effective flood protection [at Diablo Canyon] will address the unbounded reevaluated hazards.” That is, Diablo was designed and built to withstand even the rarest and strongest earthquakes that are physically possible at this site.

    As an additional level of protection, the plant has been retrofitted with special equipment and procedures meant to ensure reliable cooling of the reactor core and spent fuel pool under a hypothetical scenario in which all design-basis safety systems have been disabled by a severe external event.

    Lienhard: As for the potential impact of this report, PG&E [the California utility] has already made the decision to shut down the plant, and we and others hope that decision will be revisited and reversed. We believe that this report gives the relevant stakeholders and policymakers a lot of information about options and value associated with keeping the plant running, and about how California could benefit from clean water and clean power generated at Diablo Canyon. It’s not up to us to make the decision, of course — that is a decision that must be made by the people of California. All we can do is provide information.

    Q: What are the biggest challenges or obstacles to seeing these ideas implemented?

    Lienhard: California has very strict environmental protection regulations, and it’s good that they do. One of the areas of great concern to California is the health of the ocean and protection of the coastal ecosystem. As a result, very strict rules are in place about the intake and outfall of both power plants and desalination plants, to protect marine life. Our analysis suggests that this combined plant can be implemented within the parameters prescribed by the California Ocean Plan and that it can meet the regulatory requirements.

    We believe that deeper analysis would be needed before you could proceed. You would need to do site studies and really get out into the water and look in detail at what’s there. But the preliminary analysis is positive. A second challenge is that the discourse in California around nuclear power has generally not been very supportive, and similarly some groups in California oppose desalination. We expect that both of those points of view would be part of the conversation about whether or not to proceed with this project.

    Q: How particular is this analysis to the specifics of this location? Are there aspects of it that apply to other nuclear plants, domestically or globally?

    Lienhard: Hundreds of nuclear plants around the world are situated along the coast, and many are in water-stressed regions. Although our analysis focused on Diablo Canyon, we believe that the general findings are applicable to many other seaside nuclear plants, so that this approach and these conclusions could potentially be applied at hundreds of sites worldwide.

  • Saving seaweed with machine learning

    Last year, Charlene Xia ’17, SM ’20 found herself at a crossroads. She was finishing up her master’s degree in media arts and sciences from the MIT Media Lab and had just submitted applications to doctoral degree programs. All Xia could do was sit and wait. In the meantime, she narrowed down her career options, regardless of whether she was accepted to any program.

    “I had two thoughts: I’m either going to get a PhD to work on a project that protects our planet, or I’m going to start a restaurant,” recalls Xia.

    Xia pored over her extensive cookbook collection, researching international cuisines as she anxiously awaited word about her graduate school applications. She even looked into the cost of a food truck permit in the Boston area. Just as she started hatching plans to open a plant-based skewer restaurant, Xia received word that she had been accepted into the mechanical engineering graduate program at MIT.

    Shortly after starting her doctoral studies, Xia’s advisor, Professor David Wallace, approached her with an interesting opportunity. MathWorks, a software company known for developing the MATLAB computing platform, had announced a new seed funding program in MIT’s Department of Mechanical Engineering. The program encouraged collaborative research projects focused on the health of the planet.

    “I saw this as a super-fun opportunity to combine my passion for food, my technical expertise in ocean engineering, and my interest in sustainably helping our planet,” says Xia.


    From MIT Mechanical Engineering: “Saving Seaweed with Machine Learning”

    Wallace knew Xia would be up to the task of taking an interdisciplinary approach to solve an issue related to the health of the planet. “Charlene is a remarkable student with extraordinary talent and deep thoughtfulness. She is pretty much fearless, embracing challenges in almost any domain with the well-founded belief that, with effort, she will become a master,” says Wallace.

    Alongside Wallace and Associate Professor Stefanie Mueller, Xia proposed a project to predict and prevent the spread of diseases in aquaculture. The team focused on seaweed farms in particular.

    Already popular in East Asian cuisines, seaweed holds tremendous potential as a sustainable food source for the world’s ever-growing population. In addition to its nutritive value, seaweed combats various environmental threats. It helps fight climate change by absorbing excess carbon dioxide in the atmosphere, and can also absorb fertilizer run-off, keeping coasts cleaner.

    As with so much of marine life, seaweed is threatened by the very thing it helps mitigate: climate change. Climate stressors like warm temperatures or minimal sunlight encourage the growth of harmful bacteria that cause conditions such as ice-ice disease. Within days, entire seaweed farms are decimated by unchecked bacterial growth.

    To solve this problem, Xia turned to the microbiota present in these seaweed farms as a predictive indicator of any threat to the seaweed or livestock. “Our project is to develop a low-cost device that can detect and prevent diseases before they affect seaweed or livestock by monitoring the microbiome of the environment,” says Xia.

    The team pairs old technology with the latest in computing. Using a submersible digital holographic microscope, they take a 2D image. They then use a machine learning system known as a neural network to convert the 2D image into a representation of the microbiome present in the 3D environment.

    “Using a machine learning network, you can take a 2D image and reconstruct it almost in real time to get an idea of what the microbiome looks like in a 3D space,” says Xia.

    The software can be run on a small Raspberry Pi that could be attached to the holographic microscope. To figure out how to communicate these data back to the research team, Xia drew upon her master’s degree research.

    In that work, under the guidance of Professor Allan Adams and Professor Joseph Paradiso in the Media Lab, Xia focused on developing small underwater communication devices that can relay data about the ocean back to researchers. Rather than the usual $4,000, these devices were designed to cost less than $100, helping lower the cost barrier for those interested in uncovering the many mysteries of our oceans. The communication devices can be used to relay data about the ocean environment from the machine learning algorithms.

    By combining these low-cost communication devices along with microscopic images and machine learning, Xia hopes to design a low-cost, real-time monitoring system that can be scaled to cover entire seaweed farms.

    “It’s almost like having the ‘internet of things’ underwater,” adds Xia. “I’m developing this whole underwater camera system alongside the wireless communication I developed that can give me the data while I’m sitting on dry land.”

    Armed with these data about the microbiome, Xia and her team can detect whether or not a disease is about to strike and jeopardize seaweed or livestock before it is too late.

    While Xia still daydreams about opening a restaurant, she hopes the seaweed project will prompt people to rethink how they consider food production in general.

    “We should think about farming and food production in terms of the entire ecosystem,” she says. “My meta-goal for this project would be to get people to think about food production in a more holistic and natural way.”

  • How diet affects tumors

    In recent years, there has been some evidence that dietary interventions can help to slow the growth of tumors. A new study from MIT, which analyzed two different diets in mice, reveals how those diets affect cancer cells, and offers an explanation for why restricting calories may slow tumor growth.

    The study examined the effects of a calorically restricted diet and a ketogenic diet in mice with pancreatic tumors. While both of these diets reduce the amount of sugar available to tumors, the researchers found that only the calorically restricted diet reduced the availability of fatty acids, and this was linked to a slowdown in tumor growth.

    The findings do not suggest that cancer patients should try to follow either of these diets, the researchers say. Instead, they believe the findings warrant further study to determine how dietary interventions might be combined with existing or emerging drugs to help patients with cancer.

    “There’s a lot of evidence that diet can affect how fast your cancer progresses, but this is not a cure,” says Matthew Vander Heiden, director of MIT’s Koch Institute for Integrative Cancer Research and the senior author of the study. “While the findings are provocative, further study is needed, and individual patients should talk to their doctor about the right dietary interventions for their cancer.”

    MIT postdoc Evan Lien is the lead author of the paper, which appears today in Nature.

    Metabolic mechanism

    Vander Heiden, who is also a medical oncologist at Dana-Farber Cancer Institute, says his patients often ask him about the potential benefits of various diets, but there is not enough scientific evidence available to offer any definitive advice. Many of the dietary questions that patients have focus on either a calorie-restricted diet, which reduces calorie consumption by 25 to 50 percent, or a ketogenic diet, which is low in carbohydrates and high in fat and protein.

    Previous studies have suggested that a calorically restricted diet might slow tumor growth in some contexts, and such a diet has been shown to extend lifespan in mice and many other animal species. A smaller number of studies exploring the effects of a ketogenic diet on cancer have produced inconclusive results.

    “A lot of the advice or cultural fads that are out there aren’t necessarily always based on very good science,” Lien says. “It seemed like there was an opportunity, especially with our understanding of cancer metabolism having evolved so much over the past 10 years or so, that we could take some of the biochemical principles that we’ve learned and apply those concepts to understanding this complex question.”

    Cancer cells consume a great deal of glucose, so some scientists had hypothesized that either the ketogenic diet or calorie restriction might slow tumor growth by reducing the amount of glucose available. However, the MIT team’s initial experiments in mice with pancreatic tumors showed that calorie restriction has a much greater effect on tumor growth than the ketogenic diet, so the researchers suspected that glucose levels were not playing a major role in the slowdown.

    To dig deeper into the mechanism, the researchers analyzed tumor growth and nutrient concentration in mice with pancreatic tumors that were fed either a normal, a ketogenic, or a calorie-restricted diet. In both the ketogenic and calorie-restricted mice, glucose levels went down. In the calorie-restricted mice, lipid levels also went down, but in mice on the ketogenic diet, they went up.

    Lipid shortages impair tumor growth because cancer cells need lipids to construct their cell membranes. Normally, when lipids aren’t available in a tissue, cells can make their own. As part of this process, they need to maintain the right balance of saturated and unsaturated fatty acids, which requires an enzyme called stearoyl-CoA desaturase (SCD). This enzyme is responsible for converting saturated fatty acids into unsaturated fatty acids.

    Both calorie-restricted and ketogenic diets reduce SCD activity, but mice on the ketogenic diet had lipids available to them from their diet, so they didn’t need to use SCD. Mice on the calorie-restricted diet, however, couldn’t get fatty acids from their diet or produce their own. In these mice, tumor growth slowed significantly, compared to mice on the ketogenic diet.

    “Not only does caloric restriction starve tumors of lipids, it also impairs the process that allows them to adapt to it. That combination is really contributing to the inhibition of tumor growth,” Lien says.

    Dietary effects

    In addition to their mouse research, the researchers also looked at some human data. Working with Brian Wolpin, an oncologist at Dana-Farber Cancer Institute and an author of the paper, the team obtained data from a large cohort study that allowed them to analyze the relationship between dietary patterns and survival times in pancreatic cancer patients. From that study, the researchers found that the type of fat consumed appears to influence how patients on a low-sugar diet fare after a pancreatic cancer diagnosis, although the data are not complete enough to draw any conclusions about the effect of diet, the researchers say.

    Although this study showed that calorie restriction has beneficial effects in mice, the researchers say they do not recommend that cancer patients follow a calorie-restricted diet, which is difficult to maintain and can have harmful side effects. However, they believe that cancer cells’ dependence on the availability of unsaturated fatty acids could be exploited to develop drugs that might help slow tumor growth.

    One possible therapeutic strategy could be inhibition of the SCD enzyme, which would cut off tumor cells’ ability to produce unsaturated fatty acids.

    “The purpose of these studies isn’t necessarily to recommend a diet, but it’s to really understand the underlying biology,” Lien says. “They provide some sense of the mechanisms of how these diets work, and that can lead to rational ideas on how we might mimic those situations for cancer therapy.”

    The researchers now plan to study how diets with a variety of fat sources — including plant or animal-based fats with defined differences in saturated, monounsaturated, and polyunsaturated fatty acid content — alter tumor fatty acid metabolism and the ratio of unsaturated to saturated fatty acids.

    The research was funded by the Damon Runyon Cancer Research Foundation, the National Institutes of Health, the Lustgarten Foundation, the Dana-Farber Cancer Institute Hale Family Center for Pancreatic Cancer Research, Stand Up to Cancer, the Pancreatic Cancer Action Network, the Noble Effort Fund, the Wexler Family Fund, Promises for Purple, the Bob Parsons Fund, the Emerald Foundation, the Howard Hughes Medical Institute, the MIT Center for Precision Cancer Medicine, and the Ludwig Center at MIT.

  • Rover images confirm Jezero crater is an ancient Martian lake

    The first scientific analysis of images taken by NASA’s Perseverance rover has now confirmed that Mars’ Jezero crater — which today is a dry, wind-eroded depression — was once a quiet lake, fed steadily by a small river some 3.7 billion years ago.

    The images also reveal evidence that the crater endured flash floods. This flooding was energetic enough to sweep up large boulders from tens of miles upstream and deposit them into the lakebed, where the massive rocks lie today.

    The new analysis, published today in the journal Science, is based on images of the outcropping rocks inside the crater on its western side. Satellites had previously shown that this outcrop, seen from above, resembled river deltas on Earth, where layers of sediment are deposited in the shape of a fan as the river feeds into a lake.

    Perseverance’s new images, taken from inside the crater, confirm that this outcrop was indeed a river delta. Based on the sedimentary layers in the outcrop, it appears that the river delta fed into a lake that was calm for much of its existence, until a dramatic shift in climate triggered episodic flooding at or toward the end of the lake’s history.

    “If you look at these images, you’re basically staring at this epic desert landscape. It’s the most forlorn place you could ever visit,” says Benjamin Weiss, professor of planetary sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences and a member of the analysis team. “There’s not a drop of water anywhere, and yet, here we have evidence of a very different past. Something very profound happened in the planet’s history.”

    As the rover explores the crater, scientists hope to uncover more clues to its climatic evolution. Now that they have confirmed the crater was once a lake environment, they believe its sediments could hold traces of ancient aqueous life. In its mission going forward, Perseverance will look for locations to collect and preserve sediments. These samples will eventually be returned to Earth, where scientists can probe them for Martian biosignatures.

    “We now have the opportunity to look for fossils,” says team member Tanja Bosak, associate professor of geobiology at MIT. “It will take some time to get to the rocks that we really hope to sample for signs of life. So, it’s a marathon, with a lot of potential.”

    Tilted beds

    On Feb. 18, 2021, the Perseverance rover landed on the floor of Jezero crater, a little more than a mile away from its western fan-shaped outcrop. In the first three months, the vehicle remained stationary as NASA engineers performed remote checks of the rover’s many instruments.

    During this time, two of Perseverance’s cameras, Mastcam-Z and the SuperCam Remote Micro-Imager (RMI), captured images of their surroundings, including long-distance photos of the outcrop’s edge and a formation known as Kodiak butte, a smaller outcrop that planetary geologists surmise may have once been connected to the main fan-shaped outcrop but has since partially eroded.

    Once the rover downlinked images to Earth, NASA’s Perseverance science team processed and combined the images, and was able to observe distinct beds of sediment along Kodiak butte in surprisingly high resolution. The researchers measured each layer’s thickness, slope, and lateral extent, finding that the sediment must have been deposited by flowing water into a lake, rather than by wind, sheet-like floods, or other geologic processes.

    The rover also captured similar tilted sediment beds along the main outcrop. These images, together with those of Kodiak, confirm that the fan-shaped formation was indeed an ancient delta and that this delta fed into an ancient Martian lake.

    “Without driving anywhere, the rover was able to solve one of the big unknowns, which was that this crater was once a lake,” Weiss says. “Until we actually landed there and confirmed it was a lake, it was always a question.”

    Boulder flow

    When the researchers took a closer look at images of the main outcrop, they noticed large boulders and cobbles embedded in the youngest, topmost layers of the delta. Some boulders measured as much as 1 meter across and were estimated to weigh up to several tons. These massive rocks, the team concluded, must have come from outside the crater and were likely part of bedrock located on the crater rim or 40 or more miles upstream.

    Judging from their current location and dimensions, the team says the boulders were carried downstream and into the lakebed by a flash flood that flowed at up to 9 meters per second and moved up to 3,000 cubic meters of water per second.
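
    As a rough consistency check, the two reported peaks imply a sizable flood channel, since discharge equals flow velocity times cross-sectional area. A minimal sketch using the article's upper-bound numbers (the channel-geometry remark at the end is purely illustrative, not part of the team's analysis):

```python
# Back-of-the-envelope check of the reported flood figures. The numbers
# are the article's upper bounds; the width/depth comment is purely
# illustrative, not part of the team's analysis.
peak_velocity_m_s = 9.0       # flow speed, up to 9 m/s
peak_discharge_m3_s = 3000.0  # volume flux, up to 3,000 m^3/s

# Discharge Q = velocity v x cross-sectional area A, so A = Q / v.
flow_area_m2 = peak_discharge_m3_s / peak_velocity_m_s
print(f"Implied flow cross-section: {flow_area_m2:.0f} m^2")
# A channel ~100 m wide would then be flowing a few meters deep.
```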

    “You need energetic flood conditions to carry rocks that big and heavy,” Weiss says. “It’s a special thing that may be indicative of a fundamental change in the local hydrology or perhaps the regional climate on Mars.”

    Because the huge rocks lie in the upper layers of the delta, they represent the most recently deposited material. The boulders sit atop layers of older, much finer sediment. This stratification, the researchers say, indicates that for much of its existence, the ancient lake was filled by a gently flowing river. Fine sediments — and possibly organic material — drifted down the river, and settled into a gradual, sloping delta.

    However, the crater later experienced sudden flash floods that deposited large boulders onto the delta. Once the lake dried up, wind eroded the landscape over billions of years, leaving the crater we see today.

    The cause of this climate turnaround is unknown, although Weiss says the delta’s boulders may hold some answers.

    “The most surprising thing that’s come out of these images is the potential opportunity to catch the time when this crater transitioned from an Earth-like habitable environment, to this desolate landscape wasteland we see now,” he says. “These boulder beds may be records of this transition, and we haven’t seen this in other places on Mars.”

    This research was supported, in part, by NASA.

  • A robot that finds lost items

    A busy commuter is ready to walk out the door, only to realize they’ve misplaced their keys and must search through piles of stuff to find them. Rapidly sifting through clutter, they wish they could figure out which pile was hiding the keys.

    Researchers at MIT have created a robotic system that can do just that. The system, RFusion, is a robotic arm with a camera and radio frequency (RF) antenna attached to its gripper. It fuses signals from the antenna with visual input from the camera to locate and retrieve an item, even if the item is buried under a pile and completely out of view.

    The RFusion prototype the researchers developed relies on RFID tags, which are cheap, battery-less tags that can be stuck to an item and reflect signals sent by an antenna. Because RF signals can travel through most surfaces (like the mound of dirty laundry that may be obscuring the keys), RFusion is able to locate a tagged item within a pile.

    Using machine learning, the robotic arm automatically zeroes in on the object’s exact location, moves the items on top of it, grasps the object, and verifies that it picked up the right thing. The camera, antenna, robotic arm, and AI are fully integrated, so RFusion can work in any environment without requiring a special setup.

    While finding lost keys is helpful, RFusion could have many broader applications in the future, like sorting through piles to fulfill orders in a warehouse, identifying and installing components in an auto manufacturing plant, or helping an elderly individual perform daily tasks in the home, though the current prototype isn’t quite fast enough yet for these uses.

    “This idea of being able to find items in a chaotic world is an open problem that we’ve been working on for a few years. Having robots that are able to search for things under a pile is a growing need in industry today. Right now, you can think of this as a Roomba on steroids, but in the near term, this could have a lot of applications in manufacturing and warehouse environments,” said senior author Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the MIT Media Lab.

    Co-authors include research assistant Tara Boroushaki, the lead author; electrical engineering and computer science graduate student Isaac Perper; research associate Mergen Nachin; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. The research will be presented at the Association for Computing Machinery Conference on Embedded Networked Sensor Systems next month.

    Sending signals

    RFusion begins searching for an object using its antenna, which bounces signals off the RFID tag (like sunlight being reflected off a mirror) to identify a spherical area in which the tag is located. It combines that sphere with the camera input, which narrows down the object’s location. For instance, the item can’t be located on an area of a table that is empty.
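
    The sphere-plus-camera intersection can be sketched in a few lines: given an RF-derived range to the tag, only camera-detected candidate locations lying near the sphere of that radius around the antenna are kept. This is a hypothetical illustration (the `filter_candidates` helper and its tolerance are invented, not RFusion's actual code):

```python
import numpy as np

# Hypothetical sketch of fusing the RF range sphere with camera detections:
# keep only camera-proposed 3D spots whose distance from the antenna
# matches the RF-derived range. The helper and tolerance are invented
# for illustration, not RFusion's actual code.
def filter_candidates(antenna_pos, tag_range, candidates, tol=0.1):
    candidates = np.asarray(candidates, dtype=float)
    dists = np.linalg.norm(candidates - np.asarray(antenna_pos, dtype=float), axis=1)
    return candidates[np.abs(dists - tag_range) < tol]

# The camera proposes three cluttered spots; RF says the tag is ~1.0 m away,
# so only the first spot survives the intersection.
spots = [[1.0, 0.0, 0.0], [0.5, 0.5, 0.0], [2.0, 1.0, 0.0]]
print(filter_candidates([0.0, 0.0, 0.0], 1.0, spots))
```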

    But even once the robot has a general idea of where the item is, it would still need to swing its arm widely around the room, taking additional measurements to come up with the exact location, which is slow and inefficient.

    The researchers used reinforcement learning to train a neural network that can optimize the robot’s trajectory to the object. In reinforcement learning, the algorithm is trained through trial and error with a reward system.

    “This is also how our brain learns. We get rewarded from our teachers, from our parents, from a computer game, etc. The same thing happens in reinforcement learning. We let the agent make mistakes or do something right and then we punish or reward the network. This is how the network learns something that is really hard for it to model,” Boroushaki explains.

    In the case of RFusion, the optimization algorithm was rewarded when it limited the number of moves it had to make to localize the item and the distance it had to travel to pick it up.
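
    The trial-and-error training described above can be illustrated with a toy example. The sketch below uses tabular Q-learning on a one-dimensional search task, where the agent is penalized for every extra move and rewarded on reaching the item; it is an illustrative stand-in for the idea, not RFusion's actual network, state space, or reward values:

```python
import random

# Toy Q-learning sketch of the trial-and-error training described above:
# an agent on a short 1D track must reach a hidden item in as few moves
# as possible.
N_STATES, GOAL = 6, 5
ACTIONS = (-1, +1)  # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

random.seed(0)
for _ in range(500):  # training episodes
    s = 0
    while s != GOAL:
        if random.random() < eps:
            a = random.choice(ACTIONS)                 # explore
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])  # exploit
        s2 = min(max(s + a, 0), N_STATES - 1)
        # Penalize every extra move; reward reaching the item.
        r = 10.0 if s2 == GOAL else -1.0
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy should point toward the item.
policy = [max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES)]
print(policy)
```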

    Once the system identifies the exact spot, the neural network uses combined RF and visual information to predict how the robotic arm should grasp the object, including the angle of the hand and the width of the gripper, and whether it must remove other items first. It also scans the item’s tag one last time to make sure it picked up the right object.

    Cutting through clutter

    The researchers tested RFusion in several different environments. They buried a keychain in a box full of clutter and hid a remote control under a pile of items on a couch.

    Feeding all the camera data and RF measurements to the reinforcement learning algorithm, however, would have overwhelmed the system. So, drawing on the method a GPS uses to consolidate data from satellites, they summarized the RF measurements and limited the visual data to the area right in front of the robot.

    Their approach worked well — RFusion had a 96 percent success rate when retrieving objects that were fully hidden under a pile.

    “Sometimes, if you only rely on RF measurements, there is going to be an outlier, and if you rely only on vision, there is sometimes going to be a mistake from the camera. But if you combine them, they are going to correct each other. That is what made the system so robust,” Boroushaki says.

    In the future, the researchers hope to increase the speed of the system so it can move smoothly, rather than stopping periodically to take measurements. This would enable RFusion to be deployed in a fast-paced manufacturing or warehouse setting.

    Beyond its potential industrial uses, a system like this could even be incorporated into future smart homes to assist people with any number of household tasks, Boroushaki says.

    “Every year, billions of RFID tags are used to identify objects in today’s complex supply chains, including clothing and lots of other consumer goods. The RFusion approach points the way to autonomous robots that can dig through a pile of mixed items and sort them out using the data stored in the RFID tags, much more efficiently than having to inspect each item individually, especially when the items look similar to a computer vision system,” says Matthew S. Reynolds, CoMotion Presidential Innovation Fellow and associate professor of electrical and computer engineering at the University of Washington, who was not involved in the research. “The RFusion approach is a great step forward for robotics operating in complex supply chains where identifying and ‘picking’ the right item quickly and accurately is the key to getting orders fulfilled on time and keeping demanding customers happy.”

    The research is sponsored by the National Science Foundation, a Sloan Research Fellowship, NTT DATA, Toppan, Toppan Forms, and the Abdul Latif Jameel Water and Food Systems Lab.

  • Researchers design sensors to rapidly detect plant hormones

    Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, and their local collaborators from Temasek Life Sciences Laboratory (TLL) and Nanyang Technological University (NTU), have developed the first-ever nanosensor to enable rapid testing of synthetic auxin plant hormones. The novel nanosensors are safer and less tedious than existing techniques for testing plants’ response to compounds such as herbicide, and can be transformative in improving agricultural production and our understanding of plant growth.

    The scientists designed sensors for two plant hormones — 1-naphthalene acetic acid (NAA) and 2,4-dichlorophenoxyacetic acid (2,4-D) — which are used extensively in the farming industry for regulating plant growth and as herbicides, respectively. Current methods to detect NAA and 2,4-D cause damage to plants, and are unable to provide real-time in vivo monitoring and information.

    Based on the concept of corona phase molecular recognition (​​CoPhMoRe) pioneered by the Strano Lab at SMART DiSTAP and MIT, the new sensors are able to detect the presence of NAA and 2,4-D in living plants at a swift pace, providing plant information in real-time, without causing any harm. The team has successfully tested both sensors on a number of everyday crops including pak choi, spinach, and rice across various planting mediums such as soil, hydroponic, and plant tissue culture.
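
    Optical nanosensors of this kind typically report analyte concentration as a change in fluorescence intensity, which is calibrated against a binding curve. A minimal sketch, assuming a simple Langmuir-type response (the `max_response` and `kd_uM` parameters are invented for illustration and are not values from the paper):

```python
# Hypothetical Langmuir-type calibration for an optical nanosensor:
# fractional fluorescence change (dI/I0) versus analyte concentration.
# max_response and kd_uM are invented illustrative parameters, not
# values from the paper.
def sensor_response(conc_uM, max_response=0.6, kd_uM=5.0):
    return max_response * conc_uM / (kd_uM + conc_uM)

def invert_response(delta_i, max_response=0.6, kd_uM=5.0):
    # Invert the calibration curve to estimate concentration from signal.
    return kd_uM * delta_i / (max_response - delta_i)

signal = sensor_response(5.0)           # at conc == Kd, response is half max
print(signal, invert_response(signal))  # recovers the 5.0 uM concentration
```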

    Explained in a paper titled “Nanosensor Detection of Synthetic Auxins In Planta using Corona Phase Molecular Recognition,” published in the journal ACS Sensors, the research can facilitate more efficient use of synthetic auxins in agriculture and holds tremendous potential to advance the study of plant biology.

    “Our CoPhMoRe technique has previously been used to detect compounds such as hydrogen peroxide and heavy-metal pollutants like arsenic — but this is the first successful case of CoPhMoRe sensors developed for detecting plant phytohormones that regulate plant growth and physiology, such as sprays to prevent premature flowering and dropping of fruits,” says DiSTAP co-lead principal investigator Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT. “This technology can replace current state-of-the-art sensing methods which are laborious, destructive, and unsafe.”

    Of the two sensors developed by the research team, the 2,4-D nanosensor also showed the ability to detect herbicide susceptibility, enabling farmers and agricultural scientists to quickly find out how vulnerable or resistant different plants are to herbicides without the need to monitor crop or weed growth over days. “This could be incredibly beneficial in revealing the mechanism behind how 2,4-D works within plants and why crops develop herbicide resistance,” says DiSTAP and TLL Principal Investigator Rajani Sarojam.

    “Our research can help the industry gain a better understanding of plant growth dynamics and has the potential to completely change how the industry screens for herbicide resistance, eliminating the need to monitor crop or weed growth over days,” says Mervin Chun-Yi Ang, a research scientist at DiSTAP. “It can be applied across a variety of plant species and planting mediums, and could easily be used in commercial setups for rapid herbicide susceptibility testing, such as urban farms.”

    NTU Professor Mary Chan-Park Bee Eng says, “Using nanosensors for in planta detection eliminates the need for extensive extraction and purification processes, which saves time and money. They also use very low-cost electronics, which makes them easily adaptable for commercial setups.”

    The team says their research can lead to future development of real-time nanosensors for other dynamic plant hormones and metabolites in living plants as well.

    The development of the nanosensor, optical detection system, and image processing algorithms for this study was done by SMART, NTU, and MIT, while TLL validated the nanosensors and provided knowledge of plant biology and plant signaling mechanisms. The research is carried out by SMART and supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) program.

    DiSTAP is one of the five interdisciplinary research groups in SMART. The DiSTAP program addresses deep problems in food production in Singapore and the world by developing a suite of impactful and novel analytical, genetic, and biosynthetic technologies. The goal is to fundamentally change how plant biosynthetic pathways are discovered, monitored, engineered, and ultimately translated to meet the global demand for food and nutrients.

    Scientists from MIT, TLL, NTU, and the National University of Singapore (NUS) are collaboratively developing new tools for the continuous measurement of important plant metabolites and hormones, enabling discovery and a deeper understanding and control of plant biosynthetic pathways in ways not yet possible, especially in the context of green leafy vegetables. They are leveraging these techniques to engineer plants with highly desirable properties for global food security, including high-yield density production, drought and pathogen resistance, and biosynthesis of high-value commercial products. The group is also developing tools for producing hydrophobic food components in industry-relevant microbes, creating novel microbial and enzymatic technologies to produce volatile organic compounds that can protect and promote the growth of leafy vegetables, and applying these technologies to improve urban farming.

    DiSTAP is led by Michael Strano and Singapore co-lead principal investigator Professor Chua Nam Hai.

    SMART was established by MIT, in partnership with the NRF, in 2007. SMART, the first entity in CREATE, serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both. SMART currently comprises an Innovation Center and five interdisciplinary research groups: Antimicrobial Resistance (AMR), Critical Analytics for Manufacturing Personalized-Medicine (CAMP), DiSTAP, Future Urban Mobility (FM), and Low Energy Electronic Systems (LEES). SMART is funded by the NRF.

  • A new way to detect the SARS-CoV-2 Alpha variant in wastewater

    Researchers from the Antimicrobial Resistance (AMR) interdisciplinary research group at the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, alongside collaborators from Biobot Analytics, Nanyang Technological University (NTU), and MIT, have successfully developed an innovative, open-source molecular detection method that is able to detect and quantify the B.1.1.7 (Alpha) variant of SARS-CoV-2. The breakthrough paves the way for rapid, inexpensive surveillance of other SARS-CoV-2 variants in wastewater.

    As the world continues to battle and contain Covid-19, the recent identification of SARS-CoV-2 variants with higher transmissibility and increased severity has made developing convenient variant tracking methods essential. Currently identified variants include the B.1.1.7 (Alpha) variant first identified in the United Kingdom and the B.1.617.2 (Delta) variant first detected in India.

    Wastewater surveillance has emerged as a critical public health tool to safely and efficiently track the SARS-CoV-2 pandemic in a non-intrusive manner, providing complementary information that enables health authorities to acquire actionable community-level information. Most recently, viral fragments of SARS-CoV-2 were detected in housing estates in Singapore through a proactive wastewater surveillance program. This information, alongside surveillance testing, allowed Singapore’s Ministry of Health to swiftly respond, isolate, and conduct swab tests as part of precautionary measures.

    However, detecting variants through wastewater surveillance is less commonplace due to challenges in existing technology. Next-generation sequencing for wastewater surveillance is time-consuming and expensive. Tests also lack the sensitivity required to detect low variant abundances in dilute and mixed wastewater samples due to inconsistent and/or low sequencing coverage.

    The method developed by the researchers is uniquely tailored to address these challenges and expands the utility of wastewater surveillance beyond testing for SARS-CoV-2, toward tracking the spread of SARS-CoV-2 variants of concern.

    Wei Lin Lee, research scientist at SMART AMR and first author on the paper, adds, “This is especially important in countries battling SARS-CoV-2 variants. Wastewater surveillance will help find out the true proportion and spread of the variants in the local communities. Our method is sensitive enough to detect variants in highly diluted SARS-CoV-2 concentrations typically seen in wastewater samples, and produces reliable results even for samples which contain multiple SARS-CoV-2 lineages.”

    Led by Janelle Thompson, NTU associate professor, and Eric Alm, MIT professor and SMART AMR principal investigator, the team’s study, “Quantitative SARS-CoV-2 Alpha variant B.1.1.7 Tracking in Wastewater by Allele-Specific RT-qPCR” has been published in Environmental Science & Technology Letters. The research explains the innovative, open-source molecular detection method based on allele-specific RT-qPCR that detects and quantifies the B.1.1.7 (Alpha) variant. The developed assay, tested and validated in wastewater samples across 19 communities in the United States, is able to reliably detect and quantify low levels of the B.1.1.7 (Alpha) variant with low cross-reactivity, and at variant proportions down to 1 percent in a background of mixed SARS-CoV-2 viruses.

    Targeting spike protein mutations that are highly predictive of the B.1.1.7 (Alpha) variant, the method can be implemented using commercially available RT-qPCR protocols. Unlike commercially available products that use proprietary primers and probes for wastewater surveillance, the paper details the open-source method and its development that can be freely used by other organizations and research institutes for their work on wastewater surveillance of SARS-CoV-2 and its variants.
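
    To see how an allele-specific RT-qPCR assay can report a variant proportion, one can convert cycle-threshold (Ct) values to copy numbers with a log-linear standard curve and take the ratio of variant-allele copies to total SARS-CoV-2 copies. The sketch below uses typical illustrative calibration values, not the published assay's:

```python
# Illustrative variant-proportion estimate from allele-specific RT-qPCR.
# The standard-curve intercept and slope are typical textbook values
# (a slope of -3.32 corresponds to ~100% amplification efficiency),
# not the published assay's calibration.
def ct_to_copies(ct, intercept=40.0, slope=-3.32):
    # Log-linear standard curve: Ct = intercept + slope * log10(copies).
    return 10 ** ((ct - intercept) / slope)

def variant_fraction(ct_variant_allele, ct_total):
    # Ratio of variant-allele copies to total SARS-CoV-2 copies.
    return ct_to_copies(ct_variant_allele) / ct_to_copies(ct_total)

# A variant allele crossing threshold ~6.6 cycles after the total signal
# corresponds to roughly a 1 percent variant proportion.
print(variant_fraction(ct_variant_allele=33.2, ct_total=26.6))
```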

    The breakthrough by the research team in Singapore is currently used by Biobot Analytics, an MIT startup and global leader in wastewater epidemiology headquartered in Cambridge, Massachusetts, serving states and localities throughout the United States. Using the method, Biobot Analytics is able to accept and analyze wastewater samples for the B.1.1.7 (Alpha) variant and plans to add additional variants to its analysis as methods are developed. For example, the SMART AMR team is currently developing specific assays that will be able to detect and quantify the B.1.617.2 (Delta) variant, which has recently been identified as a variant of concern by the World Health Organization.

    “Using the team’s innovative method, we have been able to monitor the B.1.1.7 (Alpha) variant in local populations in the U.S. — empowering leaders with information about Covid-19 trends in their communities and allowing them to make considered recommendations and changes to control measures,” says Mariana Matus PhD ’18, Biobot Analytics CEO and co-founder.

    “This method can be rapidly adapted to detect new variants of concern beyond B.1.1.7,” adds MIT’s Alm. “Our partnership with Biobot Analytics has translated our research into real-world impact beyond the shores of Singapore, aiding in the detection of Covid-19 and its variants and serving as an early warning system and guidance for policymakers as they trace infection clusters and consider suitable public health measures.”

    The research is carried out by SMART and supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) program.

    SMART was established by MIT in partnership with the National Research Foundation of Singapore (NRF) in 2007. SMART is the first entity in CREATE developed by NRF. SMART serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both Singapore and MIT. SMART currently comprises an Innovation Center and five interdisciplinary research groups: AMR, Critical Analytics for Manufacturing Personalized-Medicine, Disruptive and Sustainable Technologies for Agricultural Precision, Future Urban Mobility, and Low Energy Electronic Systems.

    The AMR interdisciplinary research group is a translational research and entrepreneurship program that tackles the growing threat of antimicrobial resistance. By leveraging talent and convergent technologies across Singapore and MIT, AMR aims to develop multiple innovative and disruptive approaches to identify, respond to, and treat drug-resistant microbial infections. Through strong scientific and clinical collaborations, its goal is to provide transformative, holistic solutions for Singapore and the world.