More stories

  • How to clean solar panels without water

    Solar power is expected to reach 10 percent of global power generation by the year 2030, and much of that is likely to be located in desert areas, where sunlight is abundant. But the accumulation of dust on solar panels or mirrors is already a significant issue — it can reduce the output of photovoltaic panels by as much as 30 percent in just one month — so regular cleaning is essential for such installations.

    But cleaning solar panels is currently estimated to use about 10 billion gallons of water per year — enough to supply drinking water for up to 2 million people. Attempts at waterless cleaning are labor-intensive and tend to cause irreversible scratching of the surfaces, which also reduces efficiency. Now, a team of researchers at MIT has devised a way of automatically cleaning solar panels, or the mirrors of solar thermal plants, in a waterless, no-contact system that could significantly reduce the dust problem, they say.

    The new system uses electrostatic repulsion to cause dust particles to detach and virtually leap off the panel’s surface, without the need for water or brushes. To activate the system, a simple electrode passes just above the solar panel’s surface, imparting an electrical charge to the dust particles, which are then repelled by a charge applied to the panel itself. The system can be operated automatically using a simple electric motor and guide rails along the side of the panel. The research is described today in the journal Science Advances, in a paper by MIT graduate student Sreedath Panat and professor of mechanical engineering Kripa Varanasi.


    Despite concerted efforts worldwide to develop ever more efficient solar panels, Varanasi says, “a mundane problem like dust can actually put a serious dent in the whole thing.” Lab tests conducted by Panat and Varanasi showed that the drop-off in energy output from the panels happens steeply at the very beginning of dust accumulation and can easily reach a 30 percent reduction after just one month without cleaning. Even a 1 percent reduction in power for a 150-megawatt solar installation, they calculated, could result in a $200,000 loss in annual revenue. The researchers say that globally, a 3 to 4 percent reduction in power output from solar plants would amount to a loss of between $3.3 billion and $5.5 billion.
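The revenue figures quoted here follow from simple arithmetic. A minimal sketch of that calculation, assuming an illustrative capacity factor and electricity price (neither figure is given in the article):

```python
# Rough sketch of the revenue-loss arithmetic quoted in the article.
# The capacity factor and electricity price below are illustrative
# assumptions, not values from the study.

def annual_revenue_loss(capacity_mw, loss_fraction,
                        capacity_factor=0.25, price_per_mwh=60.0):
    """Estimate yearly revenue lost to a fractional drop in output."""
    hours_per_year = 8760
    lost_mwh = capacity_mw * loss_fraction * capacity_factor * hours_per_year
    return lost_mwh * price_per_mwh

loss = annual_revenue_loss(150, 0.01)   # 1% soiling loss on a 150 MW plant
print(f"${loss:,.0f} per year")         # roughly $200,000 with these assumptions
```

With these assumed numbers the result lands near the article's $200,000 figure; the exact value depends on the plant's actual capacity factor and power-purchase price.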

    “There is so much work going on in solar materials,” Varanasi says. “They’re pushing the boundaries, trying to gain a few percent here and there in improving the efficiency, and here you have something that can obliterate all of that right away.”

    Many of the largest solar power installations in the world, including ones in China, India, the U.A.E., and the U.S., are located in desert regions. The water used for cleaning these solar panels using pressurized water jets has to be trucked in from a distance, and it has to be very pure to avoid leaving behind deposits on the surfaces. Dry scrubbing is sometimes used but is less effective at cleaning the surfaces and can cause permanent scratching that also reduces light transmission.

    Water cleaning makes up about 10 percent of the operating costs of solar installations. The new system could potentially reduce these costs while improving the overall power output by allowing for more frequent automated cleanings, the researchers say.

    “The water footprint of the solar industry is mind-boggling,” Varanasi says, and it will only increase as these installations continue to expand worldwide. “So, the industry has to be very careful and thoughtful about how to make this a sustainable solution.”

    Other groups have tried to develop electrostatics-based solutions, but these have relied on a layer called an electrodynamic screen, which uses interdigitated electrodes. These screens can have defects that allow moisture in and cause them to fail, Varanasi says. While they might be useful somewhere like Mars, where moisture is not an issue, even in desert environments on Earth moisture can be a serious problem.

    The new system they developed requires only an electrode, which can be a simple metal bar, to pass over the panel, producing an electric field that imparts a charge to the dust particles as it goes. An opposite charge, applied to a transparent conductive layer just a few nanometers thick deposited on the glass covering of the solar panel, then repels the particles. By calculating the right voltage to apply, the researchers found a range sufficient to overcome both gravity and adhesion forces and cause the dust to lift away.
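The lift-off condition described here is a force balance: the electrostatic force on a charged grain must exceed gravity plus surface adhesion. A rough sketch of that balance, with entirely assumed particle properties (the paper's actual values are not reproduced here):

```python
# Hedged back-of-the-envelope force balance for electrostatic dust removal.
# All numbers (particle size, charge, adhesion force, field strength) are
# illustrative assumptions, not measurements from the MIT paper.
import math

def lifts_off(radius_m, charge_c, field_v_per_m,
              density_kg_m3=2650.0, adhesion_n=1e-9):
    """Return True if the electrostatic force exceeds gravity plus adhesion."""
    g = 9.81
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m**3
    f_electric = charge_c * field_v_per_m   # Coulomb force, F = qE
    f_holding = mass * g + adhesion_n       # weight plus surface adhesion
    return f_electric > f_holding

# A 10-micron silica-like grain carrying ~1e5 elementary charges
q = 1e5 * 1.602e-19                         # about 1.6e-14 C
print(lifts_off(10e-6, q, field_v_per_m=1e6))
```

Under these assumptions a field of order megavolts per meter lifts the grain, while a hundredfold weaker field does not, which is why a tuned voltage range matters.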

    Using specially prepared laboratory samples of dust with a range of particle sizes, experiments proved that the process works effectively on a laboratory-scale test installation, Panat says. The tests showed that humidity in the air provided a thin coating of water on the particles, which turned out to be crucial to making the effect work. “We performed experiments at varying humidities from 5 percent to 95 percent,” Panat says. “As long as the ambient humidity is greater than 30 percent, you can remove almost all of the particles from the surface, but as humidity decreases, it becomes harder.”

    Varanasi says that “the good news is that when you get to 30 percent humidity, most deserts actually fall in this regime.” And even those that are typically drier than that tend to have higher humidity in the early morning hours, leading to dew formation, so the cleaning could be timed accordingly.

    “Moreover, unlike some of the prior work on electrodynamic screens, which actually do not work at high or even moderate humidity, our system can work at humidity even as high as 95 percent, indefinitely,” Panat says.

    In practice, at scale, each solar panel could be fitted with railings on each side, with an electrode spanning across the panel. A small electric motor, perhaps using a tiny portion of the output from the panel itself, would drive a belt system to move the electrode from one end of the panel to the other, causing all the dust to fall away. The whole process could be automated or controlled remotely. Alternatively, thin strips of conductive transparent material could be permanently arranged above the panel, eliminating the need for moving parts.

    By eliminating the dependency on trucked-in water, by eliminating the buildup of dust that can contain corrosive compounds, and by lowering the overall operational costs, such systems have the potential to significantly improve the overall efficiency and reliability of solar installations, Varanasi says.

    The research was supported by the Italian energy firm Eni S.p.A. through the MIT Energy Initiative.

  • Toward batteries that pack twice as much energy per pound

    In the endless quest to pack more energy into batteries without increasing their weight or volume, one especially promising technology is the solid-state battery. In these batteries, the usual liquid electrolyte that carries charges back and forth between the electrodes is replaced with a solid electrolyte layer. Such batteries could potentially not only deliver twice as much energy for their size but also virtually eliminate the fire hazard associated with today’s lithium-ion batteries.

    But one thing has held back solid-state batteries: Instabilities at the boundary between the solid electrolyte layer and the two electrodes on either side can dramatically shorten the lifetime of such batteries. Some studies have used special coatings to improve the bonding between the layers, but this adds the expense of extra coating steps in the fabrication process. Now, a team of researchers at MIT and Brookhaven National Laboratory has come up with a way of achieving results that equal or surpass the durability of the coated surfaces, but with no need for any coatings.

    The new method simply requires eliminating any carbon dioxide present during a critical manufacturing step, called sintering, in which the battery materials are heated to create bonding between the cathode and electrolyte layers, which are made of ceramic compounds. Even though the amount of carbon dioxide present in air is vanishingly small, measured in parts per million, its effects turn out to be dramatic and detrimental. Carrying out the sintering step in pure oxygen creates bonds that match the performance of the best coated surfaces, without the extra cost of the coating, the researchers say.

    The findings are reported in the journal Advanced Energy Materials, in a paper by MIT doctoral student Younggyu Kim, professor of nuclear science and engineering and of materials science and engineering Bilge Yildiz, and Iradikanari Waluyo and Adrian Hunt at Brookhaven National Laboratory.

    “Solid-state batteries have been desirable for different reasons for a long time,” Yildiz says. “The key motivating points for solid batteries are they are safer and have higher energy density,” but they have been held back from large-scale commercialization by two factors, she says: the lower conductivity of the solid electrolyte, and the interface instability issues.

    The conductivity issue has been effectively tackled, and reasonably high-conductivity materials have already been demonstrated, according to Yildiz. But overcoming the instabilities that arise at the interface has been far more challenging. These instabilities can occur during both the manufacturing and the electrochemical operation of such batteries, but for now the researchers have focused on the manufacturing, and specifically the sintering process.

    Sintering is needed because if the ceramic layers are simply pressed onto each other, the contact between them is far from ideal: there are far too many gaps, and the electrical resistance across the interface is high. Sintering, which for ceramic materials is usually done at temperatures of 1,000 degrees Celsius or above, causes atoms from each material to migrate into the other to form bonds. The team’s experiments showed that at temperatures above a few hundred degrees, detrimental reactions that increase the resistance at the interface take place — but only if carbon dioxide is present, even in tiny amounts. They demonstrated that avoiding carbon dioxide, and in particular maintaining a pure oxygen atmosphere during sintering, could create very good bonding at temperatures up to 700 degrees, with none of the detrimental compounds formed.

    The performance of the cathode-electrolyte interface made using this method, Yildiz says, was “comparable to the best interface resistances we have seen in the literature,” but those were all achieved using the extra step of applying coatings. “We are finding that you can avoid that additional fabrication step, which is typically expensive.”

    The potential gains in energy density that solid-state batteries provide come from the fact that they enable the use of pure lithium metal as one of the electrodes, which is much lighter than the currently used electrodes made of lithium-infused graphite.

    The team is now studying the next part of the performance of such batteries, which is how these bonds hold up over the long run during battery cycling. Meanwhile, the new findings could potentially be applied rapidly to battery production, she says. “What we are proposing is a relatively simple process in the fabrication of the cells. It doesn’t add much energy penalty to the fabrication. So, we believe that it can be adopted relatively easily into the fabrication process,” and the added costs, they have calculated, should be negligible.

    Large companies such as Toyota are already at work commercializing early versions of solid-state lithium-ion batteries, and these new findings could quickly help such companies improve the economics and durability of the technology.

    The research was supported by the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies. The team used facilities supported by the National Science Foundation and facilities at Brookhaven National Laboratory supported by the Department of Energy.

  • More sensitive X-ray imaging

    Scintillators are materials that emit light when bombarded with high-energy particles or X-rays. In medical or dental X-ray systems, they convert incoming X-ray radiation into visible light that can then be captured using film or photosensors. They’re also used for night-vision systems and for research, such as in particle detectors or electron microscopes.

    Researchers at MIT have now shown how one could improve the efficiency of scintillators by at least tenfold, and perhaps even a hundredfold, by changing the material’s surface to create certain nanoscale configurations, such as arrays of wave-like ridges. While past attempts to develop more efficient scintillators have focused on finding new materials, the new approach could in principle work with any of the existing materials.

    Though it will require more time and effort to integrate their scintillators into existing X-ray machines, the team believes that this method might lead to improvements in medical diagnostic X-rays or CT scans, to reduce dose exposure and improve image quality. In other applications, such as X-ray inspection of manufactured parts for quality control, the new scintillators could enable inspections with higher accuracy or at faster speeds.

    The findings are described today in the journal Science, in a paper by MIT doctoral students Charles Roques-Carmes and Nicholas Rivera; MIT professors Marin Soljacic, Steven Johnson, and John Joannopoulos; and 10 others.

    While scintillators have been in use for some 70 years, much of the research in the field has focused on developing new materials that produce brighter or faster light emissions. The new approach instead applies advances in nanotechnology to existing materials. By creating patterns in scintillator materials at a length scale comparable to the wavelengths of the light being emitted, the team found that it was possible to dramatically change the material’s optical properties.

    To make what they have dubbed “nanophotonic scintillators,” Roques-Carmes says, “you can directly make patterns inside the scintillators, or you can glue on another material that would have holes on the nanoscale. The specifics depend on the exact structure and material.” For this research, the team took a scintillator and made holes spaced apart by roughly one optical wavelength, or about 500 nanometers (billionths of a meter).

    “The key to what we’re doing is a general theory and framework we have developed,” Rivera says. This allows the researchers to calculate the scintillation levels that would be produced by any arbitrary configuration of nanophotonic structures. The scintillation process itself involves a series of steps, making it complicated to unravel. The framework the team developed involves integrating three different types of physics, Roques-Carmes says. Using this system they have found a good match between their predictions and the results of their subsequent experiments.

    The experiments showed a tenfold improvement in emission from the treated scintillator. “So, this is something that might translate into applications for medical imaging, which are optical photon-starved, meaning the conversion of X-rays to optical light limits the image quality. [In medical imaging,] you do not want to irradiate your patients with too much of the X-rays, especially for routine screening, and especially for young patients as well,” Roques-Carmes says.

    “We believe that this will open a new field of research in nanophotonics,” he adds. “You can use a lot of the existing work and research that has been done in the field of nanophotonics to improve significantly on existing materials that scintillate.”

    “The research presented in this paper is hugely significant,” says Rajiv Gupta, chief of neuroradiology at Massachusetts General Hospital and an associate professor at Harvard Medical School, who was not associated with this work. “Nearly all detectors used in the $100 billion [medical X-ray] industry are indirect detectors,” which is the type of detector the new findings apply to, he says. “Everything that I use in my clinical practice today is based on this principle. This paper improves the efficiency of this process by 10 times. If this claim is even partially true, say the improvement is two times instead of 10 times, it would be transformative for the field!”

    Soljacic says that while their experiments proved a tenfold improvement in emission could be achieved in particular systems, by further fine-tuning the design of the nanoscale patterning, “we also show that you can get up to 100 times [improvement] in certain scintillator systems, and we believe we also have a path toward making it even better,” he says.

    Soljacic points out that in other areas of nanophotonics, a field that deals with how light interacts with materials that are structured at the nanometer scale, the development of computational simulations has enabled rapid, substantial improvements, for example in the development of solar cells and LEDs. The new models this team developed for scintillating materials could facilitate similar leaps in this technology, he says.

    Nanophotonics techniques “give you the ultimate power of tailoring and enhancing the behavior of light,” Soljacic says. “But until now, this promise, this ability to do this with scintillation was unreachable because modeling the scintillation was very challenging. Now, this work for the first time opens up this field of scintillation, fully opens it, for the application of nanophotonics techniques.” More generally, the team believes that the combination of nanophotonics and scintillators might ultimately enable higher resolution, reduced X-ray dose, and energy-resolved X-ray imaging.

    This work is “very original and excellent,” says Eli Yablonovitch, a professor of Electrical Engineering and Computer Sciences at the University of California at Berkeley, who was not associated with this research. “New scintillator concepts are very important in medical imaging and in basic research.”

    Yablonovitch adds that while the concept still needs to be proven in a practical device, he says that, “After years of research on photonic crystals in optical communication and other fields, it’s long overdue that photonic crystals should be applied to scintillators, which are of great practical importance yet have been overlooked” until this work.

    The research team included Ali Ghorashi, Steven Kooi, Yi Yang, Zin Lin, Justin Beroz, Aviram Massuda, Jamison Sloan, and Nicolas Romeo at MIT; Yang Yu at Raith America, Inc.; and Ido Kaminer at Technion in Israel. The work was supported, in part, by the U.S. Army Research Office and the U.S. Army Research Laboratory through the Institute for Soldier Nanotechnologies, by the Air Force Office of Scientific Research, and by a Mathworks Engineering Fellowship.

  • Solar-powered system offers a route to inexpensive desalination

    An estimated two-thirds of humanity is affected by shortages of water, and many such areas in the developing world also face a lack of dependable electricity. Widespread research efforts have thus focused on ways to desalinate seawater or brackish water using just solar heat. Many such efforts have run into problems with fouling of equipment caused by salt buildup, however, which often adds complexity and expense.

    Now, a team of researchers at MIT and in China has come up with a solution to the problem of salt accumulation — and in the process developed a desalination system that is both more efficient and less expensive than previous solar desalination methods. The process could also be used to treat contaminated wastewater or to generate steam for sterilizing medical instruments, all without requiring any power source other than sunlight itself.

    The findings are described today in the journal Nature Communications, in a paper by MIT graduate student Lenan Zhang, postdoc Xiangyu Li, professor of mechanical engineering Evelyn Wang, and four others.

    “There have been a lot of demonstrations of really high-performing, salt-rejecting, solar-based evaporation designs of various devices,” Wang says. “The challenge has been the salt fouling issue, that people haven’t really addressed. So, we see these very attractive performance numbers, but they’re often limited because of longevity. Over time, things will foul.”

    Many attempts at solar desalination systems rely on some kind of wick to draw the saline water through the device, but these wicks are vulnerable to salt accumulation and relatively difficult to clean. The team focused on developing a wick-free system instead. The result is a layered system, with dark material at the top to absorb the sun’s heat, then a thin layer of water above a perforated layer of material, sitting atop a deep reservoir of salty water, such as a tank or a pond. After careful calculations and experiments, the researchers determined the optimal size for the holes drilled through the perforated material, which in their tests was made of polyurethane. At 2.5 millimeters across, these holes can easily be made using commonly available waterjets.

    The holes are large enough to allow for a natural convective circulation between the warmer upper layer of water and the colder reservoir below. That circulation naturally draws the salt from the thin layer above down into the much larger body of water below, where it becomes well-diluted and no longer a problem. “It allows us to achieve high performance and yet also prevent this salt accumulation,” says Wang, who is the Ford Professor of Engineering and head of the Department of Mechanical Engineering.

    Li says that the advantages of this system are “both the high performance and the reliable operation, especially under extreme conditions, where we can actually work with near-saturation saline water. And that means it’s also very useful for wastewater treatment.”

    He adds that much work on such solar-powered desalination has focused on novel materials. “But in our case, we use really low-cost, almost household materials.” The key was analyzing and understanding the convective flow that drives this entirely passive system, he says. “People say you always need new materials, expensive ones, or complicated structures or wicking structures to do that. And this is, I believe, the first one that does this without wicking structures.”

    This new approach “provides a promising and efficient path for desalination of high salinity solutions, and could be a game changer in solar water desalination,” says Hadi Ghasemi, a professor of chemical and biomolecular engineering at the University of Houston, who was not associated with this work. “Further work is required for assessment of this concept in large settings and in long runs,” he adds.

    Just as hot air rises and cold air falls, Zhang explains, natural convection drives the desalination process in this device. In the confined water layer near the top, “the evaporation happens at the very top interface. Because of the salt, the density of water at the very top interface is higher, and the bottom water has lower density. So, this is an original driving force for this natural convection because the higher density at the top drives the salty liquid to go down.” The water evaporated from the top of the system can then be collected on a condensing surface, providing pure fresh water.

    The rejection of salt to the water below could also cause heat to be lost in the process, so preventing that required careful engineering, including making the perforated layer out of highly insulating material to keep the heat concentrated above. The solar heating at the top is accomplished through a simple layer of black paint.

    An animation shows the fluid flow visualized with food dye. On the left, colored deionized water is transported slowly from the top layer into the bulk water below; on the right, colored saline water is transported rapidly downward, driven by the natural convection effect.

    So far, the team has proven the concept using small benchtop devices, so the next step will be starting to scale up to devices that could have practical applications. Based on their calculations, a system with just 1 square meter (about a square yard) of collecting area should be sufficient to provide a family’s daily needs for drinking water, they say. Zhang says they calculated that the necessary materials for a 1-square-meter device would cost only about $4.
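The claim that one square meter can cover a family's drinking water follows from an energy balance: incoming solar energy divided by the latent heat of vaporization. A minimal sketch, assuming an illustrative daily insolation and evaporation efficiency (neither is reported in the article):

```python
# Rough sketch of why ~1 m^2 can cover a family's daily drinking water.
# Insolation and efficiency are illustrative assumptions, not numbers
# reported by the researchers; only the latent heat is a physical constant.

def daily_yield_liters(area_m2, daily_insolation_kwh_m2=5.0,
                       evaporation_efficiency=0.8):
    """Estimate daily distilled-water output of a solar evaporator."""
    latent_heat_mj_per_kg = 2.26                         # vaporization of water
    energy_mj = area_m2 * daily_insolation_kwh_m2 * 3.6  # kWh -> MJ
    return energy_mj * evaporation_efficiency / latent_heat_mj_per_kg

print(f"{daily_yield_liters(1.0):.1f} L/day")   # a few liters per person
```

Under these assumptions the output is on the order of six liters per day, in the range of a small family's drinking needs; cloudier climates or lower efficiencies would require proportionally more area.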

    Their test apparatus operated for a week with no signs of any salt accumulation, Li says. And the device is remarkably stable. “Even if we apply some extreme perturbation, like waves on the seawater or the lake,” where such a device could be installed as a floating platform, “it can return to its original equilibrium position very fast,” he says.

    The necessary work to translate this lab-scale proof of concept into workable commercial devices, and to improve the overall water production rate, should be possible within a few years, Zhang says. The first applications are likely to be providing safe water in remote off-grid locations, or for disaster relief after hurricanes, earthquakes, or other disruptions of normal water supplies.

    Zhang adds that “if we can concentrate the sunlight a little bit, we could use this passive device to generate high-temperature steam to do medical sterilization” for off-grid rural areas.

    “I think a real opportunity is the developing world,” Wang says. “I think that is where there’s most probable impact near-term, because of the simplicity of the design.” But, she adds, “if we really want to get it out there, we also need to work with the end users, to really be able to adopt the way we design it so that they’re willing to use it.”

    “This is a new strategy toward solving the salt accumulation problem in solar evaporation,” says Peng Wang, a professor at King Abdullah University of Science and Technology in Saudi Arabia, who was not associated with this research. “This elegant design will inspire new innovations in the design of advanced solar evaporators. The strategy is very promising due to its high energy efficiency, operation durability, and low cost, which contributes to low-cost and passive water desalination to produce fresh water from various source water with high salinity, e.g., seawater, brine, or brackish groundwater.”

    The team also included Yang Zhong, Arny Leroy, and Lin Zhao at MIT, and Zhenyuan Xu at Shanghai Jiao Tong University in China. The work was supported by the Singapore-MIT Alliance for Research and Technology, the U.S.-Egypt Science and Technology Joint Fund, and used facilities supported by the National Science Foundation.

  • Overcoming a bottleneck in carbon dioxide conversion

    If researchers could find a way to chemically convert carbon dioxide into fuels or other products, they might make a major dent in greenhouse gas emissions. But many such processes that have seemed promising in the lab haven’t performed as expected in scaled-up formats that would be suitable for use with a power plant or other emissions sources.

    Now, researchers at MIT have identified, quantified, and modeled a major reason for poor performance in such conversion systems. The culprit turns out to be a local depletion of the carbon dioxide gas right next to the electrodes being used to catalyze the conversion. The problem can be alleviated, the team found, by simply pulsing the current off and on at specific intervals, allowing time for the gas to build back up to the needed levels next to the electrode.

    The findings, which could spur progress on developing a variety of materials and designs for electrochemical carbon dioxide conversion systems, were published today in the journal Langmuir, in a paper by MIT postdoc Álvaro Moreno Soto, graduate student Jack Lake, and professor of mechanical engineering Kripa Varanasi.

    “Carbon dioxide mitigation is, I think, one of the important challenges of our time,” Varanasi says. While much of the research in the area has focused on carbon capture and sequestration, in which the gas is pumped into some kind of deep underground reservoir or converted to an inert solid such as limestone, another promising avenue has been converting the gas into other carbon compounds such as methane or ethanol, to be used as fuel, or ethylene, which serves as a precursor to useful polymers.

    There are several ways to do such conversions, including electrochemical, thermocatalytic, photothermal, or photochemical processes. “Each of these has problems or challenges,” Varanasi says. The thermal processes require very high temperatures and don’t produce very high-value chemical products, which is a challenge with the light-activated processes as well, he says. “Efficiency is always at play, always an issue.”

    The team has focused on the electrochemical approaches, with a goal of getting “higher-C products” — compounds that contain more carbon atoms and tend to be higher-value fuels because of their energy per weight or volume. In these reactions, the biggest challenge has been curbing competing reactions that can take place at the same time, especially the splitting of water molecules into oxygen and hydrogen.

    The reactions take place as a stream of liquid electrolyte with the carbon dioxide dissolved in it passes over a metal catalytic surface that is electrically charged. But as the carbon dioxide gets converted, it leaves behind a region in the electrolyte stream where it has essentially been used up, and so the reaction within this depleted zone turns toward water splitting instead. This unwanted reaction uses up energy and greatly reduces the overall efficiency of the conversion process, the researchers found.

    “There’s a number of groups working on this, and a number of catalysts that are out there,” Varanasi says. “In all of these, I think the hydrogen co-evolution becomes a bottleneck.”

    One way of counteracting this depletion, they found, can be achieved by a pulsed system — a cycle of simply turning off the voltage, stopping the reaction and giving the carbon dioxide time to spread back into the depleted zone and reach usable levels again, and then resuming the reaction.
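How long the "off" part of each pulse must last can be estimated from the diffusive refill time of the depleted zone, t ~ L²/D. A hedged sketch, where the boundary-layer thickness is an illustrative assumption (the paper's actual pulse intervals are not given here):

```python
# Hedged estimate of how long the voltage must stay off for dissolved CO2
# to diffuse back to the electrode. The boundary-layer thickness is an
# illustrative assumption, not a value from the study; the diffusivity is
# a typical literature value for CO2 in water.

def recovery_time_s(boundary_layer_m, diffusivity=1.9e-9):
    """Diffusive refill time t ~ L^2 / D for the depleted zone."""
    return boundary_layer_m**2 / diffusivity

t_off = recovery_time_s(100e-6)     # ~100 micron depletion layer
print(f"off-time = {t_off:.1f} s")  # seconds-scale pauses between pulses
```

The quadratic dependence on layer thickness means a depletion zone twice as thick needs four times the recovery time, which is why the pulse timing has to be matched to the cell's hydrodynamics.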

    Often, the researchers say, groups have found promising catalyst materials but haven’t run their lab tests long enough to observe these depletion effects, and thus have been frustrated in trying to scale up their systems. Furthermore, the concentration of carbon dioxide next to the catalyst dictates the products that are made. Hence, depletion can also change the mix of products that are produced and can make the process unreliable. “If you want to be able to make a system that works at industrial scale, you need to be able to run things over a long period of time,” Varanasi says, “and you need to not have these kinds of effects that reduce the efficiency or reliability of the process.”

    The team studied three different catalyst materials, including copper, and “we really focused on making sure that we understood and can quantify the depletion effects,” Lake says. In the process they were able to develop a simple and reliable way of monitoring the efficiency of the conversion process as it happens, by measuring the changing pH levels, a measure of acidity, in the system’s electrolyte.

    In their tests, they used more sophisticated analytical tools to characterize reaction products, including gas chromatography for analysis of the gaseous products, and nuclear magnetic resonance characterization for the system’s liquid products. But their analysis showed that the simple pH measurement of the electrolyte next to the electrode during operation could provide a sufficient measure of the efficiency of the reaction as it progressed.

    This ability to easily monitor the reaction in real-time could ultimately lead to a system optimized by machine-learning methods, controlling the production rate of the desired compounds through continuous feedback, Moreno Soto says.

    Now that the process is understood and quantified, other approaches to mitigating the carbon dioxide depletion might be developed, the researchers say, and could easily be tested using their methods.

    This work shows, Lake says, that “no matter what your catalyst material is” in such an electrocatalytic system, “you’ll be affected by this problem.” And now, by using the model they developed, it’s possible to determine exactly what kind of time window needs to be evaluated to get an accurate sense of the material’s overall efficiency and what kind of system operations could maximize its effectiveness.

    The research was supported by Shell, through the MIT Energy Initiative.

  • in

    Nanograins make for a seismic shift

    In Earth’s crust, tectonic blocks slide and grind past each other like enormous ships loosed from anchor. Earthquakes are generated along these fault zones when enough stress builds for a block to stick, then suddenly slip.

    These slips can be aided by several factors that reduce friction within a fault zone, such as hotter temperatures or pressurized gases that can separate blocks like pucks on an air-hockey table. The decreasing friction enables one tectonic block to accelerate against the other until it runs out of energy. Seismologists have long believed this kind of frictional instability can explain how all crustal earthquakes start. But that might not be the whole story.

    In a study published today in Nature Communications, scientists Hongyu Sun and Matej Pec, from MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), find that ultra-fine-grained crystals within fault zones can behave like low-viscosity fluids. The finding offers an alternative explanation for the instability that leads to crustal earthquakes. It also suggests a link between quakes in the crust and other types of temblors that occur deep in the Earth.

    Nanograins are commonly found in rocks from seismic environments along the smooth surface of “fault mirrors.” These polished, reflective rock faces betray the slipping, sliding forces of past earthquakes. However, it was unclear whether the crystals caused quakes or were merely formed by them.

    To better characterize how these crystals behaved within a fault, the researchers used a planetary ball milling machine to pulverize granite rocks into particles resembling those found in nature. Like a super-powered washing machine filled with ceramic balls, the machine pounded the rock until all its crystals were about 100 nanometers in width, each grain 1/2,000 the size of an average grain of sand.

    After packing the nanopowder into postage-stamp-sized cylinders jacketed in gold, the researchers subjected the material to stresses and heat, creating laboratory miniatures of real fault zones. This process enabled them to isolate the effect of the crystals from the complexity of other factors involved in an actual earthquake.

    The researchers report that the crystals were extremely weak when shearing was initiated — an order of magnitude weaker than more common microcrystals. But the nanocrystals became significantly stronger when the deformation rate was accelerated. Pec, professor of geophysics and the Victor P. Starr Career Development Chair, compares this characteristic, called “rate-strengthening,” to stirring honey in a jar. Stirring the honey slowly is easy, but becomes more difficult the faster you stir.

    The experiment suggests something similar happens in fault zones. As tectonic blocks accelerate past each other, the crystals gum things up between them like honey stirred in a seismic pot.
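The contrast between conventional rate-weakening friction and the rate-strengthening behavior of the nanograin layer can be sketched in a toy comparison. Both constitutive forms below are illustrative simplifications with hypothetical parameters — the rate-and-state-style logarithmic term and the viscosity value are assumptions, not fits to the experiments:

```python
import math

def frictional_stress(rate, mu0=0.6, a_minus_b=-0.01, rate_ref=1e-6):
    """Rate-weakening friction: resistance drops as sliding accelerates
    (negative a-b, in the spirit of rate-and-state friction laws)."""
    return mu0 + a_minus_b * math.log(rate / rate_ref)

def viscous_stress(rate, eta=1.0):
    """Rate-strengthening layer: resistance grows with deformation rate,
    like stirring honey faster."""
    return eta * rate

slow, fast = 1e-6, 1e-3  # hypothetical slow and fast sliding rates
print(frictional_stress(slow), frictional_stress(fast))  # weakens with speed
print(viscous_stress(slow), viscous_stress(fast))        # strengthens with speed
```

The sign of the rate dependence is the whole point: a weakening layer lets slip run away, while the nanograin layer resists acceleration even though it starts out weak.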

    Sun, the study’s lead author and an EAPS graduate student, explains that their finding runs counter to the dominant frictional-weakening theory of how earthquakes start. That theory predicts that material along a fault zone’s surfaces weakens as the fault block accelerates, so friction should decrease. The nanocrystals did just the opposite. However, the crystals’ intrinsic weakness could mean that when enough of them accumulate within a fault, they give way, causing an earthquake.

    “We don’t totally disagree with the old theorem, but our study really opens new doors to explain the mechanisms of how earthquakes happen in the crust,” Sun says.

    The finding also suggests a previously unrecognized link between earthquakes in the crust and the earthquakes that rumble hundreds of kilometers beneath the surface, where the same tectonic dynamics aren’t at play. At those depths, there are no tectonic blocks to grind against each other, and even if there were, the immense pressure would prevent the kind of quakes observed in the crust, which require some dilatancy and void creation.

    “We know that earthquakes happen all the way down to really big depths where this motion along a frictional fault is basically impossible,” says Pec. “And so clearly, there must be different processes that allow for these earthquakes to happen.”

    Possible mechanisms for these deep-Earth tremors include “phase transitions,” which occur due to atomic rearrangement in minerals and are accompanied by a volume change, and other kinds of metamorphic reactions, such as dehydration of water-bearing minerals, in which the released fluid is pumped through pores and destabilizes a fault. These mechanisms are all characterized by a weak, rate-strengthening layer.

    If weak, rate-strengthening nanocrystals are abundant in the deep Earth, they could present another possible mechanism, says Pec. “Maybe crustal earthquakes are not a completely different beast than the deeper earthquakes. Maybe they have something in common.”

  • in

    Researchers design sensors to rapidly detect plant hormones

    Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, and their local collaborators from Temasek Life Sciences Laboratory (TLL) and Nanyang Technological University (NTU), have developed the first-ever nanosensor to enable rapid testing of synthetic auxin plant hormones. The novel nanosensors are safer and less tedious than existing techniques for testing plants’ response to compounds such as herbicide, and can be transformative in improving agricultural production and our understanding of plant growth.

    The scientists designed sensors for two plant hormones — 1-naphthalene acetic acid (NAA) and 2,4-dichlorophenoxyacetic acid (2,4-D) — which are used extensively in the farming industry for regulating plant growth and as herbicides, respectively. Current methods to detect NAA and 2,4-D cause damage to plants, and are unable to provide real-time in vivo monitoring and information.

    Based on the concept of corona phase molecular recognition (​​CoPhMoRe) pioneered by the Strano Lab at SMART DiSTAP and MIT, the new sensors are able to detect the presence of NAA and 2,4-D in living plants at a swift pace, providing plant information in real-time, without causing any harm. The team has successfully tested both sensors on a number of everyday crops including pak choi, spinach, and rice across various planting mediums such as soil, hydroponic, and plant tissue culture.

    Described in a paper titled “Nanosensor Detection of Synthetic Auxins In Planta using Corona Phase Molecular Recognition,” published in the journal ACS Sensors, the research can facilitate more efficient use of synthetic auxins in agriculture and holds tremendous potential to advance the study of plant biology.

    “Our CoPhMoRe technique has previously been used to detect compounds such as hydrogen peroxide and heavy-metal pollutants like arsenic — but this is the first successful case of CoPhMoRe sensors developed for detecting plant phytohormones that regulate plant growth and physiology, such as sprays to prevent premature flowering and dropping of fruits,” says DiSTAP co-lead principal investigator Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT. “This technology can replace current state-of-the-art sensing methods which are laborious, destructive, and unsafe.”

    Of the two sensors developed by the research team, the 2,4-D nanosensor also showed the ability to detect herbicide susceptibility, enabling farmers and agricultural scientists to quickly find out how vulnerable or resistant different plants are to herbicides without the need to monitor crop or weed growth over days. “This could be incredibly beneficial in revealing the mechanism behind how 2,4-D works within plants and why crops develop herbicide resistance,” says DiSTAP and TLL Principal Investigator Rajani Sarojam.

    “Our research can help the industry gain a better understanding of plant growth dynamics and has the potential to completely change how the industry screens for herbicide resistance, eliminating the need to monitor crop or weed growth over days,” says Mervin Chun-Yi Ang, a research scientist at DiSTAP. “It can be applied across a variety of plant species and planting mediums, and could easily be used in commercial setups for rapid herbicide susceptibility testing, such as urban farms.”

    NTU Professor Mary Chan-Park Bee Eng says, “Using nanosensors for in planta detection eliminates the need for extensive extraction and purification processes, which saves time and money. They also use very low-cost electronics, which makes them easily adaptable for commercial setups.”

    The team says their research can lead to future development of real-time nanosensors for other dynamic plant hormones and metabolites in living plants as well.

    The development of the nanosensor, optical detection system, and image processing algorithms for this study was done by SMART, NTU, and MIT, while TLL validated the nanosensors and provided knowledge of plant biology and plant signaling mechanisms. The research is carried out by SMART and supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) program.

    DiSTAP is one of the five interdisciplinary research groups in SMART. The DiSTAP program addresses deep problems in food production in Singapore and the world by developing a suite of impactful and novel analytical, genetic, and biosynthetic technologies. The goal is to fundamentally change how plant biosynthetic pathways are discovered, monitored, engineered, and ultimately translated to meet the global demand for food and nutrients.

    Scientists from MIT, TLL, NTU, and the National University of Singapore (NUS) are collaboratively developing new tools for the continuous measurement of important plant metabolites and hormones, enabling novel discovery, deeper understanding, and control of plant biosynthetic pathways in ways not yet possible, especially in the context of green leafy vegetables. They are also leveraging these new techniques to engineer plants with highly desirable properties for global food security, including high yield density, drought and pathogen resistance, and biosynthesis of high-value commercial products; developing tools for producing hydrophobic food components in industry-relevant microbes; developing novel microbial and enzymatic technologies to produce volatile organic compounds that can protect and/or promote growth of leafy vegetables; and applying these technologies to improve urban farming.

    DiSTAP is led by Michael Strano and Singapore co-lead principal investigator Professor Chua Nam Hai.

    SMART was established by MIT, in partnership with the NRF, in 2007. SMART, the first entity in CREATE, serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both. SMART currently comprises an Innovation Center and five interdisciplinary research groups: Antimicrobial Resistance (AMR), Critical Analytics for Manufacturing Personalized-Medicine (CAMP), DiSTAP, Future Urban Mobility (FM), and Low Energy Electronic Systems (LEES). SMART is funded by the NRF.

  • in

    Why boiling droplets can race across hot oily surfaces

    When you’re frying something in a skillet and some droplets of water fall into the pan, you may have noticed those droplets skittering around on top of the film of hot oil. Now, that seemingly trivial phenomenon has been analyzed and understood for the first time by researchers at MIT — and may have important implications for microfluidic devices, heat transfer systems, and other useful functions.

    A droplet of boiling water on a hot surface will sometimes levitate on a thin vapor film, a well-studied phenomenon called the Leidenfrost effect. Because it is suspended on a cushion of vapor, the droplet can move across the surface with little friction. If the surface is coated with hot oil, which has much greater friction than the vapor film under a Leidenfrost droplet, the hot droplet should be expected to move much more slowly. But, counterintuitively, the series of experiments at MIT has shown that the opposite happens: The droplet on oil zooms away much more rapidly than on bare metal.

    This effect, which propels droplets across a heated oily surface 10 to 100 times faster than on bare metal, could potentially be used for self-cleaning or de-icing systems, or to propel tiny amounts of liquid through the tiny tubing of microfluidic devices used for biomedical and chemical research and testing. The findings are described today in a paper in the journal Physical Review Letters, written by graduate student Victor Julio Leon and professor of mechanical engineering Kripa Varanasi.

    In previous research, Varanasi and his team showed that it would be possible to harness this phenomenon for some of these potential applications, but the new work, producing such high velocities (approximately 50 times faster), could open up even more new uses, Varanasi says.

    After long and painstaking analysis, Leon and Varanasi were able to determine the reason for the rapid ejection of these droplets from the hot surface. Under the right conditions of high temperature, oil viscosity, and oil thickness, the oil will form a kind of thin cloak coating the outside of each water droplet. As the droplet heats up, tiny bubbles of vapor form along the interface between the droplet and the oil. Because these minuscule bubbles accumulate randomly along the droplet’s base, asymmetries develop, and the lowered friction under the bubble loosens the droplet’s attachment to the surface and propels it away.

    The oily film acts almost like the rubber of a balloon, and when the tiny vapor bubbles burst through, they impart a force and “the balloon just flies off because the air is going out one side, creating a momentum transfer,” Varanasi says. Without the oil cloak, the vapor bubbles would just flow out of the droplet in all directions, preventing self-propulsion, but the cloaking effect holds them in like the skin of the balloon.


    The phenomenon sounds simple, but it turns out to depend on a complex interplay between events happening at different timescales.

    This newly analyzed self-ejection phenomenon depends on a number of factors, including the droplet size, the thickness and viscosity of the oil film, the thermal conductivity of the surface, the surface tension of the different liquids in the system, the type of oil, and the texture of the surface.

    In their experiments, even the least viscous of the several oils they tested was about 100 times more viscous than the surrounding air. So the droplets would have been expected to move much more slowly than on the air cushion of the Leidenfrost effect. “That gives an idea of how surprising it is that this droplet is moving faster,” Leon says.

    As boiling starts, bubbles form at some random nucleation site that is not exactly at the droplet’s center. Bubble formation then increases on that side, propelling the droplet off in one direction. So far, the researchers have not been able to control the direction of this randomly induced propulsion, but they are now working on possible ways to control it in the future. “We have ideas of how to trigger the propulsion in controlled directions,” Leon says.

    Remarkably, the tests showed that even though the oil film on the surface, a silicon wafer, was only 10 to 100 microns thick — about the thickness of a human hair — its behavior didn’t match the equations for a thin film. Instead, because of the vaporization, the film was actually behaving like an infinitely deep pool of oil. “We were kind of astounded” by that finding, Leon says. While a thin film should have caused the droplet to stick, the virtually infinite pool gave it much lower friction, allowing it to move more rapidly than expected, Leon says.

    The effect depends on the fact that the formation of the tiny bubbles is a much more rapid process than the transfer of heat through the oil film, about a thousand times faster, leaving plenty of time for asymmetries within the droplet to accumulate. When the vapor bubbles initially form at the oil-water interface, they are much more insulating than the liquid of the droplet, leading to significant thermal disturbances in the oil film. These disturbances cause the droplet to vibrate, reducing friction and increasing the vaporization rate.
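A back-of-envelope check makes this timescale separation plausible. The numbers below are assumed typical values — a film thickness in the reported 10-to-100-micron range, a generic oil thermal diffusivity, and a microsecond-scale bubble-formation time — not measurements from the paper:

```python
# Rough estimate (assumed typical values, not from the paper):
# heat-diffusion time through the oil film vs. bubble-formation time.
film_thickness = 50e-6        # m, mid-range of the 10-100 micron films
thermal_diffusivity = 1e-7    # m^2/s, a typical order of magnitude for oils
bubble_time = 1e-5            # s, assumed microsecond-scale nucleation

# Diffusive timescale: tau ~ h^2 / alpha
diffusion_time = film_thickness**2 / thermal_diffusivity
print(f"heat diffusion: {diffusion_time*1e3:.0f} ms, "
      f"ratio: {diffusion_time/bubble_time:.0f}x")
```

With these assumed values the diffusion time comes out thousands of times longer than the bubble time, consistent with the order-of-magnitude separation the researchers describe.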

    It took extreme high-speed photography to reveal the details of this rapid effect, Leon says, using a 100,000 frames per second video camera. “You can actually see the fluctuations on the surface,” Leon says.

    Initially, Varanasi says, “we were stumped at multiple levels as to what was going on, because the effect was so unexpected. … It’s a fairly complex answer to what may look seemingly simple, but it really creates this fast propulsion.”

    In practice, the effect means that in certain situations, a simple heating of a surface, by the right amount and with the right kind of oily coating, could cause corrosive scaling drops to be cleared from a surface. Further down the line, once the researchers have more control over directionality, the system could potentially substitute for some high-tech pumps in microfluidic devices to propel droplets through the right tubes at the right time. This might be especially useful in microgravity situations, where ordinary pumps don’t function as usual.

    It may also be possible to attach a payload to the droplets, creating a kind of microscale robotic delivery system, Varanasi says. And while their tests focused on water droplets, potentially it could apply to many different kinds of liquids and sublimating solids, he says.

    The work was supported by the National Science Foundation.