More stories

  • Chemical reactions for the energy transition

    One challenge in decarbonizing the energy system is knowing how to deal with new types of fuels. Traditional fuels such as natural gas and oil can be combined with other materials and then heated to high temperatures so they chemically react to produce other useful fuels or substances, or even energy to do work. But new materials such as biofuels can’t take as much heat without breaking down.

    A key ingredient in such chemical reactions is a specially designed solid catalyst that is added to encourage the reaction to happen but isn’t itself consumed in the process. With traditional materials, the solid catalyst typically interacts with a gas; but with fuels derived from biomass, for example, the catalyst must work with a liquid — a special challenge for those who design catalysts.

    For nearly a decade, Yogesh Surendranath, an associate professor of chemistry at MIT, has been focusing on chemical reactions between solid catalysts and liquids, but in a different situation: rather than using heat to drive reactions, he and his team input electricity from a battery or a renewable source such as wind or solar to give chemically inactive molecules more energy so they react. And key to their research is designing and fabricating solid catalysts that work well for reactions involving liquids.

    Recognizing the need to use biomass to develop sustainable liquid fuels, Surendranath wondered whether he and his team could take the principles they have learned about designing catalysts to drive liquid-solid reactions with electricity and apply them to reactions that occur at liquid-solid interfaces without any input of electricity.

    To their surprise, they found that their knowledge is directly relevant. Why? “What we found — amazingly — is that even when you don’t hook up wires to your catalyst, there are tiny internal ‘wires’ that do the reaction,” says Surendranath. “So, reactions that people generally think operate without any flow of current actually do involve electrons shuttling from one place to another.” And that means that Surendranath and his team can bring the powerful techniques of electrochemistry to bear on the problem of designing catalysts for sustainable fuels.

    A novel hypothesis

    Their work has focused on a class of chemical reactions important in the energy transition that involve adding oxygen to small organic (carbon-containing) molecules such as ethanol, methanol, and formic acid. The conventional assumption is that the reactant and oxygen chemically react to form the product plus water. And a solid catalyst — often a combination of metals — is present to provide sites on which the reactant and oxygen can interact.

    But Surendranath proposed a different view of what’s going on. In the usual setup, two catalysts, each one composed of many nanoparticles, are mounted on a conductive carbon substrate and submerged in water. In that arrangement, negatively charged electrons can flow easily through the carbon, while positively charged protons can flow easily through water.

    Surendranath’s hypothesis was that the conversion of reactant to product progresses by means of two separate “half-reactions” on the two catalysts. On one catalyst, the reactant turns into a product, in the process sending electrons into the carbon substrate and protons into the water. Those electrons and protons are picked up by the other catalyst, where they drive the oxygen-to-water conversion. So, instead of a single reaction, two separate but coordinated half-reactions together achieve the net conversion of reactant to product.

    As a result, the overall reaction doesn’t actually involve any net electron production or consumption. It is a standard “thermal” reaction resulting from the energy in the molecules and maybe some added heat. The conventional approach to designing a catalyst for such a reaction would focus on increasing the rate of that reactant-to-product conversion. And the best catalyst for that kind of reaction could turn out to be, say, gold or palladium or some other expensive precious metal.

    However, if that reaction actually involves two half-reactions, as Surendranath proposed, there is a flow of electrical charge (the electrons and protons) between them. So Surendranath and others in the field could instead use techniques of electrochemistry to design not a single catalyst for the overall reaction but rather two separate catalysts — one to speed up one half-reaction and one to speed up the other half-reaction. “That means we don’t have to design one catalyst to do all the heavy lifting of speeding up the entire reaction,” says Surendranath. “We might be able to pair up two low-cost, earth-abundant catalysts, each of which does half of the reaction well, and together they carry out the overall transformation quickly and efficiently.”

    But there’s one more consideration: Electrons can flow through the entire catalyst composite, which encompasses the catalyst particle(s) and the carbon substrate. For the chemical conversion to happen as quickly as possible, the rate at which electrons are put into the catalyst composite must exactly match the rate at which they are taken out. Focusing on just the electrons, if the reactant-to-product conversion on the first catalyst sends the same number of electrons per second into the “bath of electrons” in the catalyst composite as the oxygen-to-water conversion on the second catalyst takes out, the two half-reactions will be balanced, and the electron flow — and the rate of the combined reaction — will be fast. The trick is to find good catalysts for each of the half-reactions that are perfectly matched in terms of electrons in and electrons out.

    “A good catalyst or pair of catalysts can maintain an electrical potential — essentially a voltage — at which both half-reactions are fast and are balanced,” says Jaeyune Ryu PhD ’21, a former member of the Surendranath lab and lead author of the study; Ryu is now a postdoc at Harvard University. “The rates of the reactions are equal, and the voltage in the catalyst composite won’t change during the overall thermal reaction.”
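
    The balancing condition Ryu describes is the classic “mixed potential” picture from corrosion science, and it can be made concrete with a short calculation. The sketch below is illustrative only: the exchange currents, Tafel slopes, and equilibrium potentials are invented placeholders, not values from the study. It simply finds the potential at which electrons produced by one half-reaction equal electrons consumed by the other.

        import math

        # Hypothetical Tafel parameters (NOT from the study): exchange current
        # density i0 [A/cm^2], equilibrium potential E0 [V], Tafel slope b [V/decade].
        ANODIC   = dict(i0=1e-6, E0=0.10, b=0.12)  # reactant -> product: releases e-/H+
        CATHODIC = dict(i0=1e-7, E0=1.00, b=0.12)  # oxygen -> water: consumes e-/H+

        def anodic_current(E):
            """Electrons pushed into the composite per unit area (Tafel kinetics)."""
            p = ANODIC
            return p["i0"] * 10 ** ((E - p["E0"]) / p["b"])

        def cathodic_current(E):
            """Electrons pulled out of the composite per unit area (Tafel kinetics)."""
            p = CATHODIC
            return p["i0"] * 10 ** (-(E - p["E0"]) / p["b"])

        def mixed_potential(lo=0.0, hi=1.0, tol=1e-9):
            """Bisect for the potential where electrons in == electrons out."""
            f = lambda E: anodic_current(E) - cathodic_current(E)
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)

        E_mix = mixed_potential()
        print(f"mixed potential ~ {E_mix:.3f} V, "
              f"balanced rate ~ {anodic_current(E_mix):.2e} A/cm^2")

    With these made-up numbers the composite settles near 0.49 volts; a well-matched catalyst pair is one whose curves cross at a potential where both half-reactions are fast.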

    Drawing on electrochemistry

    Based on their new understanding, Surendranath, Ryu, and their colleagues turned to electrochemistry techniques to identify a good catalyst for each half-reaction that would also pair up to work well together. Their analytical framework for guiding catalyst development for systems that combine two half-reactions is based on a theory that has been used to understand corrosion for almost 100 years, but has rarely been applied to understand or design catalysts for reactions involving small molecules important for the energy transition.

    Key to their work is a potentiostat, a type of voltmeter that can either passively measure the voltage of a system or actively change the voltage to cause a reaction to occur. In their experiments, Surendranath and his team use the potentiostat to measure the voltage of the catalyst in real time, monitoring how it changes millisecond to millisecond. They then correlate those voltage measurements with simultaneous but separate measurements of the overall rate of catalysis to understand the reaction pathway.

    For their study of the conversion of small, energy-related molecules, they first tested a series of catalysts to find good ones for each half-reaction — one to convert the reactant to product, producing electrons and protons, and another to convert the oxygen to water, consuming electrons and protons. In each case, a promising candidate would yield a rapid reaction — that is, a fast flow of electrons and protons out or in.

    To help identify an effective catalyst for performing the first half-reaction, the researchers used their potentiostat to input carefully controlled voltages and measured the resulting current that flowed through the catalyst. A good catalyst will generate lots of current for little applied voltage; a poor catalyst will require high applied voltage to get the same amount of current. The team then followed the same procedure to identify a good catalyst for the second half-reaction.

    To expedite the overall reaction, the researchers needed to find two catalysts that matched well — where the amount of current at a given applied voltage was high for each of them, ensuring that as one produced a rapid flow of electrons and protons, the other one consumed them at the same rate.

    To test promising pairs, the researchers used the potentiostat to measure the voltage of the catalyst composite during net catalysis — not changing the voltage as before, but now just passively measuring it on tiny samples. In each test, the voltage naturally settles at a certain level, and the goal is for that to happen when the rates of both half-reactions are high.

    Validating their hypothesis and looking ahead

    By testing the two half-reactions, the researchers could measure how the reaction rate for each one varied with changes in the applied voltage. From those measurements, they could predict the voltage at which the full reaction would proceed fastest. Measurements of the full reaction matched their predictions, supporting their hypothesis.

    The team’s novel approach of using electrochemistry techniques to examine reactions thought to be strictly thermal in nature provides new insights into the detailed steps by which those reactions occur and therefore into how to design catalysts to speed them up. “We can now use a divide-and-conquer strategy,” says Ryu. “We know that the net thermal reaction in our study happens through two ‘hidden’ but coupled half-reactions, so we can aim to optimize one half-reaction at a time” — possibly using low-cost catalyst materials for one or both.

    Adds Surendranath, “One of the things that we’re excited about in this study is that the result is not final in and of itself. It has really seeded a brand-new thrust area in our research program, including new ways to design catalysts for the production and transformation of renewable fuels and chemicals.”

    This research was supported primarily by the Air Force Office of Scientific Research. Jaeyune Ryu PhD ’21 was supported by a Samsung Scholarship. Additional support was provided by a National Science Foundation Graduate Research Fellowship.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • New program bolsters innovation in next-generation artificial intelligence hardware

    The MIT AI Hardware Program is a new academia-industry collaboration aimed at defining and developing translational technologies in hardware and software for the AI and quantum age. A collaboration between the MIT School of Engineering and the MIT Schwarzman College of Computing, involving the Microsystems Technology Laboratories and programs and units in the college, the cross-disciplinary effort aims to innovate technologies that will deliver greater energy efficiency for cloud and edge computing systems.

    “A sharp focus on AI hardware manufacturing, research, and design is critical to meet the demands of the world’s evolving devices, architectures, and systems,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. “Knowledge-sharing between industry and academia is imperative to the future of high-performance computing.”

    Based on use-inspired research involving materials, devices, circuits, algorithms, and software, the MIT AI Hardware Program convenes researchers from MIT and industry to facilitate the transition of fundamental knowledge to real-world technological solutions. The program spans materials and devices, as well as architecture and algorithms enabling energy-efficient and sustainable high-performance computing.

    “As AI systems become more sophisticated, new solutions are sorely needed to enable more advanced applications and deliver greater performance,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “Our aim is to devise real-world technological solutions and lead the development of technologies for AI in hardware and software.”

    The inaugural members of the program are companies from a wide range of industries including chip-making, semiconductor manufacturing equipment, AI and computing services, and information systems R&D organizations. The companies represent a diverse ecosystem, both nationally and internationally, and will work with MIT faculty and students to help shape a vibrant future for our planet through cutting-edge AI hardware research.

    The five inaugural members of the MIT AI Hardware Program are:  

    Amazon, a global technology company whose hardware inventions include the Kindle, Amazon Echo, Fire TV, and Astro; 
    Analog Devices, a global leader in the design and manufacturing of analog, mixed signal, and DSP integrated circuits; 
    ASML, an innovation leader in the semiconductor industry, providing chipmakers with hardware, software, and services to mass produce patterns on silicon through lithography; 
    NTT Research, a subsidiary of NTT that conducts fundamental research to upgrade reality in game-changing ways that improve lives and brighten our global future; and 
    TSMC, the world’s leading dedicated semiconductor foundry.

    The MIT AI Hardware Program will create a roadmap of transformative AI hardware technologies. Leveraging MIT.nano, the most advanced university nanofabrication facility anywhere, the program will foster a unique environment for AI hardware research.  

    “We are all in awe at the seemingly superhuman capabilities of today’s AI systems. But this comes at a rapidly increasing and unsustainable energy cost,” says Jesús del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science. “Continued progress in AI will require new and vastly more energy-efficient systems. This, in turn, will demand innovations across the entire abstraction stack, from materials and devices to systems and software. The program is in a unique position to contribute to this quest.”

    The program will prioritize the following topics:

    analog neural networks;
    new roadmap CMOS designs;
    heterogeneous integration for AI systems;
    monolithic-3D AI systems;
    analog nonvolatile memory devices;
    software-hardware co-design;
    intelligence at the edge;
    intelligent sensors;
    energy-efficient AI;
    intelligent internet of things (IIoT);
    neuromorphic computing;
    AI edge security;
    quantum AI;
    wireless technologies;
    hybrid-cloud computing; and
    high-performance computation.

    “We live in an era where paradigm-shifting discoveries in hardware, systems communications, and computing have become mandatory to find sustainable solutions — solutions that we are proud to give to the world and generations to come,” says Aude Oliva, senior research scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and director of strategic industry engagement in the MIT Schwarzman College of Computing.

    The new program is co-led by Jesús del Alamo and Aude Oliva, and Anantha Chandrakasan serves as chair.

  • How to clean solar panels without water

    Solar power is expected to reach 10 percent of global power generation by the year 2030, and much of that is likely to be located in desert areas, where sunlight is abundant. But the accumulation of dust on solar panels or mirrors is already a significant issue — it can reduce the output of photovoltaic panels by as much as 30 percent in just one month — so regular cleaning is essential for such installations.

    But cleaning solar panels is currently estimated to use about 10 billion gallons of water per year — enough to supply drinking water for up to 2 million people. Attempts at waterless cleaning are labor-intensive and tend to cause irreversible scratching of the surfaces, which also reduces efficiency. Now, a team of researchers at MIT has devised a way of automatically cleaning solar panels, or the mirrors of solar thermal plants, in a waterless, no-contact system that could significantly reduce the dust problem, they say.

    The new system uses electrostatic repulsion to cause dust particles to detach and virtually leap off the panel’s surface, without the need for water or brushes. To activate the system, a simple electrode passes just above the solar panel’s surface, imparting an electrical charge to the dust particles, which are then repelled by a charge applied to the panel itself. The system can be operated automatically using a simple electric motor and guide rails along the side of the panel. The research is described today in the journal Science Advances, in a paper by MIT graduate student Sreedath Panat and professor of mechanical engineering Kripa Varanasi.

    Despite concerted efforts worldwide to develop ever more efficient solar panels, Varanasi says, “a mundane problem like dust can actually put a serious dent in the whole thing.” Lab tests conducted by Panat and Varanasi showed that the dropoff of energy output from the panels happens steeply at the very beginning of the process of dust accumulation and can easily reach 30 percent reduction after just one month without cleaning. Even a 1 percent reduction in power, for a 150-megawatt solar installation, they calculated, could result in a $200,000 loss in annual revenue. The researchers say that globally, a 3 to 4 percent reduction in power output from solar plants would amount to a loss of between $3.3 billion and $5.5 billion.
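
    The revenue figure is easy to sanity-check. In the back-of-envelope sketch below, the capacity factor and electricity price are assumptions of ours, chosen only to show that a 1 percent output loss on a 150-megawatt plant does land near $200,000 per year; the article does not state which inputs the researchers used.

        # Back-of-envelope check (inputs are assumed, not taken from the paper):
        plant_capacity_kw = 150_000   # 150-megawatt plant
        capacity_factor   = 0.25      # assumed typical for desert photovoltaics
        price_per_kwh     = 0.06      # assumed wholesale price [$ per kWh]
        loss_fraction     = 0.01      # 1 percent output reduction from dust

        annual_kwh   = plant_capacity_kw * capacity_factor * 24 * 365
        lost_revenue = annual_kwh * loss_fraction * price_per_kwh
        print(f"annual generation ~ {annual_kwh / 1e6:.0f} GWh, "
              f"revenue lost to 1% soiling ~ ${lost_revenue:,.0f}")

    With these assumptions the loss comes to roughly $197,000 a year, consistent with the article’s figure.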

    “There is so much work going on in solar materials,” Varanasi says. “They’re pushing the boundaries, trying to gain a few percent here and there in improving the efficiency, and here you have something that can obliterate all of that right away.”

    Many of the largest solar power installations in the world, including ones in China, India, the U.A.E., and the U.S., are located in desert regions. The water used for cleaning these solar panels using pressurized water jets has to be trucked in from a distance, and it has to be very pure to avoid leaving behind deposits on the surfaces. Dry scrubbing is sometimes used but is less effective at cleaning the surfaces and can cause permanent scratching that also reduces light transmission.

    Water cleaning makes up about 10 percent of the operating costs of solar installations. The new system could potentially reduce these costs while improving the overall power output by allowing for more frequent automated cleanings, the researchers say.

    “The water footprint of the solar industry is mind boggling,” Varanasi says, and it will be increasing as these installations continue to expand worldwide. “So, the industry has to be very careful and thoughtful about how to make this a sustainable solution.”

    Other groups have tried to develop electrostatics-based solutions, but these have relied on a layer called an electrodynamic screen, using interdigitated electrodes. These screens can have defects that allow moisture in and cause them to fail, Varanasi says. While they might be useful in a place like Mars, where moisture is not an issue, he says, even in desert environments on Earth moisture can be a serious problem.

    The new system they developed only requires an electrode, which can be a simple metal bar, to pass over the panel, producing an electric field that imparts a charge to the dust particles as it goes. An opposite charge applied to a transparent conductive layer just a few nanometers thick, deposited on the glass covering of the solar panel, then repels the particles; by calculating the right voltage to apply, the researchers were able to find a voltage range sufficient to overcome the pull of gravity and adhesion forces and cause the dust to lift away.
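
    The competition among those forces can be sketched with a rough, one-particle force balance. Everything below is an assumption of ours (particle size, density, Hamaker constant, charging model), not the paper’s analysis; the point is only that for small particles adhesion, not gravity, is the force to beat.

        import math

        # Rough force balance for a single dust particle: at what field does
        # the electrostatic force beat gravity plus van der Waals adhesion?
        r    = 10e-6      # particle radius [m] (assumed ~10 micron dust)
        rho  = 2650.0     # particle density [kg/m^3] (assumed, quartz-like)
        g    = 9.81       # gravitational acceleration [m/s^2]
        A    = 1e-19      # Hamaker constant [J] (assumed order of magnitude)
        z0   = 4e-10      # minimum separation [m] (typical ~0.4 nm)
        eps0 = 8.854e-12  # vacuum permittivity [F/m]

        F_gravity  = (4 / 3) * math.pi * r**3 * rho * g
        F_adhesion = A * r / (6 * z0**2)   # sphere-on-plate van der Waals estimate

        def lift_force(E):
            """Force on a particle carrying its field-charging (Pauthenier)
            saturation charge, in the high-permittivity limit."""
            q = 4 * math.pi * eps0 * 3 * E * r**2
            return q * E

        E = 1e5                            # start at 100 kV/m and ramp up
        while lift_force(E) < F_gravity + F_adhesion:
            E *= 1.1
        print(f"gravity {F_gravity:.1e} N, adhesion {F_adhesion:.1e} N, "
              f"lift-off field ~ {E:.1e} V/m")

    With these numbers adhesion exceeds gravity by roughly four orders of magnitude, so the required field is set almost entirely by adhesion and by how much charge a particle can actually acquire — which is where the humidity-supplied water layer described below comes in.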

    Using specially prepared laboratory samples of dust with a range of particle sizes, experiments proved that the process works effectively on a laboratory-scale test installation, Panat says. The tests showed that humidity in the air provided a thin coating of water on the particles, which turned out to be crucial to making the effect work. “We performed experiments at varying humidities from 5 percent to 95 percent,” Panat says. “As long as the ambient humidity is greater than 30 percent, you can remove almost all of the particles from the surface, but as humidity decreases, it becomes harder.”

    Varanasi says that “the good news is that when you get to 30 percent humidity, most deserts actually fall in this regime.” And even those that are typically drier than that tend to have higher humidity in the early morning hours, leading to dew formation, so the cleaning could be timed accordingly.

    “Moreover, unlike some of the prior work on electrodynamic screens, which actually do not work at high or even moderate humidity, our system can work at humidity even as high as 95 percent, indefinitely,” Panat says.

    In practice, at scale, each solar panel could be fitted with railings on each side, with an electrode spanning across the panel. A small electric motor, perhaps using a tiny portion of the output from the panel itself, would drive a belt system to move the electrode from one end of the panel to the other, causing all the dust to fall away. The whole process could be automated or controlled remotely. Alternatively, thin strips of conductive transparent material could be permanently arranged above the panel, eliminating the need for moving parts.

    By eliminating the dependency on trucked-in water, by eliminating the buildup of dust that can contain corrosive compounds, and by lowering the overall operational costs, such systems have the potential to significantly improve the overall efficiency and reliability of solar installations, Varanasi says.

    The research was supported by Italian energy firm Eni S.p.A. through the MIT Energy Initiative.

  • Toward batteries that pack twice as much energy per pound

    In the endless quest to pack more energy into batteries without increasing their weight or volume, one especially promising technology is the solid-state battery. In these batteries, the usual liquid electrolyte that carries charges back and forth between the electrodes is replaced with a solid electrolyte layer. Such batteries could potentially not only deliver twice as much energy for their size but also virtually eliminate the fire hazard associated with today’s lithium-ion batteries.

    But one thing has held back solid-state batteries: Instabilities at the boundary between the solid electrolyte layer and the two electrodes on either side can dramatically shorten the lifetime of such batteries. Some studies have used special coatings to improve the bonding between the layers, but this adds the expense of extra coating steps in the fabrication process. Now, a team of researchers at MIT and Brookhaven National Laboratory has come up with a way of achieving results that equal or surpass the durability of the coated surfaces, but with no need for any coatings.

    The new method simply requires eliminating any carbon dioxide present during a critical manufacturing step, called sintering, where the battery materials are heated to create bonding between the cathode and electrolyte layers, which are made of ceramic compounds. Even though the amount of carbon dioxide present is vanishingly small in air, measured in parts per million, its effects turn out to be dramatic and detrimental. Carrying out the sintering step in pure oxygen creates bonds that match the performance of the best coated surfaces, without that extra cost of the coating, the researchers say.

    The findings are reported in the journal Advanced Energy Materials, in a paper by MIT doctoral student Younggyu Kim, professor of nuclear science and engineering and of materials science and engineering Bilge Yildiz, and Iradikanari Waluyo and Adrian Hunt at Brookhaven National Laboratory.

    “Solid-state batteries have been desirable for different reasons for a long time,” Yildiz says. “The key motivating points for solid batteries are they are safer and have higher energy density,” but they have been held back from large scale commercialization by two factors, she says: the lower conductivity of the solid electrolyte, and the interface instability issues.

    The conductivity issue has been effectively tackled, and reasonably high-conductivity materials have already been demonstrated, according to Yildiz. But overcoming the instabilities that arise at the interface has been far more challenging. These instabilities can occur during both the manufacturing and the electrochemical operation of such batteries, but for now the researchers have focused on the manufacturing, and specifically the sintering process.

    Sintering is needed because if the ceramic layers are simply pressed onto each other, the contact between them is far from ideal: there are far too many gaps, and the electrical resistance across the interface is high. Sintering, which is usually done at temperatures of 1,000 degrees Celsius or above for ceramic materials, causes atoms from each material to migrate into the other to form bonds. The team’s experiments showed that at temperatures anywhere above a few hundred degrees, detrimental reactions take place that increase the resistance at the interface — but only if carbon dioxide is present, even in tiny amounts. They demonstrated that avoiding carbon dioxide, and in particular maintaining a pure oxygen atmosphere during sintering, could create very good bonding at temperatures up to 700 degrees, with none of the detrimental compounds formed.

    The performance of the cathode-electrolyte interface made using this method, Yildiz says, was “comparable to the best interface resistances we have seen in the literature,” but those were all achieved using the extra step of applying coatings. “We are finding that you can avoid that additional fabrication step, which is typically expensive.”

    The potential gains in energy density that solid-state batteries provide come from the fact that they enable the use of pure lithium metal as one of the electrodes, which is much lighter than the currently used electrodes made of lithium-infused graphite.

    The team is now studying the next part of the performance of such batteries, which is how these bonds hold up over the long run during battery cycling. Meanwhile, the new findings could potentially be applied rapidly to battery production, she says. “What we are proposing is a relatively simple process in the fabrication of the cells. It doesn’t add much energy penalty to the fabrication. So, we believe that it can be adopted relatively easily into the fabrication process,” and the added costs, they have calculated, should be negligible.

    Large companies such as Toyota are already at work commercializing early versions of solid-state lithium-ion batteries, and these new findings could quickly help such companies improve the economics and durability of the technology.

    The research was supported by the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies. The team used facilities supported by the National Science Foundation and facilities at Brookhaven National Laboratory supported by the Department of Energy.

  • More sensitive X-ray imaging

    Scintillators are materials that emit light when bombarded with high-energy particles or X-rays. In medical or dental X-ray systems, they convert incoming X-ray radiation into visible light that can then be captured using film or photosensors. They’re also used for night-vision systems and for research, such as in particle detectors or electron microscopes.

    Researchers at MIT have now shown how one could improve the efficiency of scintillators by at least tenfold, and perhaps even a hundredfold, by changing the material’s surface to create certain nanoscale configurations, such as arrays of wave-like ridges. While past attempts to develop more efficient scintillators have focused on finding new materials, the new approach could in principle work with any of the existing materials.

    Though it will require more time and effort to integrate their scintillators into existing X-ray machines, the team believes that this method might lead to improvements in medical diagnostic X-rays or CT scans, to reduce dose exposure and improve image quality. In other applications, such as X-ray inspection of manufactured parts for quality control, the new scintillators could enable inspections with higher accuracy or at faster speeds.

    The findings are described today in the journal Science, in a paper by MIT doctoral students Charles Roques-Carmes and Nicholas Rivera; MIT professors Marin Soljacic, Steven Johnson, and John Joannopoulos; and 10 others.

    While scintillators have been in use for some 70 years, much of the research in the field has focused on developing new materials that produce brighter or faster light emissions. The new approach instead applies advances in nanotechnology to existing materials. By creating patterns in scintillator materials at a length scale comparable to the wavelengths of the light being emitted, the team found that it was possible to dramatically change the material’s optical properties.
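
    One way to see why surface patterning has so much headroom is a textbook escape-cone estimate (ours, not a calculation from the paper): scintillators have high refractive indices, so most light generated inside a flat slab is trapped by total internal reflection, and only a small cone of angles can exit a given face.

        import math

        # Escape-cone estimate for a flat, unpatterned scintillator face.
        # The refractive indices are assumed; common scintillators fall
        # roughly in the 1.8-2.3 range.
        for n in (1.8, 2.0, 2.3):
            theta_c = math.asin(1.0 / n)            # critical angle for TIR
            escape  = (1 - math.cos(theta_c)) / 2   # solid-angle fraction
            print(f"n = {n}: ~{escape:.1%} of generated light escapes one face")

    With roughly 90 to 95 percent of the light trapped by a flat face, an order-of-magnitude gain from wavelength-scale patterning that frustrates total internal reflection is at least dimensionally plausible; the team’s full framework, of course, models the scintillation process itself, not just the out-coupling.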

    To make what they dub “nanophotonic scintillators,” Roques-Carmes says, “you can directly make patterns inside the scintillators, or you can glue on another material that would have holes on the nanoscale. The specifics depend on the exact structure and material.” For this research, the team took a scintillator and made holes spaced apart by roughly one optical wavelength, or about 500 nanometers (billionths of a meter).

    “The key to what we’re doing is a general theory and framework we have developed,” Rivera says. This allows the researchers to calculate the scintillation levels that would be produced by any arbitrary configuration of nanophotonic structures. The scintillation process itself involves a series of steps, making it complicated to unravel. The framework the team developed involves integrating three different types of physics, Roques-Carmes says. Using this system they have found a good match between their predictions and the results of their subsequent experiments.

    The experiments showed a tenfold improvement in emission from the treated scintillator. “So, this is something that might translate into applications for medical imaging, which are optical photon-starved, meaning the conversion of X-rays to optical light limits the image quality. [In medical imaging,] you do not want to irradiate your patients with too much of the X-rays, especially for routine screening, and especially for young patients as well,” Roques-Carmes says.

    “We believe that this will open a new field of research in nanophotonics,” he adds. “You can use a lot of the existing work and research that has been done in the field of nanophotonics to improve significantly on existing materials that scintillate.”

    “The research presented in this paper is hugely significant,” says Rajiv Gupta, chief of neuroradiology at Massachusetts General Hospital and an associate professor at Harvard Medical School, who was not associated with this work. “Nearly all detectors used in the $100 billion [medical X-ray] industry are indirect detectors,” which is the type of detector the new findings apply to, he says. “Everything that I use in my clinical practice today is based on this principle. This paper improves the efficiency of this process by 10 times. If this claim is even partially true, say the improvement is two times instead of 10 times, it would be transformative for the field!”

    While their experiments proved a tenfold improvement in emission could be achieved in particular systems, by further fine-tuning the design of the nanoscale patterning, “we also show that you can get up to 100 times [improvement] in certain scintillator systems, and we believe we also have a path toward making it even better,” Soljacic says.

    Soljacic points out that in other areas of nanophotonics, a field that deals with how light interacts with materials that are structured at the nanometer scale, the development of computational simulations has enabled rapid, substantial improvements, for example in the development of solar cells and LEDs. The new models this team developed for scintillating materials could facilitate similar leaps in this technology, he says.

    Nanophotonics techniques “give you the ultimate power of tailoring and enhancing the behavior of light,” Soljacic says. “But until now, this promise, this ability to do this with scintillation was unreachable because modeling the scintillation was very challenging. Now, this work for the first time opens up this field of scintillation, fully opens it, for the application of nanophotonics techniques.” More generally, the team believes that the combination of nanophotonics and scintillators might ultimately enable higher resolution, reduced X-ray dose, and energy-resolved X-ray imaging.

    This work is “very original and excellent,” says Eli Yablonovitch, a professor of Electrical Engineering and Computer Sciences at the University of California at Berkeley, who was not associated with this research. “New scintillator concepts are very important in medical imaging and in basic research.”

    Yablonovitch adds that while the concept still needs to be proven in a practical device, “After years of research on photonic crystals in optical communication and other fields, it’s long overdue that photonic crystals should be applied to scintillators, which are of great practical importance yet have been overlooked” until this work.

    The research team included Ali Ghorashi, Steven Kooi, Yi Yang, Zin Lin, Justin Beroz, Aviram Massuda, Jamison Sloan, and Nicolas Romeo at MIT; Yang Yu at Raith America, Inc.; and Ido Kaminer at Technion in Israel. The work was supported, in part, by the U.S. Army Research Office and the U.S. Army Research Laboratory through the Institute for Soldier Nanotechnologies, by the Air Force Office of Scientific Research, and by a Mathworks Engineering Fellowship.

  • Solar-powered system offers a route to inexpensive desalination

    An estimated two-thirds of humanity is affected by shortages of water, and many such areas in the developing world also face a lack of dependable electricity. Widespread research efforts have thus focused on ways to desalinate seawater or brackish water using just solar heat. Many such efforts have run into problems with fouling of equipment caused by salt buildup, however, which often adds complexity and expense.

    Now, a team of researchers at MIT and in China has come up with a solution to the problem of salt accumulation — and in the process developed a desalination system that is both more efficient and less expensive than previous solar desalination methods. The process could also be used to treat contaminated wastewater or to generate steam for sterilizing medical instruments, all without requiring any power source other than sunlight itself.

    The findings are described today in the journal Nature Communications, in a paper by MIT graduate student Lenan Zhang, postdoc Xiangyu Li, professor of mechanical engineering Evelyn Wang, and four others.

    “There have been a lot of demonstrations of really high-performing, salt-rejecting, solar-based evaporation designs of various devices,” Wang says. “The challenge has been the salt fouling issue, that people haven’t really addressed. So, we see these very attractive performance numbers, but they’re often limited because of longevity. Over time, things will foul.”

    Many attempts at solar desalination systems rely on some kind of wick to draw the saline water through the device, but these wicks are vulnerable to salt accumulation and relatively difficult to clean. The team focused on developing a wick-free system instead. The result is a layered system, with dark material at the top to absorb the sun’s heat, then a thin layer of water above a perforated layer of material, sitting atop a deep reservoir of salty water such as a tank or a pond. After careful calculations and experiments, the researchers determined the optimal size for the holes drilled through the perforated material, which in their tests was made of polyurethane. At 2.5 millimeters across, these holes can be easily made using commonly available waterjets.

    The holes are large enough to allow for a natural convective circulation between the warmer upper layer of water and the colder reservoir below. That circulation naturally draws the salt from the thin layer above down into the much larger body of water below, where it becomes well-diluted and no longer a problem. “It allows us to achieve high performance and yet also prevent this salt accumulation,” says Wang, who is the Ford Professor of Engineering and head of the Department of Mechanical Engineering.

    Li says that the advantages of this system are “both the high performance and the reliable operation, especially under extreme conditions, where we can actually work with near-saturation saline water. And that means it’s also very useful for wastewater treatment.”

    He adds that much work on such solar-powered desalination has focused on novel materials. “But in our case, we use really low-cost, almost household materials.” The key was analyzing and understanding the convective flow that drives this entirely passive system, he says. “People say you always need new materials, expensive ones, or complicated structures or wicking structures to do that. And this is, I believe, the first one that does this without wicking structures.”

    This new approach “provides a promising and efficient path for desalination of high salinity solutions, and could be a game changer in solar water desalination,” says Hadi Ghasemi, a professor of chemical and biomolecular engineering at the University of Houston, who was not associated with this work. “Further work is required for assessment of this concept in large settings and in long runs,” he adds.

    Just as hot air rises and cold air falls, Zhang explains, natural convection drives the desalination process in this device. In the confined water layer near the top, “the evaporation happens at the very top interface. Because of the salt, the density of water at the very top interface is higher, and the bottom water has lower density. So, this is an original driving force for this natural convection because the higher density at the top drives the salty liquid to go down.” The water evaporated from the top of the system can then be collected on a condensing surface, providing pure fresh water.
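
    A quick way to check that salinity differences alone can drive this circulation is a solutal Rayleigh number estimate. The numbers below are our assumptions (a seawater-like density contrast, together with the 2.5-millimeter hole size mentioned above), not figures from the paper; convection is expected when the Rayleigh number exceeds a critical value on the order of 10^3.

        # Solutal Rayleigh number for salinity-driven convection through a hole:
        #   Ra = g * (delta_rho / rho) * L**3 / (nu * D)
        g        = 9.81      # gravity [m/s^2]
        drho_rho = 0.02      # assumed relative density excess of the salty layer
        L        = 2.5e-3    # hole size from the paper [m]
        nu       = 1.0e-6    # kinematic viscosity of water [m^2/s]
        D_salt   = 1.5e-9    # diffusivity of salt in water [m^2/s]

        Ra = g * drho_rho * L**3 / (nu * D_salt)
        print(f"solutal Rayleigh number ~ {Ra:.1e}")  # ~2e6 >> ~1e3: convects

    At around 10^6, the estimate is far above the critical value, so the dense, salt-enriched surface water should indeed sink through millimeter-scale holes without any pump.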

    The rejection of salt to the water below could also cause heat to be lost in the process, so preventing that required careful engineering, including making the perforated layer out of highly insulating material to keep the heat concentrated above. The solar heating at the top is accomplished through a simple layer of black paint.

    In an accompanying animation, the researchers visualize the fluid flow using food dye: colored de-ionized water migrates only slowly from the top layer to the bulk water below, while colored saline water is carried down rapidly by the natural convection effect.

    So far, the team has proven the concept using small benchtop devices, so the next step will be starting to scale up to devices that could have practical applications. Based on their calculations, a system with just 1 square meter (about a square yard) of collecting area should be sufficient to provide a family’s daily needs for drinking water, they say. Zhang says they calculated that the necessary materials for a 1-square-meter device would cost only about $4.
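
    The one-square-meter claim is consistent with a simple energy balance. The inputs below are assumptions of ours (equivalent full-sun hours and an evaporation efficiency), not the team’s numbers; the calculation just shows the output lands at a few liters per day, enough for a family’s drinking needs.

        # Rough daily fresh-water yield from 1 m^2 of collector (assumed inputs):
        solar_flux  = 1000.0    # peak solar irradiance [W/m^2]
        sun_hours   = 6.0       # assumed equivalent full-sun hours per day
        efficiency  = 0.5       # assumed fraction of sunlight driving evaporation
        latent_heat = 2.45e6    # latent heat of vaporization of water [J/kg]

        energy_per_day = solar_flux * sun_hours * 3600 * efficiency   # [J/m^2]
        liters_per_day = energy_per_day / latent_heat                 # kg ~= liters
        print(f"~{liters_per_day:.1f} liters of fresh water per day per m^2")

    At roughly 4 to 5 liters per day, the output comfortably covers drinking water for a small family, though not other household uses.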

    Their test apparatus operated for a week with no signs of any salt accumulation, Li says. And the device is remarkably stable. “Even if we apply some extreme perturbation, like waves on the seawater or the lake,” where such a device could be installed as a floating platform, “it can return to its original equilibrium position very fast,” he says.

    The necessary work to translate this lab-scale proof of concept into workable commercial devices, and to improve the overall water production rate, should be possible within a few years, Zhang says. The first applications are likely to be providing safe water in remote off-grid locations, or for disaster relief after hurricanes, earthquakes, or other disruptions of normal water supplies.

    Zhang adds that “if we can concentrate the sunlight a little bit, we could use this passive device to generate high-temperature steam to do medical sterilization” for off-grid rural areas.

    “I think a real opportunity is the developing world,” Wang says. “I think that is where there’s most probable impact near-term, because of the simplicity of the design.” But, she adds, “if we really want to get it out there, we also need to work with the end users, to really be able to adopt the way we design it so that they’re willing to use it.”

    “This is a new strategy toward solving the salt accumulation problem in solar evaporation,” says Peng Wang, a professor at King Abdullah University of Science and Technology in Saudi Arabia, who was not associated with this research. “This elegant design will inspire new innovations in the design of advanced solar evaporators. The strategy is very promising due to its high energy efficiency, operation durability, and low cost, which contributes to low-cost and passive water desalination to produce fresh water from various source water with high salinity, e.g., seawater, brine, or brackish groundwater.”

    The team also included Yang Zhong, Arny Leroy, and Lin Zhao at MIT, and Zhenyuan Xu at Shanghai Jiao Tong University in China. The work was supported by the Singapore-MIT Alliance for Research and Technology, the U.S.-Egypt Science and Technology Joint Fund, and used facilities supported by the National Science Foundation.

  • Overcoming a bottleneck in carbon dioxide conversion

    If researchers could find a way to chemically convert carbon dioxide into fuels or other products, they might make a major dent in greenhouse gas emissions. But many such processes that have seemed promising in the lab haven’t performed as expected in scaled-up formats that would be suitable for use with a power plant or other emissions sources.

    Now, researchers at MIT have identified, quantified, and modeled a major reason for poor performance in such conversion systems. The culprit turns out to be a local depletion of the carbon dioxide gas right next to the electrodes being used to catalyze the conversion. The problem can be alleviated, the team found, by simply pulsing the current off and on at specific intervals, allowing time for the gas to build back up to the needed levels next to the electrode.

    The findings, which could spur progress on developing a variety of materials and designs for electrochemical carbon dioxide conversion systems, were published today in the journal Langmuir, in a paper by MIT postdoc Álvaro Moreno Soto, graduate student Jack Lake, and professor of mechanical engineering Kripa Varanasi.

    “Carbon dioxide mitigation is, I think, one of the important challenges of our time,” Varanasi says. While much of the research in the area has focused on carbon capture and sequestration, in which the gas is pumped into some kind of deep underground reservoir or converted to an inert solid such as limestone, another promising avenue has been converting the gas into other carbon compounds such as methane or ethanol, to be used as fuel, or ethylene, which serves as a precursor to useful polymers.

    There are several ways to do such conversions, including electrochemical, thermocatalytic, photothermal, or photochemical processes. “Each of these has problems or challenges,” Varanasi says. The thermal processes require very high temperatures, and they don’t produce very high-value chemical products, which is a challenge with the light-activated processes as well, he says. “Efficiency is always at play, always an issue.”

    The team has focused on the electrochemical approaches, with a goal of getting “higher-C products” — compounds that contain more carbon atoms and tend to be higher-value fuels because of their energy per weight or volume. In these reactions, the biggest challenge has been curbing competing reactions that can take place at the same time, especially the splitting of water molecules into oxygen and hydrogen.

    The reactions take place as a stream of liquid electrolyte with the carbon dioxide dissolved in it passes over a metal catalytic surface that is electrically charged. But as the carbon dioxide gets converted, it leaves behind a region in the electrolyte stream where it has essentially been used up, and so the reaction within this depleted zone turns toward water splitting instead. This unwanted reaction uses up energy and greatly reduces the overall efficiency of the conversion process, the researchers found.

    “There’s a number of groups working on this, and a number of catalysts that are out there,” Varanasi says. “In all of these, I think the hydrogen co-evolution becomes a bottleneck.”

    One way of counteracting this depletion, they found, can be achieved by a pulsed system — a cycle of simply turning off the voltage, stopping the reaction and giving the carbon dioxide time to spread back into the depleted zone and reach usable levels again, and then resuming the reaction.
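
    How long should the “off” part of the pulse last? A diffusion scaling argument gives a feel for it. The depletion-layer thicknesses below are assumptions for illustration (the real value depends on electrode geometry and electrolyte flow, and we are not taking it from the paper); the characteristic replenishment time scales as the layer thickness squared over the diffusivity.

        # Time for CO2 to diffuse back into a depleted layer: t ~ L**2 / D
        D_co2 = 1.9e-9                # diffusivity of CO2 in water [m^2/s]
        for L_um in (10, 50, 100):    # assumed depletion-layer thicknesses
            L = L_um * 1e-6           # convert microns to meters
            t = L**2 / D_co2
            print(f"L = {L_um:3d} um  ->  replenishment time ~ {t:.2g} s")

    For plausible micron-scale depletion layers the estimate comes out between a fraction of a second and a few seconds, which suggests why pulsing on those timescales can restore the gas concentration near the electrode.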

    Often, the researchers say, groups have found promising catalyst materials but haven’t run their lab tests long enough to observe these depletion effects, and thus have been frustrated in trying to scale up their systems. Furthermore, the concentration of carbon dioxide next to the catalyst dictates the products that are made. Hence, depletion can also change the mix of products that are produced and can make the process unreliable. “If you want to be able to make a system that works at industrial scale, you need to be able to run things over a long period of time,” Varanasi says, “and you need to not have these kinds of effects that reduce the efficiency or reliability of the process.”

    The team studied three different catalyst materials, including copper, and “we really focused on making sure that we understood and can quantify the depletion effects,” Lake says. In the process they were able to develop a simple and reliable way of monitoring the efficiency of the conversion process as it happens, by measuring the changing pH levels, a measure of acidity, in the system’s electrolyte.

    In their tests, they used more sophisticated analytical tools to characterize reaction products, including gas chromatography for analysis of the gaseous products, and nuclear magnetic resonance characterization for the system’s liquid products. But their analysis showed that the simple pH measurement of the electrolyte next to the electrode during operation could provide a sufficient measure of the efficiency of the reaction as it progressed.

    This ability to easily monitor the reaction in real-time could ultimately lead to a system optimized by machine-learning methods, controlling the production rate of the desired compounds through continuous feedback, Moreno Soto says.

    Now that the process is understood and quantified, other approaches to mitigating the carbon dioxide depletion might be developed, the researchers say, and could easily be tested using their methods.

    This work shows, Lake says, that “no matter what your catalyst material is” in such an electrocatalytic system, “you’ll be affected by this problem.” And now, by using the model they developed, it’s possible to determine exactly what kind of time window needs to be evaluated to get an accurate sense of the material’s overall efficiency and what kind of system operations could maximize its effectiveness.

    The research was supported by Shell, through the MIT Energy Initiative.

  • Nanograins make for a seismic shift

    In Earth’s crust, tectonic blocks slide and grind past each other like enormous ships loosed from anchor. Earthquakes are generated along these fault zones when enough stress builds for a block to stick, then suddenly slip.

    These slips can be aided by several factors that reduce friction within a fault zone, such as hotter temperatures or pressurized gases that can separate blocks like pucks on an air-hockey table. The decreasing friction enables one tectonic block to accelerate against the other until it runs out of energy. Seismologists have long believed this kind of frictional instability can explain how all crustal earthquakes start. But that might not be the whole story.

    In a study published today in Nature Communications, scientists Hongyu Sun and Matej Pec, from MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), find that ultra-fine-grained crystals within fault zones can behave like low-viscosity fluids. The finding offers an alternative explanation for the instability that leads to crustal earthquakes. It also suggests a link between quakes in the crust and other types of temblors that occur deep in the Earth.

    Nanograins are commonly found in rocks from seismic environments along the smooth surface of “fault mirrors.” These polished, reflective rock faces betray the slipping, sliding forces of past earthquakes. However, it was unclear whether the crystals caused quakes or were merely formed by them.

    To better characterize how these crystals behaved within a fault, the researchers used a planetary ball milling machine to pulverize granite rocks into particles resembling those found in nature. Like a super-powered washing machine filled with ceramic balls, the machine pounded the rock until all its crystals were about 100 nanometers in width, each grain 1/2,000 the size of an average grain of sand.

    After packing the nanopowder into postage-stamp-sized cylinders jacketed in gold, the researchers then subjected the material to stresses and heat, creating laboratory miniatures of real fault zones. This process enabled them to isolate the effect of the crystals from the complexity of other factors involved in an actual earthquake.

    The researchers report that the crystals were extremely weak when shearing was initiated — an order of magnitude weaker than more common microcrystals. But the nanocrystals became significantly stronger when the deformation rate was accelerated. Pec, professor of geophysics and the Victor P. Starr Career Development Chair, compares this characteristic, called “rate-strengthening,” to stirring honey in a jar. Stirring the honey slowly is easy, but becomes more difficult the faster you stir.

    The experiment suggests something similar happens in fault zones. As tectonic blocks accelerate past each other, the crystals gum things up between them like honey stirred in a seismic pot.

    Sun, the study’s lead author and an EAPS graduate student, explains that their finding runs counter to the dominant frictional weakening theory of how earthquakes start. That theory predicts that the material on a fault zone’s surfaces gets weaker as the fault block accelerates, so friction should decrease; the nanocrystals did just the opposite. However, the crystals’ intrinsic weakness could mean that when enough of them accumulate within a fault, they can give way, causing an earthquake.
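
    The distinction maps onto the standard rate-and-state description of steady-state friction, mu_ss(v) = mu0 + (a - b) * ln(v / v0): a negative (a - b) is the classic rate-weakening behavior assumed in frictional earthquake models, while a positive (a - b) is the honey-like rate-strengthening the nanograin layers showed. The sketch below uses invented parameter values, not measurements from the study, purely to illustrate the contrast.

        import math

        mu0, v0 = 0.6, 1e-6   # reference friction and slip rate [m/s] (assumed)

        def mu_ss(v, a_minus_b):
            """Steady-state rate-and-state friction coefficient."""
            return mu0 + a_minus_b * math.log(v / v0)

        for v in (1e-6, 1e-4, 1e-2):          # slip rates spanning four decades
            weakening     = mu_ss(v, -0.01)   # classic fault gouge: weakens
            strengthening = mu_ss(v, +0.01)   # nanograin-like: strengthens
            print(f"v = {v:.0e} m/s: rate-weakening mu = {weakening:.3f}, "
                  f"rate-strengthening mu = {strengthening:.3f}")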

    “We don’t totally disagree with the old theorem, but our study really opens new doors to explain the mechanisms of how earthquakes happen in the crust,” Sun says.

    The finding also suggests a previously unrecognized link between earthquakes in the crust and the earthquakes that rumble hundreds of kilometers beneath the surface, where the same tectonic dynamics aren’t at play. That deep, there are no tectonic blocks to grind against each other, and even if there were, the immense pressure would prevent the type of quakes observed in the crust, which require some dilatancy and void creation.

    “We know that earthquakes happen all the way down to really big depths where this motion along a frictional fault is basically impossible,” says Pec. “And so clearly, there must be different processes that allow for these earthquakes to happen.”

    Possible mechanisms for these deep-Earth tremors include “phase transitions,” which occur due to atomic rearrangement in minerals and are accompanied by a volume change, and other kinds of metamorphic reactions, such as dehydration of water-bearing minerals, in which the released fluid is pumped through pores and destabilizes a fault. These mechanisms are all characterized by a weak, rate-strengthening layer.

    If weak, rate-strengthening nanocrystals are abundant in the deep Earth, they could present another possible mechanism, says Pec. “Maybe crustal earthquakes are not a completely different beast than the deeper earthquakes. Maybe they have something in common.”