More stories

  • SMART Innovation Center awarded five-year NRF grant for new deep tech ventures

    The Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, has announced a five-year grant awarded to the SMART Innovation Center (SMART IC) by the National Research Foundation Singapore (NRF) as part of its Research, Innovation and Enterprise 2025 Plan. The SMART IC plays a key role in accelerating innovation and entrepreneurship in Singapore and will channel the grant toward refining and commercializing developments in deep technologies through financial support and training.

    Singapore has recently expanded its innovation ecosystem to hone deep technologies that can solve complex problems in areas of pivotal importance. While support for deep tech in Singapore has grown, with investments in deep tech startups surging from $324 million in 2020 to $861 million in 2021, startups of this nature tend to take longer to scale, be acquired, or go public because of the greater time, labor, and capital they require. By providing researchers with financial and strategic support from the early stages of their research and development, the SMART IC hopes to accelerate this process and help bring new and disruptive technologies to market.

    “SMART’s Innovation Center prides itself as being one of the key drivers of research and innovation, by identifying and nurturing emerging technologies and accelerating them towards commercialization,” says Howard Califano, director of SMART IC. “With the support of the NRF, we look forward to another five years of further growing the ecosystem by ensuring an environment where research — and research funds — are properly directed to what the market and society need. This is how we will be able to solve problems faster and more efficiently, and ensure that value is generated from scientific research.”

    Set up in 2009 by MIT and funded by the NRF, the SMART IC furthers SMART’s goals by nurturing promising and innovative technologies that faculty and research teams in Singapore are working on. These emerging technologies include, but are not limited to, biotechnology, biomedical devices, information technology, new materials, nanotechnology, and energy innovations.

    Having trained over 300 postdocs since its inception, the SMART IC has supported the launch of 55 companies that have created over 3,300 jobs. Some of these companies were spearheaded by SMART’s interdisciplinary research groups, including biotech companies Theonys and Thrixen, autonomous vehicle software company nuTonomy, and integrated circuit company New Silicon. During the RIE 2020 period, 66 Ignition Grants and 69 Innovation Grants were awarded to SMART’s researchers, as well as faculty at other Singapore universities and research institutes.

    The following four programs are open to researchers from education and research facilities, as well as institutes of higher learning, in Singapore:

    Innovation Grant 2.0: The Innovation Grant 2.0, an enhanced version of the SMART Innovation Center’s flagship program, is a gated, three-phase program focused on enabling scientist-entrepreneurs to launch a successful venture, with training and intense monitoring across all phases. The grant provides up to S$800,000 and is open to all areas of deep technology (engineering, artificial intelligence, biomedical, new materials, etc.). The first grant call for the Innovation Grant 2.0 is open through Oct. 15. Researchers, scientists, and engineers at Singapore’s public institutions of higher learning, research centers, public hospitals, and medical research centers — especially those working on disruptive technologies with commercial potential — are invited to apply.

    I2START Grant: In collaboration with SMART, the National Health Innovation Center Singapore, and Enterprise Singapore, this novel integrated program will develop master classes on venture building, with a focus on medical devices, diagnostics, and medical technologies. The grant amount is up to S$1,350,000. Applications are accepted throughout the year.

    STDR Stream 2: The Singapore Therapeutics Development Review (STDR) program is jointly operated by SMART, the Agency for Science, Technology and Research (A*STAR), and the Experimental Drug Development Center. The grant is available in two phases: a Pre-Pilot phase of S$100,000 and a Pilot phase of S$830,000, for a potential combined total of up to S$930,000. The next STDR Pre-Pilot grant call will open on Sept. 15.

    Central Gap Fund: The SMART IC is an Innovation and Enterprise Office under the NRF’s Central Gap Fund. This program helps projects that have already received an Innovation 2.0, STDR Stream 2, or I2START Grant but require additional funding to bridge to seed or Series A funding, with possible funding of up to S$5 million. Applications are accepted throughout the year.

    The SMART IC will also continue developing robust entrepreneurship mentorship programs and regular industry events to encourage closer collaboration among faculty innovators and the business community.

    “SMART, through the Innovation Center, is honored to be able to help researchers take these revolutionary technologies to the marketplace, where they can contribute to the economy and society. The projects we fund are commercialized in Singapore, ensuring that the local economy is the first to benefit,” says Eugene Fitzgerald, chief executive officer and director of SMART, and professor of materials science and engineering at MIT.

    SMART was established by MIT and the NRF in 2007 and serves as an intellectual and innovation hub for cutting-edge research of interest to both parties. SMART is the first entity in the Campus for Research Excellence and Technological Enterprise. SMART currently comprises an Innovation Center and five Interdisciplinary Research Groups: Antimicrobial Resistance, Critical Analytics for Manufacturing Personalized-Medicine, Disruptive and Sustainable Technologies for Agricultural Precision, Future Urban Mobility, and Low Energy Electronic Systems.

    The SMART IC was set up by MIT and the NRF in 2009. It identifies and nurtures a broad range of emerging technologies including but not limited to biotechnology, biomedical devices, information technology, new materials, nanotechnology, and energy innovations, and accelerates them toward commercialization. The SMART IC runs a rigorous grant system that identifies and funds promising projects to help them de-risk their technologies, conduct proof-of-concept experiments, and determine go-to-market strategies. It also prides itself on robust entrepreneurship boot camps and mentorship, and frequent industry events to encourage closer collaboration among faculty innovators and the business community. SMART’s Innovation Grant program is the only scheme that is open to all institutes of higher learning and research institutes across Singapore.

  • A simple way to significantly increase lifetimes of fuel cells and other devices

    In research that could jump-start work on a range of technologies including fuel cells, which are key to storing solar and wind energy, MIT researchers have found a relatively simple way to increase the lifetimes of these devices: changing the pH of the system.

    Fuel and electrolysis cells made of materials known as solid metal oxides are of interest for several reasons. For example, in the electrolysis mode, they are very efficient at converting electricity from a renewable source into a storable fuel like hydrogen or methane that can be used in the fuel cell mode to generate electricity when the sun isn’t shining or the wind isn’t blowing. They can also be made without using costly metals like platinum. However, their commercial viability has been hampered, in part, because they degrade over time. Metal atoms seeping from the interconnects used to construct banks of fuel/electrolysis cells slowly poison the devices.

    “What we’ve been able to demonstrate is that we can not only reverse that degradation, but actually enhance the performance above the initial value by controlling the acidity of the air-electrode interface,” says Harry L. Tuller, the R.P. Simmons Professor of Ceramics and Electronic Materials in MIT’s Department of Materials Science and Engineering (DMSE).

    The research, initially funded by the U.S. Department of Energy through the Office of Fossil Energy and Carbon Management’s (FECM) National Energy Technology Laboratory, should help the department meet its goal of significantly cutting the degradation rate of solid oxide fuel cells by 2035 to 2050.

    “Extending the lifetime of solid oxide fuel cells helps deliver the low-cost, high-efficiency hydrogen production and power generation needed for a clean energy future,” says Robert Schrecengost, acting director of FECM’s Division of Hydrogen with Carbon Management. “The department applauds these advancements to mature and ultimately commercialize these technologies so that we can provide clean and reliable energy for the American people.”

    “I’ve been working in this area my whole professional life, and what I’ve seen until now is mostly incremental improvements,” says Tuller, who was recently named a 2022 Materials Research Society Fellow for his career-long work in solid-state chemistry and electrochemistry. “People are normally satisfied with seeing improvements by factors of tens-of-percent. So, actually seeing much larger improvements and, as importantly, identifying the source of the problem and the means to work around it, issues that we’ve been struggling with for all these decades, is remarkable.”

    Says James M. LeBeau, the John Chipman Associate Professor of Materials Science and Engineering at MIT, who was also involved in the research, “This work is important because it could overcome [some] of the limitations that have prevented the widespread use of solid oxide fuel cells. Additionally, the basic concept can be applied to many other materials used for applications in the energy-related field.”

    A paper describing the work was published Aug. 11 in Energy & Environmental Science. Additional authors of the paper are Han Gil Seo, a DMSE postdoc; Anna Staerz, formerly a DMSE postdoc, now at Interuniversity Microelectronics Centre (IMEC) Belgium and soon to join the Colorado School of Mines faculty; Dennis S. Kim, a DMSE postdoc; Dino Klotz, a DMSE visiting scientist, now at Zurich Instruments; Michael Xu, a DMSE graduate student; and Clement Nicollet, formerly a DMSE postdoc, now at the Université de Nantes. Seo and Staerz contributed equally to the work.

    Changing the acidity

    A fuel/electrolysis cell has three principal parts: two electrodes (a cathode and anode) separated by an electrolyte. In the electrolysis mode, electricity from, say, the wind, can be used to generate storable fuel like methane or hydrogen. On the other hand, in the reverse fuel cell reaction, that storable fuel can be used to create electricity when the wind isn’t blowing.
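
    For background, the chemistry behind those two modes is the standard pair of half-reactions for a hydrogen-fueled solid oxide cell (general textbook chemistry, not detail drawn from the paper). In fuel-cell mode, oxygen is reduced at the air electrode and the resulting oxide ions travel through the electrolyte to oxidize the fuel:

        \mathrm{O_2 + 4e^- \rightarrow 2\,O^{2-}} \qquad \text{(air electrode)}
        \mathrm{2\,H_2 + 2\,O^{2-} \rightarrow 2\,H_2O + 4e^-} \qquad \text{(fuel electrode)}
        \mathrm{2\,H_2 + O_2 \rightarrow 2\,H_2O} \qquad \text{(overall, fuel-cell mode)}

    In electrolysis mode the overall reaction simply runs in reverse, splitting water back into storable hydrogen fuel. The oxygen incorporation reaction discussed in this article is the air-electrode step, the one that chromium poisons and that the lithium treatment restores.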

    A working fuel/electrolysis cell is composed of many individual cells that are stacked together and connected by steel interconnects that contain the element chromium to keep the metal from oxidizing. But “it turns out that at the high temperatures that these cells run, some of that chrome evaporates and migrates to the interface between the cathode and the electrolyte, poisoning the oxygen incorporation reaction,” Tuller says. Past a certain point, the cell’s efficiency drops so far that it is no longer worth operating.

    “So if you can extend the life of the fuel/electrolysis cell by slowing down this process, or ideally reversing it, you could go a long way towards making it practical,” Tuller says.

    The team showed that you can do both by controlling the acidity of the cathode surface. They also explained what is happening.

    To achieve their results, the team coated the fuel/electrolysis cell cathode with lithium oxide, a compound that changes the relative acidity of the surface from being acidic to being more basic. “After adding a small amount of lithium, we were able to recover the initial performance of a poisoned cell,” Tuller says. When the engineers added even more lithium, the performance improved far beyond the initial value. “We saw improvements of three to four orders of magnitude in the key oxygen reduction reaction rate and attribute the change to populating the surface of the electrode with electrons needed to drive the oxygen incorporation reaction.”

    The engineers went on to explain what is happening by observing the material at the nanoscale, or billionths of a meter, with state-of-the-art transmission electron microscopy and electron energy loss spectroscopy at MIT.nano. “We were interested in understanding the distribution of the different chemical additives [chromium and lithium oxide] on the surface,” says LeBeau.

    They found that the lithium oxide effectively dissolves the chromium to form a glassy material that no longer serves to degrade the cathode performance.

    Applications for sensors, catalysts, and more

    Many technologies like fuel cells are based on the ability of the oxide solids to rapidly breathe oxygen in and out of their crystalline structures, Tuller says. The MIT work essentially shows how to recover — and speed up — that ability by changing the surface acidity. As a result, the engineers are optimistic that the work could be applied to other technologies including, for example, sensors, catalysts, and oxygen permeation-based reactors.

    The team is also exploring the effect of acidity on systems poisoned by different elements, like silica.

    Concludes Tuller: “As is often the case in science, you stumble across something and notice an important trend that was not appreciated previously. Then you test that concept further, and you discover that it is really very fundamental.”

    In addition to the DOE, this work was also funded by the National Research Foundation of Korea, the MIT Department of Materials Science and Engineering via Tuller’s appointment as the R.P. Simmons Professor of Ceramics and Electronic Materials, and the U.S. Air Force Office of Scientific Research.

  • New hardware offers faster computation for artificial intelligence, with much less energy

    As scientists push the boundaries of machine learning, the amount of time, energy, and money required to train increasingly complex neural network models is skyrocketing. A new area of artificial intelligence called analog deep learning promises faster computation with a fraction of the energy usage.

    Programmable resistors are the key building blocks in analog deep learning, just like transistors are the core elements for digital processors. By repeating arrays of programmable resistors in complex layers, researchers can create a network of analog artificial “neurons” and “synapses” that execute computations just like a digital neural network. This network can then be trained to achieve complex AI tasks like image recognition and natural language processing.
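
    To see why an array of resistors behaves like a neural-network layer, consider a minimal sketch of the general crossbar principle (an illustration only, not the MIT device or its actual parameters): if the conductance of the resistor at row i, column j encodes a weight, then applying the inputs as row voltages and summing the currents along each column performs an entire matrix-vector multiplication in one step, by Ohm’s and Kirchhoff’s laws.

        import numpy as np

        # Conductances of a small crossbar of programmable resistors (siemens).
        # Each conductance G[i, j] plays the role of one synaptic weight.
        G = 1e-6 * np.array([[1.0, 0.2, 0.5],
                             [0.3, 0.9, 0.1],
                             [0.7, 0.4, 0.6],
                             [0.2, 0.8, 0.3]])

        # Input activations encoded as voltages applied to the four rows (volts).
        v_in = np.array([0.5, 0.1, 0.3, 0.2])

        # Ohm's law per device (I = G * V) plus Kirchhoff's current law per column
        # gives the three column currents: a full multiply-accumulate in one shot.
        i_out = G.T @ v_in   # equivalent to a weight matrix applied to an input vector
        print(i_out)         # amperes, one value per output "neuron"

    A digital processor computes the same weighted sums serially, shuttling weights and activations through memory; in the analog array the physics does the summation in place, which is where the speed and energy savings come from.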

    A multidisciplinary team of MIT researchers set out to push the speed limits of a type of human-made analog synapse that they had previously developed. They utilized a practical inorganic material in the fabrication process that enables their devices to run 1 million times faster than previous versions, which is also about 1 million times faster than the synapses in the human brain.

    Moreover, this inorganic material also makes the resistor extremely energy-efficient. Unlike materials used in the earlier version of their device, the new material is compatible with silicon fabrication techniques. This change has enabled fabricating devices at the nanometer scale and could pave the way for integration into commercial computing hardware for deep-learning applications.

    “With that key insight, and the very powerful nanofabrication techniques we have at MIT.nano, we have been able to put these pieces together and demonstrate that these devices are intrinsically very fast and operate with reasonable voltages,” says senior author Jesús A. del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science (EECS). “This work has really put these devices at a point where they now look really promising for future applications.”

    “The working mechanism of the device is electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. Because we are working with very thin devices, we could accelerate the motion of this ion by using a strong electric field, and push these ionic devices to the nanosecond operation regime,” explains senior author Bilge Yildiz, the Breene M. Kerr Professor in the departments of Nuclear Science and Engineering and Materials Science and Engineering.

    “The action potential in biological cells rises and falls with a timescale of milliseconds, since the voltage difference of about 0.1 volt is constrained by the stability of water,” says senior author Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering. “Here we apply up to 10 volts across a special solid glass film of nanoscale thickness that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices.”
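
    To put that in perspective with a back-of-the-envelope estimate (assuming a film on the order of 10 nanometers thick, a figure not quoted in this article), the field across the electrolyte is roughly

        E = \frac{V}{d} \approx \frac{10\ \text{V}}{10 \times 10^{-9}\ \text{m}} = 10^{9}\ \text{V/m},

    a few hundred times the dielectric breakdown field of air (about 3 × 10⁶ V/m). Fields that large are normally avoided in devices, which is why the speed result described later in this article came as a surprise.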

    These programmable resistors vastly increase the speed at which a neural network is trained, while drastically reducing the cost and energy to perform that training. This could help scientists develop deep learning models much more quickly, which could then be applied in uses like self-driving cars, fraud detection, or medical image analysis.

    “Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft,” adds lead author and MIT postdoc Murat Onen.

    Co-authors include Frances M. Ross, the Ellen Swallow Richards Professor in the Department of Materials Science and Engineering; postdocs Nicolas Emond and Baoming Wang; and Difei Zhang, an EECS graduate student. The research is published today in Science.

    Accelerating deep learning

    Analog deep learning is faster and more energy-efficient than its digital counterpart for two main reasons. First, computation is performed in memory, so enormous loads of data are not transferred back and forth between memory and a processor. Second, analog processors conduct operations in parallel: if the matrix size expands, an analog processor doesn’t need more time to complete new operations because all computation occurs simultaneously.

    The key element of MIT’s new analog processor technology is known as a protonic programmable resistor. These resistors, which are measured in nanometers (one nanometer is one billionth of a meter), are arranged in an array, like a chess board.

    In the human brain, learning happens due to the strengthening and weakening of connections between neurons, called synapses. Deep neural networks have long adopted this strategy, where the network weights are programmed through training algorithms. In the case of this new processor, increasing and decreasing the electrical conductance of protonic resistors enables analog machine learning.

    The conductance is controlled by the movement of protons. To increase the conductance, more protons are pushed into a channel in the resistor, while to decrease conductance protons are taken out. This is accomplished using an electrolyte (similar to that of a battery) that conducts protons but blocks electrons.

    To develop a super-fast and highly energy efficient programmable protonic resistor, the researchers looked to different materials for the electrolyte. While other devices used organic compounds, Onen focused on inorganic phosphosilicate glass (PSG).

    PSG is basically silicon dioxide, which is the powdery desiccant material found in tiny bags that come in the box with new furniture to remove moisture. It is studied as a proton conductor under humidified conditions for fuel cells. It is also the most well-known oxide used in silicon processing. To make PSG, a tiny bit of phosphorus is added to the silicon to give it special characteristics for proton conduction.

    Onen hypothesized that an optimized PSG could have a high proton conductivity at room temperature without the need for water, which would make it an ideal solid electrolyte for this application. He was right.

    Surprising speed

    PSG enables ultrafast proton movement because it contains a multitude of nanometer-sized pores whose surfaces provide paths for proton diffusion. It can also withstand very strong, pulsed electric fields. This is critical, Onen explains, because applying more voltage to the device enables protons to move at blinding speeds.

    “The speed certainly was surprising. Normally, we would not apply such extreme fields across devices, in order to not turn them into ash. But instead, protons ended up shuttling at immense speeds across the device stack, specifically a million times faster compared to what we had before. And this movement doesn’t damage anything, thanks to the small size and low mass of protons. It is almost like teleporting,” he says.

    “The nanosecond timescale means we are close to the ballistic or even quantum tunneling regime for the proton, under such an extreme field,” adds Li.

    Because the protons don’t damage the material, the resistor can run for millions of cycles without breaking down. This new electrolyte enabled a programmable protonic resistor that is a million times faster than their previous device and can operate effectively at room temperature, which is important for incorporating it into computing hardware.

    Thanks to the insulating properties of PSG, almost no electric current passes through the material as protons move. This makes the device extremely energy efficient, Onen adds.

    Now that they have demonstrated the effectiveness of these programmable resistors, the researchers plan to reengineer them for high-volume manufacturing, says del Alamo. Then they can study the properties of resistor arrays and scale them up so they can be embedded into systems.

    At the same time, they plan to study the materials to remove bottlenecks that limit the voltage that is required to efficiently transfer the protons to, through, and from the electrolyte.

    “Another exciting direction that these ionic devices can enable is energy-efficient hardware to emulate the neural circuits and synaptic plasticity rules that are deduced in neuroscience, beyond analog deep neural networks. We have already started such a collaboration with neuroscience, supported by the MIT Quest for Intelligence,” adds Yildiz.

    “The collaboration that we have is going to be essential to innovate in the future. The path forward is still going to be very challenging, but at the same time it is very exciting,” del Alamo says.

    “Intercalation reactions such as those found in lithium-ion batteries have been explored extensively for memory devices. This work demonstrates that proton-based memory devices deliver impressive and surprising switching speed and endurance,” says William Chueh, associate professor of materials science and engineering at Stanford University, who was not involved with this research. “It lays the foundation for a new class of memory devices for powering deep learning algorithms.”

    “This work demonstrates a significant breakthrough in biologically inspired resistive-memory devices. These all-solid-state protonic devices are based on exquisite atomic-scale control of protons, similar to biological synapses but at orders of magnitude faster rates,” says Elizabeth Dickey, the Teddy & Wilton Hawkins Distinguished Professor and head of the Department of Materials Science and Engineering at Carnegie Mellon University, who was not involved with this work. “I commend the interdisciplinary MIT team for this exciting development, which will enable future-generation computational devices.”

    This research is funded, in part, by the MIT-IBM Watson AI Lab.

  • Explained: Why perovskites could take solar cells to new heights

    Perovskites hold promise for creating solar panels that could be easily deposited onto most surfaces, including flexible and textured ones. These materials would also be lightweight, cheap to produce, and as efficient as today’s leading photovoltaic materials, which are mainly silicon. They’re the subject of increasing research and investment, but companies looking to harness their potential do have to address some remaining hurdles before perovskite-based solar cells can be commercially competitive.

    The term perovskite refers not to a specific material, like silicon or cadmium telluride, other leading contenders in the photovoltaic realm, but to a whole family of compounds. The perovskite family of solar materials is named for its structural similarity to a mineral called perovskite, which was discovered in 1839 and named after Russian mineralogist L.A. Perovski.

    The original mineral perovskite, which is calcium titanium oxide (CaTiO3), has a distinctive crystal configuration. It has a three-part structure, whose components have come to be labeled A, B and X, in which lattices of the different components are interlaced. The family of perovskites consists of the many possible combinations of elements or molecules that can occupy each of the three components and form a structure similar to that of the original perovskite itself. (Some researchers even bend the rules a little by naming other crystal structures with similar elements “perovskites,” although this is frowned upon by crystallographers.)

    “You can mix and match atoms and molecules into the structure, with some limits. For instance, if you try to stuff a molecule that’s too big into the structure, you’ll distort it. Eventually you might cause the 3D crystal to separate into a 2D layered structure, or lose ordered structure entirely,” says Tonio Buonassisi, professor of mechanical engineering at MIT and director of the Photovoltaics Research Laboratory. “Perovskites are highly tunable, like a build-your-own-adventure type of crystal structure,” he says.

    That structure of interlaced lattices consists of ions or charged molecules, two of them (A and B) positively charged and the other one (X) negatively charged. The A and B ions are typically of quite different sizes, with the A being larger. 
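
    In standard crystallographic notation (general background rather than material from this article), the family shares the formula ABX3: in the original mineral, A is calcium, B is titanium, and X is oxygen, while in the lead halide solar materials A is typically a small organic cation or cesium, B is lead, and X is a halide such as iodide. The “too big to fit” intuition Buonassisi describes is commonly quantified with the Goldschmidt tolerance factor, computed from the ionic radii of the three components:

        t = \frac{r_A + r_X}{\sqrt{2}\,(r_B + r_X)},

    where values of t roughly between 0.8 and 1.0 are generally associated with stable perovskite structures, and values outside that window tend to produce distorted, layered, or entirely different structures.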

    Within the overall category of perovskites, there are a number of types, including metal oxide perovskites, which have found applications in catalysis and in energy storage and conversion, such as in fuel cells and metal-air batteries. But a main focus of research activity for more than a decade has been on lead halide perovskites, Buonassisi says.

    Within that category, there is still a legion of possibilities, and labs around the world are racing through the tedious work of trying to find the variations that show the best performance in efficiency, cost, and durability — which has so far been the most challenging of the three.

    Many teams have also focused on variations that eliminate the use of lead, to avoid its environmental impact. Buonassisi notes, however, that “consistently over time, the lead-based devices continue to improve in their performance, and none of the other compositions got close in terms of electronic performance.” Work continues on exploring alternatives, but for now none can compete with the lead halide versions.

    One of the great advantages perovskites offer is their great tolerance of defects in the structure, he says. Unlike silicon, which requires extremely high purity to function well in electronic devices, perovskites can function well even with numerous imperfections and impurities.

    Searching for promising new candidate compositions for perovskites is a bit like looking for a needle in a haystack, but recently researchers have come up with a machine-learning system that can greatly streamline this process. This new approach could lead to a much faster development of new alternatives, says Buonassisi, who was a co-author of that research.

    While perovskites continue to show great promise, and several companies are already gearing up to begin some commercial production, durability remains the biggest obstacle they face. While silicon solar panels retain up to 90 percent of their power output after 25 years, perovskites degrade much faster. Great progress has been made — initial samples lasted only a few hours, then weeks or months, but newer formulations have usable lifetimes of up to a few years, suitable for some applications where longevity is not essential.

    From a research perspective, Buonassisi says, one advantage of perovskites is that they are relatively easy to make in the lab — the chemical constituents assemble readily. But that’s also their downside: “The material goes together very easily at room temperature,” he says, “but it also comes apart very easily at room temperature. Easy come, easy go!”

    To deal with that issue, most researchers are focused on using various kinds of protective materials to encapsulate the perovskite, protecting it from exposure to air and moisture. But others are studying the exact mechanisms that lead to that degradation, in hopes of finding formulations or treatments that are more inherently robust. A key finding is that a process called autocatalysis is largely to blame for the breakdown.

    In autocatalysis, as soon as one part of the material starts to degrade, its reaction products act as catalysts to start degrading the neighboring parts of the structure, and a runaway reaction gets underway. A similar problem existed in the early research on some other electronic materials, such as organic light-emitting diodes (OLEDs), and was eventually solved by adding additional purification steps to the raw materials, so a similar solution may be found in the case of perovskites, Buonassisi suggests.

    Buonassisi and his co-researchers recently completed a study showing that once perovskites reach a usable lifetime of at least a decade, their much lower initial cost would be sufficient to make them economically viable as a substitute for silicon in large, utility-scale solar farms.

    Overall, progress in the development of perovskites has been impressive and encouraging, he says. With just a few years of work, it has already achieved efficiencies comparable to levels that cadmium telluride (CdTe), “which has been around for much longer, is still struggling to achieve,” he says. “The ease with which these higher performances are reached in this new material are almost stupefying.” Comparing the amount of research time spent to achieve a 1 percent improvement in efficiency, he says, the progress on perovskites has been somewhere between 100 and 1000 times faster than that on CdTe. “That’s one of the reasons it’s so exciting,” he says.

  • MIT engineers design surfaces that make water boil more efficiently

    The boiling of water or other fluids is an energy-intensive step at the heart of a wide range of industrial processes, including most electricity generating plants, many chemical production systems, and even cooling systems for electronics.

    Improving the efficiency of systems that heat and evaporate water could significantly reduce their energy use. Now, researchers at MIT have found a way to do just that, with a specially tailored surface treatment for the materials used in these systems.

    The improved efficiency comes from a combination of three different kinds of surface modifications, at different size scales. The new findings are described in the journal Advanced Materials in a paper by recent MIT graduate Youngsup Song PhD ’21, Ford Professor of Engineering Evelyn Wang, and four others at MIT. The researchers note that this initial finding is still at a laboratory scale, and more work is needed to develop a practical, industrial-scale process.

    There are two key parameters that describe the boiling process: the heat transfer coefficient (HTC) and the critical heat flux (CHF). In materials design, there’s generally a tradeoff between the two, so anything that improves one of these parameters tends to make the other worse. But both are important for the efficiency of the system, and now, after years of work, the team has achieved a way of significantly improving both properties at the same time, through their combination of different textures added to a material’s surface.
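
    For reference, the two quantities have standard textbook definitions (general boiling heat-transfer notation, not specific to this paper). The heat transfer coefficient relates the heat flux q'' leaving the surface to the wall superheat, the amount by which the surface temperature exceeds the liquid’s boiling point:

        h = \frac{q''}{T_\text{surface} - T_\text{sat}},

    while the critical heat flux is the largest q'' the surface can sustain in nucleate boiling before an insulating vapor film forms and the surface temperature jumps. Raising h means more heat moves per degree of superheat; raising CHF raises the ceiling on how hard the surface can safely be driven.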

    “Both parameters are important,” Song says, “but enhancing both parameters together is kind of tricky because they have intrinsic trade off.” The reason for that, he explains, is “because if we have lots of bubbles on the boiling surface, that means boiling is very efficient, but if we have too many bubbles on the surface, they can coalesce together, which can form a vapor film over the boiling surface.” That film introduces resistance to the heat transfer from the hot surface to the water. “If we have vapor in between the surface and water, that prevents the heat transfer efficiency and lowers the CHF value,” he says.

    Song, who is now a postdoc at Lawrence Berkeley National Laboratory, carried out much of the research as part of his doctoral thesis work at MIT. While the various components of the new surface treatment he developed had been previously studied, the researchers say this work is the first to show that these methods could be combined to overcome the tradeoff between the two competing parameters.

    Adding a series of microscale cavities, or dents, to a surface is a way of controlling the way bubbles form on that surface, keeping them effectively pinned to the locations of the dents and preventing them from spreading out into a heat-resisting film. In this work, the researchers created an array of 10-micrometer-wide dents separated by about 2 millimeters to prevent film formation. But that separation also reduces the concentration of bubbles at the surface, which can reduce the boiling efficiency. To compensate for that, the team introduced a much smaller-scale surface treatment, creating tiny bumps and ridges at the nanometer scale, which increases the surface area and promotes the rate of evaporation under the bubbles.

    In these experiments, the cavities were made in the centers of a series of pillars on the material’s surface. These pillars, combined with nanostructures, promote wicking of liquid from the base to their tops, and this enhances the boiling process by providing more surface area exposed to the water. In combination, the three “tiers” of the surface texture — the cavity separation, the posts, and the nanoscale texturing — provide a greatly enhanced efficiency for the boiling process, Song says.

    “Those micro cavities define the position where bubbles come up,” he says. “But by separating those cavities by 2 millimeters, we separate the bubbles and minimize the coalescence of bubbles.” At the same time, the nanostructures promote evaporation under the bubbles, and the capillary action induced by the pillars supplies liquid to the bubble base. That maintains a layer of liquid water between the boiling surface and the bubbles of vapor, which enhances the maximum heat flux.

    Although their work has confirmed that the combination of these kinds of surface treatments can work and achieve the desired effects, this work was done under small-scale laboratory conditions that could not easily be scaled up to practical devices, Wang says. “These kinds of structures we’re making are not meant to be scaled in its current form,” she says, but rather were used to prove that such a system can work. One next step will be to find alternative ways of creating these kinds of surface textures so these methods could more easily be scaled up to practical dimensions.

    “Showing that we can control the surface in this way to get enhancement is a first step,” she says. “Then the next step is to think about more scalable approaches.” For example, though the pillars on the surface in these experiments were created using clean-room methods commonly used to produce semiconductor chips, there are other, less demanding ways of creating such structures, such as electrodeposition. There are also a number of different ways to produce the surface nanostructure textures, some of which may be more easily scalable.

    There may be some significant small-scale applications that could use this process in its present form, such as the thermal management of electronic devices, an area that is becoming more important as semiconductor devices get smaller and managing their heat output becomes ever more important. “There’s definitely a space there where this is really important,” Wang says.

    Even those kinds of applications will take some time to develop because typically thermal management systems for electronics use liquids other than water, known as dielectric liquids. These liquids have different surface tension and other properties than water, so the dimensions of the surface features would have to be adjusted accordingly. Work on these differences is one of the next steps for the ongoing research, Wang says.

    This same multiscale structuring technique could also be applied to different liquids, Song says, by adjusting the dimensions to account for the different properties of the liquids. “Those kinds of details can be changed, and that can be our next step,” he says.

    The team also included Carlos Diaz-Martin, Lenan Zhang, Hyeongyun Cha, and Yajing Zhao, all at MIT. The work was supported by the Advanced Research Projects Agency-Energy (ARPA-E), the Air Force Office of Scientific Research, and the Singapore-MIT Alliance for Research and Technology, and made use of the MIT.nano facilities.

  • MIT engineers introduce the Oreometer

    When you twist open an Oreo cookie to get to the creamy center, you’re mimicking a standard test in rheology — the study of how a non-Newtonian material flows when twisted, pressed, or otherwise stressed. MIT engineers have now subjected the sandwich cookie to rigorous materials tests to get to the center of a tantalizing question: Why does the cookie’s cream stick to just one wafer when twisted apart?

    “There’s the fascinating problem of trying to get the cream to distribute evenly between the two wafers, which turns out to be really hard,” says Max Fan, an undergraduate in MIT’s Department of Mechanical Engineering.

    In pursuit of an answer, the team subjected cookies to standard rheology tests in the lab and found that no matter the flavor or amount of stuffing, the cream at the center of an Oreo almost always sticks to one wafer when twisted open. Only for older boxes of cookies does the cream sometimes separate more evenly between both wafers.

    The researchers also measured the torque required to twist open an Oreo, and found it to be similar to the torque required to turn a doorknob and about 1/10th what’s needed to twist open a bottlecap. The cream’s failure stress — i.e. the force per area required to get the cream to flow, or deform — is twice that of cream cheese and peanut butter, and about the same magnitude as mozzarella cheese. Judging from the cream’s response to stress, the team classifies its texture as “mushy,” rather than brittle, tough, or rubbery.

    So, why does the cookie’s cream glom to one side rather than splitting evenly between both? The manufacturing process may be to blame.

    “Videos of the manufacturing process show that they put the first wafer down, then dispense a ball of cream onto that wafer before putting the second wafer on top,” says Crystal Owens, an MIT mechanical engineering PhD candidate who studies the properties of complex fluids. “Apparently that little time delay may make the cream stick better to the first wafer.”

    The team’s study isn’t simply a sweet diversion from bread-and-butter research; it’s also an opportunity to make the science of rheology accessible to others. To that end, the researchers have designed a 3D-printable “Oreometer” — a simple device that firmly grasps an Oreo cookie and uses pennies and rubber bands to control the twisting force that progressively twists the cookie open. Instructions for building the tabletop device are freely available online.

    The new study, “On Oreology, the fracture and flow of ‘milk’s favorite cookie,’” appears today in Kitchen Flows, a special issue of the journal Physics of Fluids. It was conceived of early in the Covid-19 pandemic, when many scientists’ labs were closed or difficult to access. In addition to Owens and Fan, co-authors are mechanical engineering professors Gareth McKinley and A. John Hart.

    Confection connection

    A standard test in rheology places a fluid, slurry, or other flowable material onto the base of an instrument known as a rheometer. A parallel plate above the base can be lowered onto the test material. The plate is then twisted as sensors track the applied rotation and torque.

    Owens, who regularly uses a laboratory rheometer to test fluid materials such as 3D-printable inks, couldn’t help noting a similarity with sandwich cookies. As she writes in the new study:

    “Scientifically, sandwich cookies present a paradigmatic model of parallel plate rheometry in which a fluid sample, the cream, is held between two parallel plates, the wafers. When the wafers are counter-rotated, the cream deforms, flows, and ultimately fractures, leading to separation of the cookie into two pieces.”

    While Oreo cream may not appear to possess fluid-like properties, it is considered a “yield stress fluid” — a soft solid when unperturbed that can start to flow under enough stress, the way toothpaste, frosting, certain cosmetics, and concrete do.

    Curious as to whether others had explored the connection between Oreos and rheology, Owens found mention of a 2016 Princeton University study in which physicists first reported that indeed, when twisting Oreos by hand, the cream almost always came off on one wafer.

    “We wanted to build on this to see what actually causes this effect and if we could control it if we mounted the Oreos carefully onto our rheometer,” she says.


    Cookie twist

    In an experiment that they would repeat for multiple cookies of various fillings and flavors, the researchers glued an Oreo to both the top and bottom plates of a rheometer and applied varying degrees of torque and angular rotation, noting the values that successfully twisted each cookie apart. They plugged the measurements into equations to calculate the cream’s viscoelasticity, or flowability. For each experiment, they also noted the cream’s “post-mortem distribution,” or where the cream ended up after twisting open.
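
    The simplest textbook relation behind those equations (a rough estimate assuming the cream yields uniformly over a circular wafer of radius R, not the full analysis in the paper) connects the torque M measured at failure to the cream’s failure stress:

        \tau_y \approx \frac{3M}{2\pi R^3},

    which follows from integrating a uniform stress over the wafer face, M = \int_0^R \tau_y \, r \,(2\pi r)\, dr = \tfrac{2\pi}{3}\,\tau_y R^3. With the measured torque and a wafer radius of roughly 2 centimeters, this gives the force-per-area figure compared earlier in this article with cream cheese, peanut butter, and mozzarella.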

    In all, the team went through about 20 boxes of Oreos, including regular, Double Stuf, and Mega Stuf levels of filling, and regular, dark chocolate, and “golden” wafer flavors. Surprisingly, they found that no matter the amount of cream filling or flavor, the cream almost always separated onto one wafer.

    “We had expected an effect based on size,” Owens says. “If there was more cream between layers, it should be easier to deform. But that’s not actually the case.”

    Curiously, when they mapped each cookie’s result to its original position in the box, they noticed the cream tended to stick to the inward-facing wafer: Cookies on the left side of the box twisted such that the cream ended up on the right wafer, whereas cookies on the right side separated with cream mostly on the left wafer. They suspect this box distribution may be a result of post-manufacturing environmental effects, such as heating or jostling that may cause cream to peel slightly away from the outer wafers, even before twisting.

    The understanding gained from the properties of Oreo cream could potentially be applied to the design of other complex fluid materials.

    “My 3D printing fluids are in the same class of materials as Oreo cream,” she says. “So, this new understanding can help me better design ink when I’m trying to print flexible electronics from a slurry of carbon nanotubes, because they deform in almost exactly the same way.”

    As for the cookie itself, she suggests that if the inside of Oreo wafers were more textured, the cream might grip better onto both sides and split more evenly when twisted.

    “As they are now, we found there’s no trick to twisting that would split the cream evenly,” Owens concludes.

    This research was supported, in part, by the MIT UROP program and by the National Defense Science and Engineering Graduate Fellowship Program.

  • A new heat engine with no moving parts is as efficient as a steam turbine

    Engineers at MIT and the National Renewable Energy Laboratory (NREL) have designed a heat engine with no moving parts. Their new demonstrations show that it converts heat to electricity with over 40 percent efficiency — a performance better than that of traditional steam turbines.

    The heat engine is a thermophotovoltaic (TPV) cell, similar to a solar panel’s photovoltaic cells, that passively captures high-energy photons from a white-hot heat source and converts them into electricity. The team’s design can generate electricity from a heat source of between 1,900 and 2,400 degrees Celsius, or up to about 4,300 degrees Fahrenheit.

    The researchers plan to incorporate the TPV cell into a grid-scale thermal battery. The system would absorb excess energy from renewable sources such as the sun and store that energy in heavily insulated banks of hot graphite. When the energy is needed, such as on overcast days, TPV cells would convert the heat into electricity, and dispatch the energy to a power grid.

    With the new TPV cell, the team has now successfully demonstrated the main parts of the system in separate, small-scale experiments. They are working to integrate the parts to demonstrate a fully operational system. From there, they hope to scale up the system to replace fossil-fuel-driven power plants and enable a fully decarbonized power grid, supplied entirely by renewable energy.

    “Thermophotovoltaic cells were the last key step toward demonstrating that thermal batteries are a viable concept,” says Asegun Henry, the Robert N. Noyce Career Development Professor in MIT’s Department of Mechanical Engineering. “This is an absolutely critical step on the path to proliferate renewable energy and get to a fully decarbonized grid.”

    Henry and his collaborators have published their results today in the journal Nature. Co-authors at MIT include Alina LaPotin, Kevin Schulte, Kyle Buznitsky, Colin Kelsall, Andrew Rohskopf, and Evelyn Wang, the Ford Professor of Engineering and head of the Department of Mechanical Engineering, along with collaborators at NREL in Golden, Colorado.

    Jumping the gap

    More than 90 percent of the world’s electricity comes from sources of heat such as coal, natural gas, nuclear energy, and concentrated solar energy. For a century, steam turbines have been the industrial standard for converting such heat sources into electricity.

    On average, steam turbines reliably convert about 35 percent of a heat source into electricity, with about 60 percent representing the highest efficiency of any heat engine to date. But the machinery depends on moving parts that are temperature-limited. Heat sources higher than 2,000 degrees Celsius, such as Henry’s proposed thermal battery system, would be too hot for turbines.

    In recent years, scientists have looked into solid-state alternatives — heat engines with no moving parts that could potentially work efficiently at higher temperatures.

    “One of the advantages of solid-state energy converters are that they can operate at higher temperatures with lower maintenance costs because they have no moving parts,” Henry says. “They just sit there and reliably generate electricity.”

    Thermophotovoltaic cells offered one exploratory route toward solid-state heat engines. Much like solar cells, TPV cells could be made from semiconducting materials with a particular bandgap — the gap between a material’s valence band and its conduction band. If a photon with a high enough energy is absorbed by the material, it can kick an electron across the bandgap, where the electron can then conduct, and thereby generate electricity — doing so without moving rotors or blades.
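
    A rough sense of the photon energies involved (a back-of-the-envelope estimate from general blackbody physics, not figures taken from the paper): Wien’s displacement law puts the emission peak of a 2,400-degree-Celsius (about 2,673-kelvin) source near

        \lambda_\text{peak} \approx \frac{2898\ \mu\text{m}\cdot\text{K}}{2673\ \text{K}} \approx 1.1\ \mu\text{m}, \qquad E \approx \frac{1.24\ \text{eV}\cdot\mu\text{m}}{1.1\ \mu\text{m}} \approx 1.1\ \text{eV}.

    Only photons with energies above the bandgap are absorbed, so a hotter source, whose spectrum is shifted toward higher photon energies, lets designers use a higher-bandgap cell and extract more voltage from each absorbed photon.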

    To date, most TPV cells have only reached efficiencies of around 20 percent, with the record at 32 percent, as they have been made of relatively low-bandgap materials that convert lower-temperature, low-energy photons, and therefore convert energy less efficiently.

    Catching light

    In their new TPV design, Henry and his colleagues looked to capture higher-energy photons from a higher-temperature heat source, thereby converting energy more efficiently. The team’s new cell does so with higher-bandgap materials and multiple junctions, or material layers, compared with existing TPV designs.

    The cell is fabricated from three main regions: a high-bandgap alloy, which sits over a slightly lower-bandgap alloy, underneath which is a mirror-like layer of gold. The first layer captures a heat source’s highest-energy photons and converts them into electricity, while lower-energy photons that pass through the first layer are captured by the second and converted to add to the generated voltage. Any photons that pass through this second layer are then reflected by the mirror, back to the heat source, rather than being absorbed as wasted heat.

    The team tested the cell’s efficiency by placing it over a heat flux sensor — a device that directly measures the heat absorbed from the cell. They exposed the cell to a high-temperature lamp and concentrated the light onto the cell. They then varied the bulb’s intensity, or temperature, and observed how the cell’s power efficiency — the amount of power it produced, compared with the heat it absorbed — changed with temperature. Over a range of 1,900 to 2,400 degrees Celsius, the new TPV cell maintained an efficiency of around 40 percent.

    “We can get a high efficiency over a broad range of temperatures relevant for thermal batteries,” Henry says.

    The cell in the experiments is about a square centimeter. For a grid-scale thermal battery system, Henry envisions the TPV cells would have to scale up to about 10,000 square feet (about a quarter of a football field), and would operate in climate-controlled warehouses to draw power from huge banks of stored solar energy. He points out that an infrastructure exists for making large-scale photovoltaic cells, which could also be adapted to manufacture TPVs.

    “There’s definitely a huge net positive here in terms of sustainability,” Henry says. “The technology is safe, environmentally benign in its life cycle, and can have a tremendous impact on abating carbon dioxide emissions from electricity production.”

    This research was supported, in part, by the U.S. Department of Energy.

  • Engineers enlist AI to help scale up advanced solar cell manufacturing

    Perovskites are a family of materials that are currently the leading contender to potentially replace today’s silicon-based solar photovoltaics. They hold the promise of panels that are far thinner and lighter, that could be made with ultra-high throughput at room temperature instead of at hundreds of degrees, and that are cheaper and easier to transport and install. But bringing these materials from controlled laboratory experiments into a product that can be manufactured competitively has been a long struggle.

    Manufacturing perovskite-based solar cells involves optimizing at least a dozen or so variables at once, even within one particular manufacturing approach among many possibilities. But a new system based on a novel approach to machine learning could speed up the development of optimized production methods and help make the next generation of solar power a reality.

    The system, developed by researchers at MIT and Stanford University over the last few years, makes it possible to integrate data from prior experiments, and information based on personal observations by experienced workers, into the machine learning process. This makes the outcomes more accurate and has already led to the manufacturing of perovskite cells with an energy conversion efficiency of 18.5 percent, a competitive level for today’s market.

    The research is reported today in the journal Joule, in a paper by MIT professor of mechanical engineering Tonio Buonassisi, Stanford professor of materials science and engineering Reinhold Dauskardt, recent MIT research assistant Zhe Liu, Stanford doctoral graduate Nicholas Rolston, and three others.

    Perovskites are a group of layered crystalline compounds defined by the configuration of the atoms in their crystal lattice. There are thousands of such possible compounds and many different ways of making them. While most lab-scale development of perovskite materials uses a spin-coating technique, that’s not practical for larger-scale manufacturing, so companies and labs around the world have been searching for ways of translating these lab materials into a practical, manufacturable product.

    “There’s always a big challenge when you’re trying to take a lab-scale process and then transfer it to something like a startup or a manufacturing line,” says Rolston, who is now an assistant professor at Arizona State University. The team looked at a process that they felt had the greatest potential, a method called rapid spray plasma processing, or RSPP.

    The manufacturing process would involve a moving roll-to-roll surface, or series of sheets, on which the precursor solutions for the perovskite compound would be sprayed or ink-jetted as the sheet rolled by. The material would then move on to a curing stage, providing a rapid and continuous output “with throughputs that are higher than for any other photovoltaic technology,” Rolston says.

    “The real breakthrough with this platform is that it would allow us to scale in a way that no other material has allowed us to do,” he adds. “Even materials like silicon require a much longer timeframe because of the processing that’s done. Whereas you can think of [this approach as more] like spray painting.”

    Within that process, at least a dozen variables may affect the outcome, some of them more controllable than others. These include the composition of the starting materials, the temperature, the humidity, the speed of the processing path, the distance of the nozzle used to spray the material onto a substrate, and the methods of curing the material. Many of these factors can interact with each other, and if the process is in open air, then humidity, for example, may be uncontrolled. Evaluating all possible combinations of these variables through experimentation is impossible, so machine learning was needed to help guide the experimental process.

    But while most machine-learning systems use raw data such as measurements of the electrical and other properties of test samples, they don’t typically incorporate human experience such as qualitative observations made by the experimenters of the visual and other properties of the test samples, or information from other experiments reported by other researchers. So, the team found a way to incorporate such outside information into the machine learning model, using a probability factor based on a mathematical technique called Bayesian Optimization.
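
    The team’s own code is open-sourced on GitHub, as noted below. Purely to illustrate the general idea of seeding a Gaussian-process Bayesian optimizer with results from earlier experiments, so that the surrogate model starts from prior knowledge rather than from scratch, a minimal sketch using the scikit-optimize library might look like the following. The parameter names, ranges, prior data, and the run_deposition function are hypothetical placeholders, not the actual RSPP process variables.

        from skopt import gp_minimize
        from skopt.space import Real

        # Hypothetical process knobs for one spray-deposition run (illustrative ranges).
        space = [Real(80.0, 160.0, name="substrate_temp_C"),
                 Real(5.0, 40.0, name="nozzle_height_mm"),
                 Real(2.0, 20.0, name="path_speed_mm_s")]

        def run_deposition(params):
            """Stand-in for one real experiment: deposit a film at these settings,
            measure the cell efficiency, and return its negative (gp_minimize
            minimizes its objective)."""
            temp, height, speed = params
            # ...in practice, run the experiment here and measure efficiency...
            simulated_eff = 18.0 - 0.001 * (temp - 120.0) ** 2 - 0.05 * abs(height - 20.0)
            return -simulated_eff

        # Results from earlier (prior) experiments: settings already tried and the
        # objective values they produced, handed to the optimizer as a warm start.
        prior_x = [[100.0, 15.0, 10.0], [140.0, 25.0, 8.0]]
        prior_y = [-16.2, -17.1]

        result = gp_minimize(run_deposition, space,
                             x0=prior_x, y0=prior_y,  # fold in prior knowledge
                             n_calls=20, random_state=0)
        print("best settings:", result.x, "best efficiency:", -result.fun)

    Qualitative human observations can enter the same way, for example as penalty terms in the objective or as constraints on the search space; the team’s published approach uses a probability factor to weigh such outside information, which this sketch does not attempt to reproduce.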

    Using the system, he says, “having a model that comes from experimental data, we can find out trends that we weren’t able to see before.” For example, they initially had trouble adjusting for uncontrolled variations in humidity in their ambient setting. But the model showed them “that we could overcome our humidity challenges by changing the temperature, for instance, and by changing some of the other knobs.”

    The system now allows experimenters to much more rapidly guide their process in order to optimize it for a given set of conditions or required outcomes. In their experiments, the team focused on optimizing the power output, but the system could also be used to simultaneously incorporate other criteria, such as cost and durability — something members of the team are continuing to work on, Buonassisi says.

    The researchers were encouraged by the Department of Energy, which sponsored the work, to commercialize the technology, and they’re currently focusing on tech transfer to existing perovskite manufacturers. “We are reaching out to companies now,” Buonassisi says, and the code they developed has been made freely available through an open-source server. “It’s now on GitHub, anyone can download it, anyone can run it,” he says. “We’re happy to help companies get started in using our code.”

    Already, several companies are gearing up to produce perovskite-based solar panels, even though they are still working out the details of how to produce them, says Liu, who is now at the Northwestern Polytechnical University in Xi’an, China. He says companies there are not yet doing large-scale manufacturing, but instead starting with smaller, high-value applications such as building-integrated solar tiles where appearance is important. Three of these companies “are on track or are being pushed by investors to manufacture 1 meter by 2-meter rectangular modules [comparable to today’s most common solar panels], within two years,” he says.

    “The problem is, they don’t have a consensus on what manufacturing technology to use,” Liu says. The RSPP method, developed at Stanford, “still has a good chance” to be competitive, he says. And the machine learning system the team developed could prove to be important in guiding the optimization of whatever process ends up being used.

    “The primary goal was to accelerate the process, so it required less time, less experiments, and less human hours to develop something that is usable right away, for free, for industry,” he says.

    “Existing work on machine-learning-driven perovskite PV fabrication largely focuses on spin-coating, a lab-scale technique,” says Ted Sargent, University Professor at the University of Toronto, who was not associated with this work, which he says demonstrates “a workflow that is readily adapted to the deposition techniques that dominate the thin-film industry. Only a handful of groups have the simultaneous expertise in engineering and computation to drive such advances.” Sargent adds that this approach “could be an exciting advance for the manufacture of a broader family of materials” including LEDs, other PV technologies, and graphene, “in short, any industry that uses some form of vapor or vacuum deposition.” 

    The team also included Austin Flick and Thomas Colburn at Stanford and Zekun Ren at the Singapore-MIT Alliance for Research and Technology (SMART). In addition to the Department of Energy, the work was supported by a fellowship from the MIT Energy Initiative, the Graduate Research Fellowship Program from the National Science Foundation, and the SMART program.