More stories


    Making hydropower plants more sustainable

    Growing up on a farm in Texas, siblings Gia Schneider ’99 and Abe Schneider ’02, SM ’03 always had something to do. But every Saturday at 2 p.m., no matter what, the family would go down to a local creek to fish, build rock dams and rope swings, and enjoy nature.

    Eventually the family began going to a remote river in Colorado each summer. The river forked in two; one side was managed by ranchers who destroyed natural features like beaver dams, while the other side remained untouched. The family noticed the fishing was better on the preserved side, which led Abe to try measuring the health of the two river ecosystems. In high school, he co-authored a study showing there were more beneficial insects in the bed of the river with the beaver dams.

    The experience taught both siblings a lesson that has stuck. Today they are the co-founders of Natel Energy, a company attempting to mimic natural river ecosystems with hydropower systems that are more sustainable than conventional hydro plants.

    “The big takeaway for us, and what we’ve been doing all this time, is thinking of ways that infrastructure can help increase the health of our environment — and beaver dams are a good example of infrastructure that wouldn’t otherwise be there that supports other populations of animals,” Abe says. “It’s a motivator for the idea that hydropower can help improve the environment rather than destroy the environment.”

    Through new, fish-safe turbines and other features designed to mimic natural river conditions, the founders say their plants can bridge the gap between power-plant efficiency and environmental sustainability. By retrofitting existing hydropower plants and developing new projects, the founders believe they can supercharge a hydropower industry that is by far the largest source of renewable electricity in the world but has not grown in energy generation as much as wind and solar in recent years.

    “Hydropower plants are built today with only power output in mind, as opposed to the idea that if we want to unlock growth, we have to solve for both efficiency and river sustainability,” Gia says.

    A life’s mission

    The origins of Natel came not from a single event but from a lifetime of events. Abe and Gia’s father was an inventor and renewable energy enthusiast who designed and built the log cabin they grew up in. With no television, the kids’ preferred entertainment was reading books or being outside. The water in their house was pumped by power generated using a mechanical windmill on the north side of the house.

    “We grew up hanging clothes on a line, and it wasn’t because we were too poor to own a dryer, but because everything about our existence and our use of energy was driven by the idea that we needed to make conscious decisions about sustainability,” Abe says.

    One of the things that fascinated both siblings was hydropower. In high school, Abe recalls bugging his friend who was good at math to help him with designs for new hydro turbines.

    Both siblings admit coming to MIT was a major culture shock, but they loved the atmosphere of problem solving and entrepreneurship that permeated the campus. Gia came to MIT in 1995 and majored in chemical engineering while Abe followed three years later and majored in mechanical engineering for both his bachelor’s and master’s degrees.

    All the while, they never lost sight of hydropower. In the 1998 MIT $100K Entrepreneurship Competition (which was the $50K at the time), they pitched an idea for hydropower plants based on a linear turbine design. They were named finalists, but still wanted more industry experience before starting a company. After graduation, Abe worked as a mechanical engineer and did some consulting work with the operators of small hydropower plants, while Gia worked at the energy desks of a few large finance companies.

    In 2009, the siblings, along with their late father, Daniel, received a small business grant of $200,000 and formally launched Natel Energy.

    Between 2009 and 2019, the founders worked on a linear turbine design that Abe describes as turbines on a conveyor belt. They patented and deployed the system on a few sites, but the problem of ensuring safe fish passage remained.

    Then the founders were doing some modeling that suggested they could achieve high power plant efficiency using an extremely rounded edge on a turbine blade — as opposed to the sharp blades typically used for hydropower turbines. The insight made them realize if they didn’t need sharp blades, perhaps they didn’t need a complex new turbine.

    “It’s so counterintuitive, but we said maybe we can achieve the same results with a propeller turbine, which is the most common kind,” Abe says. “It started out as a joke — or a challenge — and I did some modeling and rapidly realized, ‘Holy cow, this actually could work!’ Instead of having a powertrain with a decade’s worth of complexity, you have a powertrain that has one moving part, and almost no change in loading, in a form factor that the whole industry is used to.”

    The turbine Natel developed features thick blades that allow more than 99 percent of fish to pass through safely, according to third-party tests. Natel’s turbines also allow for the passage of important river sediment and can be coupled with structures that mimic natural features of rivers like log jams, beaver dams, and rock arches.

    “We want the most efficient machine possible, but we also want the most fish-safe machine possible, and that intersection has led to our unique intellectual property,” Gia says.

    Supercharging hydropower

    Natel has already installed two versions of its latest turbine, which it calls the Restoration Hydro Turbine, at existing plants in Maine and Oregon. The company hopes that by the end of this year, two more will be deployed, including one in Europe, a key market for Natel because of its stronger environmental regulations for hydropower plants.

    Since their installation, the founders say the first two turbines have converted more than 90 percent of the energy available in the water into energy at the turbine, an efficiency comparable to that of conventional turbines.

    Looking forward, Natel believes its systems have a significant role to play in boosting the hydropower industry, which is facing increasing scrutiny and environmental regulation that could otherwise close down many existing plants. For example, the founders say that hydropower plants the company could potentially retrofit across the U.S. and Europe have a total capacity of about 30 gigawatts, enough to power millions of homes.

    Natel also has ambitions to build entirely new plants on the many nonpowered dams around the U.S. and Europe. (Currently only 3 percent of the United States’ 80,000 dams are powered.) The founders estimate their systems could generate about 48 gigawatts of new electricity across the U.S. and Europe — the equivalent of more than 100 million solar panels.
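The "more than 100 million solar panels" comparison can be sanity-checked with simple arithmetic. The per-panel wattage below is my assumption (a typical modern panel is rated at roughly 400 watts), not a figure from the article:

```python
# Rough sanity check of the "more than 100 million solar panels" comparison.
# Assumption (not from the article): a typical modern panel is rated ~400 W.
new_capacity_w = 48e9    # 48 GW of potential new hydropower capacity
panel_rating_w = 400     # assumed nameplate rating per solar panel

equivalent_panels = new_capacity_w / panel_rating_w
print(f"{equivalent_panels:,.0f} panels")  # 120,000,000 panels
```

At that assumed rating, 48 gigawatts works out to about 120 million panels, consistent with the founders' claim.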

    “We’re looking at numbers that are pretty meaningful,” Gia says. “We could substantially add to the existing installed base while also modernizing the existing base to continue to be productive while meeting modern environmental requirements.”

    Overall, the founders see hydropower as a key technology in our transition to sustainable energy, a sentiment echoed by recent MIT research.

    “Hydro today supplies the bulk of electricity reliability services in a lot of these areas — things like voltage regulation, frequency regulation, storage,” Gia says. “That’s key to understand: As we transition to a zero-carbon grid, we need a reliable grid, and hydro has a very important role in supporting that. Particularly as we think about making this transition as quickly as we can, we’re going to need every bit of zero-emission resources we can get.”


    A better way to quantify radiation damage in materials

    It was just a piece of junk sitting in the back of a lab at the MIT Nuclear Reactor facility, ready to be disposed of. But it became the key to demonstrating a more comprehensive way of detecting atomic-level structural damage in materials — an approach that will aid the development of new materials, and could potentially support the ongoing operation of carbon-emission-free nuclear power plants, which would help alleviate global climate change.

    A tiny titanium nut that had been removed from inside the reactor was just the kind of material needed to prove that this new technique, developed at MIT and at other institutions, provides a way to probe defects created inside materials, including those that have been exposed to radiation, with five times greater sensitivity than existing methods.

    The new approach revealed that much of the damage that takes place inside reactors is at the atomic scale, and as a result is difficult to detect using existing methods. The technique provides a way to directly measure this damage through the way it changes with temperature. And it could be used to measure samples from the currently operating fleet of nuclear reactors, potentially enabling the continued safe operation of plants far beyond their presently licensed lifetimes.

    The findings are reported today in the journal Science Advances in a paper by MIT research specialist and recent graduate Charles Hirst PhD ’22; MIT professors Michael Short, Scott Kemp, and Ju Li; and five others at the University of Helsinki, the Idaho National Laboratory, and the University of California at Irvine.

    Rather than directly observing the physical structure of a material in question, the new approach looks at the amount of energy stored within that structure. Any disruption to the orderly structure of atoms within the material, such as that caused by radiation exposure or by mechanical stresses, actually imparts excess energy to the material. By observing and quantifying that energy difference, it’s possible to calculate the total amount of damage within the material — even if that damage is in the form of atomic-scale defects that are too small to be imaged with microscopes or other detection methods.

    The principle behind this method had been worked out in detail through calculations and simulations. But it was the actual tests on that one titanium nut from the MIT nuclear reactor that provided the proof — and thus opened the door to a new way of measuring damage in materials.

    The method they used is called differential scanning calorimetry. As Hirst explains, this is similar in principle to the calorimetry experiments many students carry out in high school chemistry classes, where they measure how much energy it takes to raise the temperature of a gram of water by one degree. The system the researchers used was “fundamentally the exact same thing, measuring energetic changes. … I like to call it just a fancy furnace with a thermocouple inside.”

    The scanning part has to do with gradually raising the temperature a bit at a time and seeing how the sample responds, and the differential part refers to the fact that two identical chambers are measured at once, one empty, and one containing the sample being studied. The difference between the two reveals details of the energy of the sample, Hirst explains.

    “We raise the temperature from room temperature up to 600 degrees Celsius, at a constant rate of 50 degrees per minute,” he says. Compared to the empty vessel, “your material will naturally lag behind because you need energy to heat your material. But if there are changes in the energy inside the material, that will change the temperature. In our case, there was an energy release when the defects recombine, and then it will get a little bit of a head start on the furnace … and that’s how we are measuring the energy in our sample.”
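The measurement Hirst describes can be sketched with a toy numerical model. All parameters here (sample mass, stored energy, peak temperature) are illustrative assumptions, not values from the study: both chambers are ramped at 50 degrees Celsius per minute, and the stored defect energy released on recombination appears as a dip in the differential heat-flow signal.

```python
import numpy as np

# Toy model of differential scanning calorimetry (DSC). All numbers are
# illustrative assumptions, not values from the MIT study.
rate = 50 / 60                   # heating rate: 50 °C/min, in °C per second
dT = 0.5                         # temperature step of the scan, °C
T = np.arange(25.0, 600.0, dT)   # scan from room temperature to 600 °C

mass = 0.05                      # sample mass in grams (assumed)
cp = 0.52                        # heat capacity of titanium, J/(g·°C) (approx.)

# Baseline heat flow required just to heat the sample at the fixed ramp rate.
baseline = mass * cp * rate      # watts, constant for a constant heat capacity

# Stored radiation-damage energy released when defects recombine, modeled as
# a Gaussian exotherm centered at an assumed annealing temperature.
E_stored = 0.8                   # total stored energy, joules (assumed)
T_peak, width = 350.0, 30.0      # peak position and width in °C (assumed)
gauss = np.exp(-((T - T_peak) ** 2) / (2 * width**2))
norm = (gauss / rate).sum() * dT           # time-integral of the peak shape
release = E_stored * gauss / norm          # watts released vs. temperature

# The differential signal (sample minus empty reference) dips where the
# sample releases energy and "gets a head start on the furnace".
signal = baseline - release
print(f"exotherm detected near {T[np.argmin(signal)]:.0f} °C")
```

Integrating the dip below the baseline recovers the total stored energy, which is exactly how the measured curve is converted into a damage estimate.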

    Hirst, who carried out the work over a five-year span as his doctoral thesis project, found that contrary to what had been believed, the irradiated material showed that there were two different mechanisms involved in the relaxation of defects in titanium at the studied temperatures, revealed by two separate peaks in calorimetry. “Instead of one process occurring, we clearly saw two, and each of them corresponds to a different reaction that’s happening in the material,” he says.

    They also found that textbook explanations of how radiation damage behaves with temperature weren’t accurate, because previous tests had mostly been carried out at extremely low temperatures and then extrapolated to the higher temperatures of real-life reactor operations. “People weren’t necessarily aware that they were extrapolating, even though they were, completely,” Hirst says.

    “The fact is that our common-knowledge basis for how radiation damage evolves is based on extremely low-temperature electron radiation,” adds Short. “It just became the accepted model, and that’s what’s taught in all the books. It took us a while to realize that our general understanding was based on a very specific condition, designed to elucidate science, but generally not applicable to conditions in which we actually want to use these materials.”

    Now, the new method can be applied “to materials plucked from existing reactors, to learn more about how they are degrading with operation,” Hirst says.

    “The single biggest thing the world can do in order to get cheap, carbon-free power is to keep current reactors on the grid. They’re already paid for, they’re working,” Short adds.  But to make that possible, “the only way we can keep them on the grid is to have more certainty that they will continue to work well.” And that’s where this new way of assessing damage comes into play.

    While most nuclear power plants have been licensed for 40 to 60 years of operation, “we’re now talking about running those same assets out to 100 years, and that depends almost fully on the materials being able to withstand the most severe accidents,” Short says. Using this new method, “we can inspect them and take them out before something unexpected happens.”

    In practice, plant operators could remove a tiny sample of material from critical areas of the reactor, and analyze it to get a more complete picture of the condition of the overall reactor. Keeping existing reactors running is “the single biggest thing we can do to keep the share of carbon-free power high,” Short stresses. “This is one way we think we can do that.”

    Sergei Dudarev, a fellow at the United Kingdom Atomic Energy Authority who was not associated with this work, says this “is likely going to be impactful, as it confirms, in a nice systematic manner, supported both by experiment and simulations, the unexpectedly significant part played by the small invisible defects in microstructural evolution of materials exposed to irradiation.”

    The process is not just limited to the study of metals, nor is it limited to damage caused by radiation, the researchers say. In principle, the method could be used to measure other kinds of defects in materials, such as those caused by stresses or shockwaves, and it could be applied to materials such as ceramics or semiconductors as well.

    In fact, Short says, metals are the most difficult materials to measure with this method, and early on other researchers kept asking why this team was focused on damage to metals. That was partly because reactor components tend to be made of metal, and also because “it’s the hardest, so if we crack this problem, we have a tool to crack them all!”

    Measuring defects in other kinds of materials can be up to 10,000 times easier than in metals, he says. “If we can do this with metals, we can make this extremely, ubiquitously applicable.” And all of it enabled by a small piece of junk that was sitting at the back of a lab.

    The research team included Fredric Granberg and Kai Nordlund at the University of Helsinki in Finland; Boopathy Kombaiah and Scott Middlemas at Idaho National Laboratory; and Penghui Cao at the University of California at Irvine. The work was supported by the U.S. National Science Foundation, an Idaho National Laboratory research grant, and a Euratom Research and Training program grant.


    New hardware offers faster computation for artificial intelligence, with much less energy

    As scientists push the boundaries of machine learning, the amount of time, energy, and money required to train increasingly complex neural network models is skyrocketing. A new area of artificial intelligence called analog deep learning promises faster computation with a fraction of the energy usage.

    Programmable resistors are the key building blocks in analog deep learning, just like transistors are the core elements for digital processors. By repeating arrays of programmable resistors in complex layers, researchers can create a network of analog artificial “neurons” and “synapses” that execute computations just like a digital neural network. This network can then be trained to achieve complex AI tasks like image recognition and natural language processing.

    A multidisciplinary team of MIT researchers set out to push the speed limits of a type of human-made analog synapse that they had previously developed. They utilized a practical inorganic material in the fabrication process that enables their devices to run 1 million times faster than previous versions, which is also about 1 million times faster than the synapses in the human brain.

    Moreover, this inorganic material also makes the resistor extremely energy-efficient. Unlike the materials used in the earlier version of their device, the new material is compatible with silicon fabrication techniques. This compatibility has made it possible to fabricate devices at the nanometer scale and could pave the way for integration into commercial computing hardware for deep-learning applications.

    “With that key insight, and the very powerful nanofabrication techniques we have at MIT.nano, we have been able to put these pieces together and demonstrate that these devices are intrinsically very fast and operate with reasonable voltages,” says senior author Jesús A. del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science (EECS). “This work has really put these devices at a point where they now look really promising for future applications.”

    “The working mechanism of the device is electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. Because we are working with very thin devices, we could accelerate the motion of this ion by using a strong electric field, and push these ionic devices to the nanosecond operation regime,” explains senior author Bilge Yildiz, the Breene M. Kerr Professor in the departments of Nuclear Science and Engineering and Materials Science and Engineering.

    “The action potential in biological cells rises and falls with a timescale of milliseconds, since the voltage difference of about 0.1 volt is constrained by the stability of water,” says senior author Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering. “Here we apply up to 10 volts across a special solid glass film of nanoscale thickness that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices.”

    These programmable resistors vastly increase the speed at which a neural network is trained, while drastically reducing the cost and energy to perform that training. This could help scientists develop deep learning models much more quickly, which could then be applied in uses like self-driving cars, fraud detection, or medical image analysis.

    “Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft,” adds lead author and MIT postdoc Murat Onen.

    Co-authors include Frances M. Ross, the Ellen Swallow Richards Professor in the Department of Materials Science and Engineering; postdocs Nicolas Emond and Baoming Wang; and Difei Zhang, an EECS graduate student. The research is published today in Science.

    Accelerating deep learning

    Analog deep learning is faster and more energy-efficient than its digital counterpart for two main reasons. First, computation is performed in memory, so enormous loads of data are not transferred back and forth between memory and a processor. Second, analog processors conduct operations in parallel: if the matrix size expands, an analog processor doesn’t need more time to complete new operations, because all computation occurs simultaneously.
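The in-memory, parallel character of an analog array can be sketched numerically. In the standard conceptual model of a resistive crossbar (this is an illustration of the general technique, not code for the MIT device), each resistor's conductance stores one network weight; Ohm's law and Kirchhoff's current law then make the column currents equal to a matrix-vector product, produced in a single physical step:

```python
import numpy as np

# Conceptual model of an analog crossbar array (not the MIT hardware itself).
# Each resistor's conductance G[i, j] stores one network weight; input
# activations are applied as voltages V[i] on the row wires.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))  # conductances (siemens), 4x3 array
V = rng.uniform(0.0, 0.5, size=4)       # voltages applied to the 4 rows

# Ohm's law gives each device's current (I = G * V); Kirchhoff's current law
# sums the currents flowing into every column wire. The collected column
# currents are exactly the matrix-vector product G^T @ V.
I_columns = G.T @ V

# A digital processor would compute the same result with explicit multiplies
# and adds, one per weight; the analog array gets the sum from physics.
expected = [sum(G[i, j] * V[i] for i in range(4)) for j in range(3)]
assert np.allclose(I_columns, expected)
```

Because every device conducts at once, the time to read out the product does not grow with the matrix size, which is the parallelism the passage describes.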

    The key element of MIT’s new analog processor technology is known as a protonic programmable resistor. These resistors, which are measured in nanometers (one nanometer is one billionth of a meter), are arranged in an array, like a chess board.

    In the human brain, learning happens due to the strengthening and weakening of connections between neurons, called synapses. Deep neural networks have long adopted this strategy, where the network weights are programmed through training algorithms. In the case of this new processor, increasing and decreasing the electrical conductance of protonic resistors enables analog machine learning.

    The conductance is controlled by the movement of protons. To increase the conductance, more protons are pushed into a channel in the resistor; to decrease it, protons are taken out. This is accomplished using an electrolyte (similar to that of a battery) that conducts protons but blocks electrons.

    To develop a super-fast and highly energy efficient programmable protonic resistor, the researchers looked to different materials for the electrolyte. While other devices used organic compounds, Onen focused on inorganic phosphosilicate glass (PSG).

    PSG is basically silicon dioxide, which is the powdery desiccant material found in tiny bags that come in the box with new furniture to remove moisture. It is studied as a proton conductor under humidified conditions for fuel cells. It is also the most well-known oxide used in silicon processing. To make PSG, a tiny bit of phosphorus is added to the silicon dioxide to give it special characteristics for proton conduction.

    Onen hypothesized that an optimized PSG could have a high proton conductivity at room temperature without the need for water, which would make it an ideal solid electrolyte for this application. He was right.

    Surprising speed

    PSG enables ultrafast proton movement because it contains a multitude of nanometer-sized pores whose surfaces provide paths for proton diffusion. It can also withstand very strong, pulsed electric fields. This is critical, Onen explains, because applying more voltage to the device enables protons to move at blinding speeds.

    “The speed certainly was surprising. Normally, we would not apply such extreme fields across devices, in order to not turn them into ash. But instead, protons ended up shuttling at immense speeds across the device stack, specifically a million times faster compared to what we had before. And this movement doesn’t damage anything, thanks to the small size and low mass of protons. It is almost like teleporting,” he says.

    “The nanosecond timescale means we are close to the ballistic or even quantum tunneling regime for the proton, under such an extreme field,” adds Li.

    Because the protons don’t damage the material, the resistor can run for millions of cycles without breaking down. This new electrolyte enabled a programmable protonic resistor that is a million times faster than their previous device and can operate effectively at room temperature, which is important for incorporating it into computing hardware.

    Thanks to the insulating properties of PSG, almost no electric current passes through the material as protons move. This makes the device extremely energy efficient, Onen adds.

    Now that they have demonstrated the effectiveness of these programmable resistors, the researchers plan to reengineer them for high-volume manufacturing, says del Alamo. Then they can study the properties of resistor arrays and scale them up so they can be embedded into systems.

    At the same time, they plan to study the materials to remove bottlenecks that limit the voltage that is required to efficiently transfer the protons to, through, and from the electrolyte.

    “Another exciting direction that these ionic devices can enable is energy-efficient hardware to emulate the neural circuits and synaptic plasticity rules that are deduced in neuroscience, beyond analog deep neural networks. We have already started such a collaboration with neuroscience, supported by the MIT Quest for Intelligence,” adds Yildiz.

    “The collaboration that we have is going to be essential to innovate in the future. The path forward is still going to be very challenging, but at the same time it is very exciting,” del Alamo says.

    “Intercalation reactions such as those found in lithium-ion batteries have been explored extensively for memory devices. This work demonstrates that proton-based memory devices deliver impressive and surprising switching speed and endurance,” says William Chueh, associate professor of materials science and engineering at Stanford University, who was not involved with this research. “It lays the foundation for a new class of memory devices for powering deep learning algorithms.”

    “This work demonstrates a significant breakthrough in biologically inspired resistive-memory devices. These all-solid-state protonic devices are based on exquisite atomic-scale control of protons, similar to biological synapses but at orders of magnitude faster rates,” says Elizabeth Dickey, the Teddy & Wilton Hawkins Distinguished Professor and head of the Department of Materials Science and Engineering at Carnegie Mellon University, who was not involved with this work. “I commend the interdisciplinary MIT team for this exciting development, which will enable future-generation computational devices.”

    This research is funded, in part, by the MIT-IBM Watson AI Lab.


    Fusion’s newest ambassador

    When high school senior Tuba Balta emailed MIT Plasma Science and Fusion Center (PSFC) Director Dennis Whyte in February, she was not certain she would get a response. As part of her final semester at BASIS Charter School, in Washington, she had been searching unsuccessfully for someone to sponsor an internship in fusion energy, a topic that had recently begun to fascinate her because “it’s not figured out yet.” Time was running out if she was to include the internship as part of her senior project.

    “I never say ‘no’ to a student,” says Whyte, who felt she could provide a youthful perspective on communicating the science of fusion to the general public.

    Posters explaining the basics of fusion science were being considered for the walls of a PSFC lounge area, a space used to welcome visitors who might not know much about the center’s focus: What is fusion? What is plasma? What is magnetic confinement fusion? What is a tokamak?

    Why couldn’t Balta be tasked with coming up with text for these posters, written specifically to be understandable, even intriguing, to her peers?

    Meeting the team

    Although most of the internship would be virtual, Balta visited MIT to meet Whyte and others who would guide her progress. A tour of the center showed her the past and future of the PSFC, one lab area revealing on her left the remains of the decades-long Alcator C-Mod tokamak and on her right the testing area for new superconducting magnets crucial to SPARC, designed in collaboration with MIT spinoff Commonwealth Fusion Systems.

    With Whyte, graduate student Rachel Bielajew, and Outreach Coordinator Paul Rivenberg guiding her content and style, Balta focused on one of eight posters each week. Her school also required her to keep a weekly blog of her progress, detailing what she was learning in the process of creating the posters.

    Finding her voice

    Balta admits that she was not looking forward to this part of the school assignment. But she decided to have fun with it, adopting an enthusiastic and conversational tone, as if she were sitting with friends around a lunch table. Each week, she was able to work out what she was composing for her posters and her final project by trying it out on her friends in the blog.

    Her posts won praise from her schoolmates for their clarity, as when in Week 3 she explained the concept of turbulence as it relates to fusion research, sending her readers to their kitchen faucets to experiment with the pressure and velocity of running tap water.

    The voice she found through her blog served her well during her final presentation about fusion at a school expo for classmates, parents, and the general public.

    “Most people are intimidated by the topic, which they shouldn’t be,” says Balta. “And it just made me happy to help other people understand it.”

    Her favorite part of the internship? “Getting to talk to people whose papers I was reading and ask them questions. Because when it comes to fusion, you can’t just look it up on Google.”

    Awaiting her first year at the University of Chicago, Balta reflects on the team spirit she experienced in communicating with researchers at the PSFC.

    “I think that was one of my big takeaways,” she says, “that you have to work together. And you should, because you’re always going to be missing some piece of information; but there’s always going to be somebody else who has that piece, and we can all help each other out.”


    Explained: Why perovskites could take solar cells to new heights

    Perovskites hold promise for creating solar panels that could be easily deposited onto most surfaces, including flexible and textured ones. These materials would also be lightweight, cheap to produce, and as efficient as today’s leading photovoltaic materials, which are mainly silicon. They’re the subject of increasing research and investment, but companies looking to harness their potential do have to address some remaining hurdles before perovskite-based solar cells can be commercially competitive.

    The term perovskite refers not to a specific material, like silicon or cadmium telluride, other leading contenders in the photovoltaic realm, but to a whole family of compounds. The perovskite family of solar materials is named for its structural similarity to a mineral called perovskite, which was discovered in 1839 and named after Russian mineralogist L.A. Perovski.

    The original mineral perovskite, which is calcium titanium oxide (CaTiO3), has a distinctive crystal configuration. It has a three-part structure, whose components have come to be labeled A, B and X, in which lattices of the different components are interlaced. The family of perovskites consists of the many possible combinations of elements or molecules that can occupy each of the three components and form a structure similar to that of the original perovskite itself. (Some researchers even bend the rules a little by naming other crystal structures with similar elements “perovskites,” although this is frowned upon by crystallographers.)

    “You can mix and match atoms and molecules into the structure, with some limits. For instance, if you try to stuff a molecule that’s too big into the structure, you’ll distort it. Eventually you might cause the 3D crystal to separate into a 2D layered structure, or lose ordered structure entirely,” says Tonio Buonassisi, professor of mechanical engineering at MIT and director of the Photovoltaics Research Laboratory. “Perovskites are highly tunable, like a build-your-own-adventure type of crystal structure,” he says.

    That structure of interlaced lattices consists of ions or charged molecules, two of them (A and B) positively charged and the other one (X) negatively charged. The A and B ions are typically of quite different sizes, with the A being larger. 
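The size limits Buonassisi describes are commonly quantified with the Goldschmidt tolerance factor, t = (r_A + r_X) / (sqrt(2) * (r_B + r_X)), which is close to 1 for stable cubic perovskites. A minimal sketch, using approximate ionic radii that are illustrative values rather than figures from this article:

```python
# Goldschmidt tolerance factor: t = (r_A + r_X) / (sqrt(2) * (r_B + r_X)).
# Values of t near 1 favor the cubic perovskite structure; roughly
# t below ~0.8 or above ~1.0 tends to distort or break the 3D framework.
import math

def tolerance_factor(r_a, r_b, r_x):
    """Ionic radii in angstroms; returns the dimensionless factor t."""
    return (r_a + r_x) / (math.sqrt(2) * (r_b + r_x))

# Approximate ionic radii in angstroms (illustrative values).
# CaTiO3, the original mineral: Ca2+ (A site), Ti4+ (B site), O2- (X site)
t_catio3 = tolerance_factor(1.34, 0.605, 1.40)

# MAPbI3, a common lead halide solar perovskite:
# methylammonium (A site), Pb2+ (B site), I- (X site)
t_mapbi3 = tolerance_factor(2.17, 1.19, 2.20)

print(f"CaTiO3: t = {t_catio3:.2f}")  # near 1: stable perovskite structure
print(f"MAPbI3: t = {t_mapbi3:.2f}")
```

Swapping in a larger A-site molecule pushes t further from 1, which is the quantitative version of the distortion Buonassisi describes.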

    Within the overall category of perovskites, there are a number of types, including metal oxide perovskites, which have found applications in catalysis and in energy storage and conversion, such as in fuel cells and metal-air batteries. But a main focus of research activity for more than a decade has been on lead halide perovskites, according to Buonassisi.

    Within that category, there is still a legion of possibilities, and labs around the world are racing through the tedious work of trying to find the variations that show the best performance in efficiency, cost, and durability — which has so far been the most challenging of the three.

    Many teams have also focused on variations that eliminate the use of lead, to avoid its environmental impact. Buonassisi notes, however, that “consistently over time, the lead-based devices continue to improve in their performance, and none of the other compositions got close in terms of electronic performance.” Work continues on exploring alternatives, but for now none can compete with the lead halide versions.

    One of the great advantages perovskites offer is their great tolerance of defects in the structure, he says. Unlike silicon, which requires extremely high purity to function well in electronic devices, perovskites can function well even with numerous imperfections and impurities.

    Searching for promising new candidate compositions for perovskites is a bit like looking for a needle in a haystack, but recently researchers have come up with a machine-learning system that can greatly streamline this process. This new approach could lead to a much faster development of new alternatives, says Buonassisi, who was a co-author of that research.

    While perovskites continue to show great promise, and several companies are already gearing up to begin some commercial production, durability remains the biggest obstacle they face. While silicon solar panels retain up to 90 percent of their power output after 25 years, perovskites degrade much faster. Great progress has been made — initial samples lasted only a few hours, then weeks or months, but newer formulations have usable lifetimes of up to a few years, suitable for some applications where longevity is not essential.

    From a research perspective, Buonassisi says, one advantage of perovskites is that they are relatively easy to make in the lab — the chemical constituents assemble readily. But that’s also their downside: “The material goes together very easily at room temperature,” he says, “but it also comes apart very easily at room temperature. Easy come, easy go!”

    To deal with that issue, most researchers are focused on using various kinds of protective materials to encapsulate the perovskite, protecting it from exposure to air and moisture. But others are studying the exact mechanisms that lead to that degradation, in hopes of finding formulations or treatments that are more inherently robust. A key finding is that a process called autocatalysis is largely to blame for the breakdown.

    In autocatalysis, as soon as one part of the material starts to degrade, its reaction products act as catalysts to start degrading the neighboring parts of the structure, and a runaway reaction gets underway. A similar problem existed in the early research on some other electronic materials, such as organic light-emitting diodes (OLEDs), and was eventually solved by adding additional purification steps to the raw materials, so a similar solution may be found in the case of perovskites, Buonassisi suggests.

    Buonassisi and his co-researchers recently completed a study showing that once perovskites reach a usable lifetime of at least a decade, their much lower initial cost would be sufficient to make them economically viable as a substitute for silicon in large, utility-scale solar farms.

    Overall, progress in the development of perovskites has been impressive and encouraging, he says. With just a few years of work, it has already achieved levels of efficiency that cadmium telluride (CdTe), “which has been around for much longer, is still struggling to achieve,” he says. “The ease with which these higher performances are reached in this new material is almost stupefying.” Comparing the amount of research time spent to achieve a 1 percent improvement in efficiency, he says, the progress on perovskites has been somewhere between 100 and 1,000 times faster than that on CdTe. “That’s one of the reasons it’s so exciting,” he says.

    MIT engineers design surfaces that make water boil more efficiently

    The boiling of water or other fluids is an energy-intensive step at the heart of a wide range of industrial processes, including most electricity generating plants, many chemical production systems, and even cooling systems for electronics.

    Improving the efficiency of systems that heat and evaporate water could significantly reduce their energy use. Now, researchers at MIT have found a way to do just that, with a specially tailored surface treatment for the materials used in these systems.

    The improved efficiency comes from a combination of three different kinds of surface modifications, at different size scales. The new findings are described in the journal Advanced Materials in a paper by recent MIT graduate Youngsup Song PhD ’21, Ford Professor of Engineering Evelyn Wang, and four others at MIT. The researchers note that this initial finding is still at a laboratory scale, and more work is needed to develop a practical, industrial-scale process.

    There are two key parameters that describe the boiling process: the heat transfer coefficient (HTC) and the critical heat flux (CHF). In materials design, there’s generally a tradeoff between the two, so anything that improves one of these parameters tends to make the other worse. But both are important for the efficiency of the system, and now, after years of work, the team has achieved a way of significantly improving both properties at the same time, through their combination of different textures added to a material’s surface.

    “Both parameters are important,” Song says, “but enhancing both parameters together is kind of tricky because they have intrinsic trade off.” The reason for that, he explains, is “because if we have lots of bubbles on the boiling surface, that means boiling is very efficient, but if we have too many bubbles on the surface, they can coalesce together, which can form a vapor film over the boiling surface.” That film introduces resistance to the heat transfer from the hot surface to the water. “If we have vapor in between the surface and water, that prevents the heat transfer efficiency and lowers the CHF value,” he says.
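The tradeoff Song describes can be made concrete with the basic boiling relation: the surface heat flux is the HTC times the wall superheat, and the usable range is capped by the CHF. A minimal sketch, where the HTC and CHF values are illustrative assumptions, not figures from the paper:

```python
# Boiling heat transfer: q'' = HTC * (T_wall - T_sat).
# The usable operating range is capped by the critical heat flux (CHF):
# past it, a vapor film blankets the surface and heat transfer collapses.
def heat_flux(htc, t_wall, t_sat=100.0):
    """Surface heat flux in W/m^2, given HTC in W/(m^2*K) and temps in C."""
    return htc * (t_wall - t_sat)

# Illustrative numbers (assumptions for this sketch): a surface with
# HTC = 50,000 W/(m^2*K) and CHF = 1.5 MW/m^2.
htc = 50_000.0
chf = 1.5e6

for t_wall in (105, 110, 120, 140):
    q = heat_flux(htc, t_wall)
    status = "OK" if q <= chf else "exceeds CHF: film boiling risk"
    print(f"wall at {t_wall} C: q'' = {q / 1e6:.2f} MW/m^2 ({status})")
```

A higher HTC delivers more heat at a given superheat, but only a higher CHF extends how hard the surface can be driven before a vapor film forms, which is why improving both at once matters.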

    Song, who is now a postdoc at Lawrence Berkeley National Laboratory, carried out much of the research as part of his doctoral thesis work at MIT. While the various components of the new surface treatment he developed had been previously studied, the researchers say this work is the first to show that these methods could be combined to overcome the tradeoff between the two competing parameters.

    Adding a series of microscale cavities, or dents, to a surface is a way of controlling the way bubbles form on that surface, keeping them effectively pinned to the locations of the dents and preventing them from spreading out into a heat-resisting film. In this work, the researchers created an array of 10-micrometer-wide dents separated by about 2 millimeters to prevent film formation. But that separation also reduces the concentration of bubbles at the surface, which can reduce the boiling efficiency. To compensate for that, the team introduced a much smaller-scale surface treatment, creating tiny bumps and ridges at the nanometer scale, which increases the surface area and promotes the rate of evaporation under the bubbles.

    In these experiments, the cavities were made in the centers of a series of pillars on the material’s surface. These pillars, combined with nanostructures, promote wicking of liquid from the base to their tops, and this enhances the boiling process by providing more surface area exposed to the water. In combination, the three “tiers” of the surface texture — the cavity separation, the posts, and the nanoscale texturing — provide a greatly enhanced efficiency for the boiling process, Song says.

    “Those micro cavities define the position where bubbles come up,” he says. “But by separating those cavities by 2 millimeters, we separate the bubbles and minimize the coalescence of bubbles.” At the same time, the nanostructures promote evaporation under the bubbles, and the capillary action induced by the pillars supplies liquid to the bubble base. That maintains a layer of liquid water between the boiling surface and the bubbles of vapor, which enhances the maximum heat flux.

    Although their work has confirmed that the combination of these kinds of surface treatments can work and achieve the desired effects, this work was done under small-scale laboratory conditions that could not easily be scaled up to practical devices, Wang says. “These kinds of structures we’re making are not meant to be scaled in its current form,” she says, but rather were used to prove that such a system can work. One next step will be to find alternative ways of creating these kinds of surface textures so these methods could more easily be scaled up to practical dimensions.

    “Showing that we can control the surface in this way to get enhancement is a first step,” she says. “Then the next step is to think about more scalable approaches.” For example, though the pillars on the surface in these experiments were created using clean-room methods commonly used to produce semiconductor chips, there are other, less demanding ways of creating such structures, such as electrodeposition. There are also a number of different ways to produce the surface nanostructure textures, some of which may be more easily scalable.

    There may be some significant small-scale applications that could use this process in its present form, such as the thermal management of electronic devices, an area that is becoming more important as semiconductor devices get smaller and managing their heat output becomes ever more important. “There’s definitely a space there where this is really important,” Wang says.

    Even those kinds of applications will take some time to develop because typically thermal management systems for electronics use liquids other than water, known as dielectric liquids. These liquids have different surface tension and other properties than water, so the dimensions of the surface features would have to be adjusted accordingly. Work on these differences is one of the next steps for the ongoing research, Wang says.

    This same multiscale structuring technique could also be applied to different liquids, Song says, by adjusting the dimensions to account for the different properties of the liquids. “Those kinds of details can be changed, and that can be our next step,” he says.

    The team also included Carlos Diaz-Martin, Lenan Zhang, Hyeongyun Cha, and Yajing Zhao, all at MIT. The work was supported by the Advanced Research Projects Agency-Energy (ARPA-E), the Air Force Office of Scientific Research, and the Singapore-MIT Alliance for Research and Technology, and made use of the MIT.nano facilities.

    Pursuing progress at the nanoscale

    Last fall, a team of five senior undergraduate nuclear engineering students met once a week for dinners where they took turns cooking and debated how to tackle a particularly daunting challenge set forth in their program’s capstone course, 22.033 (Nuclear Systems Design Project).

    In past semesters, students had free rein to identify any real-world problem that interested them and to solve it through team-driven prototyping and design. This past fall worked a little differently: the team still tackled a daunting problem, but the assignment was to explore a particular design challenge on MIT’s campus. Rising to the challenge, the team spent the semester seeking a feasible way to introduce a highly coveted technology at MIT.

    Housed inside a big blue dome is the MIT Nuclear Reactor Laboratory (NRL). The reactor is used to conduct a wide range of science experiments, but in recent years, there have been multiple attempts to implement an instrument at the reactor that could probe the structure of materials, molecules, and devices. With this technology, researchers could model the structure of a wide range of materials and complex liquids made of polymers or containing nanoscale inhomogeneities that differ from the larger mass. On campus, researchers for the first time could conduct experiments to better understand the properties and functions of anything placed in front of a neutron beam emanating from the reactor core.

    The impact of this would be immense. If the reactor could be adapted to conduct this advanced technique, known as small-angle neutron scattering (SANS), it would open up a whole new world of research at MIT.

    “It’s essentially using the nuclear reactor as an incredibly high-performance camera that researchers from all over MIT would be very interested in using, including nuclear science and engineering, chemical engineering, biological engineering, and materials science, who currently use this tool at other institutions,” says Zachary Hartwig, Nuclear Systems Design Project professor and the MIT Robert N. Noyce Career Development Professor.

    SANS instruments have been installed at fewer than 20 facilities worldwide, and MIT researchers have previously considered implementing the capability at the reactor to help MIT expand community-wide access to SANS. Last fall, this mission went from long-time campus dream to potential reality as it became the design challenge that Hartwig’s students confronted. Despite having no experience with SANS, the team embraced the challenge, taking the first steps to figure out how to bring this technology to campus.

    “I really loved the idea that what we were doing could have a very real impact,” says Zoe Fisher, Nuclear Systems Design Project team member and now graduate nuclear engineering student.

    Each fall, Hartwig uses the course to introduce students to real-world challenges with strict constraints on solutions, and last fall’s project came with plenty of thorny design questions for students to tackle. First was the size limitation posed by the space available at MIT’s reactor. In SANS facilities around the world, the average length of the instrument is 30 meters, but at NRL, the space available is approximately 7.5 meters. Second, these instruments can cost up to $30 million, which is far outside NRL’s proposed budget of $3 million. That meant not only did students need to design an instrument that would work in a smaller space, but also one that could be built for a tenth of the typical cost.
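The difficulty of a short instrument follows from small-angle scattering geometry: the minimum momentum transfer q_min, which sets the largest structure the instrument can resolve, scales inversely with the sample-to-detector distance. A rough sketch, with illustrative wavelength and beamstop values that are assumptions rather than parameters of the students’ design:

```python
# For small-angle scattering, the momentum transfer is
# q = (4*pi / wavelength) * sin(theta), with scattering angle 2*theta.
# The smallest measurable q (largest resolvable structure, d ~ 2*pi/q)
# is set by the smallest angle the detector can distinguish from the
# direct beam, roughly 2*theta_min ~ r_min / L for detector distance L.
import math

def q_min(wavelength_nm, r_min_m, detector_dist_m):
    """Minimum accessible q in 1/nm (small-angle approximation)."""
    two_theta = r_min_m / detector_dist_m  # radians
    return (4 * math.pi / wavelength_nm) * math.sin(two_theta / 2)

# Illustrative assumptions: 0.6 nm cold neutrons, a 5 mm beamstop radius.
for length in (15.0, 3.0):  # long vs. compact instrument, meters
    q = q_min(0.6, 0.005, length)
    print(f"L = {length:4.1f} m: q_min = {q:.1e} 1/nm, "
          f"max probe size ~ {2 * math.pi / q:.0f} nm")
```

Shrinking the flight path by a factor of five raises q_min fivefold and cuts the largest resolvable structure by the same factor, which is why a 7.5-meter space, rather than the typical 30 meters, demanded real innovation.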

    “The challenge was not just implementing one of these instruments,” Hartwig says. “It was whether the students could significantly innovate beyond the ‘traditional’ approach to doing SANS to meet the daunting constraints that we have at the MIT Reactor.”

    Because NRL actually wants to pursue this project, the students had to get creative, and their creative potential was precisely why the idea arose to get them involved, says Jacopo Buongiorno, the director of science and technology at NRL and Tokyo Electric Power Company Professor in Nuclear Engineering. “Involvement in real-world projects that answer questions about feasibility and cost of new technology and capabilities is a key element of a successful undergraduate education at MIT,” Buongiorno says.

    Students say it would have been impossible to tackle the problem without the help of co-instructor Boris Khaykovich, a research scientist at NRL who specializes in neutron instrumentation.

    Over the past two decades, Khaykovich has watched SANS become the most popular technique for analyzing material structure. As competition for beam time at the few existing facilities intensified, access declined; today only the experiments that pass the most stringent review get beam time. What Khaykovich hopes to bring to MIT is improved access to SANS, through an instrument suitable for the majority of run-of-the-mill experiments, even if it’s not as powerful as the state-of-the-art national SANS facilities. Such an instrument could still serve a wide range of researchers who currently have few opportunities to pursue SANS experiments.

    “In the U.S., we don’t have a simple, small, day-to-day SANS instrument,” Khaykovich says.

    With Khaykovich’s help, nuclear engineering undergraduate student Liam Hines says his team was able to go much further with their assessment than they would have starting from scratch, with no background in SANS. This project was unlike anything they’d ever been asked to do as MIT students, and for students like Hines, who contributed to NRL research his entire time on campus, it was a project that hit close to home. “We were imagining this thing that might be designed at MIT,” Hines says.

    Fisher and Hines were joined by undergraduate nuclear engineering student team members Francisco Arellano, Jovier Jimenez, and Brendan Vaughan. Together, they devised a design that surprised both Khaykovich and Hartwig, identifying creative solutions that overcame all limitations and significantly reduced cost.

    The team’s final project featured an adaptation of a conical design that had recently been experimentally tested in Japan but is not generally used. The conical design allowed them to maximize precision while working within the other constraints, resulting in an instrument design that exceeded Hartwig’s expectations. The students also showed the feasibility of using an alternative type of glass-based, low-cost neutron detector to calibrate the scattering data. By avoiding the need for a traditional detector based on helium-3, which is increasingly scarce and exorbitantly expensive, such a detector would dramatically reduce cost and increase availability. Their final presentation indicated the day-to-day SANS instrument could be built at only 4.5 meters long and at an estimated cost of less than $1 million.

    Khaykovich credited the students for their enthusiasm, bouncing ideas off each other and exploring as much terrain as possible by interviewing experts who implemented SANS at other facilities. “They showed quite a perseverance and an ability to go deep into a very unfamiliar territory for them,” Khaykovich says.

    Hines says that Hartwig emphasized the importance of fielding expert opinions to more quickly discover optimal solutions. Fisher says that based on their research, if their design is funded, it would make SANS “more accessible to research for the sake of knowledge,” rather than dominated by industry research.

    Hartwig and Khaykovich agreed the students’ final project results showed a baseline of how MIT could pursue SANS technology cheaply, and when NRL proceeds with its own design process, Hartwig says, “The students’ work might actually change the cost and the feasibility of this at MIT in a way that, if we hadn’t run the class, we would never have thought about doing.”

    Buongiorno says as they move forward with the project, NRL staff will consult students’ findings.

    “Indeed, the students developed original technical approaches, which are now being further explored by the NRL staff and may ultimately lead to the deployment of this new important capability on the MIT campus,” Buongiorno says.

    Hartwig says it’s a goal of the Nuclear Systems Design Project course to empower students to learn how to lead teams and embrace challenges, so they can be effective leaders advancing novel solutions in research and industry. “I think it helps teach people to be agile, to be flexible, to have confidence that they can actually go off and learn what they don’t know and solve problems they may think are bigger than themselves,” he says.

    It’s common for past classes of Nuclear Systems Design Project students to continue working on ideas beyond the course, and some students have even launched companies from their project research. What’s less common is for Hartwig’s students to actively serve as engineers pointed to a particular campus problem that’s expected to be resolved in the next few years.

    “In this case, they’re actually working on something real,” Hartwig says. “Their ideas are going to very much influence what we hope will be a facility that gets built at the reactor.”

    For students, it was exciting to inform a major instrument proposal that will soon be submitted to federal funding agencies, and for Hines, it became a chance to make his mark at NRL.

    “This is a lab I’ve been contributing to my entire time at MIT, and then through this project, I finished my time at MIT contributing in a much larger sense,” Hines says.

    Getting the carbon out of India’s heavy industries

    The world’s third largest carbon emitter after China and the United States, India ranks seventh in a major climate risk index. Unless India, along with the nearly 200 other signatory nations of the Paris Agreement, takes aggressive action to keep global warming well below 2 degrees Celsius relative to preindustrial levels, physical and financial losses from floods, droughts, and cyclones could become more severe than they are today. So, too, could health impacts associated with the hazardous air pollution levels now affecting more than 90 percent of its population.  

    To address both climate and air pollution risks and meet its population’s escalating demand for energy, India will need to dramatically decarbonize its energy system in the coming decades. To that end, its initial Paris Agreement climate policy pledge calls for a reduction in carbon dioxide intensity of GDP by 33-35 percent by 2030 from 2005 levels, and an increase in non-fossil-fuel-based power to about 40 percent of cumulative installed capacity in 2030. At the COP26 international climate change conference, India announced more aggressive targets, including the goal of achieving net-zero emissions by 2070.

    Meeting its climate targets will require emissions reductions in every economic sector, including those where emissions are particularly difficult to abate. In such sectors, which involve energy-intensive industrial processes (production of iron and steel; nonferrous metals such as copper, aluminum, and zinc; cement; and chemicals), decarbonization options are limited and more expensive than in other sectors. Whereas replacing coal and natural gas with solar and wind could lower carbon dioxide emissions in electric power generation and transportation, no easy substitutes can be deployed in many heavy industrial processes that release CO2 into the air as a byproduct.

    However, other methods could be used to lower the emissions associated with these processes, which draw upon roughly 50 percent of India’s natural gas, 25 percent of its coal, and 20 percent of its oil. Evaluating the potential effectiveness of such methods in the next 30 years, a new study in the journal Energy Economics led by researchers at the MIT Joint Program on the Science and Policy of Global Change is the first to explicitly explore emissions-reduction pathways for India’s hard-to-abate sectors.

    Using an enhanced version of the MIT Economic Projection and Policy Analysis (EPPA) model, the study assesses existing emissions levels in these sectors and projects how much they can be reduced by 2030 and 2050 under different policy scenarios. Aimed at decarbonizing industrial processes, the scenarios include the use of subsidies to increase electricity use, incentives to replace coal with natural gas, measures to improve industrial resource efficiency, policies to put a price on carbon, carbon capture and storage (CCS) technology, and hydrogen in steel production.

    The researchers find that India’s 2030 Paris Agreement pledge may still drive up fossil fuel use and associated greenhouse gas emissions, with projected carbon dioxide emissions from hard-to-abate sectors rising by about 2.6 times from 2020 to 2050. But scenarios that also promote electrification, natural gas support, and resource efficiency in hard-to-abate sectors can lower their CO2 emissions by 15-20 percent.

    While appearing to move the needle in the right direction, those reductions are ultimately canceled out by increased demand for the products that emerge from these sectors. So what’s the best path forward?
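The cancellation follows from simple arithmetic on the study’s figures: emissions scale as intensity times output, so a 15-20 percent cut applied to a trajectory that grows 2.6-fold still lands well above 2020 levels. A back-of-envelope sketch:

```python
# The study projects hard-to-abate emissions rising ~2.6x from 2020 to 2050
# under the 2030 pledge alone; electrification, natural gas support, and
# resource efficiency shave 15-20% off that trajectory.
def emissions_2050(growth, intensity_cut):
    """2050 emissions relative to 2020: demand growth times the cut kept."""
    return growth * (1 - intensity_cut)

baseline_2050 = 2.6  # projected 2050 emissions, as a multiple of 2020

for cut in (0.15, 0.20):
    e = emissions_2050(baseline_2050, cut)
    trend = "above" if e > 1 else "below"
    print(f"{cut:.0%} reduction: 2050 emissions = {e:.2f}x 2020 "
          f"({trend} current levels)")
```

Even the deeper 20 percent cut leaves emissions at roughly double today’s level, which is the arithmetic behind the researchers’ conclusion that only carbon pricing or disruptive technology can push these sectors below where they are now.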

    The researchers conclude that only the incentive of carbon pricing or the advance of disruptive technology can move hard-to-abate sector emissions below their current levels. To achieve significant emissions reductions, they maintain, the price of carbon must be high enough to make CCS economically viable. In that case, reductions of 80 percent below current levels could be achieved by 2050.

    “Absent major support from the government, India will be unable to reduce carbon emissions in its hard-to-abate sectors in alignment with its climate targets,” says MIT Joint Program deputy director Sergey Paltsev, the study’s lead author. “A comprehensive government policy could provide robust incentives for the private sector in India and generate favorable conditions for foreign investments and technology advances. We encourage decision-makers to use our findings to design efficient pathways to reduce emissions in those sectors, and thereby help lower India’s climate and air pollution-related health risks.”