More stories

  • A new method boosts wind farms’ energy output, without new equipment

    Virtually all wind turbines, which produce more than 5 percent of the world’s electricity, are controlled as if they were individual, free-standing units. In fact, the vast majority are part of larger wind farm installations involving dozens or even hundreds of turbines, whose wakes can affect each other.

    Now, engineers at MIT and elsewhere have found that, with no need for any new investment in equipment, the energy output of such wind farm installations can be increased by modeling the wind flow of the entire collection of turbines and optimizing the control of individual units accordingly.

    The increase in energy output from a given installation may seem modest — it’s about 1.2 percent overall, and 3 percent for optimal wind speeds. But the algorithm can be deployed at any wind farm, and the number of wind farms is rapidly growing to meet accelerated climate goals. If that 1.2 percent energy increase were applied to all the world’s existing wind farms, it would be the equivalent of adding more than 3,600 new wind turbines, or enough to power about 3 million homes, and a total gain to power producers of almost a billion dollars per year, the researchers say. And all of this for essentially no cost.

    The research is published today in the journal Nature Energy, in a study led by Michael F. Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering at MIT.

    “Essentially all existing utility-scale turbines are controlled ‘greedily’ and independently,” says Howland. The term “greedily,” he explains, refers to the fact that they are controlled to maximize only their own power production, as if they were isolated units with no detrimental impact on neighboring turbines.

    But in the real world, turbines are deliberately spaced close together in wind farms to achieve economic benefits related to land use (on- or offshore) and to infrastructure such as access roads and transmission lines. This proximity means that turbines are often strongly affected by the turbulent wakes produced by others that are upwind from them — a factor that individual turbine-control systems do not currently take into account.

    “From a flow-physics standpoint, putting wind turbines close together in wind farms is often the worst thing you could do,” Howland says. “The ideal approach to maximize total energy production would be to put them as far apart as possible,” but that would increase the associated costs.

    That’s where the work of Howland and his collaborators comes in. They developed a new flow model that predicts the power production of each turbine in the farm as a function of the incident atmospheric winds and the control strategy of each turbine. While grounded in flow physics, the model also learns from operational wind farm data to reduce predictive error and uncertainty. Without changing the physical turbine locations or hardware of existing wind farms, the team used this physics-based, data-assisted model of the flow within the farm, and of the resulting power production of each turbine under different wind conditions, to find the optimal orientation for each turbine at a given moment. This allows them to maximize the output of the whole farm, not just of the individual turbines.

    Today, each turbine constantly senses the incoming wind direction and speed and uses its internal control software to adjust its yaw angle — its orientation about the vertical axis — to align as closely as possible with the wind. But in the new system, the team has found that by turning one turbine just slightly away from its own maximum-output position — perhaps 20 degrees away from its individual peak-output angle — the resulting increase in power output from one or more downwind units more than makes up for the slight reduction in output from the first unit. By using a centralized control system that takes all of these interactions into account, the collection of turbines was operated at power output levels as much as 32 percent higher under some conditions.
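
    To make the trade-off concrete, here is a minimal toy calculation (not the researchers’ model): the upstream turbine’s output is assumed to fall off with the cosine of its yaw angle raised to an assumed exponent, while the wake deficit seen by a single downstream turbine is assumed to shrink as the wake is deflected. All constants are illustrative assumptions.

    ```python
    import numpy as np

    # Toy illustration (not the MIT model): two turbines, one directly downwind
    # of the other. Yawing the upstream turbine by angle `gamma` reduces its own
    # output roughly as cos(gamma)**PP, but deflects its wake so the downstream
    # turbine sees a smaller velocity deficit. All constants are assumptions
    # chosen only to illustrate the trade-off.

    PP = 1.88              # assumed cosine power-loss exponent for a yawed rotor
    BASE_DEFICIT = 0.30    # assumed 30% wake power loss at zero yaw
    DEFLECTION_GAIN = 2.0  # assumed fraction of the deficit removed per radian of yaw

    def farm_power(gamma_deg):
        g = np.radians(gamma_deg)
        p_upstream = np.cos(g) ** PP                      # normalized, 1.0 = greedy max
        deficit = BASE_DEFICIT * max(0.0, 1.0 - DEFLECTION_GAIN * abs(g))
        p_downstream = 1.0 - deficit
        return p_upstream + p_downstream

    angles = np.arange(0, 41)                             # candidate yaw angles, degrees
    best = max(angles, key=farm_power)
    print(f"greedy (0 deg): {farm_power(0):.3f} | best yaw {best} deg: {farm_power(best):.3f}")
    ```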

    In a months-long experiment in a real utility-scale wind farm in India, the predictive model was first validated by testing a wide range of yaw orientation strategies, most of which were intentionally suboptimal. By testing many control strategies, including suboptimal ones, in both the real farm and the model, the researchers could identify the true optimal strategy. Importantly, the model was able to predict the farm power production and the optimal control strategy for most wind conditions tested, giving confidence that the predictions of the model would track the true optimal operational strategy for the farm. This enables the use of the model to design the optimal control strategies for new wind conditions and new wind farms without needing to perform fresh calculations from scratch.

    Then, a second months-long experiment at the same farm, which implemented only the optimal control predictions from the model, proved that the algorithm’s real-world effects could match the overall energy improvements seen in simulations. Averaged over the entire test period, the system achieved a 1.2 percent increase in energy output at all wind speeds, and a 3 percent increase at speeds between 6 and 8 meters per second (about 13 to 18 miles per hour).

    While the test was run at one wind farm, the researchers say the model and cooperative control strategy can be implemented at any existing or future wind farm. Howland estimates that, applied to the world’s existing fleet of wind turbines, a 1.2 percent overall energy improvement would produce more than 31 terawatt-hours of additional electricity per year, approximately equivalent to installing an extra 3,600 wind turbines at no cost. That would translate into some $950 million in extra revenue for wind farm operators per year, he says.
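
    The fleet-wide figures above can be sanity-checked with simple arithmetic. The sketch below uses assumed averages for per-turbine output, household consumption, and wholesale electricity price; these values are not from the study.

    ```python
    # Back-of-the-envelope check of the figures quoted above. The per-turbine
    # output, household consumption, and wholesale price below are assumed
    # averages, not numbers from the study.

    extra_energy_twh = 31                   # reported gain from a 1.2% fleet-wide improvement
    twh_to_kwh = 1e9

    avg_turbine_gwh_per_year = 8.5          # assumed output of a typical utility-scale turbine
    equivalent_turbines = extra_energy_twh * 1000 / avg_turbine_gwh_per_year

    avg_home_kwh_per_year = 10_500          # assumed average household consumption
    homes_powered = extra_energy_twh * twh_to_kwh / avg_home_kwh_per_year

    wholesale_usd_per_mwh = 30              # assumed average wholesale electricity price
    revenue_usd = extra_energy_twh * 1e6 * wholesale_usd_per_mwh

    print(f"~{equivalent_turbines:,.0f} turbines, ~{homes_powered/1e6:.1f} million homes, "
          f"~${revenue_usd/1e9:.2f} billion/yr")
    ```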

    The amount of energy to be gained will vary widely from one wind farm to another, depending on an array of factors including the spacing of the units, the geometry of their arrangement, and the variations in wind patterns at that location over the course of a year. But in all cases, the model developed by this team can provide a clear prediction of exactly what the potential gains are for a given site, Howland says. “The optimal control strategy and the potential gain in energy will be different at every wind farm, which motivated us to develop a predictive wind farm model which can be used widely, for optimization across the wind energy fleet,” he adds.

    But the new system can potentially be adopted quickly and easily, he says. “We don’t require any additional hardware installation. We’re really just making a software change, and there’s a significant potential energy increase associated with it.” Even a 1 percent improvement, he points out, means that in a typical wind farm of about 100 units, operators could get the same output with one fewer turbine, thus saving the costs, usually millions of dollars, associated with purchasing, building, and installing that unit.

    Further, he notes, by reducing wake losses the algorithm could make it possible to place turbines more closely together within future wind farms, therefore increasing the power density of wind energy, saving on land (or sea) footprints. This power density increase and footprint reduction could help to achieve pressing greenhouse gas emission reduction goals, which call for a substantial expansion of wind energy deployment, both on and offshore.

    What’s more, he says, the biggest new area of wind farm development is offshore, and “the impact of wake losses is often much higher in offshore wind farms.” That means the impact of this new approach to controlling those wind farms could be significantly greater.

    The Howland Lab and the international team are continuing to refine the models and to improve the operational instructions derived from them, moving toward autonomous, cooperative control and striving for the greatest possible power output from a given set of conditions, Howland says.

    The research team includes Jesús Bas Quesada, Juan José Pena Martinez, and Felipe Palou Larrañaga of Siemens Gamesa Renewable Energy Innovation and Technology in Navarra, Spain; Neeraj Yadav and Jasvipul Chawla at ReNew Power Private Limited in Haryana, India; Varun Sivaram formerly at ReNew Power Private Limited in Haryana, India and presently at the Office of the U.S. Special Presidential Envoy for Climate, United States Department of State; and John Dabiri at California Institute of Technology. The work was supported by the MIT Energy Initiative and Siemens Gamesa Renewable Energy.

  • Solving a longstanding conundrum in heat transfer

    It is a problem that has beguiled scientists for a century. But, buoyed by a $625,000 Distinguished Early Career Award from the U.S. Department of Energy (DoE), Matteo Bucci, an associate professor in the Department of Nuclear Science and Engineering (NSE), hopes to be close to an answer.

    Tackling the boiling crisis

    Whether you’re heating a pot of water for pasta or designing a nuclear reactor, one phenomenon — boiling — is vital to the efficient execution of both processes.

    “Boiling is a very effective heat transfer mechanism; it’s the way to remove large amounts of heat from the surface, which is why it is used in many high-power density applications,” Bucci says. An example use case: nuclear reactors.

    To the layperson, boiling appears simple — bubbles form and burst, removing heat. But what if so many bubbles form and coalesce that they create a band of vapor that prevents further heat transfer? This phenomenon is known as the boiling crisis. In a nuclear reactor, it can lead to runaway heating and failure of the fuel rods. So “understanding and determining under which conditions the boiling crisis is likely to happen is critical to designing more efficient and cost-competitive nuclear reactors,” Bucci says.
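
    For a rough sense of the heat fluxes at which the boiling crisis occurs, the sketch below evaluates Zuber’s classical critical-heat-flux correlation for saturated water at atmospheric pressure. This textbook estimate is not the model Bucci’s group is developing, and the property values are approximate.

    ```python
    # Minimal sketch: Zuber's classical critical-heat-flux (CHF) correlation,
    # q''_CHF = 0.131 * h_fg * sqrt(rho_v) * (sigma * g * (rho_l - rho_v))**0.25,
    # evaluated for saturated water at 1 atm. This is a textbook estimate, not the
    # diagnostics-driven model described in the article; property values are approximate.

    h_fg = 2.257e6    # latent heat of vaporization, J/kg
    rho_l = 958.0     # liquid density, kg/m^3
    rho_v = 0.60      # vapor density, kg/m^3
    sigma = 0.059     # surface tension, N/m
    g = 9.81          # gravitational acceleration, m/s^2

    q_chf = 0.131 * h_fg * rho_v**0.5 * (sigma * g * (rho_l - rho_v))**0.25
    print(f"Estimated CHF for water at 1 atm: {q_chf/1e6:.2f} MW/m^2")  # ~1.1 MW/m^2
    ```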

    Early work on the boiling crisis dates back nearly a century, to 1926. And while much work has been done, “it is clear that we haven’t found an answer,” Bucci says. The boiling crisis remains a challenge because, while models abound, measuring the related phenomena to prove or disprove these models has been difficult. “[Boiling] is a process that happens on a very, very small length scale and over very, very short times,” Bucci says. “We are not able to observe it at the level of detail necessary to understand what really happens and validate hypotheses.”

    But, over the past few years, Bucci and his team have been developing diagnostics that can measure the phenomena related to boiling and thereby provide much-needed answers to a classic problem. These diagnostics are anchored in infrared thermometry and a technique using visible light. “By combining these two techniques I think we’re going to be ready to answer standing questions related to heat transfer, we can make our way out of the rabbit hole,” Bucci says. The grant from the U.S. DoE for Nuclear Energy Projects will aid this and Bucci’s other research efforts.

    An idyllic Italian childhood

    Tackling difficult problems is not new territory for Bucci, who grew up in the small town of Città di Castello near Florence, Italy. Bucci’s mother was an elementary school teacher. His father used to have a machine shop, which helped develop Bucci’s scientific bent. “I liked LEGOs a lot when I was a kid. It was a passion,” he adds.

    Despite Italy going through a severe pullback from nuclear engineering during his formative years, the subject fascinated Bucci. Job opportunities in the field were uncertain but Bucci decided to dig in. “If I have to do something for the rest of my life, it might as well be something I like,” he jokes. Bucci attended the University of Pisa for undergraduate and graduate studies in nuclear engineering.

    His interest in heat transfer mechanisms took root during his doctoral studies, which he pursued in Paris at the French Alternative Energies and Atomic Energy Commission (CEA). It was there that a colleague suggested he work on the boiling crisis. Bucci set his sights on NSE at MIT and reached out to Professor Jacopo Buongiorno to inquire about research at the institution. Bucci had to fundraise at CEA to conduct research at MIT. He arrived in 2013, just a couple of days before the Boston Marathon bombing, with a round-trip ticket. But Bucci has stayed ever since, going on to become a research scientist and then associate professor at NSE.

    Bucci admits he struggled to adapt to the environment when he first arrived at MIT, but work and friendships with colleagues — he counts NSE’s Guanyu Su and Reza Azizian as among his best friends — helped conquer early worries.

    The integration of artificial intelligence

    In addition to diagnostics for boiling, Bucci and his team are working on ways of integrating artificial intelligence and experimental research. He is convinced that “the integration of advanced diagnostics, machine learning, and advanced modeling tools will blossom in a decade.”

    Bucci’s team is developing an autonomous laboratory for boiling heat transfer experiments. Running on machine learning, the setup decides which experiments to run based on a learning objective the team assigns. “We formulate a question and the machine will answer by optimizing the kinds of experiments that are necessary to answer those questions,” Bucci says. “I honestly think this is the next frontier for boiling.”

    “It’s when you climb a tree and you reach the top, that you realize that the horizon is much more vast and also more beautiful,” Bucci says of his zeal to pursue more research in the field.

    Even as he seeks new heights, Bucci has not forgotten his origins. Commemorating Italy’s hosting of the World Cup in 1990, a series of posters showcasing a soccer field fitted into the Roman Colosseum occupies pride of place in his home and office. Created by Alberto Burri, the posters are of sentimental value: The (now deceased) Italian artist also hailed from Bucci’s hometown — Città di Castello.

  • Making hydropower plants more sustainable

    Growing up on a farm in Texas, siblings Gia Schneider ’99 and Abe Schneider ’02, SM ’03 always had something to do. But every Saturday at 2 p.m., no matter what, the family would go down to a local creek to fish, build rock dams and rope swings, and enjoy nature.

    Eventually the family began going to a remote river in Colorado each summer. The river forked in two; one side was managed by ranchers who destroyed natural features like beaver dams, while the other side remained untouched. The family noticed the fishing was better on the preserved side, which led Abe to try measuring the health of the two river ecosystems. In high school, he co-authored a study showing there were more beneficial insects in the bed of the river with the beaver dams.

    The experience taught both siblings a lesson that has stuck. Today they are the co-founders of Natel Energy, a company attempting to mimic natural river ecosystems with hydropower systems that are more sustainable than conventional hydro plants.

    “The big takeaway for us, and what we’ve been doing all this time, is thinking of ways that infrastructure can help increase the health of our environment — and beaver dams are a good example of infrastructure that wouldn’t otherwise be there that supports other populations of animals,” Abe says. “It’s a motivator for the idea that hydropower can help improve the environment rather than destroy the environment.”

    Through new, fish-safe turbines and other features designed to mimic natural river conditions, the founders say their plants can bridge the gap between power-plant efficiency and environmental sustainability. By retrofitting existing hydropower plants and developing new projects, the founders believe they can supercharge a hydropower industry that is by far the largest source of renewable electricity in the world but has not grown in energy generation as much as wind and solar in recent years.

    “Hydropower plants are built today with only power output in mind, as opposed to the idea that if we want to unlock growth, we have to solve for both efficiency and river sustainability,” Gia says.

    A life’s mission

    The origins of Natel came not from a single event but from a lifetime of events. Abe and Gia’s father was an inventor and renewable energy enthusiast who designed and built the log cabin they grew up in. With no television, the kids’ preferred entertainment was reading books or being outside. The water in their house was pumped by power generated using a mechanical windmill on the north side of the house.

    “We grew up hanging clothes on a line, and it wasn’t because we were too poor to own a dryer, but because everything about our existence and our use of energy was driven by the idea that we needed to make conscious decisions about sustainability,” Abe says.

    One of the things that fascinated both siblings was hydropower. In high school, Abe recalls bugging his friend who was good at math to help him with designs for new hydro turbines.

    Both siblings admit coming to MIT was a major culture shock, but they loved the atmosphere of problem solving and entrepreneurship that permeated the campus. Gia came to MIT in 1995 and majored in chemical engineering, while Abe followed three years later, earning both his bachelor’s and master’s degrees in mechanical engineering.

    All the while, they never lost sight of hydropower. In the 1998 MIT $100K Entrepreneurship Competition (which was the $50K at the time), they pitched an idea for hydropower plants based on a linear turbine design. They were named finalists in the competition, but still wanted more industry experience before starting a company. After graduation, Abe worked as a mechanical engineer and did some consulting work with the operators of small hydropower plants, while Gia worked at the energy desks of a few large finance companies.

    In 2009, the siblings, along with their late father, Daniel, received a small business grant of $200,000 and formally launched Natel Energy.

    Between 2009 and 2019, the founders worked on a linear turbine design that Abe describes as turbines on a conveyor belt. They patented and deployed the system on a few sites, but the problem of ensuring safe fish passage remained.

    Then the founders were doing some modeling that suggested they could achieve high power plant efficiency using an extremely rounded edge on a turbine blade — as opposed to the sharp blades typically used for hydropower turbines. The insight made them realize if they didn’t need sharp blades, perhaps they didn’t need a complex new turbine.

    “It’s so counterintuitive, but we said maybe we can achieve the same results with a propeller turbine, which is the most common kind,” Abe says. “It started out as a joke — or a challenge — and I did some modeling and rapidly realized, ‘Holy cow, this actually could work!’ Instead of having a powertrain with a decade’s worth of complexity, you have a powertrain that has one moving part, and almost no change in loading, in a form factor that the whole industry is used to.”

    The turbine Natel developed features thick blades that allow more than 99 percent of fish to pass through safely, according to third-party tests. Natel’s turbines also allow for the passage of important river sediment and can be coupled with structures that mimic natural features of rivers like log jams, beaver dams, and rock arches.

    “We want the most efficient machine possible, but we also want the most fish-safe machine possible, and that intersection has led to our unique intellectual property,” Gia says.

    Supercharging hydropower

    Natel has already installed two versions of its latest turbine, what it calls the Restoration Hydro Turbine, at existing plants in Maine and Oregon. The company hopes that by the end of this year, two more will be deployed, including one in Europe, a key market for Natel because of its stronger environmental regulations for hydropower plants.

    Since their installation, the founders say, the first two turbines have converted more than 90 percent of the energy available in the water into energy at the turbine, an efficiency comparable to that of conventional turbines.
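
    For readers unfamiliar with hydropower sizing, the standard relation P = η·ρ·g·Q·H shows what a roughly 90 percent water-to-wire efficiency means in practice. The flow rate and head in the sketch below are assumed example values, not Natel figures.

    ```python
    # Minimal sketch of the standard hydropower equation, P = eta * rho * g * Q * H,
    # to show what a ~90 percent conversion efficiency means for a small plant.
    # The flow rate and head below are assumed example values, not Natel figures.

    rho = 1000.0   # water density, kg/m^3
    g = 9.81       # gravitational acceleration, m/s^2
    Q = 10.0       # assumed flow rate, m^3/s
    H = 5.0        # assumed head (height drop), m
    eta = 0.90     # fraction of the water's energy converted at the turbine

    power_watts = eta * rho * g * Q * H
    print(f"~{power_watts/1e3:.0f} kW from a {H} m head at {Q} m^3/s")  # ~441 kW
    ```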

    Looking forward, Natel believes its systems have a significant role to play in boosting the hydropower industry, which is facing increasing scrutiny and environmental regulation that could otherwise close down many existing plants. For example, the founders say that hydropower plants the company could potentially retrofit across the U.S. and Europe have a total capacity of about 30 gigawatts, enough to power millions of homes.

    Natel also has ambitions to build entirely new plants on the many nonpowered dams around the U.S. and Europe. (Currently only 3 percent of the United States’ 80,000 dams are powered.) The founders estimate their systems could generate about 48 gigawatts of new electricity across the U.S. and Europe — the equivalent of more than 100 million solar panels.

    “We’re looking at numbers that are pretty meaningful,” Gia says. “We could substantially add to the existing installed base while also modernizing the existing base to continue to be productive while meeting modern environmental requirements.”

    Overall, the founders see hydropower as a key technology in our transition to sustainable energy, a sentiment echoed by recent MIT research.

    “Hydro today supplies the bulk of electricity reliability services in a lot of these areas — things like voltage regulation, frequency regulation, storage,” Gia says. “That’s key to understand: As we transition to a zero-carbon grid, we need a reliable grid, and hydro has a very important role in supporting that. Particularly as we think about making this transition as quickly as we can, we’re going to need every bit of zero-emission resources we can get.”

  • A better way to quantify radiation damage in materials

    It was just a piece of junk sitting in the back of a lab at the MIT Nuclear Reactor facility, ready to be disposed of. But it became the key to demonstrating a more comprehensive way of detecting atomic-level structural damage in materials — an approach that will aid the development of new materials, and could potentially support the ongoing operation of carbon-emission-free nuclear power plants, which would help alleviate global climate change.

    A tiny titanium nut that had been removed from inside the reactor was just the kind of material needed to prove that this new technique, developed at MIT and at other institutions, provides a way to probe defects created inside materials, including those that have been exposed to radiation, with five times greater sensitivity than existing methods.

    The new approach revealed that much of the damage that takes place inside reactors is at the atomic scale, and as a result is difficult to detect using existing methods. The technique provides a way to directly measure this damage through the way it changes with temperature. And it could be used to measure samples from the currently operating fleet of nuclear reactors, potentially enabling the continued safe operation of plants far beyond their presently licensed lifetimes.

    The findings are reported today in the journal Science Advances in a paper by MIT research specialist and recent graduate Charles Hirst PhD ’22; MIT professors Michael Short, Scott Kemp, and Ju Li; and five others at the University of Helsinki, the Idaho National Laboratory, and the University of California at Irvine.

    Rather than directly observing the physical structure of a material in question, the new approach looks at the amount of energy stored within that structure. Any disruption to the orderly structure of atoms within the material, such as that caused by radiation exposure or by mechanical stresses, actually imparts excess energy to the material. By observing and quantifying that energy difference, it’s possible to calculate the total amount of damage within the material — even if that damage is in the form of atomic-scale defects that are too small to be imaged with microscopes or other detection methods.

    The principle behind this method had been worked out in detail through calculations and simulations. But it was the actual tests on that one titanium nut from the MIT nuclear reactor that provided the proof — and thus opened the door to a new way of measuring damage in materials.

    The method they used is called differential scanning calorimetry. As Hirst explains, this is similar in principle to the calorimetry experiments many students carry out in high school chemistry classes, where they measure how much energy it takes to raise the temperature of a gram of water by one degree. The system the researchers used was “fundamentally the exact same thing, measuring energetic changes. … I like to call it just a fancy furnace with a thermocouple inside.”

    The scanning part has to do with gradually raising the temperature a bit at a time and seeing how the sample responds, and the differential part refers to the fact that two identical chambers are measured at once, one empty, and one containing the sample being studied. The difference between the two reveals details of the energy of the sample, Hirst explains.

    “We raise the temperature from room temperature up to 600 degrees Celsius, at a constant rate of 50 degrees per minute,” he says. Compared to the empty vessel, “your material will naturally lag behind because you need energy to heat your material. But if there are changes in the energy inside the material, that will change the temperature. In our case, there was an energy release when the defects recombine, and then it will get a little bit of a head start on the furnace … and that’s how we are measuring the energy in our sample.”
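
    The data reduction behind this measurement can be sketched in a few lines: the energy released by recombining defects appears as a peak in the heat-flow difference between the sample and the empty reference, and the area of that peak per gram of sample gives the stored energy. The signal below is synthetic, with an assumed peak temperature, width, and magnitude.

    ```python
    import numpy as np

    # Toy sketch of the data reduction behind differential scanning calorimetry:
    # the energy stored in defects appears as a peak in the heat-flow difference
    # between the sample and the empty reference pan, and its area (per gram of
    # sample) is the stored energy released as defects recombine. The signal
    # below is synthetic; peak temperature, width, and magnitude are assumed.

    heating_rate = 50.0 / 60.0                 # 50 degrees C per minute, in K/s
    temps = np.linspace(25.0, 600.0, 2000)     # temperature ramp, deg C
    time = (temps - temps[0]) / heating_rate   # elapsed time at each temperature, s

    baseline = 0.002                           # W, steady lag of sample vs. empty pan (assumed)
    peak = 0.010 * np.exp(-((temps - 350.0) / 30.0) ** 2)  # W, assumed recombination peak
    heat_flow_difference = baseline + peak     # measured sample-minus-reference signal

    sample_mass_g = 0.050                      # assumed 50 mg specimen
    stored_energy = np.trapz(heat_flow_difference - baseline, time) / sample_mass_g
    print(f"Stored energy released by defects: {stored_energy:.2f} J/g")
    ```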

    Hirst, who carried out the work over a five-year span as his doctoral thesis project, found that, contrary to what had been believed, two different mechanisms were involved in the relaxation of defects in the irradiated titanium at the studied temperatures, revealed by two separate peaks in the calorimetry data. “Instead of one process occurring, we clearly saw two, and each of them corresponds to a different reaction that’s happening in the material,” he says.

    They also found that textbook explanations of how radiation damage behaves with temperature weren’t accurate, because previous tests had mostly been carried out at extremely low temperatures and then extrapolated to the higher temperatures of real-life reactor operations. “People weren’t necessarily aware that they were extrapolating, even though they were, completely,” Hirst says.

    “The fact is that our common-knowledge basis for how radiation damage evolves is based on extremely low-temperature electron radiation,” adds Short. “It just became the accepted model, and that’s what’s taught in all the books. It took us a while to realize that our general understanding was based on a very specific condition, designed to elucidate science, but generally not applicable to conditions in which we actually want to use these materials.”

    Now, the new method can be applied “to materials plucked from existing reactors, to learn more about how they are degrading with operation,” Hirst says.

    “The single biggest thing the world can do in order to get cheap, carbon-free power is to keep current reactors on the grid. They’re already paid for, they’re working,” Short adds. But to make that possible, “the only way we can keep them on the grid is to have more certainty that they will continue to work well.” And that’s where this new way of assessing damage comes into play.

    While most nuclear power plants have been licensed for 40 to 60 years of operation, “we’re now talking about running those same assets out to 100 years, and that depends almost fully on the materials being able to withstand the most severe accidents,” Short says. Using this new method, “we can inspect them and take them out before something unexpected happens.”

    In practice, plant operators could remove a tiny sample of material from critical areas of the reactor, and analyze it to get a more complete picture of the condition of the overall reactor. Keeping existing reactors running is “the single biggest thing we can do to keep the share of carbon-free power high,” Short stresses. “This is one way we think we can do that.”

    Sergei Dudarev, a fellow at the United Kingdom Atomic Energy Authority who was not associated with this work, says this “is likely going to be impactful, as it confirms, in a nice systematic manner, supported both by experiment and simulations, the unexpectedly significant part played by the small invisible defects in microstructural evolution of materials exposed to irradiation.”

    The process is not just limited to the study of metals, nor is it limited to damage caused by radiation, the researchers say. In principle, the method could be used to measure other kinds of defects in materials, such as those caused by stresses or shockwaves, and it could be applied to materials such as ceramics or semiconductors as well.

    In fact, Short says, metals are the most difficult materials to measure with this method, and early on other researchers kept asking why this team was focused on damage to metals. That was partly because reactor components tend to be made of metal, and also because “it’s the hardest, so, if we crack this problem, we have a tool to crack them all!”

    Measuring defects in other kinds of materials can be up to 10,000 times easier than in metals, he says. “If we can do this with metals, we can make this extremely, ubiquitously applicable.” And all of it enabled by a small piece of junk that was sitting at the back of a lab.

    The research team included Fredric Granberg and Kai Nordlund at the University of Helsinki in Finland; Boopathy Kombaiah and Scott Middlemas at Idaho National Laboratory; and Penghui Cao at the University of California at Irvine. The work was supported by the U.S. National Science Foundation, an Idaho National Laboratory research grant, and a Euratom Research and Training program grant.

  • New hardware offers faster computation for artificial intelligence, with much less energy

    As scientists push the boundaries of machine learning, the amount of time, energy, and money required to train increasingly complex neural network models is skyrocketing. A new area of artificial intelligence called analog deep learning promises faster computation with a fraction of the energy usage.

    Programmable resistors are the key building blocks in analog deep learning, just like transistors are the core elements for digital processors. By repeating arrays of programmable resistors in complex layers, researchers can create a network of analog artificial “neurons” and “synapses” that execute computations just like a digital neural network. This network can then be trained to achieve complex AI tasks like image recognition and natural language processing.

    A multidisciplinary team of MIT researchers set out to push the speed limits of a type of human-made analog synapse that they had previously developed. They utilized a practical inorganic material in the fabrication process that enables their devices to run 1 million times faster than previous versions, which is also about 1 million times faster than the synapses in the human brain.

    Moreover, this inorganic material also makes the resistor extremely energy-efficient. Unlike materials used in the earlier version of their device, the new material is compatible with silicon fabrication techniques. This change has enabled fabricating devices at the nanometer scale and could pave the way for integration into commercial computing hardware for deep-learning applications.

    “With that key insight, and the very powerful nanofabrication techniques we have at MIT.nano, we have been able to put these pieces together and demonstrate that these devices are intrinsically very fast and operate with reasonable voltages,” says senior author Jesús A. del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science (EECS). “This work has really put these devices at a point where they now look really promising for future applications.”

    “The working mechanism of the device is electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. Because we are working with very thin devices, we could accelerate the motion of this ion by using a strong electric field, and push these ionic devices to the nanosecond operation regime,” explains senior author Bilge Yildiz, the Breene M. Kerr Professor in the departments of Nuclear Science and Engineering and Materials Science and Engineering.

    “The action potential in biological cells rises and falls with a timescale of milliseconds, since the voltage difference of about 0.1 volt is constrained by the stability of water,” says senior author Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering. “Here we apply up to 10 volts across a special solid glass film of nanoscale thickness that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices.”

    These programmable resistors vastly increase the speed at which a neural network is trained, while drastically reducing the cost and energy to perform that training. This could help scientists develop deep learning models much more quickly, which could then be applied in uses like self-driving cars, fraud detection, or medical image analysis.

    “Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft,” adds lead author and MIT postdoc Murat Onen.

    Co-authors include Frances M. Ross, the Ellen Swallow Richards Professor in the Department of Materials Science and Engineering; postdocs Nicolas Emond and Baoming Wang; and Difei Zhang, an EECS graduate student. The research is published today in Science.

    Accelerating deep learning

    Analog deep learning is faster and more energy-efficient than its digital counterpart for two main reasons. “First, computation is performed in memory, so enormous loads of data are not transferred back and forth from memory to a processor.” Second, analog processors conduct operations in parallel. If the matrix size expands, an analog processor doesn’t need more time to complete new operations because all computation occurs simultaneously.

    The key element of MIT’s new analog processor technology is known as a protonic programmable resistor. These resistors, which are measured in nanometers (one nanometer is one billionth of a meter), are arranged in an array, like a chess board.

    In the human brain, learning happens due to the strengthening and weakening of connections between neurons, called synapses. Deep neural networks have long adopted this strategy, where the network weights are programmed through training algorithms. In the case of this new processor, increasing and decreasing the electrical conductance of protonic resistors enables analog machine learning.

    The conductance is controlled by the movement of protons. To increase the conductance, more protons are pushed into a channel in the resistor, while to decrease conductance protons are taken out. This is accomplished using an electrolyte (similar to that of a battery) that conducts protons but blocks electrons.
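
    The way such an array computes and learns can be sketched abstractly: with the weights stored as conductances, applying input voltages to the rows yields output currents on the columns in a single parallel step, and training nudges each conductance up or down, standing in for protons pushed into or pulled out of each device. The array size and programming step below are illustrative assumptions, not device parameters from the paper.

    ```python
    import numpy as np

    # Toy sketch of how a crossbar of programmable resistors performs the core
    # neural-network operation in the analog domain: with weights stored as
    # conductances G (siemens), applying input voltages V to the rows gives
    # column currents I = G.T @ V in one step (Ohm's and Kirchhoff's laws).
    # Training nudges each conductance up or down, here by an assumed fixed
    # step per programming pulse, standing in for protons pushed into or
    # pulled out of each device. Sizes and steps are illustrative assumptions.

    rng = np.random.default_rng(0)
    G = rng.uniform(1e-6, 5e-6, size=(4, 3))    # conductance matrix: 4 inputs x 3 outputs

    def analog_matvec(G, v):
        """Column currents produced when voltage vector v drives the rows."""
        return G.T @ v                          # one parallel analog step, regardless of size

    def program(G, row, col, pulses):
        """Nudge one device's conductance; positive pulses add protons, negative remove them."""
        step = 1e-7                             # assumed conductance change per pulse, S
        G[row, col] = np.clip(G[row, col] + pulses * step, 1e-7, 1e-5)

    v = np.array([0.1, 0.2, 0.0, 0.3])          # input voltages, V
    print("output currents (A):", analog_matvec(G, v))
    program(G, row=0, col=1, pulses=+5)         # strengthen one "synapse"
    print("after programming:   ", analog_matvec(G, v))
    ```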

    To develop a super-fast and highly energy-efficient programmable protonic resistor, the researchers looked to different materials for the electrolyte. While other devices used organic compounds, Onen focused on inorganic phosphosilicate glass (PSG).

    PSG is basically silicon dioxide, which is the powdery desiccant material found in tiny bags that come in the box with new furniture to remove moisture. It is studied as a proton conductor under humidified conditions for fuel cells. It is also the most well-known oxide used in silicon processing. To make PSG, a tiny bit of phosphorus is added to the silicon to give it special characteristics for proton conduction.

    Onen hypothesized that an optimized PSG could have a high proton conductivity at room temperature without the need for water, which would make it an ideal solid electrolyte for this application. He was right.

    Surprising speed

    PSG enables ultrafast proton movement because it contains a multitude of nanometer-sized pores whose surfaces provide paths for proton diffusion. It can also withstand very strong, pulsed electric fields. This is critical, Onen explains, because applying more voltage to the device enables protons to move at blinding speeds.

    “The speed certainly was surprising. Normally, we would not apply such extreme fields across devices, in order to not turn them into ash. But instead, protons ended up shuttling at immense speeds across the device stack, specifically a million times faster compared to what we had before. And this movement doesn’t damage anything, thanks to the small size and low mass of protons. It is almost like teleporting,” he says.

    “The nanosecond timescale means we are close to the ballistic or even quantum tunneling regime for the proton, under such an extreme field,” adds Li.

    Because the protons don’t damage the material, the resistor can run for millions of cycles without breaking down. This new electrolyte enabled a programmable protonic resistor that is a million times faster than their previous device and can operate effectively at room temperature, which is important for incorporating it into computing hardware.

    Thanks to the insulating properties of PSG, almost no electric current passes through the material as protons move. This makes the device extremely energy efficient, Onen adds.

    Now that they have demonstrated the effectiveness of these programmable resistors, the researchers plan to reengineer them for high-volume manufacturing, says del Alamo. Then they can study the properties of resistor arrays and scale them up so they can be embedded into systems.

    At the same time, they plan to study the materials to remove bottlenecks that limit the voltage that is required to efficiently transfer the protons to, through, and from the electrolyte.

    “Another exciting direction that these ionic devices can enable is energy-efficient hardware to emulate the neural circuits and synaptic plasticity rules that are deduced in neuroscience, beyond analog deep neural networks. We have already started such a collaboration with neuroscience, supported by the MIT Quest for Intelligence,” adds Yildiz.

    “The collaboration that we have is going to be essential to innovate in the future. The path forward is still going to be very challenging, but at the same time it is very exciting,” del Alamo says.

    “Intercalation reactions such as those found in lithium-ion batteries have been explored extensively for memory devices. This work demonstrates that proton-based memory devices deliver impressive and surprising switching speed and endurance,” says William Chueh, associate professor of materials science and engineering at Stanford University, who was not involved with this research. “It lays the foundation for a new class of memory devices for powering deep learning algorithms.”

    “This work demonstrates a significant breakthrough in biologically inspired resistive-memory devices. These all-solid-state protonic devices are based on exquisite atomic-scale control of protons, similar to biological synapses but at orders of magnitude faster rates,” says Elizabeth Dickey, the Teddy & Wilton Hawkins Distinguished Professor and head of the Department of Materials Science and Engineering at Carnegie Mellon University, who was not involved with this work. “I commend the interdisciplinary MIT team for this exciting development, which will enable future-generation computational devices.”

    This research is funded, in part, by the MIT-IBM Watson AI Lab.

  • Silk offers an alternative to some microplastics

    Microplastics, tiny particles of plastic that are now found worldwide in the air, water, and soil, are increasingly recognized as a serious pollution threat, and have been found in the bloodstream of animals and people around the world.

    Some of these microplastics are intentionally added to a variety of products, including agricultural chemicals, paints, cosmetics, and detergents — amounting to an estimated 50,000 tons a year in the European Union alone, according to the European Chemicals Agency. The EU has already declared that these added, nonbiodegradable microplastics must be eliminated by 2025, so the search is on for suitable replacements, which do not currently exist.

    Now, a team of scientists at MIT and elsewhere has developed a system based on silk that could provide an inexpensive and easily manufactured substitute. The new process is described in a paper in the journal Small, written by MIT postdoc Muchun Liu, MIT professor of civil and environmental engineering Benedetto Marelli, and five others at the chemical company BASF in Germany and the U.S.

    The microplastics widely used in industrial products generally protect some specific active ingredient (or ingredients) from being degraded by exposure to air or moisture, until the time they are needed. They provide a slow release of the active ingredient for a targeted period of time and minimize adverse effects to its surroundings. For example, vitamins are often delivered in the form of microcapsules packed into a pill or capsule, and pesticides and herbicides are similarly enveloped. But the materials used today for such microencapsulation are plastics that persist in the environment for a long time. Until now, there has been no practical, economical substitute available that would biodegrade naturally.

    Much of the burden of environmental microplastics comes from other sources, such as the degradation over time of larger plastic objects like bottles and packaging, and the wear of car tires. Each of these sources may require its own kind of solution for reducing its spread, Marelli says. The European Chemicals Agency has estimated that intentionally added microplastics represent approximately 10 to 15 percent of the total amount in the environment, but this source may be relatively easy to address using this nature-based biodegradable replacement, he says.

    “We cannot solve the whole microplastics problem with one solution that fits them all,” he says. “Ten percent of a big number is still a big number. … We’ll solve climate change and pollution of the world one percent at a time.”

    Unlike the high-quality silk threads used for fine fabrics, the silk protein used in the new alternative material is widely available and less expensive, Liu says. While silkworm cocoons must be painstakingly unwound to produce the fine threads needed for fabric, for this use, non-textile-quality cocoons can be used, and the silk fibers can simply be dissolved using a scalable water-based process. The processing is so simple and tunable that the resulting material can be adapted to work on existing manufacturing equipment, potentially providing a simple “drop in” solution using existing factories.

    Silk is recognized as safe for food or medical use, as it is nontoxic and degrades naturally in the body. In lab tests, the researchers demonstrated that the silk-based coating material could be used in existing, standard spray-based manufacturing equipment to make a standard water-soluble microencapsulated herbicide product, which was then tested in a greenhouse on a corn crop. The test showed it worked even better than an existing commercial product, inflicting less damage to the plants, Liu says.

    While other groups have proposed degradable encapsulation materials that may work at a small laboratory scale, Marelli says, “there is a strong need to achieve encapsulation of high-content actives to open the door to commercial use. The only way to have an impact is where we can not only replace a synthetic polymer with a biodegradable counterpart, but also achieve performance that is the same, if not better.”

    The secret to making the material compatible with existing equipment, Liu explains, is the tunability of the silk. By precisely adjusting the arrangement of the silk’s polymer chains and adding a surfactant, it is possible to fine-tune the properties of the resulting coatings once they dry out and harden. The material can be hydrophobic (water-repelling), even though it is made and processed in a water solution, or hydrophilic (water-attracting), or anywhere in between, and for a given application it can be made to match the characteristics of the material it is being used to replace.

    To arrive at a practical solution, Liu had to develop a way of freezing the droplets of encapsulated material as they formed, so she could study the formation process in detail. She did this using a special spray-freezing system, and was able to observe exactly how the encapsulation works in order to control it better. Some of the encapsulated “payload” materials, whether pesticides, nutrients, or enzymes, are water-soluble and some are not, and they interact in different ways with the coating material.

    “To encapsulate different materials, we have to study how the polymer chains interact and whether they are compatible with different active materials in suspension,” she says. The payload material and the coating material are mixed together in a solution and then sprayed. As droplets form, the payload tends to be embedded in a shell of the coating material, whether that’s the original synthetic plastic or the new silk material.

    The new method can make use of low-grade silk that is unusable for fabrics, large quantities of which are currently discarded because they have no significant uses, Liu says. It can also use discarded silk fabric, diverting that material from landfills.

    Currently, 90 percent of the world’s silk production takes place in China, Marelli says, but that’s largely because China has perfected the production of the high-quality silk threads needed for fabrics. But because this process uses bulk silk and has no need for that level of quality, production could easily be ramped up in other parts of the world to meet local demand if this process becomes widely used, he says.

    “This elegant and clever study describes a sustainable and biodegradable silk-based replacement for microplastic encapsulants, which are a pressing environmental challenge,” says Alon Gorodetsky, an associate professor of chemical and biomolecular engineering at the University of California at Irvine, who was not associated with this research. “The modularity of the described materials and the scalability of the manufacturing processes are key advantages that portend well for translation to real-world applications.”

    This process “represents a potentially highly significant advance in active ingredient delivery for a range of industries, particularly agriculture,” says Jason White, director of the Connecticut Agricultural Experiment Station, who also was not associated with this work. “Given the current and future challenges related to food insecurity, agricultural production, and a changing climate, novel strategies such as this are greatly needed.”

    The research team also included Pierre-Eric Millard, Ophelie Zeyons, Henning Urch, Douglas Findley and Rupert Konradi from the BASF corporation, in Germany and in the U.S. The work was supported by BASF through the Northeast Research Alliance (NORA).

  • Fusion’s newest ambassador

    When high school senior Tuba Balta emailed MIT Plasma Science and Fusion Center (PSFC) Director Dennis Whyte in February, she was not certain she would get a response. As part of her final semester at BASIS Charter School, in Washington, she had been searching unsuccessfully for someone to sponsor an internship in fusion energy, a topic that had recently begun to fascinate her because “it’s not figured out yet.” Time was running out if she was to include the internship as part of her senior project.

    “I never say ‘no’ to a student,” says Whyte, who felt she could provide a youthful perspective on communicating the science of fusion to the general public.

    Posters explaining the basics of fusion science were being considered for the walls of a PSFC lounge area, a space used to welcome visitors who might not know much about the center’s focus: What is fusion? What is plasma? What is magnetic confinement fusion? What is a tokamak?

    Why couldn’t Balta be tasked with coming up with text for these posters, written specifically to be understandable, even intriguing, to her peers?

    Meeting the team

    Although most of the internship would be virtual, Balta visited MIT to meet Whyte and others who would guide her progress. A tour of the center showed her the past and future of the PSFC, one lab area revealing on her left the remains of the decades-long Alcator C-Mod tokamak and on her right the testing area for new superconducting magnets crucial to SPARC, designed in collaboration with MIT spinoff Commonwealth Fusion Systems.

    With Whyte, graduate student Rachel Bielajew, and Outreach Coordinator Paul Rivenberg guiding her content and style, Balta focused on one of eight posters each week. Her school also required her to keep a weekly blog of her progress, detailing what she was learning in the process of creating the posters.

    Finding her voice

    Balta admits that she was not looking forward to this part of the school assignment. But she decided to have fun with it, adopting an enthusiastic and conversational tone, as if she were sitting with friends around a lunch table. Each week, she was able to work out what she was composing for her posters and her final project by trying it out on her friends in the blog.

    Her posts won praise from her schoolmates for their clarity, as when in Week 3 she explained the concept of turbulence as it relates to fusion research, sending her readers to their kitchen faucets to experiment with the pressure and velocity of running tap water.

    The voice she found through her blog served her well during her final presentation about fusion at a school expo for classmates, parents, and the general public.

    “Most people are intimidated by the topic, which they shouldn’t be,” says Balta. “And it just made me happy to help other people understand it.”

    Her favorite part of the internship? “Getting to talk to people whose papers I was reading and ask them questions. Because when it comes to fusion, you can’t just look it up on Google.”

    Awaiting her first year at the University of Chicago, Balta reflects on the team spirit she experienced in communicating with researchers at the PSFC.

    “I think that was one of my big takeaways,” she says, “that you have to work together. And you should, because you’re always going to be missing some piece of information; but there’s always going to be somebody else who has that piece, and we can all help each other out.”

  • Four researchers with MIT ties earn Schmidt Science Fellowships

    Four researchers with MIT ties — Juncal Arbelaiz, Xiangkun (Elvis) Cao, Sandya Subramanian, and Hannah Zlotnick ’17 — have been honored with competitive Schmidt Science Fellowships.

    Created in 2017, the fellows program aims to bring together the world’s brightest minds “to solve society’s toughest challenges.”

    The four MIT-affiliated researchers are among 29 Schmidt Science Fellows from around the world who will receive postdoctoral support for either one or two years with an annual stipend of $100,000, along with individualized mentoring and participation in the program’s Global Meeting Series. The fellows will also have opportunities to engage with thought-leaders from science, business, policy, and society. According to the award announcement, the fellows are expected to pursue research that shifts from the focus of their PhDs, to help expand and enhance their futures as scientific leaders.

    Juncal Arbelaiz is a PhD candidate in applied mathematics at MIT, who is completing her doctorate this summer. Her doctoral research at MIT is advised by Ali Jadbabaie, the JR East Professor of Engineering and head of the Department of Civil and Environmental Engineering; Anette Hosoi, the Neil and Jane Pappalardo Professor of Mechanical Engineering and associate dean of the School of Engineering; and Bassam Bamieh, professor of mechanical engineering and associate director of the Center for Control, Dynamical Systems, and Computation at the University of California at Santa Barbara. Arbelaiz’s research revolves around the design of optimal decentralized intelligence for spatially-distributed dynamical systems.

    “I cannot think of a better way to start my independent scientific career. I feel very excited and grateful for this opportunity,” says Arbelaiz. With her fellowship, she will enlist systems biology to explore how the nervous system encodes and processes sensory information to address future safety-critical artificial intelligence applications. “The Schmidt Science Fellowship will provide me with a unique opportunity to work at the intersection of biological and machine intelligence for two years and will be a steppingstone towards my longer-term objective of becoming a researcher in bio-inspired machine intelligence,” she says.

    Xiangkun (Elvis) Cao is currently a postdoc in the lab of T. Alan Hatton, the Ralph Landau Professor in Chemical Engineering, and an Impact Fellow at the MIT Climate and Sustainability Consortium. Cao received his PhD in mechanical engineering from Cornell University in 2021, during which he focused on microscopic precision in the simultaneous delivery of light and fluids by optofluidics, with advances relevant to health and sustainability applications. As a Schmidt Science Fellow, he plans to be co-advised by Hatton on carbon capture, and Ted Sargent, professor of chemistry at Northwestern University, on carbon utilization. Cao is passionate about integrated carbon capture and utilization (CCU) from molecular to process levels, machine learning to inspire smart CCU, and the nexus of technology, business, and policy for CCU.

    “The Schmidt Science Fellowship provides the perfect opportunity for me to work across disciplines to study integrated carbon capture and utilization from molecular to process levels,” Cao explains. “My vision is that by integrating carbon capture and utilization, we can concurrently make scientific discoveries and unlock economic opportunities while mitigating global climate change. This way, we can turn our carbon liability into an asset.”

    Sandya Subramanian, a 2021 PhD graduate of the Harvard-MIT Program in Health Sciences and Technology (HST) in the area of medical engineering and medical physics, is currently a postdoc at Stanford Data Science. She is focused on the topics of biomedical engineering, statistics, machine learning, neuroscience, and health care. Her research is on developing new technologies and methods to study the interactions between the brain, the autonomic nervous system, and the gut. “I’m extremely honored to receive the Schmidt Science Fellowship and to join the Schmidt community of leaders and scholars,” says Subramanian. “I’ve heard so much about the fellowship and the fact that it can open doors and give people confidence to pursue challenging or unique paths.”

    According to Subramanian, the autonomic nervous system and its interactions with other body systems are poorly understood but thought to be involved in several disorders, such as functional gastrointestinal disorders, Parkinson’s disease, diabetes, migraines, and eating disorders. The goal of her research is to improve our ability to monitor and quantify these physiologic processes. “I’m really interested in understanding how we can use physiological monitoring technologies to inform clinical decision-making, especially around the autonomic nervous system, and I look forward to continuing the work that I’ve recently started at Stanford as Schmidt Science Fellow,” she says. “A huge thank you to all of the mentors, colleagues, friends, and leaders I had the pleasure of meeting and working with at HST and MIT; I couldn’t have done this without everything I learned there.”

    Hannah Zlotnick ’17 attended MIT for her undergraduate studies, majoring in biological engineering with a minor in mechanical engineering. At MIT, Zlotnick was a student-athlete on the women’s varsity soccer team, a UROP student in Alan Grodzinsky’s laboratory, and a member of Pi Beta Phi. For her PhD, Zlotnick attended the University of Pennsylvania, and worked in Robert Mauck’s laboratory within the departments of Bioengineering and Orthopaedic Surgery.

    Zlotnick’s PhD research focused on harnessing remote forces, such as magnetism or gravity, to enhance engineered cartilage and osteochondral repair both in vitro and in large animal models. Zlotnick now plans to pivot to the field of biofabrication to create tissue models of the knee joint to assess potential therapeutics for osteoarthritis. “I am humbled to be a part of the Schmidt Science Fellows community, and excited to venture into the field of biofabrication,” Zlotnick says. “Hopefully this work uncovers new therapies for patients with inflammatory joint diseases.”