More stories

  • The boiling crisis — and how to avoid it

    It’s rare for a pre-teen to become enamored with thermodynamics, but those consumed by such a passion may consider themselves lucky to end up at a place like MIT. Madhumitha Ravichandran certainly does. A PhD student in Nuclear Science and Engineering (NSE), Ravichandran first encountered the laws of thermodynamics as a middle school student in Chennai, India. “They made complete sense to me,” she says. “While looking at the refrigerator at home, I wondered if I might someday build energy systems that utilized these same principles. That’s how it started, and I’ve sustained that interest ever since.”

    She’s now drawing on her knowledge of thermodynamics in research carried out in the laboratory of NSE Assistant Professor Matteo Bucci, her doctoral supervisor. Ravichandran and Bucci are gaining key insights into the “boiling crisis” — a problem that has long plagued the energy industry.

    Ravichandran was well prepared for this work by the time she arrived at MIT in 2017. As an undergraduate at India’s Sastra University, she pursued research on “two-phase flows,” examining the transitions water undergoes between its liquid and gaseous forms. She continued to study droplet evaporation and related phenomena during an internship in early 2017 in the Bucci Lab. That was an eye-opening experience, Ravichandran explains. “Back at my university in India, only 2 to 3 percent of the mechanical engineering students were women, and there were no women on the faculty. It was the first time I had faced social inequities because of my gender, and I went through some struggles, to say the least.”

    MIT offered a welcome contrast. “The amount of freedom I was given made me extremely happy,” she says. “I was always encouraged to explore my ideas, and I always felt included.” She was doubly happy because, midway through the internship, she learned that she’d been accepted to MIT’s graduate program.

    As a PhD student, her research has followed a similar path. She continues to study boiling and heat transfer, but Bucci gave this work some added urgency. They’re now investigating the aforementioned boiling crisis, which affects nuclear reactors and other kinds of power plants that rely on steam generation to drive turbines. In a light water nuclear reactor, water is heated by fuel rods in which nuclear fission has occurred. Heat removal is most efficient when the water circulating past the rods boils. However, if too many bubbles form on the surface, enveloping the fuel rods in a layer of vapor, heat transfer is greatly reduced. That not only diminishes power generation, it can also be dangerous because the fuel rods must be continuously cooled to avoid a dreaded meltdown accident.

    Nuclear plants operate at low power ratings to provide an ample safety margin and thereby prevent such a scenario from occurring. Ravichandran believes these standards may be overly cautious, because researchers aren’t yet sure which conditions bring about the boiling crisis. This hurts the economic viability of nuclear power, she says, at a time when we desperately need carbon-free power sources. But Ravichandran and other researchers in the Bucci Lab are starting to fill some major gaps in our understanding.

    They initially ran experiments to determine how quickly bubbles form when water hits a hot surface, how big the bubbles get, how long they grow, and how the surface temperature changes. “A typical experiment lasted two minutes, but it took more than three weeks to pick out every bubble that formed and track its growth and evolution,” Ravichandran explains.

    To streamline this process, she and Bucci are implementing a machine learning approach, based on neural network technology. Neural networks are good at recognizing patterns, including those associated with bubble nucleation. “These networks are data hungry,” Ravichandran says. “The more data they’re fed, the better they perform.” The networks were trained on experimental results pertaining to bubble formation on different surfaces; the networks were then tested on surfaces for which the NSE researchers had no data and didn’t know what to expect.
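
    As a rough illustration of what such an image-based classifier might look like, here is a minimal sketch that labels cropped video frames as “bubble” or “no bubble.” The article does not specify the lab’s model, so the framework (PyTorch), layer sizes, and training step below are purely illustrative assumptions:

    ```python
    # Minimal sketch (not the Bucci Lab's actual model): a small convolutional
    # network that classifies cropped video frames as "bubble" vs. "no bubble".
    # Framework, architecture, and hyperparameters are illustrative assumptions.
    import torch
    import torch.nn as nn

    class BubbleClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
                nn.Linear(64, 2),  # two classes: bubble / no bubble
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    # Train on labeled 64x64 grayscale crops from past experiments, then apply
    # the model to frames from surfaces it has never seen.
    model = BubbleClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    frames = torch.randn(8, 1, 64, 64)      # placeholder batch of image crops
    labels = torch.randint(0, 2, (8,))      # placeholder 0/1 labels

    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()
    ```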

    After gaining experimental validation of the output from the machine learning models, the team is now trying to get these models to make reliable predictions as to when the boiling crisis itself will occur. The ultimate goal is to have a fully autonomous system that can not only predict the boiling crisis, but also show why it happens and automatically shut down experiments before things go too far and lab equipment starts melting.

    In the meantime, Ravichandran and Bucci have made some important theoretical advances, which they report in a paper recently published in Applied Physics Letters. There had been a debate in the nuclear engineering community as to whether the boiling crisis is caused by bubbles covering the fuel rod surface or by bubbles growing on top of each other, extending outward from the surface. Ravichandran and Bucci determined that it is a surface-level phenomenon. In addition, they’ve identified the three main factors that trigger the boiling crisis. First, there’s the number of bubbles that form over a given surface area and, second, the average bubble size. The third factor is the product of the bubble frequency (the number of bubbles forming within a second at a given site) and the time it takes for a bubble to reach its full size.
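
    To see how those three quantities might be pulled out of bubble-tracking data, here is a minimal sketch under assumed data structures; the field names and toy values are invented for illustration and do not come from the Applied Physics Letters paper:

    ```python
    # Illustrative only: compute the three factors described above from a list
    # of tracked bubbles. Field names and sample values are made up.
    from dataclasses import dataclass

    @dataclass
    class Bubble:
        site_id: int          # which nucleation site the bubble grew from
        max_radius_mm: float  # size when fully grown
        growth_time_s: float  # time to reach full size

    bubbles = [
        Bubble(0, 0.42, 0.011), Bubble(0, 0.39, 0.012),
        Bubble(1, 0.47, 0.010), Bubble(2, 0.40, 0.013),
    ]
    surface_area_mm2 = 25.0
    observation_time_s = 2.0

    # Factor 1: number of active nucleation sites per unit heated area
    n_sites = len({b.site_id for b in bubbles})
    site_density = n_sites / surface_area_mm2

    # Factor 2: average bubble size
    avg_radius = sum(b.max_radius_mm for b in bubbles) / len(bubbles)

    # Factor 3: bubble frequency at a site times the time a bubble takes to grow
    freq_per_site = len(bubbles) / n_sites / observation_time_s
    avg_growth_time = sum(b.growth_time_s for b in bubbles) / len(bubbles)
    factor3 = freq_per_site * avg_growth_time

    print(site_density, avg_radius, factor3)
    ```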

    Ravichandran is happy to have shed some new light on this issue but acknowledges that there’s still much work to be done. Although her research agenda is ambitious and nearly all consuming, she never forgets where she came from and the sense of isolation she felt while studying engineering as an undergraduate. She has, on her own initiative, been mentoring female engineering students in India, providing both research guidance and career advice.

    “I sometimes feel there was a reason I went through those early hardships,” Ravichandran says. “That’s what made me decide that I want to be an educator.” She’s also grateful for the opportunities that have opened up for her since coming to MIT. A recipient of a 2021-22 MathWorks Engineering Fellowship, she says, “now it feels like the only limits on me are those that I’ve placed on myself.”

  • Climate and sustainability classes expand at MIT

    In fall 2019, a new class, 6.S898/12.S992 (Climate Change Seminar), arrived at MIT. It was, at the time, the only course in the Department of Electrical Engineering and Computer Science (EECS) to tackle the science of climate change. The class covered climate models and simulations alongside atmospheric science, policy, and economics.

    Ron Rivest, MIT Institute Professor of Computer Science, was one of the class’s three instructors, with Alan Edelman of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and John Fernández of the Department of Urban Studies and Planning. “Computer scientists have much to contribute to climate science,” Rivest says. “In particular, the modeling and simulation of climate can benefit from advances in computer science.”

    Rivest is one of many MIT faculty members who have been working in recent years to bring topics in climate, sustainability, and the environment to students in a growing variety of fields. And students have said they want this trend to continue.

    “Sustainability is something that touches all disciplines,” says Megan Xu, a rising senior in biological engineering and advisory chair of the Undergraduate Association Sustainability Committee. “As students who have grown up knowing that climate change is real and witnessed climate disaster after disaster, we know this is a huge problem that needs to be addressed by our generation.”

    Expanding the course catalog

    As education program manager at the MIT Environmental Solutions Initiative, Sarah Meyers has repeatedly had a hand in launching new sustainability classes. She has steered grant money to faculty, brought together instructors, and helped design syllabi — all in the service of giving MIT students the same world-class education in climate and sustainability that they get in science and engineering.

    Her work has given Meyers a bird’s-eye view of MIT’s course offerings in this area. By her count, there are now over 120 undergraduate classes, across 23 academic departments, that teach climate, environment, and sustainability principles.

    “Educating the next generation is the most important way that MIT can have an impact on the world’s environmental challenges,” she says. “MIT students are going to be leaders in their fields, whatever they may be. If they really understand sustainable design practices, if they can balance the needs of all stakeholders to make ethical decisions, then that actually changes the way our world operates and can move humanity towards a more sustainable future.”

    Some sustainability classes are established institutions at MIT. Success stories include 2.00A (Fundamentals of Engineering Design: Explore Space, Sea and Earth), a hands-on engineering class popular with first-year students; and 21W.775 (Writing About Nature and Environmental Issues), which has helped undergraduates fulfill their HASS-H (humanities distribution subject) and CI-H (Communication Intensive subject in the Humanities, Arts, and Social Sciences) graduation requirements for 15 years.

    Expanding this list of classes is an institutional priority. In the recently released Climate Action Plan for the Decade, MIT pledged to recruit at least 20 additional faculty members who will teach climate-related classes.

    “I think it’s easy to find classes if you’re looking for sustainability classes to take,” says Naomi Lutz, a senior in mechanical engineering who helped advise the MIT administration on education measures in the Climate Action Plan. “I usually scroll through the titles of the classes in courses 1, 2, 11, and 12 to see if any are of interest. I also have used the Environment & Sustainability Minor class list to look for sustainability-related classes to take.

    “The coming years are critical for the future of our planet, so it’s important that we all learn about sustainability and think about how to address it,” she adds.

    Working with students’ schedules

    Still, despite all this activity, climate and sustainability are not yet mainstream parts of an MIT education. Last year, a survey of over 800 MIT undergraduates, conducted by the Undergraduate Association Sustainability Committee, found that only one in four had ever taken a class related to sustainability. But that doesn’t seem to be for lack of interest in the topic. More than half of those surveyed said that sustainability is a factor in their career planning, and almost 80 percent try to practice sustainability in their daily lives.

    “I’ve often had conversations with students who were surprised to learn there are so many classes available,” says Meyers. “We do need to do a better job communicating about them, and making it as easy as possible to enroll.”

    A recurring challenge is helping students fit sustainability into their plans for graduation, which are often tightly mapped-out.

    “We each only have four years — around 32 to 40 classes — to absorb all that we can from this amazing place,” says Xu. “Many of these classes are mandated to be GIRs [General Institute Requirements] and major requirements. Many students recognize that sustainability is important, but might not have the time to devote an entire class to the topic if it would not count toward their requirements.”

    This was a central focus for the students who were involved in forming education recommendations for the Climate Action Plan. “We propose that more sustainability-related courses or tracks are offered in the most common majors, especially in Course 6 [EECS],” says Lutz. “If students can fulfill major requirements while taking courses that address environmental problems, we believe more students will pursue research and careers related to sustainability.”

    She also recommends that students look into the dozens of climate and sustainability classes that fulfill GIRs. “It’s really easy to take sustainability-related courses that fulfill HASS [Humanities, Arts, and Social Sciences] requirements,” she says. For example, students can meet their HASS-S (social sciences distribution subject) requirement by taking 21H.185 (Environment and History), or fulfill their HASS-A requirement with CMS.374 (Transmedia Art, Extraction and Environmental Justice).

    Classes with impact

    For those students who do seek out sustainability classes early in their MIT careers, the experience can shape their whole education.

    “My first semester at MIT, I took Environment and History, co-taught by professors Susan Solomon and Harriet Ritvo,” says Xu. “It taught me that there is so much more involved than just science and hard facts to solving problems in sustainability and climate. I learned to look at problems with more of a focus on people, which has informed much of the extracurricular work that I’ve gone on to do at MIT.”

    And the faculty, too, sometimes find that teaching in this area opens new doors for them. Rivest, who taught the climate change seminar in Course 6, is now working to build a simplified climate model with his co-instructor Alan Edelman, their teaching assistant Henri Drake, and Professor John Deutch of the Department of Chemistry, who joined the class as a guest lecturer. “I very much enjoyed meeting new colleagues from all around MIT,” Rivest says. “Teaching a class like this fosters connections between computer scientists and climate scientists.”

    Which is why Meyers will continue helping to get these classes off the ground. “We know students think climate is a huge issue for their futures. We know faculty agree with them,” she says. “Everybody wants this to be part of an MIT education. The next step is to really reach out to students and departments to fill the classrooms. That’s the start of a virtuous cycle where enrollment drives more sustainability instruction in every part of MIT.”

  • Countering climate change with cool pavements

    Pavements are an abundant urban surface, covering around 40 percent of American cities. But in addition to carrying traffic, they can also emit heat.

    Due to what’s called the urban heat island effect, densely built, impermeable surfaces like pavements can absorb solar radiation and warm up their surroundings by re-emitting that radiation as heat. This phenomenon poses a serious threat to cities. It increases air temperatures by as much as 7 degrees Fahrenheit and contributes to health and environmental risks — risks that climate change will magnify.

    In response, researchers at the MIT Concrete Sustainability Hub (MIT CSHub) are studying how a surface that ordinarily heightens urban heat islands can instead lessen their intensity. Their research focuses on “cool pavements,” which reflect more solar radiation and emit less heat than conventional paving surfaces.

    A recent study by a team of current and former MIT CSHub researchers, published in the journal Environmental Science and Technology, outlines cool pavements and their implementation. The study found that they could lower air temperatures in Boston and Phoenix by up to 1.7 degrees Celsius (3 F) and 2.1 C (3.7 F), respectively. They would also reduce greenhouse gas emissions, cutting total emissions by up to 3 percent in Boston and 6 percent in Phoenix. Achieving these savings, however, requires that cool pavement strategies be selected according to the climate, traffic, and building configurations of each neighborhood.

    Cities like Los Angeles and Phoenix have already conducted sizeable experiments with cool pavements, but the technology is still not widely implemented. The CSHub team hopes their research can guide future cool paving projects to help cities cope with a changing climate.

    Scratching the surface

    It’s well known that darker surfaces get hotter in sunlight than lighter ones. Climate scientists use a metric called “albedo” to help describe this phenomenon.

    “Albedo is a measure of surface reflectivity,” explains Hessam AzariJafari, the paper’s lead author and a postdoc at the MIT CSHub. “Surfaces with low albedo absorb more light and tend to be darker, while high-albedo surfaces are brighter and reflect more light.”

    Albedo is central to cool pavements. Typical paving surfaces, like conventional asphalt, have a low albedo, so they absorb more radiation and emit more heat. Cool pavements, by contrast, use brighter materials that reflect more than three times as much radiation and, consequently, re-emit far less heat.
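
    For a rough sense of what those albedo differences mean in energy terms, the sketch below compares the solar power absorbed per square meter by surfaces of different albedo; the albedo and irradiance values are generic textbook figures, not numbers from the CSHub study:

    ```python
    # Illustrative only: absorbed solar power per square meter for surfaces
    # with different albedo. Albedo and irradiance values are generic
    # assumptions, not data from the study.
    SOLAR_IRRADIANCE_W_M2 = 1000.0   # bright midday sun, order of magnitude

    surfaces = {
        "conventional asphalt": 0.05,   # low albedo: absorbs ~95% of sunlight
        "reflective coating":   0.30,
        "new concrete":         0.35,
    }

    for name, albedo in surfaces.items():
        absorbed = (1.0 - albedo) * SOLAR_IRRADIANCE_W_M2
        print(f"{name:22s} albedo={albedo:.2f} absorbs ~{absorbed:.0f} W/m^2")
    ```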

    “We can build cool pavements in many different ways,” says Randolph Kirchain, a researcher in the Materials Science Laboratory and co-director of the Concrete Sustainability Hub. “Brighter materials like concrete and lighter-colored aggregates offer higher albedo, while existing asphalt pavements can be made ‘cool’ through reflective coatings.”

    CSHub researchers considered several such options in their study of Boston and Phoenix. Their analysis modeled the outcomes when concrete, reflective asphalt, and reflective concrete replaced conventional asphalt pavements — which make up more than 95 percent of pavements worldwide.

    Situational awareness

    For a comprehensive understanding of the environmental benefits of cool pavements in Boston and Phoenix, researchers had to look beyond just paving materials. That’s because in addition to lowering air temperatures, cool pavements exert direct and indirect impacts on climate change.  

    “The one direct impact is radiative forcing,” notes AzariJafari. “By reflecting radiation back into the atmosphere, cool pavements exert a radiative forcing, meaning that they change the Earth’s energy balance by sending more energy out of the atmosphere — similar to the polar ice caps.”

    Cool pavements also exert complex, indirect climate change impacts by altering energy use in adjacent buildings.

    “On the one hand, by lowering temperatures, cool pavements can reduce some need for AC [air conditioning] in the summer while increasing heating demand in the winter,” says AzariJafari. “Conversely, by reflecting light — called incident radiation — onto nearby buildings, cool pavements can warm structures up, which can increase AC usage in the summer and lower heating demand in the winter.”

    What’s more, albedo effects are only a portion of the overall life cycle impacts of a cool pavement. In fact, impacts from construction and materials extraction (referred to together as embodied impacts) and the use of the pavement both dominate the life cycle. The primary use phase impact of a pavement — apart from albedo effects  — is excess fuel consumption: Pavements with smooth surfaces and stiff structures cause less excess fuel consumption in the vehicles that drive on them.

    Assessing the climate-change impacts of cool pavements, then, is an intricate process — one involving many trade-offs. In their study, the researchers sought to analyze and measure them.

    A full reflection

    To determine the ideal implementation of cool pavements in Boston and Phoenix, researchers investigated the life cycle impacts of shifting from conventional asphalt pavements to three cool pavement options: reflective asphalt, concrete, and reflective concrete.

    To do this, they used coupled physical simulations to model buildings in thousands of hypothetical neighborhoods. Using this data, they then trained a neural network model to predict impacts based on building and neighborhood characteristics. With this tool in place, it was possible to estimate the impact of cool pavements for each of the thousands of roads and hundreds of thousands of buildings in Boston and Phoenix.
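
    A minimal sketch of that kind of surrogate-modeling workflow appears below; the input features, the toy training data, and the choice of a scikit-learn regressor are all assumptions standing in for the study’s unspecified model:

    ```python
    # Illustrative surrogate model: train a small neural network on simulated
    # neighborhoods, then predict impacts for real ones. Features, targets,
    # and data are placeholders, not the CSHub's actual pipeline.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Placeholder "simulation" data: building height, density, pavement albedo
    X_sim = rng.uniform([3, 0.1, 0.05], [60, 0.9, 0.40], size=(5000, 3))
    # Placeholder target: change in building energy demand (toy formula + noise)
    y_sim = (0.8 * X_sim[:, 1] - 0.5 * X_sim[:, 2] + 0.01 * X_sim[:, 0]
             + rng.normal(0, 0.02, 5000))

    surrogate = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(64, 64),
                                           max_iter=500, random_state=0))
    surrogate.fit(X_sim, y_sim)

    # Once trained, the surrogate is cheap to evaluate for every road segment.
    neighborhood = np.array([[12.0, 0.45, 0.30]])   # toy census-tract features
    print(surrogate.predict(neighborhood))
    ```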

    In addition to albedo effects, they also looked at the embodied impacts for all pavement types and the effect of pavement type on vehicle excess fuel consumption due to surface qualities, stiffness, and deterioration rate.

    After assessing the life cycle impacts of each cool pavement type, the researchers calculated which material — conventional asphalt, reflective asphalt, concrete, or reflective concrete — benefited each neighborhood most. They found that while cool pavements were advantageous in Boston and Phoenix overall, the ideal materials varied greatly within and between both cities.

    “One benefit that was universal across neighborhood type and paving material was the impact of radiative forcing,” notes AzariJafari. “This was particularly the case in areas with shorter, less-dense buildings, where the effect was most pronounced.”

    Unlike radiative forcing, however, changes to building energy demand differed by location. In Boston, cool pavements reduced energy demand as often as they increased it across all neighborhoods. In Phoenix, cool pavements had a negative impact on energy demand in most census tracts due to incident radiation. When factoring in radiative forcing, though, cool pavements ultimately had a net benefit.

    Only after considering embodied emissions and impacts on fuel consumption did the ideal pavement type emerge for each neighborhood. After factoring in uncertainty over the life cycle, the researchers found that reflective concrete pavements had the best results, proving optimal in 53 percent and 73 percent of the neighborhoods in Boston and Phoenix, respectively.

    Once again, uncertainties and variations were identified. In Boston, replacing conventional asphalt pavements with a cool option was always preferred, while in Phoenix concrete pavements — reflective or not — had better outcomes due to their rigidity at high temperatures, which minimized vehicle fuel consumption. And despite the dominance of concrete in Phoenix, in 17 percent of its neighborhoods all reflective paving options proved more or less equally effective, while in 1 percent of cases, conventional pavements were actually superior.

    “Though the climate change impacts we studied have proven numerous and often at odds with each other, our conclusions are unambiguous: Cool pavements could offer immense climate change mitigation benefits for both cities,” says Kirchain.

    The improvements to air temperatures would be noticeable: the team found that cool pavements would lower peak summer air temperatures in Boston by 1.7 C (3 F) and in Phoenix by 2.1 C (3.7 F). The carbon dioxide emissions reductions would likewise be impressive. Boston would decrease its carbon dioxide emissions by as much as 3 percent over 50 years while reductions in Phoenix would reach 6 percent over the same period.

    This analysis is one of the most comprehensive studies of cool pavements to date — but there’s more to investigate. Just as with pavements, it’s also possible to adjust building albedo, which may result in changes to building energy demand. Intensive grid decarbonization and the introduction of low-carbon concrete mixtures may also alter the emissions generated by cool pavements.

    There’s still lots of ground to cover for the CSHub team. But by studying cool pavements, they’ve elevated a brilliant climate change solution and opened avenues for further research and future mitigation.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

  • A peculiar state of matter in layers of semiconductors

    Scientists around the world are developing new hardware for quantum computers, a new type of device that could accelerate drug design, financial modeling, and weather prediction. These computers rely on qubits, bits of matter that can represent some combination of 1 and 0 simultaneously. The problem is that qubits are fickle, degrading into regular bits when interactions with surrounding matter interfere. But new research at MIT suggests a way to protect their states, using a phenomenon called many-body localization (MBL).

    MBL is a peculiar phase of matter, proposed decades ago, that is unlike solid or liquid. Typically, matter comes to thermal equilibrium with its environment. That’s why soup cools and ice cubes melt. But in MBL, an object consisting of many strongly interacting bodies, such as atoms, never reaches such equilibrium. Heat, like sound, consists of collective atomic vibrations and can travel in waves; an object always has such heat waves internally. But when there’s enough disorder and enough interaction in the way its atoms are arranged, the waves can become trapped, thus preventing the object from reaching equilibrium.

    MBL had been demonstrated in “optical lattices,” arrangements of atoms at very cold temperatures held in place using lasers. But such setups are impractical. MBL had also arguably been shown in solid systems, but only with very slow temporal dynamics, in which the phase’s existence is hard to prove because equilibrium might be reached if researchers could wait long enough. The MIT research found signatures of MBL in a “solid-state” system — one made of semiconductors — that would otherwise have reached equilibrium in the time it was watched.

    “It could open a new chapter in the study of quantum dynamics,” says Rahul Nandkishore, a physicist at the University of Colorado at Boulder, who was not involved in the work.

    Mingda Li, the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering at MIT, led the new study, published in a recent issue of Nano Letters. The researchers built a system containing alternating semiconductor layers, creating a microscopic lasagna — aluminum arsenide, followed by gallium arsenide, and so on, for 600 layers, each 3 nanometers (millionths of a millimeter) thick. Between the layers they dispersed “nanodots,” 2-nanometer particles of erbium arsenide, to create disorder. The lasagna, or “superlattice,” came in three recipes: one with no nanodots, one in which nanodots covered 8 percent of each layer’s area, and one in which they covered 25 percent.

    According to Li, the team used layers of material, instead of a bulk material, to simplify the system so dissipation of heat across the planes was essentially one-dimensional. And they used nanodots, instead of mere chemical impurities, to crank up the disorder.

    To determine whether these disordered systems reach equilibrium, the researchers probed them with X-rays. Using the Advanced Photon Source at Argonne National Lab, they shot beams of radiation at an energy of more than 20,000 electron volts and resolved the energy difference between the incoming X-rays and those reflected off the sample’s surface with an energy resolution of less than one one-thousandth of an electron volt. To avoid penetrating the superlattice and hitting the underlying substrate, they aimed the beam at an angle of just half a degree from parallel.

    Just as light can be measured as waves or particles, so too can heat. The heat-carrying unit of collective atomic vibration is called a phonon. X-rays interact with these phonons, and by measuring how X-rays reflect off the sample, the experimenters can determine whether it is in equilibrium.

    The researchers found that when the superlattice was cold — 30 kelvin, about -400 degrees Fahrenheit — and it contained nanodots, its phonons at certain frequencies remained out of equilibrium.

    More work remains to prove conclusively that MBL has been achieved, but “this new quantum phase can open up a whole new platform to explore quantum phenomena,” Li says, “with many potential applications, from thermal storage to quantum computing.”

    To create qubits, some quantum computers employ specks of matter called quantum dots. Li says quantum dots similar to his team’s nanodots could act as qubits. Magnets could read or write their quantum states, while the many-body localization would keep them insulated from heat and other environmental factors.

    In terms of thermal storage, such a superlattice might switch in and out of an MBL phase by magnetically controlling the nanodots. It could insulate computer parts from heat at one moment, then allow parts to disperse heat when it won’t cause damage. Or it could allow heat to build up and be harnessed later for generating electricity.

    Conveniently, superlattices with nanodots can be constructed using traditional techniques for fabricating semiconductors, alongside other elements of computer chips. According to Li, “It’s a much larger design space than with chemical doping, and there are numerous applications.”

    “I am excited to see that signatures of MBL can now also be found in real material systems,” says Immanuel Bloch, scientific director at the Max-Planck-Institute of Quantum Optics, of the new work. “I believe this will help us to better understand the conditions under which MBL can be observed in different quantum many-body systems and how possible coupling to the environment affects the stability of the system. These are fundamental and important questions and the MIT experiment is an important step helping us to answer them.”

    Funding was provided by the U.S. Department of Energy’s Basic Energy Sciences program’s Neutron Scattering Program.

  • Energy storage from a chemistry perspective

    The transition toward a more sustainable, environmentally sound electrical grid has driven an upsurge in renewables like solar and wind. But something as simple as cloud cover can cause grid instability, and wind power is inherently unpredictable. This intermittent nature of renewables has invigorated the competitive landscape for energy storage companies looking to enhance power system flexibility while enabling the integration of renewables.

    “Impact is what drives PolyJoule more than anything else,” says CEO Eli Paster. “We see impact from a renewable integration standpoint, from a curtailment standpoint, and also from the standpoint of transitioning from a centralized to a decentralized model of energy-power delivery.”

    PolyJoule is a Billerica, Massachusetts-based startup that’s looking to reinvent energy storage from a chemistry perspective. Co-founders Ian Hunter of MIT’s Department of Mechanical Engineering and Tim Swager of the Department of Chemistry are longstanding MIT professors considered luminaries in their respective fields. Meanwhile, the core team is a small but highly skilled collection of chemists, manufacturing specialists, supply chain optimizers, and entrepreneurs, many of whom have called MIT home at one point or another.

    “The ideas that we work on in the lab, you’ll see turned into products three to four years from now, and they will still be innovative and well ahead of the curve when they get to market,” Paster says. “But the concepts come from the foresight of thinking five to 10 years in advance. That’s what we have in our back pocket, thanks to great minds like Ian and Tim.”

    PolyJoule takes a systems-level approach married to high-throughput, analytical electrochemistry that has allowed the company to pinpoint a chemical cell design based on 10,000 trials. The result is a battery that is low-cost, safe, and has a long lifetime. It’s capable of responding to base loads and peak loads in microseconds, allowing the same battery to participate in multiple power markets and deployment use cases.

    In the energy storage sphere, interesting technologies abound, but workable solutions are few and far between. But Paster says PolyJoule has managed to bridge the gap between the lab and the real world by taking industry concerns into account from the beginning. “We’ve taken a slightly contrarian view to all of the other energy storage companies that have come before us that have said, ‘If we build it, they will come.’ Instead, we’ve gone directly to the customer and asked, ‘If you could have a better battery storage platform, what would it look like?’”

    With commercial input feeding into the thought processes behind their technological and commercial deployment, PolyJoule says they’ve designed a battery that is less expensive to make, less expensive to operate, safer, and easier to deploy.

    Traditionally, lithium-ion batteries have been the go-to energy storage solution. But lithium has its drawbacks, including cost, safety issues, and detrimental effects on the environment. But PolyJoule isn’t interested in lithium — or metals of any kind, in fact. “We start with the periodic table of organic elements,” says Paster, “and from there, we derive what works at economies of scale, what is easy to converge and convert chemically.”

    Having an inherently safer chemistry allows PolyJoule to save on system integration costs, among other things. PolyJoule batteries don’t contain flammable solvents, which means no added expenses related to fire mitigation. Safer chemistry also means ease of storage, and PolyJoule batteries are currently undergoing global safety certification (UL approval) to be allowed indoors and on airplanes. Finally, with high power built into the chemistry, PolyJoule’s cells can be charged and discharged to extremes, without the need for heating or cooling systems.

    “From raw material to product delivery, we examine each step in the value chain with an eye towards reducing costs,” says Paster. It all starts with designing the chemistry around earth-abundant elements, which allows the small startup to compete with larger suppliers, even at smaller scales. Consider the fact that PolyJoule’s differentiating material cost is less than $1 per kilogram, whereas lithium carbonate sells for $20 per kilogram.

    On the manufacturing side, Paster explains that PolyJoule cuts costs by making their cells in old paper mills and warehouses, employing off-the-shelf equipment previously used for tissue paper or newspaper printing. “We use equipment that has been around for decades because we don’t want to create a cutting-edge technology that requires cutting-edge manufacturing,” he says. “We want to create a cutting-edge technology that can be deployed in industrialized nations and in other nations that can benefit the most from energy storage.”

    PolyJoule’s first customer is an industrial distributed energy consumer with baseline energy consumption that increases by a factor of 10 when the heavy machinery kicks on twice a day. In the early morning and late afternoon, it consumes about 50 kilowatts for 20 minutes to an hour, compared to a baseline rate of 5 kilowatts. It’s an application model that is translatable to a variety of industries. Think wastewater treatment, food processing, and server farms — anything with a fluctuation in power consumption over a 24-hour period.

    By the end of the year, PolyJoule will have delivered its first 10 kilowatt-hour system, exiting stealth mode and adding commercial viability to demonstrated technological superiority. “What we’re seeing now is massive amounts of energy storage being added to renewables and grid-edge applications,” says Paster. “We anticipated that by 12-18 months, and now we’re ramping up to catch up with some of the bigger players.”

  • Designing better batteries for electric vehicles

    The urgent need to cut carbon emissions is prompting a rapid move toward electrified mobility and expanded deployment of solar and wind on the electric grid. If those trends escalate as expected, the need for better methods of storing electrical energy will intensify.

    “We need all the strategies we can get to address the threat of climate change,” says Elsa Olivetti PhD ’07, the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering. “Obviously, developing technologies for grid-based storage at a large scale is critical. But for mobile applications — in particular, transportation — much research is focusing on adapting today’s lithium-ion battery to make versions that are safer, smaller, and can store more energy for their size and weight.”

    Traditional lithium-ion batteries continue to improve, but they have limitations that persist, in part because of their structure. A lithium-ion battery consists of two electrodes — one positive and one negative — sandwiched around an organic (carbon-containing) liquid. As the battery is charged and discharged, electrically charged particles (or ions) of lithium pass from one electrode to the other through the liquid electrolyte.

    One problem with that design is that at certain voltages and temperatures, the liquid electrolyte can become volatile and catch fire. “Batteries are generally safe under normal usage, but the risk is still there,” says Kevin Huang PhD ’15, a research scientist in Olivetti’s group.

    Another problem is that lithium-ion batteries are not well-suited for use in vehicles. Large, heavy battery packs take up space and increase a vehicle’s overall weight, reducing fuel efficiency. But it’s proving difficult to make today’s lithium-ion batteries smaller and lighter while maintaining their energy density — that is, the amount of energy they store per gram of weight.

    To solve those problems, researchers are changing key features of the lithium-ion battery to make an all-solid, or “solid-state,” version. They replace the liquid electrolyte in the middle with a thin, solid electrolyte that’s stable at a wide range of voltages and temperatures. With that solid electrolyte, they use a high-capacity positive electrode and a high-capacity, lithium metal negative electrode that’s far thinner than the usual layer of porous carbon. Those changes make it possible to shrink the overall battery considerably while maintaining its energy-storage capacity, thereby achieving a higher energy density.

    “Those features — enhanced safety and greater energy density — are probably the two most-often-touted advantages of a potential solid-state battery,” says Huang. He then quickly clarifies that “all of these things are prospective, hoped-for, and not necessarily realized.” Nevertheless, the possibility has many researchers scrambling to find materials and designs that can deliver on that promise.

    Thinking beyond the lab

    Researchers have come up with many intriguing options that look promising — in the lab. But Olivetti and Huang believe that additional practical considerations may be important, given the urgency of the climate change challenge. “There are always metrics that we researchers use in the lab to evaluate possible materials and processes,” says Olivetti. Examples might include energy-storage capacity and charge/discharge rate. When performing basic research — which she deems both necessary and important — those metrics are appropriate. “But if the aim is implementation, we suggest adding a few metrics that specifically address the potential for rapid scaling,” she says.

    Based on industry’s experience with current lithium-ion batteries, the MIT researchers and their colleague Gerbrand Ceder, the Daniel M. Tellep Distinguished Professor of Engineering at the University of California at Berkeley, suggest three broad questions that can help identify potential constraints on future scale-up as a result of materials selection. First, with this battery design, could materials availability, supply chains, or price volatility become a problem as production scales up? (Note that the environmental and other concerns raised by expanded mining are outside the scope of this study.) Second, will fabricating batteries from these materials involve difficult manufacturing steps during which parts are likely to fail? And third, do manufacturing measures needed to ensure a high-performance product based on these materials ultimately lower or raise the cost of the batteries produced?

    To demonstrate their approach, Olivetti, Ceder, and Huang examined some of the electrolyte chemistries and battery structures now being investigated by researchers. To select their examples, they turned to previous work in which they and their collaborators used text- and data-mining techniques to gather information on materials and processing details reported in the literature. From that database, they selected a few frequently reported options that represent a range of possibilities.

    Materials and availability

    In the world of solid inorganic electrolytes, there are two main classes of materials — the oxides, which contain oxygen, and the sulfides, which contain sulfur. Olivetti, Ceder, and Huang focused on one promising electrolyte option in each class and examined key elements of concern for each of them.

    The sulfide they considered was LGPS, which combines lithium, germanium, phosphorus, and sulfur. Based on availability considerations, they focused on the germanium, an element that raises concerns in part because it’s not generally mined on its own. Instead, it’s a byproduct produced during the mining of coal and zinc.

    To investigate its availability, the researchers looked at how much germanium was produced annually in the past six decades during coal and zinc mining and then at how much could have been produced. The outcome suggested that 100 times more germanium could have been produced, even in recent years. Given that supply potential, the availability of germanium is not likely to constrain the scale-up of a solid-state battery based on an LGPS electrolyte.

    The situation looked less promising with the researchers’ selected oxide, LLZO, which consists of lithium, lanthanum, zirconium, and oxygen. Extraction and processing of lanthanum are largely concentrated in China, and there’s limited data available, so the researchers didn’t try to analyze its availability. The other three elements are abundantly available. However, in practice, a small quantity of another element — called a dopant — must be added to make LLZO easy to process. So the team focused on tantalum, the most frequently used dopant, as the main element of concern for LLZO.

    Tantalum is produced as a byproduct of tin and niobium mining. Historical data show that the amount of tantalum produced during tin and niobium mining was much closer to the potential maximum than was the case with germanium. So the availability of tantalum is more of a concern for the possible scale-up of an LLZO-based battery.

    But knowing the availability of an element in the ground doesn’t address the steps required to get it to a manufacturer. So the researchers investigated a follow-on question concerning the supply chains for critical elements — mining, processing, refining, shipping, and so on. Assuming that abundant supplies are available, can the supply chains that deliver those materials expand quickly enough to meet the growing demand for batteries?

    In sample analyses, they looked at how much supply chains for germanium and tantalum would need to grow year to year to provide batteries for a projected fleet of electric vehicles in 2030. As an example, an electric vehicle fleet often cited as a goal for 2030 would require production of enough batteries to deliver a total of 100 gigawatt hours of energy. To meet that goal using just LGPS batteries, the supply chain for germanium would need to grow by 50 percent from year to year — a stretch, since the maximum growth rate in the past has been about 7 percent. Using just LLZO batteries, the supply chain for tantalum would need to grow by about 30 percent — a growth rate well above the historical high of about 10 percent.
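
    The “required year-over-year growth” figures come from simple compounding. The sketch below shows the form of that calculation; the starting and target supply numbers are placeholders, since the article does not give the underlying tonnages:

    ```python
    # Illustrative only: the constant year-over-year growth rate a supply
    # chain would need to hit a target by 2030. Supply figures are
    # placeholders, not the study's data.
    def required_annual_growth(current_supply, required_supply, years):
        """Return r such that current_supply * (1 + r) ** years == required_supply."""
        return (required_supply / current_supply) ** (1.0 / years) - 1.0

    # Hypothetical case: supply must grow 12x over 8 years (e.g., 2022 -> 2030)
    r = required_annual_growth(current_supply=1.0, required_supply=12.0, years=8)
    print(f"required growth: {r:.0%} per year")   # ~36% per year
    ```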

    Those examples demonstrate the importance of considering both materials availability and supply chains when evaluating different solid electrolytes for their scale-up potential. “Even when the quantity of a material available isn’t a concern, as is the case with germanium, scaling all the steps in the supply chain to match the future production of electric vehicles may require a growth rate that’s literally unprecedented,” says Huang.

    Materials and processing

    In assessing the potential for scale-up of a battery design, another factor to consider is the difficulty of the manufacturing process and how it may impact cost. Fabricating a solid-state battery inevitably involves many steps, and a failure at any step raises the cost of each battery successfully produced. As Huang explains, “You’re not shipping those failed batteries; you’re throwing them away. But you’ve still spent money on the materials and time and processing.”

    As a proxy for manufacturing difficulty, Olivetti, Ceder, and Huang explored the impact of failure rate on overall cost for selected solid-state battery designs in their database. In one example, they focused on the oxide LLZO. LLZO is extremely brittle, and at the high temperatures involved in manufacturing, a large sheet that’s thin enough to use in a high-performance solid-state battery is likely to crack or warp.

    To determine the impact of such failures on cost, they modeled four key processing steps in assembling LLZO-based batteries. At each step, they calculated cost based on an assumed yield — that is, the fraction of total units that were successfully processed without failing. With the LLZO, the yield was far lower than with the other designs they examined; and, as the yield went down, the cost of each kilowatt-hour (kWh) of battery energy went up significantly. For example, when 5 percent more units failed during the final cathode heating step, cost increased by about $30/kWh — a nontrivial change considering that a commonly accepted target cost for such batteries is $100/kWh. Clearly, manufacturing difficulties can have a profound impact on the viability of a design for large-scale adoption.
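
    A stripped-down version of that kind of yield-sensitive cost model is sketched below; the step names, per-step costs, and yields are invented for illustration and are not the values used in the study:

    ```python
    # Illustrative only: how per-step yield losses inflate the cost of each
    # battery that survives all processing steps. All numbers are placeholders.
    steps = [
        # (name, cost added at this step in $/kWh, fraction of units that pass)
        ("electrolyte casting", 20.0, 0.95),
        ("cathode heating",     30.0, 0.90),
        ("cell stacking",       15.0, 0.97),
        ("final assembly",      10.0, 0.99),
    ]

    spent = 0.0      # money spent per unit started, in $/kWh terms
    survival = 1.0   # fraction of started units still good

    for name, step_cost, step_yield in steps:
        spent += step_cost * survival   # only surviving units incur this step's cost
        survival *= step_yield          # some units fail and are scrapped here

    cost_per_good_kwh = spent / survival
    print(f"overall yield {survival:.1%}, cost of good output ~${cost_per_good_kwh:.0f}/kWh")
    ```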

    Materials and performance

    One of the main challenges in designing an all-solid battery comes from “interfaces” — that is, where one component meets another. During manufacturing or operation, materials at those interfaces can become unstable. “Atoms start going places that they shouldn’t, and battery performance declines,” says Huang.

    As a result, much research is devoted to coming up with methods of stabilizing interfaces in different battery designs. Many of the methods proposed do increase performance, and as a result, the cost of the battery in dollars per kWh goes down. But implementing such solutions generally involves added materials and time, increasing the cost per kWh during large-scale manufacturing.

    To illustrate that trade-off, the researchers first examined their oxide, LLZO. Here, the goal is to stabilize the interface between the LLZO electrolyte and the negative electrode by inserting a thin layer of tin between the two. They analyzed the impacts — both positive and negative — on cost of implementing that solution. They found that adding the tin separator increases energy-storage capacity and improves performance, which reduces the unit cost in dollars/kWh. But the cost of including the tin layer exceeds the savings so that the final cost is higher than the original cost.

    In another analysis, they looked at a sulfide electrolyte called LPSCl, which consists of lithium, phosphorus, and sulfur with a bit of added chlorine. In this case, the positive electrode incorporates particles of the electrolyte material — a method of ensuring that the lithium ions can find a pathway through the electrolyte to the other electrode. However, the added electrolyte particles are not compatible with other particles in the positive electrode — another interface problem. In this case, a standard solution is to add a “binder,” another material that makes the particles stick together.

    Their analysis confirmed that without the binder, performance is poor, and the cost of the LPSCl-based battery is more than $500/kWh. Adding the binder improves performance significantly, and the cost drops by almost $300/kWh. In this case, the cost of adding the binder during manufacturing is so low that essentially all of the cost decrease from adding the binder is realized. Here, the method implemented to solve the interface problem pays off in lower costs.

    The researchers performed similar studies of other promising solid-state batteries reported in the literature, and their results were consistent: The choice of battery materials and processes can affect not only near-term outcomes in the lab but also the feasibility and cost of manufacturing the proposed solid-state battery at the scale needed to meet future demand. The results also showed that considering all three factors together — availability, processing needs, and battery performance — is important because there may be collective effects and trade-offs involved.

    Olivetti is proud of the range of concerns the team’s approach can probe. But she stresses that it’s not meant to replace traditional metrics used to guide materials and processing choices in the lab. “Instead, it’s meant to complement those metrics by also looking broadly at the sorts of things that could get in the way of scaling” — an important consideration given what Huang calls “the urgent ticking clock” of clean energy and climate change.

    This research was supported by the Seed Fund Program of the MIT Energy Initiative (MITEI) Low-Carbon Energy Center for Energy Storage; by Shell, a founding member of MITEI; and by the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Office, under the Advanced Battery Materials Research Program. The text mining work was supported by the National Science Foundation, the Office of Naval Research, and MITEI.

    This article appears in the Spring 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Why boiling droplets can race across hot oily surfaces

    When you’re frying something in a skillet and some droplets of water fall into the pan, you may have noticed those droplets skittering around on top of the film of hot oil. Now, that seemingly trivial phenomenon has been analyzed and understood for the first time by researchers at MIT — and may have important implications for microfluidic devices, heat transfer systems, and other useful functions.

    A droplet of boiling water on a hot surface will sometimes levitate on a thin vapor film, a well-studied phenomenon called the Leidenfrost effect. Because it is suspended on a cushion of vapor, the droplet can move across the surface with little friction. If the surface is coated with hot oil, which has much greater friction than the vapor film under a Leidenfrost droplet, the hot droplet should be expected to move much more slowly. But, counterintuitively, the series of experiments at MIT showed that the opposite effect happens: The droplet on oil zooms away much more rapidly than on bare metal.

    This effect, which propels droplets across a heated oily surface 10 to 100 times faster than on bare metal, could potentially be used for self-cleaning or de-icing systems, or to propel tiny amounts of liquid through the tiny tubing of microfluidic devices used for biomedical and chemical research and testing. The findings are described today in a paper in the journal Physical Review Letters, written by graduate student Victor Julio Leon and professor of mechanical engineering Kripa Varanasi.

    In previous research, Varanasi and his team showed that it would be possible to harness this phenomenon for some of these potential applications, but the new work, producing such high velocities (approximately 50 times faster), could open up even more new uses, Varanasi says.

    After long and painstaking analysis, Leon and Varanasi were able to determine the reason for the rapid ejection of these droplets from the hot surface. Under the right conditions of high temperature, oil viscosity, and oil thickness, the oil will form a kind of thin cloak coating the outside of each water droplet. As the droplet heats up, tiny bubbles of vapor form along the interface between the droplet and the oil. Because these minuscule bubbles accumulate randomly along the droplet’s base, asymmetries develop, and the lowered friction under the bubble loosens the droplet’s attachment to the surface and propels it away.

    The oily film acts almost like the rubber of a balloon, and when the tiny vapor bubbles burst through, they impart a force and “the balloon just flies off because the air is going out one side, creating a momentum transfer,” Varanasi says. Without the oil cloak, the vapor bubbles would just flow out of the droplet in all directions, preventing self-propulsion, but the cloaking effect holds them in like the skin of the balloon.

    The phenomenon sounds simple, but it turns out to depend on a complex interplay between events happening at different timescales.

    This newly analyzed self-ejection phenomenon depends on a number of factors, including the droplet size, the thickness and viscosity of the oil film, the thermal conductivity of the surface, the surface tension of the different liquids in the system, the type of oil, and the texture of the surface.

    In their experiments, the least viscous of the several oils they tested was still about 100 times more viscous than the surrounding air. So the oil would have been expected to make the droplets move much more slowly than they do on the air cushion of the Leidenfrost effect. “That gives an idea of how surprising it is that this droplet is moving faster,” Leon says.

    As boiling starts, bubbles will randomly form from some nucleation site that is not right at its center. Bubble formation will increase on that side, leading to the propulsion off in one direction. So far, the researchers have not been able to control the direction of that randomly induced propulsion, but they are now working on some possible ways to control the directionality in the future. “We have ideas of how to trigger the propulsion in controlled directions,” Leon says.

    Remarkably, the tests showed that even though the oil film on the surface, which was a silicon wafer, was only 10 to 100 microns thick — about the thickness of a human hair — its behavior didn’t match the equations for a thin film. Instead, because of the vaporization, the film was actually behaving like an infinitely deep pool of oil. “We were kind of astounded” by that finding, Leon says. While a thin film should have caused it to stick, the virtually infinite pool gave the droplet much lower friction, allowing it to move more rapidly than expected, Leon says.

    The effect depends on the fact that the formation of the tiny bubbles is a much more rapid process than the transfer of heat through the oil film, about a thousand times faster, leaving plenty of time for the asymmetries within the droplet to accumulate. When the bubbles of vapor initially form at the oil-water interface, they are much more insulating than the liquid of the droplet, leading to significant thermal disturbances in the oil film. These disturbances cause the droplet to vibrate, reducing friction and increasing the vaporization rate.

    It took extreme high-speed photography to reveal the details of this rapid effect, Leon says, using a 100,000 frames per second video camera. “You can actually see the fluctuations on the surface,” Leon says.

    Initially, Varanasi says, “we were stumped at multiple levels as to what was going on, because the effect was so unexpected. … It’s a fairly complex answer to what may look seemingly simple, but it really creates this fast propulsion.”

    In practice, the effect means that in certain situations, a simple heating of a surface, by the right amount and with the right kind of oily coating, could cause corrosive scaling drops to be cleared from a surface. Further down the line, once the researchers have more control over directionality, the system could potentially substitute for some high-tech pumps in microfluidic devices to propel droplets through the right tubes at the right time. This might be especially useful in microgravity situations, where ordinary pumps don’t function as usual.

    It may also be possible to attach a payload to the droplets, creating a kind of microscale robotic delivery system, Varanasi says. And while their tests focused on water droplets, potentially it could apply to many different kinds of liquids and sublimating solids, he says.

    The work was supported by the National Science Foundation.

    MIT Solar Electric Vehicle Team wins 2021 American Solar Challenge

    After three years of hard work, the MIT Solar Electric Vehicle Team took first place at the 2021 American Solar Challenge (ASC) on August 7 in the Single Occupancy Vehicle (SOV) category. During the five-day race, their solar car, Nimbus — designed and built entirely by students — beat eight other SOVs from schools across the country, traversing 1,109 miles and maintaining an average speed of 38.4 miles per hour.

    Held every two years, the ASC has traditionally been a timed event. This year, however, the race was based on the total distance traveled. Each team followed the same prescribed route, from Independence, Missouri, to Las Vegas, New Mexico. But teams could drive additional miles within each of the three stages — if their battery had enough juice to continue. Nimbus surpassed the closest runner-up, the University of Kentucky, by over 100 miles.

    “It’s still a little surreal,” says SEVT captain Aditya Mehrotra, a rising senior in electrical engineering and computer science. “We were all hopeful, but I don’t think you ever go into racing like, ‘We got this.’ It’s more like, ‘We’re going to do our best and see how we fare.’ In this case, we were fortunate enough to do really well. The car worked beautifully, and — more importantly — the team worked beautifully and we learned a lot.”

    Teamwork makes the dream work

    Two weeks before the ASC race, each solar car was put through its paces in the Formula Sun Grand Prix at Heartland Motorsports Park in Topeka, Kansas. First, vehicles had to perform a series of qualifying challenges, called “scrutineering.” Cars that passed could participate in a track race in hopes of qualifying for ASC. Nimbus placed second, completing a total of 239 laps around the track over three days (equivalent to 597.5 miles).

    In the process, SEVT member and rising junior in mechanical engineering Cameron Kokesh tied the Illinois State driver for the fastest single lap time around the track, clocking in at three minutes and 19 seconds. She’s not one to rest on her laurels, though. “It would be fun to see if we could beat that time at the next race,” she says with a smile.

    Nimbus’s performance at the Formula Sun Grand Prix and ASC is a manifestation of the team’s proficiency not only in designing and building a superior solar vehicle, but also in managing logistics, communications, and teamwork. “It’s a huge operation,” says Mehrotra. “It’s not like we drive the car straight down the highway during the race.”

    Indeed, Nimbus travels with an impressive caravan of seven vehicles manned by about two dozen SEVT members. A scout vehicle is at the front, monitoring road and weather conditions, followed by a lead car that oversees navigation. Nimbus is third in the caravan, trailed by a chase vehicle, in which the strategy team manages tasks like monitoring telemetry data, calculating how much power the solar panels are generating and the remaining travel distance, and setting target speeds. Bringing up the rear are the transport truck and trailer, a media car, and “Cupcake,” a support vehicle with food, supplies, and camping gear.
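    The article does not describe the team’s actual strategy software, so the following is only a hypothetical Python sketch of the kind of calculation attributed to the chase vehicle’s strategy team: given the distance left in a stage, the usable battery energy, and an expected solar input, pick the fastest target speed the energy budget can sustain. The power model and every number in it are invented for illustration.

```python
# Hypothetical strategy sketch: pick the fastest cruise speed whose net battery
# draw over the remaining distance fits within the usable battery energy.
# The power model and all numbers are invented; they are not SEVT's real figures.

def drive_power_kw(speed_mph: float) -> float:
    """Toy power model: a linear rolling-resistance term plus a cubic drag term."""
    return 0.006 * speed_mph + 1.1e-5 * speed_mph ** 3

def target_speed_mph(distance_mi: float, battery_kwh: float,
                     solar_kw: float, max_mph: float = 55.0) -> float:
    """Largest speed (up to max_mph) the remaining stage can sustain energy-wise."""
    best = 0.0
    for tenth in range(1, int(max_mph * 10) + 1):
        v = tenth / 10.0
        hours = distance_mi / v
        net_draw_kwh = (drive_power_kw(v) - solar_kw) * hours  # battery energy consumed
        if net_draw_kwh <= battery_kwh:
            best = v
    return best

# Example: 150 miles left in the stage, 2 kWh usable in the pack, 0.5 kW of expected sun.
print(f"target speed ~{target_speed_mph(150, 2.0, 0.5):.1f} mph")
```

    A real strategy model would also fold in elevation, wind, cloud forecasts, and speed limits; the point of the sketch is only the basic trade-off that driving faster costs disproportionately more energy.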

    Leading up to the three-week event, the team devoted three years to designing, building, refining, and testing Nimbus. (The ASC was scheduled for 2020, but it was postponed until this year due to the Covid-19 pandemic.) They spent countless hours in the MIT Edgerton Center’s machine shop in Building N51, making, building, and iterating. They drove the car in the greater Boston area, up to Salem, Massachusetts, and to Cape Cod. In the spring, they traveled to Palmer Motorsports Park in Palmer, Massachusetts, to practice various components of the race. They performed scrutineering tasks like the slalom test and figure-eight test, conducted team operations training to optimize the caravan’s performance, and, of course, ran the “shakedown.”

    “Shakedown is just, you drive the car around the track and you basically see what falls off and then you know what you need to fix,” Mehrotra explains. “Hopefully nothing too major falls off!”

    The road ahead

    At the conclusion of the race, Mehrotra officially stepped down and handed SEVT’s reins to its new leaders: Kokesh will take the helm as team captain, and rising sophomore Sydney Kim, an ocean engineering major, will serve as vice-captain. The long drive back from the Midwest gave them time to reflect on the win and future plans.

    Although Nimbus performed well, there were a few instructive glitches here and there, mostly during scrutineering. But there was nothing the team couldn’t handle. For example, the canopy latch didn’t always hold, so the clear acrylic bubble covering the driver would pop open. (A little spring adjustment and tape did the trick.) In addition, Nimbus had a tendency to skid when the driver slammed on the brakes. (Driver training, and letting some air out of the tires, improved the traction.)

    Then there were the unpredictable variables, beyond the team’s control. On one day, with little sun, Nimbus had to chug along the highway at a mere 15 miles per hour. And there was the time that the Kansas State Police pulled the entire caravan over. “They didn’t realize we were coming through,” Mehrotra explains.

    Kim thinks one of the keys to the team’s success is that Nimbus is quite reliable. “We didn’t have wheels falling off on the road. Once we got the car rolling, things didn’t go wrong mechanically or electrically. Also, it’s very energy efficient because it’s lightweight and the shape of the vehicle is very aerodynamic. On a nice sunny day, it allows us to drive 40 miles per hour energy-neutral — the battery stays at the same amount of charge as we drive,” she says.
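    Using the same toy power model as in the strategy sketch above, the energy-neutral driving Kim describes is just the speed at which the solar input matches the drive power; the ~1 kW array output used below is again an invented, illustrative number rather than a measured one.

```python
# Energy-neutral condition (sketch): battery charge stays flat when the assumed
# solar input equals the assumed drive power. All numbers are invented.

def drive_power_kw(speed_mph: float) -> float:
    return 0.006 * speed_mph + 1.1e-5 * speed_mph ** 3  # same toy model as above

def energy_neutral_speed_mph(solar_kw: float) -> float:
    v = 0.0
    while drive_power_kw(v + 0.1) <= solar_kw:  # step up until drive power exceeds solar input
        v += 0.1
    return v

# With an assumed ~1 kW from the array on a sunny day, the balance point lands
# close to the 40 mph figure the team describes.
print(f"energy-neutral speed ~{energy_neutral_speed_mph(1.0):.1f} mph")
```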

    The next ASC will take place in 2022, so this year the team will focus on refining Nimbus to race it again next summer. They’ve also set their sights on building a car to enter in the Multiple Occupancy Vehicle (MOV) class in the 2024 race — something the team has never done. “It will definitely take the three years to build a good car to compete,” Kokesh muses. “But it’s a really good transition period, after doing so well on this race, so our team is excited about it.”

    “It will be challenging for them, but I wouldn’t put anything past them,” says Patrick McAtamney, the Edgerton Center technical instructor and shop manager who works with all the student clubs and teams, from solar vehicles to Formula race cars to rockets. He attended ASC, too, and has the utmost admiration for SEVT. “It’s totally student-run. They do all the designing and machining themselves. I always tell people that sometimes I feel like my only job is to make sure they have 10 fingers when they leave the shop.”

    In the meantime, before the school year begins, SEVT has another challenge: deciding where to put the trophy. “It’s huge,” McAtamney says. “It’s about the size of the Stanley Cup!”