More stories


    Using combustion to make better batteries

    For more than a century, much of the world has run on the combustion of fossil fuels. Now, to avert the threat of climate change, the energy system is changing. Notably, solar and wind systems are replacing fossil fuel combustion for generating electricity and heat, and batteries are replacing the internal combustion engine for powering vehicles. As the energy transition progresses, researchers worldwide are tackling the many challenges that arise.

    Sili Deng has spent her career thinking about combustion. Now an assistant professor in the MIT Department of Mechanical Engineering and the Class of 1954 Career Development Professor, Deng leads a group that, among other things, develops theoretical models to help understand and control combustion systems to make them more efficient and to control the formation of emissions, including particles of soot.

    “So we thought, given our background in combustion, what’s the best way we can contribute to the energy transition?” says Deng. In considering the possibilities, she notes that combustion refers only to the process — not to what’s burning. “While we generally think of fossil fuels when we think of combustion, the term ‘combustion’ encompasses many high-temperature chemical reactions that involve oxygen and typically emit light and large amounts of heat,” she says.

    Given that definition, she saw another role for the expertise she and her team have developed: They could explore the use of combustion to make materials for the energy transition. Under carefully controlled conditions, flames can be used to produce not polluting soot but valuable materials, including some that are critical in the manufacture of lithium-ion batteries.

    Improving the lithium-ion battery by lowering costs

    The demand for lithium-ion batteries is projected to skyrocket in the coming decades. Batteries will be needed to power the growing fleet of electric cars and to store the electricity produced by solar and wind systems so it can be delivered later when those sources aren’t generating. Some experts project that the global demand for lithium-ion batteries may increase tenfold or more in the next decade.

    Given such projections, many researchers are looking for ways to improve the lithium-ion battery technology. Deng and her group aren’t materials scientists, so they don’t focus on making new and better battery chemistries. Instead, their goal is to find a way to lower the high cost of making all of those batteries. And much of the cost of making a lithium-ion battery can be traced to the manufacture of materials used to make one of its two electrodes — the cathode.

    The MIT researchers began their search for cost savings by considering the methods now used to produce cathode materials. The raw materials are typically salts of several metals, including lithium, which provides ions — the electrically charged particles that move when the battery is charged and discharged. The processing technology aims to produce tiny particles, each one made up of a mixture of those ingredients, with the atoms arranged in the specific crystalline structure that will deliver the best performance in the finished battery.

    For the past several decades, companies have manufactured those cathode materials using a two-stage process called coprecipitation. In the first stage, the metal salts — excluding the lithium — are dissolved in water and thoroughly mixed inside a chemical reactor. Chemicals are added to change the acidity (the pH) of the mixture, and particles made up of the combined salts precipitate out of the solution. The particles are then removed, dried, ground up, and put through a sieve.

    A change in pH won’t cause lithium to precipitate, so it is added in the second stage. Solid lithium is ground together with the particles from the first stage until lithium atoms permeate the particles. The resulting material is then heated, or “annealed,” to ensure complete mixing and to achieve the targeted crystalline structure. Finally, the particles go through a “deagglomerator” that separates any particles that have joined together, and the cathode material emerges.

    Coprecipitation produces the needed materials, but the process is time-consuming. The first stage takes about 10 hours, and the second stage requires about 13 hours of annealing at a relatively low temperature (750 degrees Celsius). In addition, to prevent cracking during annealing, the temperature is gradually “ramped” up and down, which takes another 11 hours. The process is thus not only time-consuming but also energy-intensive and costly.

    For the past two years, Deng and her group have been exploring better ways to make the cathode material. “Combustion is very effective at oxidizing things, and the materials for lithium-ion batteries are generally mixtures of metal oxides,” says Deng. That being the case, they thought this could be an opportunity to use a combustion-based process called flame synthesis.

    A new way of making a high-performance cathode material

    The first task for Deng and her team — mechanical engineering postdoc Jianan Zhang, Valerie L. Muldoon ’20, SM ’22, and current graduate students Maanasa Bhat and Chuwei Zhang — was to choose a target material for their study. They decided to focus on a mixture of metal oxides consisting of nickel, cobalt, and manganese plus lithium. Known as “NCM811,” this material is widely used and has been shown to produce cathodes for batteries that deliver high performance; in an electric vehicle, that means a long driving range, rapid discharge and recharge, and a long lifetime. To better define their target, the researchers examined the literature to determine the composition and crystalline structure of NCM811 that has been shown to deliver the best performance as a cathode material.

    They then considered three possible approaches to improving on the coprecipitation process for synthesizing NCM811: They could simplify the system (to cut capital costs), speed up the process, or cut the energy required.

    “Our first thought was, what if we can mix together all of the substances — including the lithium — at the beginning?” says Deng. “Then we would not need to have the two stages” — a clear simplification over coprecipitation.

    Introducing FASP

    One process widely used in the chemical and other industries to fabricate nanoparticles is a type of flame synthesis called flame-assisted spray pyrolysis, or FASP. Deng’s concept for using FASP to make their targeted cathode powders proceeds as follows.

    The precursor materials — the metal salts (including the lithium) — are mixed with water, and the resulting solution is sprayed as fine droplets by an atomizer into a combustion chamber. There, a flame of burning methane heats up the mixture. The water evaporates, leaving the precursor materials to decompose, oxidize, and solidify to form the powder product. A cyclone separates particles of different sizes, and a baghouse filters out those that aren’t useful. The collected particles would then be annealed and deagglomerated.

    To investigate and optimize this concept, the researchers developed a lab-scale FASP setup consisting of a homemade ultrasonic nebulizer, a preheating section, a burner, a filter, and a vacuum pump that withdraws the powders that form. Using that system, they could control the details of the heating process: The preheating section replicates conditions as the material first enters the combustion chamber, and the burner replicates conditions as it passes the flame. That setup allowed the team to explore operating conditions that would give the best results.

    Their experiments showed marked benefits over coprecipitation. The nebulizer breaks up the liquid solution into fine droplets, ensuring atomic-level mixing. The water simply evaporates, so there’s no need to change the pH or to separate the solids from a liquid. As Deng notes, “You just let the gas go, and you’re left with the particles, which is what you want.” With lithium included at the outset, there’s no need for mixing solids with solids, which is neither efficient nor effective.

    They could even control the structure, or “morphology,” of the particles that formed. In one series of experiments, they tried exposing the incoming spray to different rates of temperature change over time. They found that the temperature “history” has a direct impact on morphology. With no preheating, the particles burst apart; and with rapid preheating, the particles were hollow. The best outcomes came when they used temperatures ranging from 175 to 225 degrees Celsius. Experiments with coin-cell batteries (laboratory devices used for testing battery materials) confirmed that by adjusting the preheating temperature, they could achieve a particle morphology that would optimize the performance of their materials.

    Best of all, the particles formed in seconds. Assuming the time needed for conventional annealing and deagglomerating, the new setup could synthesize the finished cathode material in half the total time needed for coprecipitation. Moreover, the first stage of the coprecipitation system is replaced by a far simpler setup — a savings in capital costs.

    “We were very happy,” says Deng. “But then we thought, if we’ve changed the precursor side so the lithium is mixed well with the salts, do we need to have the same process for the second stage? Maybe not!”

    Improving the second stage

    The key time- and energy-consuming step in the second stage is the annealing. In today’s coprecipitation process, the strategy is to anneal at a low temperature for a long time, giving the operator time to manipulate and control the process. But running a furnace for some 20 hours — even at a low temperature — consumes a lot of energy.

    Based on their studies thus far, Deng thought, “What if we slightly increase the temperature but reduce the annealing time by orders of magnitude? Then we could cut energy consumption, and we might still achieve the desired crystal structure.”

    However, experiments at slightly elevated temperatures and short treatment times didn’t bring the results they had hoped for. In transmission electron microscope (TEM) images, the particles that formed had clouds of light-looking nanoscale particles attached to their surfaces. When the researchers performed the same experiments without adding the lithium, those nanoparticles didn’t appear. Based on that and other tests, they concluded that the nanoparticles were pure lithium. So, it seemed like long-duration annealing would be needed to ensure that the lithium made its way inside the particles.

    But they then came up with a different solution to the lithium-distribution problem. They added a small amount — just 1 percent by weight — of an inexpensive compound called urea to their mixture. In TEM images of the particles formed, the “undesirable nanoparticles were largely gone,” says Deng.

    Experiments in the laboratory coin cells showed that the addition of urea significantly altered the response to changes in the annealing temperature. When the urea was absent, raising the annealing temperature led to a dramatic decline in performance of the cathode material that formed. But with the urea present, the performance of the material that formed was unaffected by any temperature change.

    That result meant that — as long as the urea was added with the other precursors — they could push up the temperature, shrink the annealing time, and omit the gradual ramp-up and cool-down process. Further imaging studies confirmed that their approach yields the desired crystal structure and the homogeneous elemental distribution of the cobalt, nickel, manganese, and lithium within the particles. Moreover, in tests of various performance measures, their materials did as well as materials produced by coprecipitation or by other methods using long-time heat treatment. Indeed, the performance was comparable to that of commercial batteries with cathodes made of NCM811.

    So now the long and expensive second stage required in standard coprecipitation could be replaced by just 20 minutes of annealing at about 870 degrees Celsius plus 20 minutes of cooling at room temperature.
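    Tallying the durations quoted above gives a rough sense of the second-stage savings. The sketch below is a back-of-the-envelope comparison of treatment times only, not a full energy accounting:

```python
# Back-of-the-envelope tally of the second-stage heat-treatment times
# quoted in the article; durations only, not an energy analysis.

conventional_hours = 13 + 11        # ~13 h low-temperature anneal + ~11 h ramping
new_hours = (20 + 20) / 60          # 20 min at ~870 C + 20 min of cooling

print(f"Conventional second stage: ~{conventional_hours} hours")
print(f"Urea-assisted second stage: ~{new_hours:.1f} hours "
      f"({conventional_hours / new_hours:.0f}x shorter)")
```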

    Theory, continuing work, and planning for scale-up

    While experimental evidence supports their approach, Deng and her group are now working to understand why it works. “Getting the underlying physics right will help us design the process to control the morphology and to scale up the process,” says Deng. And they have a hypothesis for why the lithium nanoparticles in their flame synthesis process end up on the surfaces of the larger particles — and why the presence of urea solves that problem.

    According to their theory, without the added urea, the metal and lithium atoms are initially well-mixed within the droplet. But as heating progresses, the lithium diffuses to the surface and ends up as nanoparticles attached to the solidified particle. As a result, a long annealing process is needed to move the lithium in among the other atoms.

    When the urea is present, it starts out mixed with the lithium and other atoms inside the droplet. As temperatures rise, the urea decomposes, forming bubbles. As heating progresses, the bubbles burst, increasing circulation, which keeps the lithium from diffusing to the surface. The lithium ends up uniformly distributed, so the final heat treatment can be very short.

    The researchers are now designing a system to suspend a droplet of their mixture so they can observe the circulation inside it, with and without the urea present. They’re also developing experiments to examine how droplets vaporize, employing tools and methods they have used in the past to study how hydrocarbons vaporize inside internal combustion engines.

    They also have ideas about how to streamline and scale up their process. In coprecipitation, the first stage takes 10 to 20 hours, so one batch at a time moves on to the second stage to be annealed. In contrast, the novel FASP process generates particles in 20 minutes or less — a rate that’s consistent with continuous processing. In their design for an “integrated synthesis system,” the particles coming out of the baghouse are deposited on a belt that carries them for 10 or 20 minutes through a furnace. A deagglomerator then breaks any attached particles apart, and the cathode powder emerges, ready to be fabricated into a high-performance cathode for a lithium-ion battery. The cathode powders for high-performance lithium-ion batteries would thus be manufactured at unprecedented speed, low cost, and low energy use.

    Deng notes that every component in their integrated system is already used in industry, generally at a large scale and high flow-through rate. “That’s why we see great potential for our technology to be commercialized and scaled up,” she says. “Where our expertise comes into play is in designing the combustion chamber to control the temperature and heating rate so as to produce particles with the desired morphology.” And while a detailed economic analysis has yet to be performed, it seems clear that their technique will be faster, the equipment simpler, and the energy use lower than other methods of manufacturing cathode materials for lithium-ion batteries — potentially a major contribution to the ongoing energy transition.

    This research was supported by the MIT Department of Mechanical Engineering.

    This article appears in the Winter 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.


    Preparing students for the new nuclear

    As nuclear power has gained greater recognition as a zero-emission energy source, the MIT Leaders for Global Operations (LGO) program has taken notice.

    Two years ago, LGO began a collaboration with MIT’s Department of Nuclear Science and Engineering (NSE) as a way to showcase the vital contribution of both business savvy and scientific rigor that LGO’s dual-degree graduates can offer this growing field.

    “We saw that the future of fission and fusion required business acumen and management acumen,” says Professor Anne White, NSE department head. “People who are going to be leaders in our discipline, and leaders in the nuclear enterprise, are going to need all of the technical pieces of the puzzle that our engineering department can provide in terms of education and training. But they’re also going to need a much broader perspective on how the technology connects with society through the lens of business.”

    The response has been positive: “Companies are seeing the value of nuclear technology for their operations,” White says, and this often happens in unexpected ways.

    For example, graduate student Santiago Andrade recently completed a research project at Caterpillar Inc., a preeminent manufacturer of mining and construction equipment. Caterpillar is one of more than 20 major companies that partner with the LGO program, offering six-month internships to each student. On the surface, it seemed like an improbable pairing; what could Andrade, who was pursuing his master’s in nuclear science and engineering, do for a manufacturing company? However, Caterpillar wanted to understand the technical and commercial feasibility of using nuclear energy to power mining sites and data centers when wind and solar weren’t viable.

    “They are leaving no stone unturned in the search of financially smart solutions that can support the transition to a clean energy dependency,” Andrade says. “My project, along with many others’, is part of this effort.”

    “The research done through the LGO program with Santiago is enabling Caterpillar to understand how alternative technologies, like the nuclear microreactor, could participate in these markets in the future,” says Brian George, product manager for large electric power solutions at Caterpillar. “Our ability to connect our customers with the research will provide for a more accurate understanding of the potential opportunity, and helps provide exposure for our customers to emerging technologies.”

    With looming threats of climate change, White says, “We’re going to require more opportunities for nuclear technologies to step in and be part of those solutions. A cohort of LGO graduates will come through this program with technical expertise — a master’s degree in nuclear engineering — and an MBA. There’s going to be a tremendous talent pool out there to help companies and governments.”

    Andrade, who completed an undergraduate degree in chemical engineering and had a strong background in thermodynamics, applied to LGO unsure of which track to choose, but he knew he wanted to confront the world’s energy challenge. When MIT Admissions suggested that he join LGO’s new nuclear track, he was intrigued by how it could further his career.

    “Since the NSE department offers opportunities ranging from energy to health care and from quantum engineering to regulatory policy, the possibilities of career tracks after graduation are countless,” he says.

    He was also inspired by the fact that, as he says, “Nuclear is one of the less-popular solutions in terms of our energy transition journey. One of the things that attracted me is that it’s not one of the most popular, but it’s one of the most useful.”

    In addition to his work at Caterpillar, Andrade connected deeply with professors. He worked closely with professors Jacopo Buongiorno and John Parsons as a research assistant, helping them develop a business model to successfully support the deployment of nuclear microreactors. After graduation, he plans to work in the clean energy sector with an eye to innovations in the nuclear energy technology space.

    His LGO classmate, Lindsey Kennington, a control systems engineer, echoes his sentiments: This is a revolutionary time for nuclear technology.

    “Before MIT, I worked on a lot of nuclear waste or nuclear weapons-related projects. All of them were fission-related. I got disillusioned because of all the bureaucracy and the regulation,” Kennington says. “However, now there are a lot of new nuclear technologies coming straight out of MIT. Commonwealth Fusion Systems, a fusion startup, represents a prime example of MIT’s close relationship to new nuclear tech. Small modular reactors are another emerging technology being developed by MIT. Exposure to these cutting-edge technologies was the main sell factor for me.”

    Kennington conducted an internship with National Grid, where she used her expertise to evaluate how existing nuclear power plants could generate hydrogen. At MIT, she studied nuclear and energy policy, which offered her additional perspective that traditional engineering classes might not have provided. Because nuclear power has long been a hot-button issue, Kennington was able to gain nuanced insight about the pathways and roadblocks to its implementation.

    “I don’t think that other engineering departments emphasize that focus on policy quite as much. [Those classes] have been one of the most enriching parts of being in the nuclear department,” she says.

    Most of all, she says, it’s a pivotal time to be part of a new, blossoming program at the forefront of clean energy, especially as fusion research grows more prevalent.

    “We’re at an inflection point,” she says. “Whether or not we figure out fusion in the next five, 10, or 20 years, people are going to be working on it — and it’s a really exciting time to not only work on the science but to actually help the funding and business side grow.”

    White puts it simply.

    “This is not your parents’ nuclear,” she says. “It’s something totally different. Our discipline is evolving so rapidly that people who have technical expertise in nuclear will have a huge advantage in this next generation.”


    How to pull carbon dioxide out of seawater

    As carbon dioxide continues to build up in the Earth’s atmosphere, research teams around the world have spent years seeking ways to remove the gas efficiently from the air. Meanwhile, the world’s number one “sink” for carbon dioxide from the atmosphere is the ocean, which soaks up some 30 to 40 percent of all of the gas produced by human activities.

    Recently, removing carbon dioxide directly from ocean water has emerged as another promising approach to mitigating CO2 emissions, one that could someday even lead to overall net negative emissions. But, like air capture systems, the idea has not yet led to any widespread use, though there are a few companies attempting to enter this area.

    Now, a team of researchers at MIT says they may have found the key to a truly efficient and inexpensive removal mechanism. The findings were reported this week in the journal Energy & Environmental Science, in a paper by MIT professors T. Alan Hatton and Kripa Varanasi, postdoc Seoni Kim, and graduate students Michael Nitzsche, Simon Rufer, and Jack Lake.

    The existing methods for removing carbon dioxide from seawater apply a voltage across a stack of membranes to acidify a feed stream by water splitting. This converts bicarbonates in the water to molecules of CO2, which can then be removed under vacuum. Hatton, who is the Ralph Landau Professor of Chemical Engineering, notes that the membranes are expensive, and chemicals are required to drive the overall electrode reactions at either end of the stack, adding further to the expense and complexity of the processes. “We wanted to avoid the need for introducing chemicals to the anode and cathode half cells and to avoid the use of membranes if at all possible,” he says.

    The team came up with a reversible process consisting of membrane-free electrochemical cells. Reactive electrodes are used to release protons to the seawater fed to the cells, driving the release of the dissolved carbon dioxide from the water. The process is cyclic: It first acidifies the water to convert dissolved inorganic bicarbonates to molecular carbon dioxide, which is collected as a gas under vacuum. Then, the water is fed to a second set of cells with a reversed voltage, to recover the protons and turn the acidic water back to alkaline before releasing it back to the sea. Periodically, the roles of the two cells are reversed once one set of electrodes is depleted of protons (during acidification) and the other has been regenerated during alkalization.
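    The cyclic, role-swapping operation can be pictured as simple bookkeeping. The sketch below is purely illustrative — hypothetical proton "capacities," no electrochemistry — and is meant only to show how the two cells trade the acidifying and alkalizing roles as their electrodes are depleted and regenerated; it is not the researchers' implementation.

```python
# Illustrative bookkeeping of the two-cell swing described above.
# Proton "capacities" and batch counts are hypothetical placeholders;
# the point is the role reversal once one set of electrodes is depleted
# of protons and the other has been regenerated.

def treat_batches(n_batches, capacity=3):
    cell_a = {"name": "A", "protons": capacity}   # starts as the acidifier
    cell_b = {"name": "B", "protons": 0}          # starts as the alkalizer
    acidifier, alkalizer = cell_a, cell_b
    for batch in range(n_batches):
        if acidifier["protons"] == 0:
            # Swap roles: the regenerated cell becomes the new acidifier.
            acidifier, alkalizer = alkalizer, acidifier
        acidifier["protons"] -= 1   # protons released into the seawater feed
        alkalizer["protons"] += 1   # reversed voltage recovers protons here
        print(f"batch {batch}: CO2 drawn off under vacuum; "
              f"acidifier={acidifier['name']}, alkalizer={alkalizer['name']}")

treat_batches(7)
```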

    This removal of carbon dioxide and reinjection of alkaline water could slowly start to reverse, at least locally, the acidification of the oceans that has been caused by carbon dioxide buildup, which in turn has threatened coral reefs and shellfish, says Varanasi, a professor of mechanical engineering. The reinjection of alkaline water could be done through dispersed outlets or far offshore to avoid a local spike of alkalinity that could disrupt ecosystems, they say.

    “We’re not going to be able to treat the entire planet’s emissions,” Varanasi says. But the reinjection might be done in some cases in places such as fish farms, which tend to acidify the water, so this could be a way of helping to counter that effect.

    Once the carbon dioxide is removed from the water, it still needs to be disposed of, as with other carbon removal processes. For example, it can be buried in deep geologic formations under the sea floor, or it can be chemically converted into a compound like ethanol, which can be used as a transportation fuel, or into other specialty chemicals. “You can certainly consider using the captured CO2 as a feedstock for chemicals or materials production, but you’re not going to be able to use all of it as a feedstock,” says Hatton. “You’ll run out of markets for all the products you produce, so no matter what, a significant amount of the captured CO2 will need to be buried underground.”

    Initially at least, the idea would be to couple such systems with existing or planned infrastructure that already processes seawater, such as desalination plants. “This system is scalable so that we could integrate it potentially into existing processes that are already processing ocean water or in contact with ocean water,” Varanasi says. There, the carbon dioxide removal could be a simple add-on to existing processes, which already return vast amounts of water to the sea, and it would not require consumables like chemical additives or membranes.

    “With desalination plants, you’re already pumping all the water, so why not co-locate there?” Varanasi says. “A bunch of capital costs associated with the way you move the water, and the permitting, all that could already be taken care of.”

    The system could also be implemented by ships that would process water as they travel, in order to help mitigate the significant contribution of ship traffic to overall emissions. There are already international mandates to lower shipping’s emissions, and “this could help shipping companies offset some of their emissions, and turn ships into ocean scrubbers,” Varanasi says.

    The system could also be implemented at locations such as offshore drilling platforms, or at aquaculture farms. Eventually, it could lead to a deployment of free-standing carbon removal plants distributed globally.

    The process could be more efficient than air-capture systems, Hatton says, because the concentration of carbon dioxide in seawater is more than 100 times greater than it is in air. In direct air-capture systems it is first necessary to capture and concentrate the gas before recovering it. “The oceans are large carbon sinks, however, so the capture step has already kind of been done for you,” he says. “There’s no capture step, only release.” That means the volumes of material that need to be handled are much smaller, potentially simplifying the whole process and reducing the footprint requirements.

    The research is continuing, with one goal being to find an alternative to the present step that requires a vacuum to remove the separated carbon dioxide from the water. Another need is to identify operating strategies to prevent precipitation of minerals that can foul the electrodes in the alkalinization cell, an inherent issue that reduces the overall efficiency in all reported approaches. Hatton notes that significant progress has been made on these issues, but that it is still too early to report on them. The team expects that the system could be ready for a practical demonstration project within about two years.

    “The carbon dioxide problem is the defining problem of our life, of our existence,” Varanasi says. “So clearly, we need all the help we can get.”

    The work was supported by ARPA-E.


    Responsive design meets responsibility for the planet’s future

    MIT senior Sylas Horowitz kneeled at the edge of a marsh, tinkering with a blue-and-black robot about the size and shape of a shoe box and studded with lights and mini propellers.

    The robot was a remotely operated vehicle (ROV) — an underwater drone slated to collect water samples from beneath a sheet of Arctic ice. But its pump wasn’t working, and its intake line was clogged with sand and seaweed.

    “Of course, something must always go wrong,” Horowitz, a mechanical engineering major with minors in energy studies and environment and sustainability, later blogged about the Falmouth, Massachusetts, field test. By making some adjustments, Horowitz was able to get the drone functioning on site.

    Through a 2020 collaboration between MIT’s Department of Mechanical Engineering and the Woods Hole Oceanographic Institution (WHOI), Horowitz had been assembling and retrofitting the high-performance ROV to measure the greenhouse gases emitted by thawing permafrost.

    The Arctic’s permafrost holds an estimated 1,700 billion metric tons of methane and carbon dioxide — roughly 50 times the amount of carbon tied to fossil fuel emissions in 2019, according to climate research from NASA’s Jet Propulsion Laboratory. WHOI scientists wanted to understand the role the Arctic plays as a greenhouse gas source or sink.

    Horowitz’s ROV would be deployed from a small boat in sub-freezing temperatures to measure carbon dioxide and methane in the water. Meanwhile, a flying drone would sample the air.

    An MIT Student Sustainability Coalition leader and one of the first members of the MIT Environmental Solutions Initiative’s Rapid Response Group, Horowitz has focused on challenges related to clean energy, climate justice, and sustainable development.

    In addition to the ROV, Horowitz has tackled engineering projects through D-Lab, where community partners from around the world work with MIT students on practical approaches to alleviating global poverty. Horowitz worked on fashioning waste bins out of heat-fused recycled plastic for underserved communities in Liberia. Their thesis project, also initiated through D-Lab, is designing and building user-friendly, space- and fuel-efficient firewood cook stoves to improve the lives of women in Santa Catarina Palopó in northern Guatemala.

    Through the Tata-MIT GridEdge Solar Research program, they helped develop flexible, lightweight solar panels to mount on the roofs of street vendors’ e-rickshaws in Bihar, India.

    The thread that runs through Horowitz’s projects is user-centered design that creates a more equitable society. “In the transition to sustainable energy, we want our technology to adapt to the society that we live in,” they say. “Something I’ve learned from the D-Lab projects and also from the ROV project is that when you’re an engineer, you need to understand the societal and political implications of your work, because all of that should get factored into the design.”

    Horowitz describes their personal mission as creating systems and technology that “serve the well-being and longevity of communities and the ecosystems we exist within.

    “I want to relate mechanical engineering to sustainability and environmental justice,” they say. “Engineers need to think about how technology fits into the greater societal context of people in the environment. We want our technology to adapt to the society we live in and for people to be able, based on their needs, to interface with the technology.”

    Imagination and inspiration

    In Dix Hills, New York, a Long Island suburb, Horowitz’s dad is in banking and their mom is a speech therapist. The family hiked together, but Horowitz doesn’t tie their love for the natural world to any one experience. “I like to play in the dirt,” they say. “I’ve always had a connection to nature. It was a kind of childlike wonder.”

    Seeing footage of the massive 2010 oil spill in the Gulf of Mexico caused by an explosion on the Deepwater Horizon oil rig — which occurred when Horowitz was around 10 — was a jarring introduction to how human activity can impact the health of the planet.

    Their first interest was art — painting and drawing portraits, album covers, and more recently, digital images such as a figure watering a houseplant at a window while lightning flashes outside; a neon pink jellyfish in a deep blue sea; and, for an MIT-wide Covid quarantine project, two figures watching the sun set over a Green Line subway platform.

    Art dovetailed into a fascination with architecture, then shifted to engineering. In high school, Horowitz and a friend were co-captains of an all-girls robotics team. “It was just really wonderful, having this community and being able to build stuff,” they say. Horowitz and another friend on the team learned they were accepted to MIT on Pi Day 2018.

    Art, architecture, engineering — “it’s all kind of the same,” Horowitz says. “I like the creative aspect of design, being able to create things out of imagination.”

    Sustaining political awareness

    At MIT, Horowitz connected with a like-minded community of makers. They also launched themself into taking action against environmental injustice.

    In 2022, through the Student Sustainability Coalition (SSC), they encouraged MIT students to get involved in advocating for the Cambridge Green New Deal, legislation aimed at reducing emissions from new large commercial buildings such as those owned by MIT and creating a green jobs training program.

    In February 2022, Horowitz took part in a sit-in in Building 3 as part of MIT Divest, a student-led initiative urging the MIT administration to divest its endowment of fossil fuel companies.

    “I want to see MIT students more locally involved in politics around sustainability, not just the technology side,” Horowitz says. “I think there’s a lot of power from students coming together. They could be really influential.”

    User-oriented design

    The Arctic underwater ROV Horowitz worked on had to be waterproof and withstand water temperatures as low as 5 degrees Fahrenheit. It was tethered to a computer by a 150-meter-long cable that had to spool and unspool without tangling. The pump and tubing that collected water samples had to work without kinking.

    “It was cool, throughout the project, to think, ‘OK, what kind of needs will these scientists have when they’re out in these really harsh conditions in the Arctic? How can I make a machine that will make their field work easier?’

    “I really like being able to design things directly with the users, working within their design constraints,” they say.

    Inevitably, snafus occurred, but in photos and videos taken the day of the Falmouth field tests, Horowitz is smiling. “Here’s a fun unexpected (or maybe quite expected) occurrence!” they reported later. “The plastic mount for the shaft collar [used in the motor’s power transmission] ripped itself apart!” Undaunted, Horowitz jury-rigged a replacement out of sheet metal.

    Horowitz replaced broken wires in the winch-like device that spooled the cable. They added a filter at the intake to prevent sand and plants from clogging the pump.

    With a few more tweaks, the ROV was ready to descend into frigid waters. Last summer, it was successfully deployed on a field run in the Canadian high Arctic. A few months later, Horowitz was slated to attend OCEANS 2022 Hampton Roads, their first professional conference, to present a poster on their contribution to the WHOI permafrost research.

    Ultimately, Horowitz hopes to pursue a career in renewable energy, sustainable design, or sustainable agriculture, or perhaps graduate studies in data science or econometrics to quantify environmental justice issues such as the disproportionate exposure to pollution among certain populations and the effect of systemic changes designed to tackle these issues.

    After completing their degree this month, Horowitz will spend six months with MIT International Science and Technology Initiatives (MISTI), which fosters partnerships with industry leaders and host organizations around the world.

    Horowitz is thinking of working with a renewable energy company in Denmark, one of the countries they toured during a summer 2019 field trip led by the MIT Energy Initiative’s Director of Education Antje Danielson. They were particularly struck by Samsø, the world’s first carbon-neutral island, run entirely on renewable energy. “It inspired me to see what’s out there when I was a sophomore,” Horowitz says. They’re ready to see where inspiration takes them next.

    This article appears in the Winter 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.


    To decarbonize the chemical industry, electrify it

    The chemical industry is the world’s largest industrial energy consumer and the third-largest source of industrial emissions, according to the International Energy Agency. In 2019, the industrial sector as a whole was responsible for 24 percent of global greenhouse gas emissions. And yet, as the world races to find pathways to decarbonization, the chemical industry has been largely untouched.

    “When it comes to climate action and dealing with the emissions that come from the chemical sector, the slow pace of progress is partly technical and partly driven by the hesitation on behalf of policymakers to overly impact the economic competitiveness of the sector,” says Dharik Mallapragada, a principal research scientist at the MIT Energy Initiative.

    With so many of the items we interact with in our daily lives — from soap to baking soda to fertilizer — deriving from products of the chemical industry, the sector has become a major source of economic activity and employment for many nations, including the United States and China. But as the global demand for chemical products continues to grow, so do the industry’s emissions.

    New sustainable chemical production methods need to be developed and deployed, and current emission-intensive chemical production technologies need to be reconsidered, urge the authors of a new paper published in Joule. Researchers from DC-MUSE, a multi-institution research initiative, argue that electrification powered by low-carbon sources should be viewed more broadly as a viable decarbonization pathway for the chemical industry. In this paper, they shine a light on different potential methods to do just that.

    “Generally, the perception is that electrification can play a role in this sector — in a very narrow sense — in that it can replace fossil fuel combustion by providing the heat that the combustion is providing,” says Mallapragada, a member of DC-MUSE. “What we argue is that electrification could be much more than that.”

    The researchers outline four technological pathways — ranging from more mature, near-term options to less technologically mature options in need of research investment — and present the opportunities and challenges associated with each.

    The first two pathways directly replace fossil fuel-produced heat (which facilitates the reactions inherent in chemical production) with electricity or electrochemically generated hydrogen. The researchers suggest that both options could be deployed now and potentially be used to retrofit existing facilities. Electrolytic hydrogen is also highlighted as an opportunity to replace fossil fuel-produced hydrogen (a process that emits carbon dioxide) as a critical chemical feedstock. In 2020, fossil-based hydrogen supplied nearly all hydrogen demand (90 megatons) in the chemical and refining industries — hydrogen’s largest consumers.

    The researchers note that increasing the role of electricity in decarbonizing the chemical industry will directly affect the decarbonization of the power grid. They stress that to successfully implement these technologies, their operation must coordinate with the power grid in a mutually beneficial manner to avoid overburdening it. “If we’re going to be serious about decarbonizing the sector and relying on electricity for that, we have to be creative in how we use it,” says Mallapragada. “Otherwise we run the risk of having addressed one problem, while creating a massive problem for the grid in the process.”

    Electrified processes have the potential to be much more flexible than conventional fossil fuel-driven processes. This can reduce the cost of chemical production by allowing producers to shift electricity consumption to times when the cost of electricity is low. “Process flexibility is particularly impactful during stressed power grid conditions and can help better accommodate renewable generation resources, which are intermittent and are often poorly correlated with daily power grid cycles,” says Yury Dvorkin, an associate research professor at the Johns Hopkins Ralph O’Connor Sustainable Energy Institute. “It’s beneficial for potential adopters because it can help them avoid consuming electricity during high-price periods.”
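    As a rough illustration of that load-shifting argument — an invented example, not a calculation from the paper — a flexible electrified process can pack its daily energy demand into the cheapest hours rather than running flat-out around the clock. All prices and plant parameters below are made up for illustration:

```python
# Hypothetical hourly electricity prices ($/MWh) over one day.
hourly_price = [80, 75, 70, 65, 60, 55, 40, 35,
                30, 25, 20, 25, 30, 35, 45, 55,
                65, 75, 85, 95, 90, 85, 82, 81]

energy_needed_mwh = 120       # assumed daily energy demand of the process
max_rate_mwh_per_h = 10       # assumed maximum consumption of the flexible plant

# Inflexible baseline: spread demand evenly over all 24 hours.
baseline_cost = sum(p * energy_needed_mwh / 24 for p in hourly_price)

# Flexible operation: fill the cheapest hours first.
cost, remaining = 0.0, energy_needed_mwh
for price in sorted(hourly_price):
    run = min(max_rate_mwh_per_h, remaining)
    cost += price * run
    remaining -= run
    if remaining == 0:
        break

print(f"Baseline cost: ${baseline_cost:,.0f}, flexible cost: ${cost:,.0f}")
```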

    Dvorkin adds that some intermediate energy carriers, such as hydrogen, can potentially be used as highly efficient energy storage for day-to-day operations and as long-term energy storage. This would help support the power grid during extreme events when traditional and renewable generators may be unavailable. “The application of long-duration storage is of particular interest as this is a key enabler of a low-emissions society, yet not widespread beyond pumped hydro units,” he says. “However, as we envision electrified chemical manufacturing, it is important to ensure that the supplied electricity is sourced from low-emission generators to prevent emissions leakages from the chemical to power sector.” 

    The next two pathways introduced — utilizing electrochemistry and plasma — are less technologically mature but have the potential to replace energy- and carbon-intensive thermochemical processes currently used in the industry. By adopting electrochemical processes or plasma-driven reactions instead, chemical transformations can occur at lower temperatures and pressures, potentially enhancing efficiency. “These reaction pathways also have the potential to enable more flexible, grid-responsive plants and the deployment of modular manufacturing plants that leverage distributed chemical feedstocks such as biomass waste — further enhancing sustainability in chemical manufacturing,” says Miguel Modestino, the director of the Sustainable Engineering Initiative at the New York University Tandon School of Engineering.

    A large barrier to deep decarbonization of chemical manufacturing relates to its complex, multi-product nature. But, according to the researchers, each of these electricity-driven pathways supports chemical industry decarbonization for various feedstock choices and end-of-life disposal decisions. Each should be evaluated in comprehensive techno-economic and environmental life cycle assessments to weigh trade-offs and establish suitable cost and performance metrics.

    Regardless of the pathway chosen, the researchers stress the need for active research and development and deployment of these technologies. They also emphasize the importance of workforce training and development running in parallel to technology development. As André Taylor, the director of DC-MUSE, explains, “There is a healthy skepticism in the industry regarding electrification and adoption of these technologies, as it involves processing chemicals in a new way.” The workforce at different levels of the industry hasn’t necessarily been exposed to ideas related to the grid, electrochemistry, or plasma. The researchers say that workforce training at all levels will help build greater confidence in these different solutions and support customer-driven industry adoption.

    “There’s no silver bullet, which is kind of the standard line with all climate change solutions,” says Mallapragada. “Each option has pros and cons, as well as unique advantages. But being aware of the portfolio of options in which you can use electricity allows us to have a better chance of success and of reducing emissions — and doing so in a way that supports grid decarbonization.”

    This work was supported, in part, by the Alfred P. Sloan Foundation.


    Computers that power self-driving cars could be a huge driver of global carbon emissions

    In the future, the energy needed to run the powerful computers on board a global fleet of autonomous vehicles could generate as many greenhouse gas emissions as all the data centers in the world today.

    That is one key finding of a new study from MIT researchers that explored the potential energy consumption and related carbon emissions if autonomous vehicles are widely adopted.

    The data centers that house the physical computing infrastructure used for running applications are widely known for their large carbon footprint: They currently account for about 0.3 percent of global greenhouse gas emissions, or about as much carbon as the country of Argentina produces annually, according to the International Energy Agency. Realizing that less attention has been paid to the potential footprint of autonomous vehicles, the MIT researchers built a statistical model to study the problem. They determined that 1 billion autonomous vehicles, each driving for one hour per day with a computer consuming 840 watts, would consume enough energy to generate about the same amount of emissions as data centers currently do.

    The researchers also found that in over 90 percent of modeled scenarios, to keep autonomous vehicle emissions from zooming past current data center emissions, each vehicle must use less than 1.2 kilowatts of power for computing, which would require more efficient hardware. In one scenario — where 95 percent of the global fleet of vehicles is autonomous in 2050, computational workloads double every three years, and the world continues to decarbonize at the current rate — they found that hardware efficiency would need to double faster than every 1.1 years to keep emissions under those levels.

    “If we just keep the business-as-usual trends in decarbonization and the current rate of hardware efficiency improvements, it doesn’t seem like it is going to be enough to constrain the emissions from computing onboard autonomous vehicles. This has the potential to become an enormous problem. But if we get ahead of it, we could design more efficient autonomous vehicles that have a smaller carbon footprint from the start,” says first author Soumya Sudhakar, a graduate student in aeronautics and astronautics.

    Sudhakar wrote the paper with her co-advisors Vivienne Sze, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE); and Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems (LIDS). The research appears today in the January-February issue of IEEE Micro.

    Modeling emissions

    The researchers built a framework to explore the operational emissions from computers on board a global fleet of electric vehicles that are fully autonomous, meaning they don’t require a back-up human driver.

    The model is a function of the number of vehicles in the global fleet, the power of each computer on each vehicle, the hours driven by each vehicle, and the carbon intensity of the electricity powering each computer.

    “On its own, that looks like a deceptively simple equation. But each of those variables contains a lot of uncertainty because we are considering an emerging application that is not here yet,” Sudhakar says.
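    To make that equation concrete, here is a minimal back-of-the-envelope sketch of the kind of calculation the model formalizes, using the fleet size, computing power, and driving hours quoted above. The grid carbon intensity is an assumed round number, not a value from the study:

```python
# Minimal sketch of the operational-emissions calculation. Fleet size,
# computing power, and driving hours come from the article; the carbon
# intensity is an assumed global-average figure for illustration.

fleet_size       = 1_000_000_000    # autonomous vehicles
compute_power_kw = 0.840            # 840 W of onboard computing per vehicle
hours_per_day    = 1.0              # driving (and computing) time per day
carbon_intensity = 0.5              # kg CO2 per kWh, assumed grid average

annual_energy_kwh   = fleet_size * compute_power_kw * hours_per_day * 365
annual_emissions_mt = annual_energy_kwh * carbon_intensity / 1e9   # megatons CO2

print(f"Computing energy: ~{annual_energy_kwh / 1e9:.0f} TWh per year")
print(f"Emissions: ~{annual_emissions_mt:.0f} Mt CO2 per year")
```

    With these assumed inputs, the fleet’s onboard computing works out to a few hundred terawatt-hours and on the order of 150 megatons of CO2 per year — the same ballpark as the data-center emissions the researchers use as their benchmark.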

    For instance, some research suggests that the amount of time driven in autonomous vehicles might increase because people can multitask while driving and the young and the elderly could drive more. But other research suggests that time spent driving might decrease because algorithms could find optimal routes that get people to their destinations faster.

    In addition to considering these uncertainties, the researchers also needed to model advanced computing hardware and software that doesn’t exist yet.

    To accomplish that, they modeled the workload of a popular algorithm for autonomous vehicles, known as a multitask deep neural network because it can perform many tasks at once. They explored how much energy this deep neural network would consume if it were processing many high-resolution inputs from many cameras with high frame rates, simultaneously.

    When they used the probabilistic model to explore different scenarios, Sudhakar was surprised by how quickly the algorithms’ workload added up.

    For example, if an autonomous vehicle has 10 deep neural networks processing images from 10 cameras, and that vehicle drives for one hour a day, it will make 21.6 million inferences each day. One billion vehicles would make 21.6 quadrillion inferences. To put that into perspective, all of Facebook’s data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion).
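    The arithmetic behind those figures can be reproduced in a few lines. The 60-frames-per-second camera rate below is an assumption chosen so that the product matches the article’s numbers; the other values are taken directly from the example:

```python
# Reproducing the inference count quoted above; the frame rate is assumed.

networks        = 10                 # deep neural networks per vehicle
cameras         = 10                 # cameras feeding the networks
fps             = 60                 # assumed frames per second per camera
driving_seconds = 3600               # one hour of driving per day

per_vehicle_per_day = networks * cameras * fps * driving_seconds
fleet_per_day       = per_vehicle_per_day * 1_000_000_000

print(f"{per_vehicle_per_day:,} inferences per vehicle per day")   # 21,600,000
print(f"{fleet_per_day:,} inferences per fleet per day")           # 21.6 quadrillion
```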

    “After seeing the results, this makes a lot of sense, but it is not something that is on a lot of people’s radar. These vehicles could actually be using a ton of computer power. They have a 360-degree view of the world, so while we have two eyes, they may have 20 eyes, looking all over the place and trying to understand all the things that are happening at the same time,” Karaman says.

    Autonomous vehicles would be used for moving goods, as well as people, so there could be a massive amount of computing power distributed along global supply chains, he says. And their model only considers computing — it doesn’t take into account the energy consumed by vehicle sensors or the emissions generated during manufacturing.

    Keeping emissions in check

    To keep emissions from spiraling out of control, the researchers found that each autonomous vehicle needs to consume less than 1.2 kilowatts of power for computing. For that to be possible, computing hardware must become more efficient at a significantly faster pace, doubling in efficiency about every 1.1 years.

    One way to boost that efficiency could be to use more specialized hardware, which is designed to run specific driving algorithms. Because researchers know the navigation and perception tasks required for autonomous driving, it could be easier to design specialized hardware for those tasks, Sudhakar says. But vehicles tend to have 10- or 20-year lifespans, so one challenge in developing specialized hardware would be to “future-proof” it so it can run new algorithms.

    In the future, researchers could also make the algorithms more efficient, so they would need less computing power. However, this is also challenging because trading off some accuracy for more efficiency could hamper vehicle safety.

    Now that they have demonstrated this framework, the researchers want to continue exploring hardware efficiency and algorithm improvements. In addition, they say their model can be enhanced by characterizing embodied carbon from autonomous vehicles — the carbon emissions generated when a car is manufactured — and emissions from a vehicle’s sensors.

    While there are still many scenarios to explore, the researchers hope that this work sheds light on a potential problem people may not have considered.

    “We are hoping that people will think of emissions and carbon efficiency as important metrics to consider in their designs. The energy consumption of an autonomous vehicle is really critical, not just for extending the battery life, but also for sustainability,” says Sze.

    This research was funded, in part, by the National Science Foundation and the MIT-Accenture Fellowship.


    A new way to assess radiation damage in reactors

    A new method could greatly reduce the time and expense needed for certain important safety checks in nuclear power reactors. The approach could save money and increase total power output in the short run, and it might increase plants’ safe operating lifetimes in the long run.

    One of the most effective ways to control greenhouse gas emissions, many analysts argue, is to prolong the lifetimes of existing nuclear power plants. But extending these plants beyond their originally permitted operating lifetimes requires monitoring the condition of many of their critical components to ensure that damage from heat and radiation has not led, and will not lead, to unsafe cracking or embrittlement.

    Today, testing of a reactor’s stainless steel components — which make up much of the plumbing systems that prevent heat buildup, as well as many other parts — requires removing test pieces, known as coupons, of the same kind of steel that are left adjacent to the actual components so they experience the same conditions. Or, it requires the removal of a tiny piece of the actual operating component. Both approaches are done during costly shutdowns of the reactor, prolonging these scheduled outages and costing millions of dollars per day.

    Now, researchers at MIT and elsewhere have come up with a new, inexpensive, hands-off test that can produce similar information about the condition of these reactor components, with far less time required during a shutdown. The findings are reported today in the journal Acta Materialia in a paper by MIT professor of nuclear science and engineering Michael Short; Saleem Al Dajani ’19, SM ’20, who did his master’s work at MIT on this project and is now a doctoral student at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia; and 13 others at MIT and other institutions.

    The test involves aiming laser beams at the stainless steel material, which generates surface acoustic waves (SAWs) on the surface. Another set of laser beams is then used to detect and measure the frequencies of these SAWs. Tests on material aged identically to that in nuclear power plants showed that the waves produced a distinctive double-peaked spectral signature when the material was degraded.

    Short and Al Dajani embarked on the process in 2018, looking for a more rapid way to detect a specific kind of degradation, called spinodal decomposition, that can take place in austenitic stainless steel, which is used for components such as the 2- to 3-foot wide pipes that carry coolant water to and from the reactor core. This process can lead to embrittlement, cracking, and potential failure in the event of an emergency.

    While spinodal decomposition is not the only type of degradation that can occur in reactor components, it is a primary concern for the lifetime and sustainability of nuclear reactors, Short says.

    “We were looking for a signal that can link material embrittlement with properties we can measure, that can be used to estimate lifetimes of structural materials,” Al Dajani says.

    They decided to try a technique Short and his students and collaborators had expanded upon, called transient grating spectroscopy, or TGS, on samples of reactor materials known to have experienced spinodal decomposition as a result of their reactor-like thermal aging history. The method uses laser beams to stimulate, and then measure, SAWs on a material. The idea was that the decomposition should slow down the rate of heat flow through the material, and that slowdown would be detectable by the TGS method.

    However, it turns out there was no such slowdown. “We went in with a hypothesis about what we would see, and we were wrong,” Short says.

    That’s often the way things work out in science, he says. “You go in guns blazing, looking for a certain thing, for a great reason, and you turn out to be wrong. But if you look carefully, you find other patterns in the data that reveal what nature actually has to say.”

    Instead, what showed up in the data was a splitting: where an undamaged sample typically produces a single SAW frequency peak, the degraded samples produced two.

    “It was a very clear pattern in the data,” Short recalls. “We just didn’t expect it, but it was right there screaming at us in the measurements.”

    Cast austenitic stainless steels like those used in reactor components are what’s known as duplex steels: by design, they are a mixture of two different crystal structures in the same material. One of the two phases is largely impervious to spinodal decomposition, while the other is vulnerable to it. When the material starts to degrade, the two phases respond at different SAW frequencies, which is exactly the split the team found in its data.

    That finding was a total surprise, though. “Some of my current and former students didn’t believe it was happening,” Short says. “We were unable to convince our own team this was happening, with the initial statistics we had.” So, they went back and carried out further tests, which continued to strengthen the significance of the results. They reached a point where the confidence level was 99.9 percent that spinodal decomposition was indeed coincident with the wave peak separation.

    “Our discussions with those who opposed our initial hypotheses ended up taking our work to the next level,” Al Dajani says.
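    The article does not say which statistical test produced that 99.9 percent figure, but the logic can be illustrated with a hedged sketch: given peak-splitting measurements from thermally aged and unaged samples (the numbers below are placeholders, not the team’s data), a two-sample test returning a p-value below 0.001 is what is conventionally reported as 99.9 percent confidence that the two groups differ.

```python
# Illustrative only: placeholder peak-splitting values (MHz) for unaged and
# thermally aged samples, compared with a Welch two-sample t-test. A p-value
# below 0.001 corresponds to the conventional 99.9 percent confidence level.
import numpy as np
from scipy import stats

unaged = np.array([0.4, 0.6, 0.5, 0.3, 0.5, 0.4])   # little or no splitting
aged   = np.array([4.8, 5.6, 5.1, 4.4, 5.9, 5.2])   # clear splitting

t_stat, p_value = stats.ttest_ind(aged, unaged, equal_var=False)
print(f"t = {t_stat:.1f}, p = {p_value:.1e}")        # p is far below 0.001 here
```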

    The tests so far have relied on large lab-based lasers and optical systems, so the next step, which the researchers are hard at work on, is miniaturizing the whole setup into an easily portable test kit that can be used to check reactor components on-site, reducing the length of shutdowns. “We’re making great strides, but we still have some way to go,” he says.

    But when they achieve that next step, he says, it could make a significant difference. “Every day that your nuclear plant goes down, for a typical gigawatt-scale reactor, you lose about $2 million a day in lost electricity,” Al Dajani says, “so shortening outages is a huge thing in the industry right now.”

    He adds that the team’s goal was to find ways to enable existing plants to operate longer: “Let them be down for less time and be as safe or safer than they are right now — not cutting corners, but using smart science to get us the same information with far less effort.” And that’s what this new technique seems to offer.

    Short hopes the technique could help extend power plant operating licenses by additional decades without compromising safety, by making frequent, simple, and inexpensive testing of the key components practical. Existing, large-scale plants “generate just shy of a billion dollars in carbon-free electricity per plant each year,” he says, whereas bringing a new plant online can take more than a decade. “To bridge that gap, keeping our current nukes online is the single biggest thing we can do to fight climate change.”
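    As a rough sanity check on those figures, and assuming a wholesale electricity price of roughly $80 to $100 per megawatt-hour (a price the article does not give), the arithmetic works out as follows:

$$
1\ \text{GW} \times 24\ \text{h/day} = 24{,}000\ \text{MWh/day}, \qquad
24{,}000\ \text{MWh/day} \times \$80/\text{MWh} \approx \$1.9\ \text{million per day}.
$$

    Over a full year at a roughly 90 percent capacity factor, the same plant delivers about 7,900 GWh, worth on the order of $0.6 to $0.8 billion at those prices, the same order of magnitude as Short’s “just shy of a billion” estimate.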

    The team included researchers at MIT, Idaho National Laboratory, the University of Manchester and Imperial College London in the UK, Oak Ridge National Laboratory, the Electric Power Research Institute, Northeastern University, the University of California at Berkeley, and KAUST. The work was supported by the International Design Center at MIT and the Singapore University of Technology and Design, the U.S. Nuclear Regulatory Commission, and the U.S. National Science Foundation.

  • in

    New MIT internships expand research opportunities in Africa

    With new support from the Office of the Associate Provost for International Activities, MIT International Science and Technology Initiatives (MISTI) and the MIT-Africa program are expanding internship opportunities for MIT students at universities and leading academic research centers in Africa. This past summer, MISTI supported 10 MIT student interns at African universities, significantly more than in any previous year.

    “These internships are an opportunity to better merge the research ecosystem of MIT with academia-based research systems in Africa,” says Evan Lieberman, the Total Professor of Political Science and Contemporary Africa and faculty director for MISTI.

    For decades, MISTI has helped MIT students to learn and explore through international experiential learning opportunities and internships in industries like health care, education, agriculture, and energy. MISTI’s MIT-Africa Seed Fund supports collaborative research between MIT faculty and Africa-based researchers, and the new student research internship opportunities are part of a broader vision for deeper engagement between MIT and research institutions across the African continent.

    While Africa is home to 12.5 percent of the world’s population, it generates less than 1 percent of scientific research output in the form of academic journal publications, according to the African Academy of Sciences. Research internships are one way that MIT can build mutually beneficial partnerships across Africa’s research ecosystem, to advance knowledge and spawn innovation in fields important to MIT and its African counterparts, including health care, biotechnology, urban planning, sustainable energy, and education.

    Ari Jacobovits, managing director of MIT-Africa, notes that the new internships provide additional funding to the lab hosting the MIT intern, enabling them to hire a counterpart student research intern from the local university. This support can make the internships more financially feasible for host institutions and helps to grow the research pipeline.

    With the support of MIT, State University of Zanzibar (SUZA) lecturers Raya Ahmada and Abubakar Bakar were able to hire local students to work alongside MIT graduate students Mel Isidor and Rajan Hoyle. The students spent the summer collaborating on a mapping project designed to help plan for and protect Zanzibar’s coastal economy.

    “It’s been really exciting to work with research peers in a setting where we can all learn alongside one another and develop this project together,” says Hoyle.

    Using low-cost drone technology, the students and their local counterparts created detailed maps of Zanzibar to support community planning around resilience projects aimed at combating coastal flooding and deforestation, and to assess climate-related impacts on seaweed farming.

    “I really appreciated learning about how engagement happens in this particular context and how community members understand local environmental challenges and conditions based on research and lived experience,” says Isidor. “This is beneficial for us whether we’re working in an international context or in the United States.”

    For biology major Shaida Nishat, an internship at the University of Cape Town offered the chance to work in a vital sphere of public health, alongside a diverse, international team headed by Associate Professor Salome Maswime, head of the global surgery division and a widely renowned expert in global surgery, a multidisciplinary field within global health focused on improved and equitable surgical outcomes.

    “It broadened my perspective as to how an effort like global surgery ties so many nations together through a common goal that would benefit them all,” says Nishat, who plans to pursue a career in public health.

    For computer science sophomore Antonio L. Ortiz Bigio, the MISTI research internship in Africa was an incomparable experience, both culturally and professionally. Bigio interned at the Robotics Autonomous Intelligence and Learning Laboratory at the University of the Witwatersrand in Johannesburg, led by Professor Benjamin Rosman, where he developed software to enable a robot to play chess. The experience has inspired Bigio to continue pursuing robotics and machine learning.

    Participating faculty at the host institutions welcomed their MIT interns, and were impressed by their capabilities. Both Rosman and Maswime described their MIT interns as hard-working and valued team members, who had helped to advance their own work.  

    Building strong global partnerships, whether through faculty research, student internships, or other initiatives, takes time and cultivation, explains Jacobovits. Each successful collaboration helps to seed future exchanges and builds interest at MIT and peer institutions in creative partnerships. As MIT continues to deepen its connections to institutions and researchers across Africa, says Jacobovits, “students like Shaida, Rajan, Mel, and Antonio are really effective ambassadors in building those networks.”