More stories

  • Study of disordered rock salts leads to battery breakthrough

    For the past decade, disordered rock salt has been studied as a potential breakthrough cathode material for use in lithium-ion batteries and a key to creating low-cost, high-energy storage for everything from cell phones to electric vehicles to renewable energy storage. A new MIT study is making sure the material fulfills that promise.

    Led by Ju Li, the Tokyo Electric Power Company Professor in Nuclear Engineering and professor of materials science and engineering, a team of researchers describes a new class of partially disordered rock salt cathode, integrated with polyanions — dubbed disordered rock salt-polyanionic spinel, or DRXPS — that delivers high energy density at high voltages with significantly improved cycling stability.

    “There is typically a trade-off in cathode materials between energy density and cycling stability … and with this work we aim to push the envelope by designing new cathode chemistries,” says Yimeng Huang, a postdoc in the Department of Nuclear Science and Engineering and first author of a paper describing the work published today in Nature Energy. “(This) material family has high energy density and good cycling stability because it integrates two major types of cathode materials, rock salt and polyanionic olivine, so it has the benefits of both.”

    Importantly, Li adds, the new material family is primarily composed of manganese, an earth-abundant element that is significantly less expensive than elements like nickel and cobalt, which are typically used in cathodes today.

    “Manganese is at least five times less expensive than nickel, and about 30 times less expensive than cobalt,” Li says. “Manganese is also one of the keys to achieving higher energy densities, so having that material be much more earth-abundant is a tremendous advantage.”

    A possible path to renewable energy infrastructure

    That advantage will be particularly critical, Li and his co-authors wrote, as the world looks to build the renewable energy infrastructure needed for a low- or no-carbon future. Batteries are a particularly important part of that picture, not only for their potential to decarbonize transportation with electric cars, buses, and trucks, but also because they will be essential to addressing the intermittency issues of wind and solar power by storing excess energy, then feeding it back into the grid at night or on calm days, when renewable generation drops. Given the high cost and relative rarity of materials like cobalt and nickel, they wrote, efforts to rapidly scale up electric storage capacity would likely lead to extreme cost spikes and potentially significant materials shortages.

    “If we want to have true electrification of energy generation, transportation, and more, we need earth-abundant batteries to store intermittent photovoltaic and wind power,” Li says. “I think this is one of the steps toward that dream.”

    That sentiment was shared by Gerbrand Ceder, the Samsung Distinguished Chair in Nanoscience and Nanotechnology Research and a professor of materials science and engineering at the University of California at Berkeley. “Lithium-ion batteries are a critical part of the clean energy transition,” Ceder says. “Their continued growth and price decrease depend on the development of inexpensive, high-performance cathode materials made from earth-abundant materials, as presented in this work.”

    Overcoming obstacles in existing materials

    The new study addresses one of the major challenges facing disordered rock salt cathodes — oxygen mobility. While the materials have long been recognized for offering very high capacity — as much as 350 milliampere-hours per gram — compared to traditional cathode materials, which typically have capacities of between 190 and 200 milliampere-hours per gram, they are not very stable. The high capacity comes in part from oxygen redox, which is activated when the cathode is charged to high voltages. But when that happens, oxygen becomes mobile, leading to reactions with the electrolyte and degradation of the material, eventually leaving it effectively useless after prolonged cycling.

    To overcome those challenges, Huang added another element — phosphorus — that essentially acts like a glue, holding the oxygen in place to mitigate degradation.

    “The main innovation here, and the theory behind the design, is that Yimeng added just the right amount of phosphorus, which formed so-called polyanions with its neighboring oxygen atoms, into a cation-deficient rock salt structure that can pin them down,” Li explains. “That allows us to basically stop the percolating oxygen transport due to strong covalent bonding between phosphorus and oxygen … meaning we can both utilize the oxygen-contributed capacity, but also have good stability as well.”

    That ability to charge batteries to higher voltages, Li says, is crucial because it allows for simpler systems to manage the energy they store. “You can say the quality of the energy is higher,” he says. “The higher the voltage per cell, the less you need to connect them in series in the battery pack, and the simpler the battery management system.”

    Pointing the way to future studies

    While the cathode material described in the study could have a transformative impact on lithium-ion battery technology, there are still several avenues for study going forward. Among the areas for future study, Huang says, are efforts to explore new ways to fabricate the material, particularly for morphology and scalability considerations.

    “Right now, we are using high-energy ball milling for mechanochemical synthesis, and … the resulting morphology is non-uniform and has small average particle size (about 150 nanometers). This method is also not quite scalable,” he says. “We are trying to achieve a more uniform morphology with larger particle sizes using some alternate synthesis methods, which would allow us to increase the volumetric energy density of the material and may allow us to explore some coating methods … which could further improve the battery performance. The future methods, of course, should be industrially scalable.”

    In addition, he says, the disordered rock salt material by itself is not a particularly good conductor, so significant amounts of carbon — as much as 20 weight percent of the cathode paste — were added to boost its conductivity. If the team can reduce the carbon content in the electrode without sacrificing performance, the battery can carry more active material, increasing its practical energy density.

    “In this paper, we just used Super P, a typical conductive carbon consisting of nanospheres, but they’re not very efficient,” Huang says. “We are now exploring using carbon nanotubes, which could reduce the carbon content to just 1 or 2 weight percent, which could allow us to dramatically increase the amount of the active cathode material.”

    Aside from decreasing carbon content, making thicker electrodes, he adds, is yet another way to increase the practical energy density of the battery, and another area of research the team is working on.

    “This is only the beginning of DRXPS research, since we only explored a few chemistries within its vast compositional space,” he continues. “We can play around with different ratios of lithium, manganese, phosphorus, and oxygen, and with various combinations of other polyanion-forming elements such as boron, silicon, and sulfur.”

    With optimized compositions, more scalable synthesis methods, better morphology that allows for uniform coatings, lower carbon content, and thicker electrodes, he says, the DRXPS cathode family is very promising for electric vehicles and grid storage, and possibly even consumer electronics, where volumetric energy density is very important.

    This work was supported with funding from the Honda Research Institute USA Inc. and the Molecular Foundry at Lawrence Berkeley National Laboratory, and used resources of the National Synchrotron Light Source II at Brookhaven National Laboratory and the Advanced Photon Source at Argonne National Laboratory.
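    To put the capacity figures above in perspective, here is a brief, illustrative calculation converting them into gravimetric energy, and showing why cutting conductive carbon matters. The capacities (350 versus roughly 200 milliampere-hours per gram) and the carbon fractions come from the article; the average voltages and the simplified electrode make-up are assumed placeholders, not values from the Nature Energy paper.

    ```python
    # Back-of-the-envelope arithmetic for the figures quoted above.
    # Capacities come from the article; the average voltages and the
    # binder-free electrode assumption are illustrative placeholders.

    def specific_energy(capacity_mah_per_g: float, avg_voltage_v: float) -> float:
        """Gravimetric energy of the active material, in Wh/kg."""
        return capacity_mah_per_g * avg_voltage_v  # (mAh/g) x V = mWh/g = Wh/kg

    drx = specific_energy(350, 3.5)           # disordered rock salt, assumed ~3.5 V average
    conventional = specific_energy(200, 3.7)  # conventional cathode, assumed ~3.7 V average
    print(f"active-material energy: ~{drx:.0f} Wh/kg vs. ~{conventional:.0f} Wh/kg")

    # The article notes that cutting conductive carbon from ~20 wt% to
    # 1-2 wt% raises the share of the electrode that actually stores energy.
    for carbon_fraction in (0.20, 0.02):
        active = 1.0 - carbon_fraction        # ignoring binder, for illustration
        print(f"{carbon_fraction:.0%} carbon -> ~{active * drx:.0f} Wh/kg of electrode")
    ```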

  • Study reveals the benefits and downside of fasting

    Low-calorie diets and intermittent fasting have been shown to have numerous health benefits: They can delay the onset of some age-related diseases and lengthen lifespan, not only in humans but in many other organisms.

    Many complex mechanisms underlie this phenomenon. Previous work from MIT has shown that one way fasting exerts its beneficial effects is by boosting the regenerative abilities of intestinal stem cells, which helps the intestine recover from injuries or inflammation.

    In a study of mice, MIT researchers have now identified the pathway that enables this enhanced regeneration, which is activated once the mice begin “refeeding” after the fast. They also found a downside to this regeneration: When cancerous mutations occurred during the regenerative period, the mice were more likely to develop early-stage intestinal tumors.

    “Having more stem cell activity is good for regeneration, but too much of a good thing over time can have less favorable consequences,” says Omer Yilmaz, an MIT associate professor of biology, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the new study.

    Yilmaz adds that further studies are needed before any conclusions can be drawn about whether fasting has a similar effect in humans. “We still have a lot to learn, but it is interesting that being in either the state of fasting or refeeding when exposure to mutagen occurs can have a profound impact on the likelihood of developing a cancer in these well-defined mouse models,” he says.

    MIT postdocs Shinya Imada and Saleh Khawaled are the lead authors of the paper, which appears today in Nature.

    Driving regeneration

    For several years, Yilmaz’s lab has been investigating how fasting and low-calorie diets affect intestinal health. In a 2018 study, his team reported that during a fast, intestinal stem cells begin to use lipids as an energy source, instead of carbohydrates. They also showed that fasting led to a significant boost in stem cells’ regenerative ability.

    However, unanswered questions remained: How does fasting trigger this boost in regenerative ability, and when does the regeneration begin?

    “Since that paper, we’ve really been focused on understanding what is it about fasting that drives regeneration,” Yilmaz says. “Is it fasting itself that’s driving regeneration, or eating after the fast?”

    In their new study, the researchers found that stem cell regeneration is suppressed during fasting but then surges during the refeeding period. The researchers followed three groups of mice — one that fasted for 24 hours, another that fasted for 24 hours and then was allowed to eat whatever it wanted during a 24-hour refeeding period, and a control group that ate whatever it wanted throughout the experiment.

    The researchers analyzed intestinal stem cells’ ability to proliferate at different time points and found that the stem cells showed the highest levels of proliferation at the end of the 24-hour refeeding period. These cells were also more proliferative than intestinal stem cells from mice that had not fasted at all.

    “We think that fasting and refeeding represent two distinct states,” Imada says. “In the fasted state, the ability of cells to use lipids and fatty acids as an energy source enables them to survive when nutrients are low. And then it’s the postfast refeeding state that really drives the regeneration. When nutrients become available, these stem cells and progenitor cells activate programs that enable them to build cellular mass and repopulate the intestinal lining.”

    Further studies revealed that these cells activate a cellular signaling pathway known as mTOR, which is involved in cell growth and metabolism. One of mTOR’s roles is to regulate the translation of messenger RNA into protein, so when it’s activated, cells produce more protein. This protein synthesis is essential for stem cells to proliferate.

    The researchers showed that mTOR activation in these stem cells also led to production of large quantities of polyamines — small molecules that help cells to grow and divide.

    “In the refed state, you’ve got more proliferation, and you need to build cellular mass. That requires more protein, to build new cells, and those stem cells go on to build more differentiated cells or specialized intestinal cell types that line the intestine,” Khawaled says.

    Too much of a good thing

    The researchers also found that when stem cells are in this highly regenerative state, they are more prone to becoming cancerous. Intestinal stem cells are among the most actively dividing cells in the body, as they help the lining of the intestine completely turn over every five to 10 days. Because they divide so frequently, these stem cells are the most common source of precancerous cells in the intestine.

    In this study, the researchers discovered that if they turned on a cancer-causing gene in the mice during the refeeding stage, the mice were much more likely to develop precancerous polyps than if the gene was turned on during the fasting state. Cancer-linked mutations that occurred during the refeeding state were also much more likely to produce polyps than mutations that occurred in mice that did not undergo the cycle of fasting and refeeding.

    “I want to emphasize that this was all done in mice, using very well-defined cancer mutations. In humans it’s going to be a much more complex state,” Yilmaz says. “But it does lead us to the following notion: Fasting is very healthy, but if you’re unlucky and you’re refeeding after a fast, and you get exposed to a mutagen, like a charred steak or something, you might actually be increasing your chances of developing a lesion that can go on to give rise to cancer.”

    Yilmaz also noted that the regenerative benefits of fasting could be significant for people who undergo radiation treatment, which can damage the intestinal lining, or other types of intestinal injury. His lab is now studying whether polyamine supplements could help to stimulate this kind of regeneration, without the need to fast.

    “This fascinating study provides insights into the complex interplay between food consumption, stem cell biology, and cancer risk,” says Ophir Klein, a professor of medicine at the University of California at San Francisco and Cedars-Sinai Medical Center, who was not involved in the study. “Their work lays a foundation for testing polyamines as compounds that may augment intestinal repair after injuries, and it suggests that careful consideration is needed when planning diet-based strategies for regeneration to avoid increasing cancer risk.”

    The research was funded, in part, by a Pew-Stewart Trust Scholar award, the Marble Center for Cancer Nanomedicine, the Koch Institute-Dana Farber/Harvard Cancer Center Bridge Project, and the MIT Stem Cell Initiative.

  • MIT engineers’ new theory could improve the design and operation of wind farms

    The blades of propellers and wind turbines are designed based on aerodynamic principles that were first described mathematically more than a century ago. But engineers have long realized that these formulas don’t work in every situation. To compensate, they have added ad hoc “correction factors” based on empirical observations.

    Now, for the first time, engineers at MIT have developed a comprehensive, physics-based model that accurately represents the airflow around rotors even under extreme conditions, such as when the blades are operating at high forces and speeds, or are angled in certain directions. The model could improve not only the way rotors themselves are designed, but also the way wind farms are laid out and operated. The new findings are described today in the journal Nature Communications, in an open-access paper by MIT postdoc Jaime Liew, doctoral student Kirby Heck, and Michael Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering.

    “We’ve developed a new theory for the aerodynamics of rotors,” Howland says. This theory can be used to determine the forces, flow velocities, and power of a rotor, whether that rotor is extracting energy from the airflow, as in a wind turbine, or applying energy to the flow, as in a ship or airplane propeller. “The theory works in both directions,” he says.

    Because the new understanding is a fundamental mathematical model, some of its implications could potentially be applied right away. For example, operators of wind farms must constantly adjust a variety of parameters, including the orientation of each turbine as well as its rotation speed and the angle of its blades, in order to maximize power output while maintaining safety margins. The new model can provide a simple, speedy way of optimizing those factors in real time.

    “This is what we’re so excited about, is that it has immediate and direct potential for impact across the value chain of wind power,” Howland says.

    Modeling the momentum

    Known as momentum theory, the previous model of how rotors interact with their fluid environment — air, water, or otherwise — was initially developed late in the 19th century. With this theory, engineers can start with a given rotor design and configuration, and determine the maximum amount of power that can be derived from that rotor — or, conversely, if it’s a propeller, how much power is needed to generate a given amount of propulsive force.

    Momentum theory equations “are the first thing you would read about in a wind energy textbook, and are the first thing that I talk about in my classes when I teach about wind power,” Howland says. From that theory, physicist Albert Betz calculated in 1920 the maximum amount of energy that could theoretically be extracted from wind. Known as the Betz limit, this amount is 59.3 percent of the kinetic energy of the incoming wind.

    But just a few years later, others found that the momentum theory broke down “in a pretty dramatic way” at higher forces that correspond to faster blade rotation speeds or different blade angles, Howland says. It fails to predict not only the amount, but even the direction of changes in thrust force at higher rotation speeds or different blade angles: Whereas the theory said the force should start going down above a certain rotation speed or blade angle, experiments show the opposite — the force continues to increase. “So, it’s not just quantitatively wrong, it’s qualitatively wrong,” Howland says.

    The theory also breaks down when there is any misalignment between the rotor and the airflow, which Howland says is “ubiquitous” on wind farms, where turbines are constantly adjusting to changes in wind direction. In fact, in an earlier paper in 2022, Howland and his team found that deliberately misaligning some turbines slightly relative to the incoming airflow within a wind farm significantly improves the overall power output of the wind farm by reducing wake disturbances to the downstream turbines.

    In the past, when designing the profile of rotor blades, the layout of wind turbines in a farm, or the day-to-day operation of wind turbines, engineers have relied on ad hoc adjustments added to the original mathematical formulas, based on some wind tunnel tests and experience with operating wind farms, but with no theoretical underpinnings.

    Instead, to arrive at the new model, the team analyzed the interaction of airflow and turbines using detailed computational modeling of the aerodynamics. They found that, for example, the original model had assumed that a drop in air pressure immediately behind the rotor would rapidly return to normal ambient pressure just a short way downstream. But it turns out, Howland says, that as the thrust force keeps increasing, “that assumption is increasingly inaccurate.”

    And the inaccuracy occurs very close to the point of the Betz limit that theoretically predicts the maximum performance of a turbine — which is precisely the desired operating regime for the turbines. “So, we have Betz’s prediction of where we should operate turbines, and within 10 percent of that operational set point that we think maximizes power, the theory completely deteriorates and doesn’t work,” Howland says.

    Through their modeling, the researchers also found a way to compensate for the original formula’s reliance on a one-dimensional model that assumed the rotor was always precisely aligned with the airflow. To do so, they used fundamental equations that were developed to predict the lift of three-dimensional wings for aerospace applications.

    The researchers derived their new model, which they call a unified momentum model, based on theoretical analysis, and then validated it using computational fluid dynamics modeling. In follow-up work not yet published, they are doing further validation using wind tunnel and field tests.

    Fundamental understanding

    One interesting outcome of the new formula is that it changes the calculation of the Betz limit, showing that it’s possible to extract a bit more power than the original formula predicted. Although it’s not a significant change — on the order of a few percent — “it’s interesting that now we have a new theory, and the Betz limit that’s been the rule of thumb for a hundred years is actually modified because of the new theory,” Howland says. “And that’s immediately useful.” The new model shows how to maximize power from turbines that are misaligned with the airflow, which the Betz limit cannot account for.

    The aspects related to controlling both individual turbines and arrays of turbines can be implemented without requiring any modifications to existing hardware in place within wind farms. In fact, this has already happened, based on earlier work from Howland and his collaborators two years ago that dealt with the wake interactions between turbines in a wind farm, and was based on the existing, empirically based formulas.

    “This breakthrough is a natural extension of our previous work on optimizing utility-scale wind farms,” he says, because in doing that analysis, they saw the shortcomings of the existing methods for analyzing the forces at work and predicting power produced by wind turbines. “Existing modeling using empiricism just wasn’t getting the job done,” he says.

    In a wind farm, individual turbines will sap some of the energy available to neighboring turbines, because of wake effects. Accurate wake modeling is important both for designing the layout of turbines in a wind farm, and also for the operation of that farm, determining moment to moment how to set the angles and speeds of each turbine in the array.

    Until now, Howland says, even the operators of wind farms, the manufacturers, and the designers of the turbine blades had no way to predict how much the power output of a turbine would be affected by a given change, such as its angle to the wind, without using empirical corrections. “That’s because there was no theory for it. So, that’s what we worked on here. Our theory can directly tell you, without any empirical corrections, for the first time, how you should actually operate a wind turbine to maximize its power,” he says.

    Because the fluid flow regimes are similar, the model also applies to propellers, whether for aircraft or ships, and also to hydrokinetic turbines such as tidal or river turbines. Although they didn’t focus on that aspect in this research, “it’s in the theoretical modeling naturally,” he says.

    The new theory exists in the form of a set of mathematical formulas that a user could incorporate in their own software, or as an open-source software package that can be freely downloaded from GitHub. “It’s an engineering model developed for fast-running tools for rapid prototyping and control and optimization,” Howland says. “The goal of our modeling is to position the field of wind energy research to move more aggressively in the development of the wind capacity and reliability necessary to respond to climate change.”

    The work was supported by the National Science Foundation and Siemens Gamesa Renewable Energy.
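    For reference, the classical result the article cites can be reproduced in a few lines. This is the textbook actuator-disk derivation behind the 59.3 percent figure, not the team’s new unified momentum model, which modifies it.

    ```latex
    % Classical momentum (actuator-disk) theory, the model the new work generalizes.
    % Let $U_\infty$ be the incoming wind speed and $a$ the axial induction factor,
    % so the flow speed is $U_\infty(1-a)$ at the rotor and $U_\infty(1-2a)$ far
    % downstream. For air density $\rho$ and rotor area $A$, the extracted power is
    \[
      P = 2\rho A U_\infty^3\, a(1-a)^2,
      \qquad
      C_P \equiv \frac{P}{\tfrac{1}{2}\rho A U_\infty^3} = 4a(1-a)^2 .
    \]
    % Maximizing over the induction factor:
    \[
      \frac{dC_P}{da} = 4(1-a)(1-3a) = 0
      \;\Rightarrow\;
      a = \tfrac{1}{3},
      \qquad
      C_P^{\max} = \tfrac{16}{27} \approx 0.593,
    \]
    % which is the Betz limit of 59.3 percent quoted above.
    ```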

  • More durable metals for fusion power reactors

    For many decades, nuclear fusion power has been viewed as the ultimate energy source. A fusion power plant could generate carbon-free energy at a scale needed to address climate change. And it could be fueled by deuterium recovered from an essentially endless source — seawater.

    Decades of work and billions of dollars in research funding have yielded many advances, but challenges remain. To Ju Li, the TEPCO Professor in Nuclear Science and Engineering and a professor of materials science and engineering at MIT, there are still two big challenges. The first is to build a fusion power plant that generates more energy than is put into it; in other words, one that produces a net output of power. Researchers worldwide are making progress toward meeting that goal.

    The second challenge that Li cites sounds straightforward: “How do we get the heat out?” But understanding the problem and finding a solution are both far from obvious.

    Research in the MIT Energy Initiative (MITEI) includes development and testing of advanced materials that may help address those challenges, as well as many other challenges of the energy transition. MITEI has multiple corporate members that have been supporting MIT’s efforts to advance technologies required to harness fusion energy.

    The problem: An abundance of helium, a destructive force

    Key to a fusion reactor is a superheated plasma — an ionized gas — that’s reacting inside a vacuum vessel. As light atoms in the plasma combine to form heavier ones, they release fast neutrons with high kinetic energy that shoot through the surrounding vacuum vessel into a coolant. During this process, those fast neutrons gradually lose their energy by causing radiation damage and generating heat. The heat that’s transferred to the coolant is eventually used to raise steam that drives an electricity-generating turbine.

    The problem is finding a material for the vacuum vessel that remains strong enough to keep the reacting plasma and the coolant apart, while allowing the fast neutrons to pass through to the coolant. If one considers only the damage due to neutrons knocking atoms out of position in the metal structure, the vacuum vessel should last a full decade. However, depending on what materials are used in the fabrication of the vacuum vessel, some projections indicate that the vacuum vessel will last only six to 12 months. Why is that? Today’s nuclear fission reactors also generate neutrons, and those reactors last far longer than a year.

    The difference is that fusion neutrons possess much higher kinetic energy than fission neutrons do, and as they penetrate the vacuum vessel walls, some of them interact with the nuclei of atoms in the structural material, giving off particles that rapidly turn into helium atoms. The result is hundreds of times more helium atoms than are present in a fission reactor. Those helium atoms look for somewhere to land — a place with low “embedding energy,” a measure that indicates how much energy it takes for a helium atom to be absorbed. As Li explains, “The helium atoms like to go to places with low helium embedding energy.” And in the metals used in fusion vacuum vessels, there are places with relatively low helium embedding energy — namely, naturally occurring openings called grain boundaries.

    Metals are made up of individual grains inside which atoms are lined up in an orderly fashion. Where the grains come together there are gaps where the atoms don’t line up as well. That open space has relatively low helium embedding energy, so the helium atoms congregate there. Worse still, helium atoms have a repellent interaction with other atoms, so the helium atoms basically push open the grain boundary. Over time, the opening grows into a continuous crack, and the vacuum vessel breaks.

    That congregation of helium atoms explains why the structure fails much sooner than expected based just on the number of helium atoms that are present. Li offers an analogy to illustrate. “Babylon is a city of a million people. But the claim is that 100 bad persons can destroy the whole city — if all those bad persons work at the city hall.” The solution? Give those bad persons other, more attractive places to go, ideally in their own villages.

    To Li, the problem and possible solution are the same in a fusion reactor. If many helium atoms go to the grain boundary at once, they can destroy the metal wall. The solution? Add a small amount of a material that has a helium embedding energy even lower than that of the grain boundary. And over the past two years, Li and his team have demonstrated — both theoretically and experimentally — that their diversionary tactic works. By adding nanoscale particles of a carefully selected second material to the metal wall, they’ve found they can keep the helium atoms that form from congregating in the structurally vulnerable grain boundaries in the metal.

    Looking for helium-absorbing compounds

    To test their idea, So Yeon Kim ScD ’23 of the Department of Materials Science and Engineering and Haowei Xu PhD ’23 of the Department of Nuclear Science and Engineering acquired a sample composed of two materials, or “phases,” one with a lower helium embedding energy than the other. They and their collaborators then implanted helium ions into the sample at a temperature similar to that in a fusion reactor and watched as bubbles of helium formed. Transmission electron microscope images confirmed that the helium bubbles occurred predominantly in the phase with the lower helium embedding energy. As Li notes, “All the damage is in that phase — evidence that it protected the phase with the higher embedding energy.”

    Having confirmed their approach, the researchers were ready to search for helium-absorbing compounds that would work well with iron, which is often the principal metal in vacuum vessel walls. “But calculating helium embedding energy for all sorts of different materials would be computationally demanding and expensive,” says Kim. “We wanted to find a metric that is easy to compute and a reliable indicator of helium embedding energy.”

    They found such a metric: the “atomic-scale free volume,” which is basically the maximum size of the internal vacant space available for helium atoms to potentially settle. “This is just the radius of the largest sphere that can fit into a given crystal structure,” explains Kim. “It is a simple calculation.” Examination of a series of possible helium-absorbing ceramic materials confirmed that atomic free volume correlates well with helium embedding energy. Moreover, many of the ceramics they investigated have higher free volume, thus lower embedding energy, than the grain boundaries do.

    However, in order to identify options for the nuclear fusion application, the screening needed to include some other factors. For example, in addition to the atomic free volume, a good second phase must be mechanically robust (able to sustain a load); it must not get very radioactive with neutron exposure; and it must be compatible — but not too cozy — with the surrounding metal, so it disperses well but does not dissolve into the metal. “We want to disperse the ceramic phase uniformly in the bulk metal to ensure that all grain boundary regions are close to the dispersed ceramic phase so it can provide protection to those regions,” says Li. “The two phases need to coexist, so the ceramic won’t either clump together or totally dissolve in the iron.”

    Using their analytical tools, Kim and Xu examined about 50,000 compounds and identified 750 potential candidates. Of those, a good option for inclusion in a vacuum vessel wall made mainly of iron was iron silicate.

    Experimental testing

    The researchers were ready to examine samples in the lab. To make the composite material for proof-of-concept demonstrations, Kim and collaborators dispersed nanoscale particles of iron silicate into iron and implanted helium into that composite material. She took X-ray diffraction (XRD) images before and after implanting the helium and also computed the XRD patterns. The ratio between the implanted helium and the dispersed iron silicate was carefully controlled to allow a direct comparison between the experimental and computed XRD patterns. The measured XRD intensity changed with the helium implantation exactly as the calculations had predicted. “That agreement confirms that atomic helium is being stored within the bulk lattice of the iron silicate,” says Kim.

    To follow up, Kim directly counted the number of helium bubbles in the composite. In iron samples without the iron silicate added, grain boundaries were flanked by many helium bubbles. In contrast, in the iron samples with the iron silicate ceramic phase added, helium bubbles were spread throughout the material, with many fewer occurring along the grain boundaries. Thus, the iron silicate had provided sites with low helium-embedding energy that lured the helium atoms away from the grain boundaries, protecting those vulnerable openings and preventing cracks from opening up and causing the vacuum vessel to fail catastrophically.

    The researchers conclude that adding just 1 percent (by volume) of iron silicate to the iron walls of the vacuum vessel will cut the number of helium bubbles in half and also reduce their diameter by 20 percent — “and having a lot of small bubbles is OK if they’re not in the grain boundaries,” explains Li.

    Next steps

    Thus far, Li and his team have gone from computational studies of the problem and a possible solution to experimental demonstrations that confirm their approach. And they’re well on their way to commercial fabrication of components. “We’ve made powders that are compatible with existing commercial 3D printers and are preloaded with helium-absorbing ceramics,” says Li. The helium-absorbing nanoparticles are well dispersed and should provide sufficient helium uptake to protect the vulnerable grain boundaries in the structural metals of the vessel walls. While Li confirms that there’s more scientific and engineering work to be done, he, along with Alexander O’Brien PhD ’23 of the Department of Nuclear Science and Engineering and Kang Pyo So, a former postdoc in the same department, have already developed a startup company that’s ready to 3D print structural materials that can meet all the challenges faced by the vacuum vessel inside a fusion reactor.

    This research was supported by Eni S.p.A. through the MIT Energy Initiative. Additional support was provided by a Kwajeong Scholarship; the U.S. Department of Energy (DOE) Laboratory Directed Research and Development program at Idaho National Laboratory; U.S. DOE Lawrence Livermore National Laboratory; and the Creative Materials Discovery Program through the National Research Foundation of Korea.
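    The “atomic-scale free volume” metric that Kim describes, the radius of the largest sphere that fits into a given crystal structure, is simple enough to sketch in code. The toy example below does a brute-force grid search over a BCC iron-like unit cell; the lattice parameter and atomic radius are illustrative textbook values, and the method is a naive stand-in for whatever the team actually used, not their pipeline.

    ```python
    import itertools
    import numpy as np

    # Illustrative sketch of the "atomic-scale free volume" metric: the
    # radius of the largest sphere that fits between atoms in a crystal.
    # BCC iron values below are approximate textbook numbers.
    a = 2.87        # BCC iron lattice parameter, angstroms
    r_atom = 1.24   # metallic radius of Fe, angstroms (approximate)

    # Fractional coordinates of the two-atom BCC basis
    basis = np.array([[0.0, 0.0, 0.0],
                      [0.5, 0.5, 0.5]])

    # Replicate neighboring cells so periodic images are accounted for
    images = np.array([b + s for b in basis
                       for s in itertools.product((-1, 0, 1), repeat=3)])

    # Grid search: the largest sphere centered at a trial point has radius
    # (distance to the nearest atom center) minus the atomic radius.
    grid = np.linspace(0.0, 1.0, 40)
    best = 0.0
    for x in grid:
        for y in grid:
            for z in grid:
                p = np.array([x, y, z])
                d = np.min(np.linalg.norm((images - p) * a, axis=1))
                best = max(best, d - r_atom)

    print(f"largest interstitial sphere radius ~ {best:.2f} angstroms")
    ```

    A candidate ceramic with a larger value of this number than the host metal’s grain boundaries would, by the article’s reasoning, offer helium a lower embedding energy and thus a more attractive landing site.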

  • MIT engineers design tiny batteries for powering cell-sized robots

    A tiny battery designed by MIT engineers could enable the deployment of cell-sized, autonomous robots for drug delivery within the human body, as well as other applications such as locating leaks in gas pipelines.

    The new battery, which is 0.1 millimeters long and 0.002 millimeters thick — roughly the thickness of a human hair — can capture oxygen from air and use it to oxidize zinc, producing up to 1 volt. That is enough to power a small circuit, sensor, or actuator, the researchers showed.

    “We think this is going to be very enabling for robotics,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT and the senior author of the study. “We’re building robotic functions onto the battery and starting to put these components together into devices.”

    Ge Zhang PhD ’22 and Sungyun Yang, an MIT graduate student, are the lead authors of the paper, which appears in Science Robotics.

    Powered by batteries

    For several years, Strano’s lab has been working on tiny robots that can sense and respond to stimuli in their environment. One of the major challenges in developing such tiny robots is making sure that they have enough power.

    Other researchers have shown that they can power microscale devices using solar power, but the limitation to that approach is that the robots must have a laser or another light source pointed at them at all times. Such devices are known as “marionettes” because they are controlled by an external power source. Putting a power source such as a battery inside these tiny devices could free them to roam much farther.

    “The marionette systems don’t really need a battery because they’re getting all the energy they need from outside,” Strano says. “But if you want a small robot to be able to get into spaces that you couldn’t access otherwise, it needs to have a greater level of autonomy. A battery is essential for something that’s not going to be tethered to the outside world.”

    To create robots that could become more autonomous, Strano’s lab decided to use a type of battery known as a zinc-air battery. These batteries, which have a longer lifespan than many other types of batteries due to their high energy density, are often used in hearing aids.

    The battery that they designed consists of a zinc electrode connected to a platinum electrode, embedded into a strip of a polymer called SU-8, which is commonly used for microelectronics. When these electrodes interact with oxygen molecules from the air, the zinc becomes oxidized and releases electrons that flow to the platinum electrode, creating a current.

    In this study, the researchers showed that this battery could provide enough energy to power an actuator — in this case, a robotic arm that can be raised and lowered. The battery could also power a memristor, an electrical component that can store memories of events by changing its electrical resistance, and a clock circuit, which allows robotic devices to keep track of time.

    The battery also provides enough power to run two different types of sensors that change their electrical resistance when they encounter chemicals in the environment. One of the sensors is made from atomically thin molybdenum disulfide and the other from carbon nanotubes.

    “We’re making the basic building blocks in order to build up functions at the cellular level,” Strano says.

    Robotic swarms

    In this study, the researchers used a wire to connect their battery to an external device, but in future work they plan to build robots in which the battery is incorporated into the device.

    “This is going to form the core of a lot of our robotic efforts,” Strano says. “You can build a robot around an energy source, sort of like you can build an electric car around the battery.”

    One of those efforts revolves around designing tiny robots that could be injected into the human body, where they could seek out a target site and then release a drug such as insulin. For use in the human body, the researchers envision that the devices would be made of biocompatible materials that would break apart once they were no longer needed.

    The researchers are also working on increasing the voltage of the battery, which may enable additional applications.

    The research was funded by the U.S. Army Research Office, the U.S. Department of Energy, the National Science Foundation, and a MathWorks Engineering Fellowship.
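    For a rough sense of scale, Faraday’s law gives the theoretical charge a zinc electrode this small could store. In the sketch below, only the overall device dimensions (0.1 millimeters long, 0.002 millimeters thick) come from the article; the electrode width and the assumption that the electrode is solid zinc are placeholders chosen purely for the estimate.

    ```python
    # Rough Faraday's-law estimate for a microscale zinc electrode. Only the
    # 0.1 mm length and 0.002 mm thickness come from the article; the width
    # and the solid-zinc assumption are hypothetical, for illustration only.

    F = 96485.0        # Faraday constant, C/mol
    M_ZN = 65.38       # molar mass of zinc, g/mol
    N_ELECTRONS = 2    # Zn -> Zn2+ + 2e-
    RHO_ZN = 7.14      # density of zinc, g/cm^3

    length_cm = 0.1 * 0.1      # 0.1 mm, converted to cm
    width_cm = 0.05 * 0.1      # assumed 0.05 mm width
    thick_cm = 0.002 * 0.1     # 0.002 mm, converted to cm

    mass_g = length_cm * width_cm * thick_cm * RHO_ZN
    charge_c = mass_g / M_ZN * N_ELECTRONS * F

    print(f"zinc mass ~ {mass_g:.2e} g")
    print(f"theoretical charge ~ {charge_c * 1e6:.0f} microcoulombs")
    print(f"theoretical specific capacity of Zn ~ {N_ELECTRONS * F / M_ZN / 3.6:.0f} mAh/g")
    ```

    Even a few hundred microcoulombs goes a long way at this scale, which is consistent with the article’s point that the cell can drive small circuits, sensors, and actuators.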

  • Study: Rocks from Mars’ Jezero Crater, which likely predate life on Earth, contain signs of water

    In a new study appearing today in the journal AGU Advances, scientists at MIT and NASA report that seven rock samples collected along the “fan front” of Mars’ Jezero Crater contain minerals that are typically formed in water. The findings suggest that the rocks were originally deposited by water, or may have formed in the presence of water.

    The seven samples were collected by NASA’s Perseverance rover in 2022 during its exploration of the crater’s western slope, where some rocks were hypothesized to have formed in what is now a dried-up ancient lake. Members of the Perseverance science team, including MIT scientists, have studied the rover’s images and chemical analyses of the samples, and confirmed that the rocks indeed contain signs of water, and that the crater was likely once a watery, habitable environment.

    Whether the crater was actually inhabited remains unknown. The team found that the presence of organic matter — the starting material for life — cannot be confirmed, at least based on the rover’s measurements. But judging from the rocks’ mineral content, scientists believe the samples are their best chance of finding signs of ancient Martian life once the rocks are returned to Earth for more detailed analysis.

    “These rocks confirm the presence, at least temporarily, of habitable environments on Mars,” says the study’s lead author, Tanja Bosak, professor of geobiology in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “What we’ve found is that indeed there was a lot of water activity. For how long, we don’t know, but certainly for long enough to create these big sedimentary deposits.”

    What’s more, some of the collected samples may have originally been deposited in the ancient lake more than 3.5 billion years ago — before even the first signs of life on Earth.

    “These are the oldest rocks that may have been deposited by water, that we’ve ever laid hands or rover arms on,” says co-author Benjamin Weiss, the Robert R. Shrock Professor of Earth and Planetary Sciences at MIT. “That’s exciting, because it means these are the most promising rocks that may have preserved fossils, and signatures of life.”

    The study’s MIT co-authors include postdoc Eva Scheller and research scientist Elias Mansbach, along with members of the Perseverance science team.

    At the front

    NASA’s Perseverance rover collected rock samples from two locations seen in this image of Mars’ Jezero Crater: “Wildcat Ridge” (lower left) and “Skinner Ridge” (upper right).

    Credit: NASA/JPL-Caltech/ASU/MSSS


    The new rock samples were collected in 2022 as part of the rover’s Fan Front Campaign — an exploratory phase during which Perseverance traversed Jezero Crater’s western slope, where a fan-like region contains sedimentary, layered rocks. Scientists suspect that this “fan front” is an ancient delta that was created by sediment that flowed with a river and settled into a now bone-dry lakebed. If life existed on Mars, scientists believe that it could be preserved in the layers of sediment along the fan front.

    In the end, Perseverance collected seven samples from various locations along the fan front. The rover obtained each sample by drilling into the Martian bedrock and extracting a pencil-sized core, which it then sealed in a tube to one day be retrieved and returned to Earth for detailed analysis.

    Composed of multiple images from NASA’s Perseverance Mars rover, this mosaic shows a rocky outcrop called “Wildcat Ridge,” where the rover extracted two rock cores and abraded a circular patch to investigate the rock’s composition.

    Credit: NASA/JPL-Caltech/ASU/MSSS


    Prior to extracting the cores, the rover took images of the surrounding sediments at each of the seven locations. The science team then processed the imaging data to estimate a sediment’s average grain size and mineral composition. This analysis showed that all seven collected samples likely contain signs of water, suggesting that they were initially deposited by water.

    Specifically, Bosak and her colleagues found evidence of certain minerals in the sediments that are known to precipitate out of water.

    “We found lots of minerals like carbonates, which are what make reefs on Earth,” Bosak says. “And it’s really an ideal material that can preserve fossils of microbial life.”

    Interestingly, the researchers also identified sulfates in some samples that were collected at the base of the fan front. Sulfates are minerals that form in very salty water — another sign that water was present in the crater at one time — though very salty water, Bosak notes, “is not necessarily the best thing for life.” If the entire crater was once filled with very salty water, then it would be difficult for any form of life to thrive. But if only the bottom of the lake were briny, that could be an advantage, at least for preserving any signs of organisms that may have lived further up, in less salty layers, and eventually died and drifted down to the bottom.

    “However salty it was, if there were any organics present, it’s like pickling something in salt,” Bosak says. “If there was life that fell into the salty layer, it would be very well-preserved.”

    Fuzzy fingerprints

    But the team emphasizes that organic matter has not been confidently detected by the rover’s instruments. Organic matter can be a sign of life, but can also be produced by certain geological processes that have nothing to do with living matter. Perseverance’s predecessor, the Curiosity rover, had detected organic matter throughout Mars’ Gale Crater, which scientists suspect may have come from asteroids that impacted Mars in the past.

    And in a previous campaign, Perseverance detected what appeared to be organic molecules at multiple locations along Jezero Crater’s floor. These observations were taken by the rover’s Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals (SHERLOC) instrument, which uses ultraviolet light to scan the Martian surface. If organics are present, they can glow, similar to material under a blacklight. The wavelengths at which the material glows act as a sort of fingerprint for the kind of organic molecules that are present.

    In Perseverance’s previous exploration of the crater floor, SHERLOC appeared to pick up signs of organic molecules throughout the region, and later, at some locations along the fan front. But a careful analysis, led by MIT’s Eva Scheller, found that while the particular wavelengths observed could be signs of organic matter, they could just as well be signatures of substances that have nothing to do with organic matter.

    “It turns out that cerium metals incorporated in minerals actually produce very similar signals as the organic matter,” Scheller says. “When investigated, the potential organic signals were strongly correlated with phosphate minerals, which always contain some cerium.”

    Scheller’s work shows that the rover’s measurements cannot be interpreted definitively as organic matter.

    “This is not bad news,” Bosak says. “It just tells us there is not very abundant organic matter. It’s still possible that it’s there. It’s just below the rover’s detection limit.”

    When the collected samples are finally sent back to Earth, Bosak says, laboratory instruments will have more than enough sensitivity to detect any organic matter that might lie within.

    “On Earth, once we have microscopes with nanometer-scale resolution, and various types of instruments that we cannot staff on one rover, then we can actually attempt to look for life,” she says.

    This work was supported, in part, by NASA.
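    The cross-check Scheller describes, asking whether an “organic-looking” fluorescence signal instead tracks a mineral host, boils down to a correlation test. The sketch below illustrates that logic on synthetic numbers; it is not SHERLOC data or the team’s actual analysis pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-ins for measurements at many spots on a rock patch:
    # a phosphate-mineral abundance and a fluorescence intensity that, in
    # this toy case, is driven by cerium in the phosphates, not organics.
    phosphate = rng.uniform(0.0, 1.0, 200)
    fluorescence = 0.8 * phosphate + rng.normal(0.0, 0.1, 200)

    # A strong correlation between the "organic-like" signal and the mineral
    # host is the red flag described above: the glow may be cerium, not life.
    r = np.corrcoef(phosphate, fluorescence)[0, 1]
    print(f"correlation between fluorescence and phosphate abundance: r = {r:.2f}")
    ```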

  • MIT School of Science launches Center for Sustainability Science and Strategy

    The MIT School of Science is launching a center to advance knowledge and computational capabilities in the field of sustainability science, and to support decision-makers in government, industry, and civil society in achieving sustainable development goals. Aligned with the Climate Project at MIT, researchers at the MIT Center for Sustainability Science and Strategy will develop and apply expertise from across the Institute to improve understanding of sustainability challenges, and thereby provide actionable knowledge and insight to inform strategies for improving human well-being for current and future generations.

    Noelle Selin, professor at MIT’s Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences, will serve as the center’s inaugural faculty director. C. Adam Schlosser and Sergey Paltsev, senior research scientists at MIT, will serve as deputy directors, with Anne Slinn as executive director.

    Incorporating and succeeding both the Center for Global Change Science and the Joint Program on the Science and Policy of Global Change while adding new capabilities, the center aims to produce leading-edge research to help guide societal transitions toward a more sustainable future. Drawing on the long history of MIT’s efforts to address global change and its integrated environmental and human dimensions, the center is well-positioned to lead burgeoning global efforts to advance the field of sustainability science, which seeks to understand nature-society systems in their full complexity. This understanding is designed to be relevant and actionable for decision-makers in government, industry, and civil society in their efforts to develop viable pathways to improve quality of life for multiple stakeholders.

    “As critical challenges such as climate, health, energy, and food security increasingly affect people’s lives around the world, decision-makers need a better understanding of the earth in its full complexity — and that includes people, technologies, and institutions as well as environmental processes,” says Selin. “Better knowledge of these systems and how they interact can lead to more effective strategies that avoid unintended consequences and ensure an improved quality of life for all.”

    Advancing knowledge, computational capability, and decision support

    To produce more precise and comprehensive knowledge of sustainability challenges and guide decision-makers to formulate more effective strategies, the center has set the following goals:

    • Advance fundamental understanding of the complex interconnected physical and socio-economic systems that affect human well-being. As new policies and technologies are developed amid climate and other global changes, they interact with environmental processes and institutions in ways that can alter the earth’s critical life-support systems. Fundamental mechanisms that determine many of these systems’ behaviors, including those related to interacting climate, water, food, and socio-economic systems, remain largely unknown and poorly quantified. Better understanding can help society mitigate the risks of abrupt changes and “tipping points” in these systems.

    • Develop, establish, and disseminate new computational tools toward better understanding earth systems, including both environmental and human dimensions. The center’s work will integrate modeling and data analysis across disciplines in an era of increasing volumes of observational data. MIT multi-system models and data products will provide robust information to inform decision-making and shape the next generation of sustainability science and strategy.

    • Produce actionable science that supports equity and justice within and across generations. The center’s research will be designed to inform action associated with measurable outcomes aligned with supporting human well-being across generations. This requires engaging a broad range of stakeholders, including not only nations and companies, but also nongovernmental organizations and communities that take action to promote sustainable development — with special attention to those who have historically borne the brunt of environmental injustice.

    “The center’s work will advance fundamental understanding in sustainability science, leverage leading-edge computing and data, and promote engagement and impact,” says Selin. “Our researchers will help lead scientists and strategists across the globe who share MIT’s commitment to mobilizing knowledge to inform action toward a more sustainable world.”

    Building a better world at MIT

    Building on existing MIT capabilities in sustainability, science, and strategy, the center aims to:

    • focus research, education, and outreach under a theme that reflects a comprehensive state of the field and international research directions, fostering a dynamic community of students, researchers, and faculty;

    • raise the visibility of sustainability science at MIT, emphasizing links between science and action, in the context of existing Institute goals and other efforts on climate and sustainability, and in a way that reflects the vital contributions of a range of natural and social science disciplines to understanding human-environment systems; and

    • re-emphasize MIT’s long-standing expertise in integrated systems modeling while leveraging the Institute’s concurrent leading-edge strengths in data and computing, establishing leadership that harnesses recent innovations, including those in machine learning and artificial intelligence, toward addressing the science challenges of global change and sustainability.

    “The Center for Sustainability Science and Strategy will provide the necessary synergy for our MIT researchers to develop, deploy, and scale up serious solutions to climate change and other critical sustainability challenges,” says Nergis Mavalvala, the Curtis and Kathleen Marble Professor of Astrophysics and dean of the MIT School of Science. “With Professor Selin at its helm, the center will also ensure that these solutions are created in concert with the people who are directly affected now and in the future.”

    The center builds on more than three decades of achievements by the Center for Global Change Science and the Joint Program on the Science and Policy of Global Change, both of which were directed or co-directed by professor of atmospheric science Ronald Prinn.

  • Scientists find a human “fingerprint” in the upper troposphere’s increasing ozone

    Ozone can be an agent of good or harm, depending on where you find it in the atmosphere. Way up in the stratosphere, the colorless gas shields the Earth from the sun’s harsh ultraviolet rays. But closer to the ground, ozone is a harmful air pollutant that can trigger chronic health problems including chest pain, difficulty breathing, and impaired lung function.

    And somewhere in between, in the upper troposphere — the layer of the atmosphere just below the stratosphere, where most aircraft cruise — ozone contributes to warming the planet as a potent greenhouse gas.

    There are signs that ozone is continuing to rise in the upper troposphere despite efforts to reduce its sources at the surface in many nations. Now, MIT scientists confirm that much of ozone’s increase in the upper troposphere is likely due to humans.

    In a paper appearing today in the journal Environmental Science and Technology, the team reports that they detected a clear signal of human influence on upper tropospheric ozone trends in a 17-year satellite record starting in 2005.

    “We confirm that there’s a clear and increasing trend in upper tropospheric ozone in the northern midlatitudes due to human beings rather than climate noise,” says study lead author Xinyuan Yu, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    “Now we can do more detective work and try to understand what specific human activities are leading to this ozone trend,” adds co-author Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor in Earth, Atmospheric and Planetary Sciences.

    The study’s MIT authors include Sebastian Eastham and Qindan Zhu, along with Benjamin Santer at the University of California at Los Angeles, Gustavo Correa of Columbia University, Jean-François Lamarque at the National Center for Atmospheric Research, and Jerald Ziemke at NASA Goddard Space Flight Center.

    Ozone’s tangled web

    Understanding ozone’s causes and influences is a challenging exercise. Ozone is not emitted directly, but instead is a product of “precursors” — starting ingredients, such as nitrogen oxides and volatile organic compounds (VOCs), that react in the presence of sunlight to form ozone. These precursors are generated by vehicle exhaust, power plants, chemical solvents, industrial processes, aircraft emissions, and other human activities.

    Whether and how long ozone lingers in the atmosphere depends on a tangle of variables, including the type and extent of human activities in a given area, as well as natural climate variability. For instance, a strong El Niño year could nudge the atmosphere’s circulation in a way that affects ozone’s concentrations, regardless of how much ozone humans are contributing to the atmosphere that year.

    Disentangling the human- versus climate-driven causes of ozone trends, particularly in the upper troposphere, is especially tricky. Complicating matters is the fact that in the lower troposphere — the lowest layer of the atmosphere, closest to ground level — ozone has stopped rising, and has even fallen in some regions at northern midlatitudes in the last few decades. This decrease in lower tropospheric ozone is mainly a result of efforts in North America and Europe to reduce industrial sources of air pollution.

    “Near the surface, ozone has been observed to decrease in some regions, and its variations are more closely linked to human emissions,” Yu notes. “In the upper troposphere, the ozone trends are less well-monitored but seem to decouple from those near the surface, and ozone there is more easily influenced by climate variability. So, we don’t know whether and how much of the increase in observed ozone in the upper troposphere is attributable to humans.”

    A human signal amid climate noise

    Yu and Fiore wondered whether a human “fingerprint” in ozone levels, caused directly by human activities, could be strong enough to be detectable in satellite observations of the upper troposphere. To see such a signal, the researchers would first have to know what to look for.

    For this, they looked to simulations of the Earth’s climate and atmospheric chemistry. Following approaches developed in climate science, they reasoned that if they could simulate a number of possible climate variations in recent decades, all with identical human-derived sources of ozone precursor emissions, but each starting from a slightly different climate condition, then any differences among the scenarios should be due to climate noise. By inference, any common signal that emerged when averaging over the simulated scenarios should be due to human-driven causes. Such a signal would be a “fingerprint” revealing human-caused ozone, which the team could look for in actual satellite observations.

    With this strategy in mind, the team ran simulations using a state-of-the-art chemistry climate model. They ran multiple climate scenarios, each starting from the year 1950 and running through 2014.

    From their simulations, the team saw a clear and common signal across scenarios, which they identified as a human fingerprint. They then looked to tropospheric ozone products derived from multiple instruments aboard NASA’s Aura satellite.

    “Quite honestly, I thought the satellite data were just going to be too noisy,” Fiore admits. “I didn’t expect that the pattern would be robust enough.”

    But the satellite observations they used gave them a good enough shot. The team looked through the upper tropospheric ozone data derived from the satellite products, from the years 2005 to 2021, and found that, indeed, they could see the signal of human-caused ozone that their simulations predicted. The signal is especially pronounced over Asia, where industrial activity has risen significantly in recent decades and where abundant sunlight and frequent weather events loft pollution, including ozone and its precursors, to the upper troposphere.

    Yu and Fiore are now looking to identify the specific human activities that are leading to ozone’s increase in the upper troposphere.

    “Where is this increasing trend coming from? Is it the near-surface emissions from combusting fossil fuels in vehicle engines and power plants? Is it the aircraft that are flying in the upper troposphere? Is it the influence of wildland fires? Or some combination of all of the above?” Fiore says. “Being able to separate human-caused impacts from natural climate variations can help to inform strategies to address climate change and air pollution.”

    This research was funded, in part, by NASA.
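    The ensemble-averaging logic described above can be illustrated in a few lines. The numbers below are synthetic and purely illustrative, not the study’s model output or satellite data, but they show how a shared forced trend survives averaging while internal variability cancels out.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy illustration of the fingerprinting logic: every ensemble member
    # shares the same human-driven trend but has different internal
    # "climate noise", so the ensemble mean converges on the forced signal.
    years = np.arange(2005, 2022)
    forced = 0.3 * (years - years[0])       # shared human-driven trend (arbitrary units)
    members = [forced + rng.normal(0, 1.5, years.size) for _ in range(20)]

    fingerprint = np.mean(members, axis=0)  # noise averages out, signal remains

    # "Observations": the same forced signal plus one realization of noise
    obs = forced + rng.normal(0, 1.5, years.size)

    # Project the observations onto the fingerprint; a projection well above
    # what noise alone produces amounts to a detected human signal.
    proj = np.dot(obs, fingerprint) / np.dot(fingerprint, fingerprint)
    print(f"projection of observations onto fingerprint: {proj:.2f} (1.0 = perfect match)")
    ```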