More stories

  • Solar-powered desalination system requires no extra batteries

    MIT engineers have built a new desalination system that runs with the rhythms of the sun. The solar-powered system removes salt from water at a pace that closely follows changes in solar energy. As sunlight increases through the day, the system ramps up its desalting process and automatically adjusts to any sudden variation in sunlight, for example by dialing down in response to a passing cloud or revving up as the skies clear.

    Because the system can quickly react to subtle changes in sunlight, it maximizes the utility of solar energy, producing large quantities of clean water despite variations in sunlight throughout the day. In contrast to other solar-driven desalination designs, the MIT system requires no extra batteries for energy storage, nor a supplemental power supply, such as from the grid.

    The engineers tested a community-scale prototype on groundwater wells in New Mexico over six months, working in variable weather conditions and water types. The system harnessed on average over 94 percent of the electrical energy generated from the system’s solar panels to produce up to 5,000 liters of water per day despite large swings in weather and available sunlight.

    “Conventional desalination technologies require steady power and need battery storage to smooth out a variable power source like solar. By continually varying power consumption in sync with the sun, our technology directly and efficiently uses solar power to make water,” says Amos Winter, the Germeshausen Professor of Mechanical Engineering and director of the K. Lisa Yang Global Engineering and Research (GEAR) Center at MIT. “Being able to make drinking water with renewables, without requiring battery storage, is a massive grand challenge. And we’ve done it.”

    The system is geared toward desalinating brackish groundwater — a salty source of water that is found in underground reservoirs and is more prevalent than fresh groundwater resources.
    The researchers see brackish groundwater as a huge untapped source of potential drinking water, particularly as reserves of fresh water are stressed in parts of the world. They envision that the new renewable, battery-free system could provide much-needed drinking water at low costs, especially for inland communities where access to seawater and grid power are limited.

    “The majority of the population actually lives far enough from the coast that seawater desalination could never reach them. They consequently rely heavily on groundwater, especially in remote, low-income regions. And unfortunately, this groundwater is becoming more and more saline due to climate change,” says Jonathan Bessette, MIT PhD student in mechanical engineering. “This technology could bring sustainable, affordable clean water to underreached places around the world.”

    The researchers report details of the new system in a paper appearing today in Nature Water. The study’s co-authors are Bessette, Winter, and staff engineer Shane Pratt.

    Pump and flow

    The new system builds on a previous design, which Winter and his colleagues, including former MIT postdoc Wei He, reported earlier this year. That system aimed to desalinate water through “flexible batch electrodialysis.”

    Electrodialysis and reverse osmosis are two of the main methods used to desalinate brackish groundwater. With reverse osmosis, pressure is used to pump salty water through a membrane and filter out salts. Electrodialysis uses an electric field to draw out salt ions as water is pumped through a stack of ion-exchange membranes.

    Scientists have looked to power both methods with renewable sources.
    But this has been especially challenging for reverse osmosis systems, which traditionally run at a steady power level that’s incompatible with naturally variable energy sources such as the sun.

    Winter, He, and their colleagues focused on electrodialysis, seeking ways to make a more flexible, “time-variant” system that would be responsive to variations in renewable, solar power.

    In their previous design, the team built an electrodialysis system consisting of water pumps, an ion-exchange membrane stack, and a solar panel array. The innovation in this system was a model-based control system that used sensor readings from every part of the system to predict the optimal rate at which to pump water through the stack, and the voltage that should be applied to the stack, to maximize the amount of salt drawn out of the water.

    When the team tested this system in the field, it was able to vary its water production with the sun’s natural variations. On average, the system directly used 77 percent of the available electrical energy produced by the solar panels, which the team estimated was 91 percent more than traditionally designed solar-powered electrodialysis systems.

    Still, the researchers felt they could do better.

    “We could only calculate every three minutes, and in that time, a cloud could literally come by and block the sun,” Winter says. “The system could be saying, ‘I need to run at this high power.’ But some of that power has suddenly dropped because there’s now less sunlight. So, we had to make up that power with extra batteries.”

    Solar commands

    In their latest work, the researchers looked to eliminate the need for batteries by shaving the system’s response time to a fraction of a second. The new system is able to update its desalination rate three to five times per second.
    The faster response time enables the system to adjust to changes in sunlight throughout the day, without having to make up any lag in power with additional power supplies.

    The key to the nimbler desalting is a simpler control strategy, devised by Bessette and Pratt. The new strategy is one of “flow-commanded current control,” in which the system first senses the amount of solar power that is being produced by the system’s solar panels. If the panels are generating more power than the system is using, the controller automatically “commands” the system to dial up its pumping, pushing more water through the electrodialysis stacks. Simultaneously, the system diverts some of the additional solar power by increasing the electrical current delivered to the stack, to drive more salt out of the faster-flowing water.

    “Let’s say the sun is rising every few seconds,” Winter explains. “So, three times a second, we’re looking at the solar panels and saying, ‘Oh, we have more power — let’s bump up our flow rate and current a little bit.’ When we look again and see there’s still more excess power, we’ll up it again. As we do that, we’re able to closely match our consumed power with available solar power really accurately, throughout the day. And the quicker we loop this, the less battery buffering we need.”

    The engineers incorporated the new control strategy into a fully automated system that they sized to desalinate brackish groundwater at a daily volume that would be enough to supply a small community of about 3,000 people. They operated the system for six months on several wells at the Brackish Groundwater National Desalination Research Facility in Alamogordo, New Mexico.
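    The flow-commanded loop described above can be sketched in a few lines of Python. This is an illustrative simplification, not the team's actual controller: the function name, gains, and limits are invented for the example, and real hardware would add sensor filtering and safety interlocks.

```python
def flow_commanded_step(available_w, consumed_w, flow_lpm, current_a,
                        flow_gain=0.05, current_gain=0.02,
                        max_flow=40.0, max_current=30.0):
    """One iteration of a flow-commanded current control loop
    (the real system runs a loop like this several times per second).

    If the panels are producing more power than the system consumes,
    nudge the pump flow rate up and divert the rest into stack current;
    if less (say, a passing cloud), nudge both down. All constants are
    illustrative placeholders, not values from the paper.
    """
    surplus_w = available_w - consumed_w
    # Command the pumps first ("flow-commanded") ...
    flow_lpm = min(max(flow_lpm + flow_gain * surplus_w, 0.0), max_flow)
    # ... then adjust the electrodialysis stack current to absorb the rest.
    current_a = min(max(current_a + current_gain * surplus_w, 0.0), max_current)
    return flow_lpm, current_a

# Example: a cloud passes and available power drops below consumption,
# so both the flow rate and the stack current are dialed down.
flow, current = flow_commanded_step(available_w=800.0, consumed_w=1000.0,
                                    flow_lpm=20.0, current_a=10.0)
```

    Because each step only nudges the operating point, looping faster (as the article notes) keeps consumed power closely matched to available solar power without battery buffering.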
    Throughout the trial, the prototype operated under a wide range of solar conditions, harnessing over 94 percent of the solar panels’ electrical energy, on average, to directly power desalination.

    “Compared to how you would traditionally design a solar desal system, we cut our required battery capacity by almost 100 percent,” Winter says.

    The engineers plan to further test and scale up the system in hopes of supplying larger communities, and even whole municipalities, with low-cost, fully sun-driven drinking water.

    “While this is a major step forward, we’re still working diligently to continue developing lower-cost, more sustainable desalination methods,” Bessette says.

    “Our focus now is on testing, maximizing reliability, and building out a product line that can provide desalinated water using renewables to multiple markets around the world,” Pratt adds.

    The team will be launching a company based on their technology in the coming months.

    This research was supported in part by the National Science Foundation, the Julia Burke Foundation, and the MIT Morningside Academy of Design. This work was additionally supported in-kind by Veolia Water Technologies and Solutions and Xylem Goulds.

  • Study evaluates impacts of summer heat in U.S. prison environments

    When summer temperatures spike, so does our vulnerability to heat-related illness or even death. For the most part, people can take measures to reduce their heat exposure by opening a window, turning up the air conditioning, or simply getting a glass of water. But for people who are incarcerated, taking such measures is often not an option. Prison populations therefore are especially vulnerable to heat exposure, due to their conditions of confinement.

    A new study by MIT researchers examines summertime heat exposure in prisons across the United States and identifies characteristics within prison facilities that can further contribute to a population’s vulnerability to summer heat.

    The study’s authors used high-spatial-resolution air temperature data to determine the daily average outdoor temperature for each of 1,614 prisons in the U.S., for every summer between the years 1990 and 2023. They found that the prisons exposed to the most extreme heat are located in the southwestern U.S., while prisons with the biggest changes in summertime heat, compared to the historical record, are in the Pacific Northwest, the Northeast, and parts of the Midwest.

    Those findings are not entirely unique to prisons, as any non-prison facility or community in the same geographic locations would be exposed to similar outdoor air temperatures. But the team also looked at characteristics specific to prison facilities that could further exacerbate an incarcerated person’s vulnerability to heat exposure. They identified nine such facility-level characteristics, such as highly restricted movement, poor staffing, and inadequate mental health treatment. People living and working in prisons with any one of these characteristics may experience compounded risk to summertime heat. The team also looked at the demographics of 1,260 prisons in their study and found that the prisons with higher heat exposure on average also had higher proportions of non-white and Hispanic populations.
    The study, appearing today in the journal GeoHealth, provides policymakers and community leaders with ways to estimate, and take steps to address, a prison population’s heat risk, which the researchers anticipate could worsen with climate change.

    “This isn’t a problem because of climate change. It’s becoming a worse problem because of climate change,” says study lead author Ufuoma Ovienmhada SM ’20, PhD ’24, a graduate of the MIT Media Lab, who recently completed her doctorate in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “A lot of these prisons were not built to be comfortable or humane in the first place. Climate change is just aggravating the fact that prisons are not designed to enable incarcerated populations to moderate their own exposure to environmental risk factors such as extreme heat.”

    The study’s co-authors include Danielle Wood, MIT associate professor of media arts and sciences, and of AeroAstro; and Brent Minchew, MIT associate professor of geophysics in the Department of Earth, Atmospheric and Planetary Sciences; along with Ahmed Diongue ’24, Mia Hines-Shanks of Grinnell College, and Michael Krisch of Columbia University.

    Environmental intersections

    The new study is an extension of work carried out at the Media Lab, where Wood leads the Space Enabled research group. The group aims to advance social and environmental justice issues through the use of satellite data and other space-enabled technologies.

    The group’s motivation to look at heat exposure in prisons came in 2020 when, as co-president of MIT’s Black Graduate Student Union, Ovienmhada took part in community organizing efforts following the murder of George Floyd by Minneapolis police.

    “We started to do more organizing on campus around policing and reimagining public safety.
    Through that lens I learned more about police and prisons as interconnected systems, and came across this intersection between prisons and environmental hazards,” says Ovienmhada, who is leading an effort to map the various environmental hazards that prisons, jails, and detention centers face. “In terms of environmental hazards, extreme heat causes some of the most acute impacts for incarcerated people.”

    She, Wood, and their colleagues set out to use Earth observation data to characterize U.S. prison populations’ vulnerability, or their risk of experiencing negative impacts, from heat.

    The team first looked through a database maintained by the U.S. Department of Homeland Security that lists the location and boundaries of carceral facilities in the U.S. From the database’s more than 6,000 prisons, jails, and detention centers, the researchers highlighted 1,614 prison-specific facilities, which together incarcerate nearly 1.4 million people and employ about 337,000 staff.

    They then looked to Daymet, a detailed weather and climate database that tracks daily temperatures across the United States at a 1-kilometer resolution. For each of the 1,614 prison locations, they mapped the daily outdoor temperature for every summer between the years 1990 and 2023, noting that the majority of current state and federal correctional facilities in the U.S. were built by 1990.

    The team also obtained U.S. Census data on each facility’s demographic and facility-level characteristics, such as prison labor activities and conditions of confinement. One limitation of the study that the researchers acknowledge is a lack of information regarding a prison’s climate control.

    “There’s no comprehensive public resource where you can look up whether a facility has air conditioning,” Ovienmhada notes.
    “Even in facilities with air conditioning, incarcerated people may not have regular access to those cooling systems, so our measurements of outdoor air temperature may not be far off from reality.”

    Heat factors

    From their analysis, the researchers found that more than 98 percent of all prisons in the U.S. experienced at least 10 days in the summer that were hotter than every previous summer, on average, for a given location. Their analysis also revealed that the most heat-exposed prisons, and the prisons that experienced the highest temperatures on average, were mostly in the southwestern U.S. The researchers note that, with the exception of New Mexico, the Southwest is a region where there are no universal air conditioning regulations in state-operated prisons.

    “States run their own prison systems, and there is no uniformity of data collection or policy regarding air conditioning,” says Wood, who notes that there is some information on cooling systems in some states and individual prison facilities, but the data is sparse overall, and too inconsistent to include in the group’s nationwide study.

    While the researchers could not incorporate air conditioning data, they did consider other facility-level factors that could worsen the effects that outdoor heat triggers. They looked through the scientific literature on heat, health impacts, and prison conditions, and focused on 17 measurable facility-level variables that contribute to heat-related health problems. These include factors such as overcrowding and understaffing.

    “We know that whenever you’re in a room that has a lot of people, it’s going to feel hotter, even if there’s air conditioning in that environment,” Ovienmhada says. “Also, staffing is a huge factor. Facilities that don’t have air conditioning but still try to do heat risk-mitigation procedures might rely on staff to distribute ice or water every few hours.
    If that facility is understaffed or has neglectful staff, that may increase people’s susceptibility to hot days.”

    The study found that prisons with any of nine of the 17 variables showed statistically significantly greater heat exposures than prisons without those variables. Additionally, if a prison exhibits any one of the nine variables, this could worsen people’s heat risk through the combination of elevated heat exposure and vulnerability. The variables, the researchers say, could help state regulators and activists identify prisons to prioritize for heat interventions.

    “The prison population is aging, and even if you’re not in a ‘hot state,’ every state has responsibility to respond,” Wood emphasizes. “For instance, areas in the Northwest, where you might expect to be temperate overall, have experienced a number of days in recent years of increasing heat risk. A few days out of the year can still be dangerous, particularly for a population with reduced agency to regulate their own exposure to heat.”

    This work was supported, in part, by NASA, the MIT Media Lab, and MIT’s Institute for Data, Systems and Society’s Research Initiative on Combatting Systemic Racism.
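    The headline exposure metric — summer days hotter than anything in the historical record at that location — can be approximated from a daily temperature series. Below is a minimal sketch of one plausible reading of that metric, using made-up numbers; it is not the authors' code, and the real study worked from Daymet's 1-kilometer daily temperature data.

```python
def record_hot_days(summers):
    """For each summer after the first, count the days hotter than every
    day in all previous summers at the same location (one plausible
    reading of the "hotter than every previous summer" metric).

    `summers` is a list of daily-mean temperature lists, ordered by year;
    the values below are illustrative, in degrees Celsius.
    """
    counts = []
    running_max = max(summers[0])  # hottest day seen so far
    for season in summers[1:]:
        counts.append(sum(1 for t in season if t > running_max))
        running_max = max(running_max, max(season))
    return counts

# Toy example: three 5-day "summers" at one location.
counts = record_hot_days([
    [30, 31, 29, 32, 30],   # baseline summer; record is 32
    [31, 33, 34, 30, 29],   # two days (33, 34) beat the old record
    [35, 34, 33, 36, 31],   # record is now 34; two days (35, 36) exceed it
])
```

    Applied per facility over the 1990-2023 summers, a count like this is what the study aggregates when it reports that more than 98 percent of prisons saw at least 10 such days.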

  • New filtration material could remove long-lasting chemicals from water

    Water contamination by the chemicals used in today’s technology is a rapidly growing problem globally. A recent study by the U.S. Centers for Disease Control found that 98 percent of people tested had detectable levels of PFAS, a family of particularly long-lasting compounds also known as “forever chemicals,” in their bloodstream.

    A new filtration material developed by researchers at MIT might provide a nature-based solution to this stubborn contamination issue. The material, based on natural silk and cellulose, can remove a wide variety of these persistent chemicals, as well as heavy metals. And its antimicrobial properties can help keep the filters from fouling.

    The findings are described in the journal ACS Nano, in a paper by MIT postdoc Yilin Zhang, professor of civil and environmental engineering Benedetto Marelli, and four others from MIT.

    PFAS chemicals are present in a wide range of products, including cosmetics, food packaging, water-resistant clothing, firefighting foams, and antistick coatings for cookware. A recent study identified 57,000 sites contaminated by these chemicals in the U.S. alone. The U.S. Environmental Protection Agency has estimated that PFAS remediation will cost $1.5 billion per year in order to meet new regulations that call for limiting the compounds to less than 7 parts per trillion in drinking water.

    Contamination by PFAS and similar compounds “is actually a very big deal, and current solutions may only partially resolve this problem very efficiently or economically,” Zhang says. “That’s why we came up with this protein and cellulose-based, fully natural solution,” he says.

    “We came to the project by chance,” Marelli notes. The initial technology that made the filtration material possible was developed by his group for a completely unrelated purpose — as a way to make a labeling system to counter the spread of counterfeit seeds, which are often of inferior quality.
    His team devised a way of processing silk proteins into uniform nanoscale crystals, or “nanofibrils,” through an environmentally benign, water-based drop-casting method at room temperature.

    Zhang suggested that their new nanofibrillar material might be effective at filtering contaminants, but initial attempts with the silk nanofibrils alone didn’t work. The team decided to try adding another material: cellulose, which is abundantly available and can be obtained from agricultural wood pulp waste. The researchers used a self-assembly method in which the silk fibroin protein is suspended in water and then templated into nanofibrils by inserting “seeds” of cellulose nanocrystals. This causes the previously disordered silk molecules to line up together along the seeds, forming the basis of a hybrid material with distinct new properties.

    By integrating cellulose into the silk-based fibrils that could be formed into a thin membrane, and then tuning the electrical charge of the cellulose, the researchers produced a material that was highly effective at removing contaminants in lab tests.

    Pictured is an example of the filter.

    Image: Courtesy of the researchers


    The electrical charge of the cellulose, they found, also gave it strong antimicrobial properties. This is a significant advantage, since one of the primary causes of failure in filtration membranes is fouling by bacteria and fungi. The antimicrobial properties of this material should greatly reduce that fouling issue, the researchers say.

    “These materials can really compete with the current standard materials in water filtration when it comes to extracting metal ions and these emerging contaminants, and they can also outperform some of them currently,” Marelli says. In lab tests, the materials were able to extract orders of magnitude more of the contaminants from water than the currently used standard materials, activated carbon or granular activated carbon.

    While the new work serves as a proof of principle, Marelli says, the team plans to continue working on improving the material, especially in terms of durability and availability of source materials. While the silk proteins used can be available as a byproduct of the silk textile industry, if this material were to be scaled up to address the global needs for water filtration, the supply might be insufficient. Also, alternative protein materials may turn out to perform the same function at lower cost.

    Initially, the material would likely be used as a point-of-use filter, something that could be attached to a kitchen faucet, Zhang says. Eventually, it could be scaled up to provide filtration for municipal water supplies, but only after testing demonstrates that this would not pose any risk of introducing contamination into the water supply. But one big advantage of the material, he says, is that both the silk and the cellulose constituents are considered food-grade substances, so any contamination is unlikely.

    “Most of the normal materials available today are focusing on one class of contaminants or solving single problems,” Zhang says.
    “I think we are among the first to address all of these simultaneously.”

    “What I love about this approach is that it is using only naturally grown materials like silk and cellulose to fight pollution,” says Hannes Schniepp, professor of applied science at the College of William and Mary, who was not associated with this work. “In competing approaches, synthetic materials are used — which usually require only more chemistry to fight some of the adverse outcomes that chemistry has produced. [This work] breaks this cycle! … If this can be mass-produced in an economically viable way, this could really have a major impact.”

    The research team included MIT postdocs Hui Sun and Meng Li, graduate student Maxwell Kalinowski, and recent graduate Yunteng Cao PhD ’22, now a postdoc at Yale University. The work was supported by the U.S. Office of Naval Research, the U.S. National Science Foundation, and the Singapore-MIT Alliance for Research and Technology.

  • Study: EV charging stations boost spending at nearby businesses

    Charging stations for electric vehicles are essential for cleaning up the transportation sector. A new study by MIT researchers suggests they’re good for business, too.

    The study found that, in California, opening a charging station boosted annual spending at each nearby business by an average of about $1,500 in 2019 and about $400 between January 2021 and June 2023. The spending bump amounts to thousands of extra dollars annually for nearby businesses, with the increase particularly pronounced for businesses in underresourced areas.

    The study’s authors hope the research paints a more holistic picture of the benefits of EV charging stations, beyond environmental factors.

    “These increases are equal to a significant chunk of the cost of installing an EV charger, and I hope this study sheds light on these economic benefits,” says lead author Yunhan Zheng MCP ’21, SM ’21, PhD ’24, a postdoc at the Singapore-MIT Alliance for Research and Technology (SMART). “The findings could also diversify the income stream for charger providers and site hosts, and lead to more informed business models for EV charging stations.”

    Zheng’s co-authors on the paper, which was published today in Nature Communications, are David Keith, a senior lecturer at the MIT Sloan School of Management; Jinhua Zhao, an MIT professor of cities and transportation; and alumni Shenhao Wang MCP ’17, SM ’17, PhD ’20 and Mi Diao MCP ’06, PhD ’10.

    Understanding the EV effect

    Increasing the number of electric vehicle charging stations is seen as a key prerequisite for the transition to a cleaner, electrified transportation sector. As such, the 2021 U.S. Infrastructure Investment and Jobs Act committed $7.5 billion to build a national network of public electric vehicle chargers across the U.S.

    But a large amount of private investment will also be needed to make charging stations ubiquitous.

    “The U.S.
    is investing a lot in EV chargers and really encouraging EV adoption, but many EV charging providers can’t make enough money at this stage, and getting to profitability is a major challenge,” Zheng says.

    EV advocates have long argued that the presence of charging stations brings economic benefits to surrounding communities, but Zheng says previous studies on their impact relied on surveys or were small-scale. Her team of collaborators wanted to make advocates’ claims more empirical.

    For their study, the researchers collected data from over 4,000 charging stations in California and 140,000 businesses, relying on anonymized credit and debit card transactions to measure changes in consumer spending. The researchers used data from 2019 through June of 2023, skipping the year 2020 to minimize the impact of the pandemic.

    To judge whether charging stations caused customer spending increases, the researchers compared data from businesses within 500 meters of new charging stations before and after their installation. They also analyzed transactions from similar businesses in the same time frame that weren’t near charging stations.

    Supercharging nearby businesses

    The researchers found that installing a charging station boosted annual spending at nearby establishments by an average of 1.4 percent in 2019 and 0.8 percent from January 2021 to June 2023.

    While that might sound like a small amount per business, it amounts to thousands of dollars in overall consumer spending increases. Specifically, those percentages translate to almost $23,000 in cumulative spending increases in 2019 and about $3,400 per year from 2021 through June 2023.

    Zheng says the decline in spending increases over the two time periods might be due to a saturation of EV chargers, leading to lower utilization, as well as an overall decrease in spending per business after the Covid-19 pandemic and a reduced number of businesses served by each EV charging station in the second period.
    Despite this decline, the annual impact of a charging station on all its surrounding businesses would still cover approximately 11.2 percent of the average infrastructure and installation cost of a standard charging station.

    Through both time frames, the spending increases were highest for businesses within about a football field’s distance from the new stations. They were also significant for businesses in disadvantaged and low-income areas, as designated by California and the Justice40 Initiative.

    “The positive impacts of EV charging stations on businesses are not constrained solely to some high-income neighborhoods,” Wang says. “It highlights the importance for policymakers to develop EV charging stations in marginalized areas, because they not only foster a cleaner environment, but also serve as a catalyst for enhancing economic vitality.”

    Zheng believes the findings hold a lesson for charging station developers seeking to improve the profitability of their projects.

    “The joint gas station and convenience store business model could also be adapted to EV charging stations,” Zheng says. “Traditionally, many gas stations are affiliated with retail store chains, which enables owners to both sell fuel and attract customers to diversify their revenue stream. EV charging providers could consider a similar approach to internalize the positive impact of EV charging stations.”

    Zheng also says the findings could support the creation of new funding models for charging stations, such as multiple businesses sharing the costs of construction so they can all benefit from the added spending.

    Those changes could accelerate the creation of charging networks, but Zheng cautions that further research is needed to understand how much the study’s findings can be extrapolated to other areas.
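    The before/after comparison against similar businesses with no nearby charger is, in spirit, a difference-in-differences estimate. A toy version on made-up spending figures (not the study's data or code) shows the idea:

```python
def did_uplift(treated_before, treated_after, control_before, control_after):
    """Difference-in-differences: the spending change at businesses near a
    new charger, minus the change over the same period at comparable
    businesses with no charger nearby. Inputs are average annual spending
    per business, in dollars; the control group nets out background trends
    like seasonality or a pandemic recovery."""
    return (treated_after - treated_before) - (control_after - control_before)

# Toy numbers: both groups grow over the year, but businesses near a new
# charging station grow by an extra $1,500 on average.
uplift = did_uplift(treated_before=100_000, treated_after=103_000,
                    control_before=100_000, control_after=101_500)
```

    The subtraction of the control group's change is what lets the researchers attribute the residual bump to the charging station rather than to whatever else happened that year.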
    She encourages other researchers to study the economic effects of charging stations and hopes future research includes states beyond California and even other countries.

    “A huge number of studies have focused on retail sales effects from traditional transportation infrastructure, such as rail and subway stations, bus stops, and street configurations,” Zhao says. “This research provides evidence for an important, emerging piece of transportation infrastructure and shows a consistently positive effect on local businesses, paving the way for future research in this area.”

    The research was supported, in part, by the Singapore-MIT Alliance for Research and Technology (SMART) and the Singapore National Research Foundation. Diao was partially supported by the Natural Science Foundation of Shanghai and the Fundamental Research Funds for the Central Universities of China.

  • Study of disordered rock salts leads to battery breakthrough

    For the past decade, disordered rock salt has been studied as a potential breakthrough cathode material for use in lithium-ion batteries and a key to creating low-cost, high-energy storage for everything from cell phones to electric vehicles to renewable energy storage.

    A new MIT study is making sure the material fulfills that promise.

    Led by Ju Li, the Tokyo Electric Power Company Professor in Nuclear Engineering and professor of materials science and engineering, a team of researchers describes a new class of partially disordered rock salt cathode, integrated with polyanions — dubbed disordered rock salt-polyanionic spinel, or DRXPS — that delivers high energy density at high voltages with significantly improved cycling stability.

    “There is typically a trade-off in cathode materials between energy density and cycling stability … and with this work we aim to push the envelope by designing new cathode chemistries,” says Yimeng Huang, a postdoc in the Department of Nuclear Science and Engineering and first author of a paper describing the work published today in Nature Energy. “(This) material family has high energy density and good cycling stability because it integrates two major types of cathode materials, rock salt and polyanionic olivine, so it has the benefits of both.”

    Importantly, Li adds, the new material family is primarily composed of manganese, an earth-abundant element that is significantly less expensive than elements like nickel and cobalt, which are typically used in cathodes today.

    “Manganese is at least five times less expensive than nickel, and about 30 times less expensive than cobalt,” Li says.
    “Manganese is also one of the keys to achieving higher energy densities, so having that material be much more earth-abundant is a tremendous advantage.”

    A possible path to renewable energy infrastructure

    That advantage will be particularly critical, Li and his co-authors wrote, as the world looks to build the renewable energy infrastructure needed for a low- or no-carbon future.

    Batteries are a particularly important part of that picture, not only for their potential to decarbonize transportation with electric cars, buses, and trucks, but also because they will be essential to addressing the intermittency issues of wind and solar power by storing excess energy, then feeding it back into the grid at night or on calm days, when renewable generation drops.

    Given the high cost and relative rarity of materials like cobalt and nickel, they wrote, efforts to rapidly scale up electric storage capacity would likely lead to extreme cost spikes and potentially significant materials shortages.

    “If we want to have true electrification of energy generation, transportation, and more, we need earth-abundant batteries to store intermittent photovoltaic and wind power,” Li says. “I think this is one of the steps toward that dream.”

    That sentiment was shared by Gerbrand Ceder, the Samsung Distinguished Chair in Nanoscience and Nanotechnology Research and a professor of materials science and engineering at the University of California at Berkeley.

    “Lithium-ion batteries are a critical part of the clean energy transition,” Ceder says.
    “Their continued growth and price decrease depends on the development of inexpensive, high-performance cathode materials made from earth-abundant materials, as presented in this work.”

    Overcoming obstacles in existing materials

    The new study addresses one of the major challenges facing disordered rock salt cathodes — oxygen mobility. While the materials have long been recognized for offering very high capacity — as much as 350 milliampere-hours per gram, compared to traditional cathode materials, which typically have capacities of between 190 and 200 milliampere-hours per gram — they are not very stable.

    The high capacity comes partially from oxygen redox, which is activated when the cathode is charged to high voltages. But when that happens, oxygen becomes mobile, leading to reactions with the electrolyte and degradation of the material, eventually leaving it effectively useless after prolonged cycling.

    To overcome those challenges, Huang added another element — phosphorus — that essentially acts like a glue, holding the oxygen in place to mitigate degradation.

    “The main innovation here, and the theory behind the design, is that Yimeng added just the right amount of phosphorus, which formed so-called polyanions with its neighboring oxygen atoms, into a cation-deficient rock salt structure that can pin them down,” Li explains. “That allows us to basically stop the percolating oxygen transport due to strong covalent bonding between phosphorus and oxygen … meaning we can both utilize the oxygen-contributed capacity, but also have good stability as well.”

    That ability to charge batteries to higher voltages, Li says, is crucial because it allows for simpler systems to manage the energy they store.

    “You can say the quality of the energy is higher,” he says.
    “The higher the voltage per cell, then the less you need to connect them in series in the battery pack, and the simpler the battery management system.”

    Pointing the way to future studies

    While the cathode material described in the study could have a transformative impact on lithium-ion battery technology, there are still several avenues for study going forward. Among the areas for future study, Huang says, are efforts to explore new ways to fabricate the material, particularly for morphology and scalability considerations.

    “Right now, we are using high-energy ball milling for mechanochemical synthesis, and … the resulting morphology is non-uniform and has small average particle size (about 150 nanometers). This method is also not quite scalable,” he says. “We are trying to achieve a more uniform morphology with larger particle sizes using some alternate synthesis methods, which would allow us to increase the volumetric energy density of the material and may allow us to explore some coating methods … which could further improve the battery performance. The future methods, of course, should be industrially scalable.”

    In addition, he says, the disordered rock salt material by itself is not a particularly good conductor, so significant amounts of carbon — as much as 20 weight percent of the cathode paste — were added to boost its conductivity. If the team can reduce the carbon content in the electrode without sacrificing performance, there will be higher active material content in the battery, leading to an increased practical energy density.

    “In this paper, we just used Super P, a typical conductive carbon consisting of nanospheres, but they’re not very efficient,” Huang says.
    “We are now exploring using carbon nanotubes, which could reduce the carbon content to just 1 or 2 weight percent, which could allow us to dramatically increase the amount of the active cathode material.”

    Aside from decreasing carbon content, making thick electrodes, he adds, is yet another way to increase the practical energy density of the battery. This is another area of research that the team is working on.

    “This is only the beginning of DRXPS research, since we only explored a few chemistries within its vast compositional space,” he continues. “We can play around with different ratios of lithium, manganese, phosphorus, and oxygen, and with various combinations of other polyanion-forming elements such as boron, silicon, and sulfur.”

    With optimized compositions, more scalable synthesis methods, better morphology that allows for uniform coatings, lower carbon content, and thicker electrodes, he says, the DRXPS cathode family is very promising for applications in electric vehicles and grid storage, and possibly even in consumer electronics, where volumetric energy density is very important.

    This work was supported with funding from the Honda Research Institute USA Inc. and the Molecular Foundry at Lawrence Berkeley National Laboratory, and used resources of the National Synchrotron Light Source II at Brookhaven National Laboratory and the Advanced Photon Source at Argonne National Laboratory.
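Two of the pack-level points made above, that higher cell voltage means fewer series-connected cells and that lower carbon content leaves more room for active material, are simple arithmetic. A minimal sketch follows; the 400 V pack target, the cell voltages, and the 5 percent binder fraction are illustrative assumptions, not values from the study:

```python
import math

def cells_in_series(pack_voltage_v: float, cell_voltage_v: float) -> int:
    """Minimum number of series-connected cells to reach a target pack voltage."""
    return math.ceil(pack_voltage_v / cell_voltage_v)

# Hypothetical average cell voltages for an assumed 400 V pack:
# a higher-voltage cathode needs fewer cells in series, so the
# battery management system has fewer cells to monitor and balance.
print(cells_in_series(400, 3.4))  # lower-voltage cell: 118 cells in series
print(cells_in_series(400, 4.3))  # higher-voltage cell: 94 cells in series

def active_fraction(carbon_wt: float, binder_wt: float = 0.05) -> float:
    """Weight fraction of the electrode that is active cathode material
    (binder fraction is an assumed placeholder)."""
    return 1.0 - carbon_wt - binder_wt

# Cutting conductive carbon from 20 wt% (Super P) to 2 wt% (the nanotube
# target mentioned in the article) raises the active-material fraction.
print(round(active_fraction(0.20), 2))  # 0.75
print(round(active_fraction(0.02), 2))  # 0.93
```

The same proportionality is what drives the "increased practical energy density" Huang describes: more of every gram of electrode stores charge.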


    Study reveals the benefits and downside of fasting

    Low-calorie diets and intermittent fasting have been shown to have numerous health benefits: They can delay the onset of some age-related diseases and lengthen lifespan, not only in humans but in many other organisms.

    Many complex mechanisms underlie this phenomenon. Previous work from MIT has shown that one way fasting exerts its beneficial effects is by boosting the regenerative abilities of intestinal stem cells, which helps the intestine recover from injuries or inflammation.

    In a study of mice, MIT researchers have now identified the pathway that enables this enhanced regeneration, which is activated once the mice begin “refeeding” after the fast. They also found a downside to this regeneration: When cancerous mutations occurred during the regenerative period, the mice were more likely to develop early-stage intestinal tumors.

    “Having more stem cell activity is good for regeneration, but too much of a good thing over time can have less favorable consequences,” says Omer Yilmaz, an MIT associate professor of biology, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the new study.

    Yilmaz adds that further studies are needed before forming any conclusion as to whether fasting has a similar effect in humans. “We still have a lot to learn, but it is interesting that being in either the state of fasting or refeeding when exposure to mutagen occurs can have a profound impact on the likelihood of developing a cancer in these well-defined mouse models,” he says.

    MIT postdocs Shinya Imada and Saleh Khawaled are the lead authors of the paper, which appears today in Nature.

    Driving regeneration

    For several years, Yilmaz’s lab has been investigating how fasting and low-calorie diets affect intestinal health. In a 2018 study, his team reported that during a fast, intestinal stem cells begin to use lipids as an energy source, instead of carbohydrates.
    They also showed that fasting led to a significant boost in stem cells’ regenerative ability. However, unanswered questions remained: How does fasting trigger this boost in regenerative ability, and when does the regeneration begin?

    “Since that paper, we’ve really been focused on understanding what is it about fasting that drives regeneration,” Yilmaz says. “Is it fasting itself that’s driving regeneration, or eating after the fast?”

    In their new study, the researchers found that stem cell regeneration is suppressed during fasting but then surges during the refeeding period. The researchers followed three groups of mice — one that fasted for 24 hours, another that fasted for 24 hours and then was allowed to eat whatever it wanted during a 24-hour refeeding period, and a control group that ate whatever it wanted throughout the experiment.

    The researchers analyzed intestinal stem cells’ ability to proliferate at different time points and found that the stem cells showed the highest levels of proliferation at the end of the 24-hour refeeding period. These cells were also more proliferative than intestinal stem cells from mice that had not fasted at all.

    “We think that fasting and refeeding represent two distinct states,” Imada says. “In the fasted state, the ability of cells to use lipids and fatty acids as an energy source enables them to survive when nutrients are low. And then it’s the postfast refeeding state that really drives the regeneration. When nutrients become available, these stem cells and progenitor cells activate programs that enable them to build cellular mass and repopulate the intestinal lining.”

    Further studies revealed that these cells activate a cellular signaling pathway known as mTOR, which is involved in cell growth and metabolism. One of mTOR’s roles is to regulate the translation of messenger RNA into protein, so when it’s activated, cells produce more protein.
    This protein synthesis is essential for stem cells to proliferate. The researchers showed that mTOR activation in these stem cells also led to production of large quantities of polyamines — small molecules that help cells to grow and divide.

    “In the refed state, you’ve got more proliferation, and you need to build cellular mass. That requires more protein, to build new cells, and those stem cells go on to build more differentiated cells or specialized intestinal cell types that line the intestine,” Khawaled says.

    Too much of a good thing

    The researchers also found that when stem cells are in this highly regenerative state, they are more prone to becoming cancerous. Intestinal stem cells are among the most actively dividing cells in the body, as they help the lining of the intestine completely turn over every five to 10 days. Because they divide so frequently, these stem cells are the most common source of precancerous cells in the intestine.

    In this study, the researchers discovered that if they turned on a cancer-causing gene in the mice during the refeeding stage, the mice were much more likely to develop precancerous polyps than if the gene was turned on during the fasting state. Cancer-linked mutations that occurred during the refeeding state were also much more likely to produce polyps than mutations that occurred in mice that did not undergo the cycle of fasting and refeeding.

    “I want to emphasize that this was all done in mice, using very well-defined cancer mutations. In humans it’s going to be a much more complex state,” Yilmaz says.
    “But it does lead us to the following notion: Fasting is very healthy, but if you’re unlucky and you’re refeeding after a fast, and you get exposed to a mutagen, like a charred steak or something, you might actually be increasing your chances of developing a lesion that can go on to give rise to cancer.”

    Yilmaz also noted that the regenerative benefits of fasting could be significant for people who undergo radiation treatment, which can damage the intestinal lining, or other types of intestinal injury. His lab is now studying whether polyamine supplements could help to stimulate this kind of regeneration, without the need to fast.

    “This fascinating study provides insights into the complex interplay between food consumption, stem cell biology, and cancer risk,” says Ophir Klein, a professor of medicine at the University of California at San Francisco and Cedars-Sinai Medical Center, who was not involved in the study. “Their work lays a foundation for testing polyamines as compounds that may augment intestinal repair after injuries, and it suggests that careful consideration is needed when planning diet-based strategies for regeneration to avoid increasing cancer risk.”

    The research was funded, in part, by a Pew-Stewart Trust Scholar award, the Marble Center for Cancer Nanomedicine, the Koch Institute-Dana Farber/Harvard Cancer Center Bridge Project, and the MIT Stem Cell Initiative.


    MIT engineers’ new theory could improve the design and operation of wind farms

    The blades of propellers and wind turbines are designed based on aerodynamics principles that were first described mathematically more than a century ago. But engineers have long realized that these formulas don’t work in every situation. To compensate, they have added ad hoc “correction factors” based on empirical observations.

    Now, for the first time, engineers at MIT have developed a comprehensive, physics-based model that accurately represents the airflow around rotors even under extreme conditions, such as when the blades are operating at high forces and speeds, or are angled in certain directions. The model could improve not only the way rotors themselves are designed, but also the way wind farms are laid out and operated. The new findings are described today in the journal Nature Communications, in an open-access paper by MIT postdoc Jaime Liew, doctoral student Kirby Heck, and Michael Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering.

    “We’ve developed a new theory for the aerodynamics of rotors,” Howland says. This theory can be used to determine the forces, flow velocities, and power of a rotor, whether that rotor is extracting energy from the airflow, as in a wind turbine, or applying energy to the flow, as in a ship or airplane propeller. “The theory works in both directions,” he says.

    Because the new understanding is a fundamental mathematical model, some of its implications could potentially be applied right away. For example, operators of wind farms must constantly adjust a variety of parameters, including the orientation of each turbine as well as its rotation speed and the angle of its blades, in order to maximize power output while maintaining safety margins.
    The new model can provide a simple, speedy way of optimizing those factors in real time.

    “This is what we’re so excited about, is that it has immediate and direct potential for impact across the value chain of wind power,” Howland says.

    Modeling the momentum

    Known as momentum theory, the previous model of how rotors interact with their fluid environment — air, water, or otherwise — was initially developed late in the 19th century. With this theory, engineers can start with a given rotor design and configuration, and determine the maximum amount of power that can be derived from that rotor — or, conversely, if it’s a propeller, how much power is needed to generate a given amount of propulsive force.

    Momentum theory equations “are the first thing you would read about in a wind energy textbook, and are the first thing that I talk about in my classes when I teach about wind power,” Howland says. From that theory, physicist Albert Betz calculated in 1920 the maximum amount of energy that could theoretically be extracted from wind. Known as the Betz limit, this amount is 59.3 percent of the kinetic energy of the incoming wind.

    But just a few years later, others found that the momentum theory broke down “in a pretty dramatic way” at higher forces that correspond to faster blade rotation speeds or different blade angles, Howland says. It fails to predict not only the amount, but even the direction of changes in thrust force at higher rotation speeds or different blade angles: Whereas the theory said the force should start going down above a certain rotation speed or blade angle, experiments show the opposite — that the force continues to increase. “So, it’s not just quantitatively wrong, it’s qualitatively wrong,” Howland says.

    The theory also breaks down when there is any misalignment between the rotor and the airflow, which Howland says is “ubiquitous” on wind farms, where turbines are constantly adjusting to changes in wind directions.
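For reference, the classical one-dimensional momentum (actuator disk) theory the article describes fits in a few lines: for axial induction factor a, the power coefficient is C_P = 4a(1-a)^2 and the thrust coefficient is C_T = 4a(1-a), and maximizing C_P recovers Betz's 1920 result of 16/27, about 59.3 percent. A minimal sketch of that century-old textbook model (not the new unified momentum model from the paper):

```python
# Classical 1D momentum (actuator disk) theory, the textbook model the
# article describes -- not the new MIT unified momentum model.

def power_coefficient(a: float) -> float:
    """C_P = 4a(1-a)^2: fraction of the incoming kinetic energy extracted."""
    return 4 * a * (1 - a) ** 2

def thrust_coefficient(a: float) -> float:
    """C_T = 4a(1-a): nondimensional thrust on the rotor disk."""
    return 4 * a * (1 - a)

# Scan the axial induction factor a to locate the Betz optimum.
best_a = max((i / 100_000 for i in range(100_001)), key=power_coefficient)
print(best_a, power_coefficient(best_a))  # a = 1/3 gives C_P = 16/27 ~ 0.593

# The qualitative failure the article mentions: for a > 0.5 this theory
# says thrust falls again, while experiments show it keeps increasing.
print(thrust_coefficient(0.6) < thrust_coefficient(0.5))  # True under the theory
```

The last line is exactly the "qualitatively wrong" regime Howland describes: the formula turns over where measured thrust does not.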
    In fact, in an earlier paper in 2022, Howland and his team found that deliberately misaligning some turbines slightly relative to the incoming airflow within a wind farm significantly improves the overall power output of the wind farm by reducing wake disturbances to the downstream turbines.

    In the past, when designing the profile of rotor blades, the layout of wind turbines in a farm, or the day-to-day operation of wind turbines, engineers have relied on ad hoc adjustments added to the original mathematical formulas, based on some wind tunnel tests and experience with operating wind farms, but with no theoretical underpinnings.

    Instead, to arrive at the new model, the team analyzed the interaction of airflow and turbines using detailed computational modeling of the aerodynamics. They found that, for example, the original model had assumed that a drop in air pressure immediately behind the rotor would rapidly return to normal ambient pressure just a short way downstream. But it turns out, Howland says, that as the thrust force keeps increasing, “that assumption is increasingly inaccurate.”

    And the inaccuracy occurs very close to the point of the Betz limit that theoretically predicts the maximum performance of a turbine — which is therefore exactly the desired operating regime for the turbines. “So, we have Betz’s prediction of where we should operate turbines, and within 10 percent of that operational set point that we think maximizes power, the theory completely deteriorates and doesn’t work,” Howland says.

    Through their modeling, the researchers also found a way to compensate for the original formula’s reliance on one-dimensional modeling that assumed the rotor was always precisely aligned with the airflow.
    To do so, they used fundamental equations that were developed to predict the lift of three-dimensional wings for aerospace applications.

    The researchers derived their new model, which they call a unified momentum model, based on theoretical analysis, and then validated it using computational fluid dynamics modeling. In follow-up work not yet published, they are doing further validation using wind tunnel and field tests.

    Fundamental understanding

    One interesting outcome of the new formula is that it changes the calculation of the Betz limit, showing that it’s possible to extract a bit more power than the original formula predicted. Although it’s not a significant change — on the order of a few percent — “it’s interesting that now we have a new theory, and the Betz limit that’s been the rule of thumb for a hundred years is actually modified because of the new theory,” Howland says. “And that’s immediately useful.” The new model shows how to maximize power from turbines that are misaligned with the airflow, which the Betz limit cannot account for.

    The aspects related to controlling both individual turbines and arrays of turbines can be implemented without requiring any modifications to existing hardware in place within wind farms. In fact, this has already happened, based on earlier work from Howland and his collaborators two years ago that dealt with the wake interactions between turbines in a wind farm, and was based on the existing, empirically based formulas.

    “This breakthrough is a natural extension of our previous work on optimizing utility-scale wind farms,” he says, because in doing that analysis, they saw the shortcomings of the existing methods for analyzing the forces at work and predicting power produced by wind turbines. “Existing modeling using empiricism just wasn’t getting the job done,” he says.

    In a wind farm, individual turbines will sap some of the energy available to neighboring turbines, because of wake effects.
    Accurate wake modeling is important both for designing the layout of turbines in a wind farm, and also for the operation of that farm, determining moment to moment how to set the angles and speeds of each turbine in the array.

    Until now, Howland says, even the operators of wind farms, the manufacturers, and the designers of the turbine blades had no way to predict how much the power output of a turbine would be affected by a given change, such as its angle to the wind, without using empirical corrections. “That’s because there was no theory for it. So, that’s what we worked on here. Our theory can directly tell you, without any empirical corrections, for the first time, how you should actually operate a wind turbine to maximize its power,” he says.

    Because the fluid flow regimes are similar, the model also applies to propellers, whether for aircraft or ships, and also to hydrokinetic turbines such as tidal or river turbines. Although they didn’t focus on that aspect in this research, “it’s in the theoretical modeling naturally,” he says.

    The new theory exists in the form of a set of mathematical formulas that a user could incorporate in their own software, or as an open-source software package that can be freely downloaded from GitHub. “It’s an engineering model developed for fast-running tools for rapid prototyping and control and optimization,” Howland says. “The goal of our modeling is to position the field of wind energy research to move more aggressively in the development of the wind capacity and reliability necessary to respond to climate change.”

    The work was supported by the National Science Foundation and Siemens Gamesa Renewable Energy.
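Wake losses of the kind described here are often estimated in practice with the classical Jensen (Park) model, one of the empirically based engineering formulas the article contrasts with the new theory. It predicts a fractional velocity deficit of (1 - sqrt(1 - C_T)) / (1 + 2kx/D)^2 at distance x downstream of a rotor of diameter D, with wake-decay constant k. A minimal sketch, with illustrative parameter values:

```python
import math

def jensen_deficit(x_over_d: float, ct: float = 0.8, k: float = 0.05) -> float:
    """Fractional velocity deficit a distance x (in rotor diameters) downstream.

    Classical Jensen/Park top-hat wake model (for illustration, not the
    unified momentum model from the paper). ct is the thrust coefficient;
    k is the wake-decay constant, commonly taken around 0.05 offshore
    and 0.075 over rougher onshore terrain.
    """
    return (1 - math.sqrt(1 - ct)) / (1 + 2 * k * x_over_d) ** 2

u_inf = 10.0  # free-stream wind speed in m/s (illustrative assumption)
for x in (3, 5, 7, 10):  # typical turbine spacings, in rotor diameters
    u = u_inf * (1 - jensen_deficit(x))
    print(f"{x}D downstream: {u:.2f} m/s")  # the deficit shrinks with distance
```

Layout tools sum deficits like these over all upstream turbines; the article's point is that the C_T feeding such formulas previously had to come from empirically corrected momentum theory.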


    More durable metals for fusion power reactors

    For many decades, nuclear fusion power has been viewed as the ultimate energy source. A fusion power plant could generate carbon-free energy at a scale needed to address climate change. And it could be fueled by deuterium recovered from an essentially endless source — seawater.

    Decades of work and billions of dollars in research funding have yielded many advances, but challenges remain. To Ju Li, the TEPCO Professor in Nuclear Science and Engineering and a professor of materials science and engineering at MIT, there are still two big challenges. The first is to build a fusion power plant that generates more energy than is put into it; in other words, it produces a net output of power. Researchers worldwide are making progress toward meeting that goal.

    The second challenge that Li cites sounds straightforward: “How do we get the heat out?” But understanding the problem and finding a solution are both far from obvious.

    Research in the MIT Energy Initiative (MITEI) includes development and testing of advanced materials that may help address those challenges, as well as many other challenges of the energy transition. MITEI has multiple corporate members that have been supporting MIT’s efforts to advance technologies required to harness fusion energy.

    The problem: An abundance of helium, a destructive force

    Key to a fusion reactor is a superheated plasma — an ionized gas — that’s reacting inside a vacuum vessel. As light atoms in the plasma combine to form heavier ones, they release fast neutrons with high kinetic energy that shoot through the surrounding vacuum vessel into a coolant. During this process, those fast neutrons gradually lose their energy by causing radiation damage and generating heat.
    The heat that’s transferred to the coolant is eventually used to raise steam that drives an electricity-generating turbine.

    The problem is finding a material for the vacuum vessel that remains strong enough to keep the reacting plasma and the coolant apart, while allowing the fast neutrons to pass through to the coolant. If one considers only the damage due to neutrons knocking atoms out of position in the metal structure, the vacuum vessel should last a full decade. However, depending on what materials are used in the fabrication of the vacuum vessel, some projections indicate that the vacuum vessel will last only six to 12 months. Why is that? Today’s nuclear fission reactors also generate neutrons, and those reactors last far longer than a year.

    The difference is that fusion neutrons possess much higher kinetic energy than fission neutrons do, and as they penetrate the vacuum vessel walls, some of them interact with the nuclei of atoms in the structural material, giving off particles that rapidly turn into helium atoms. The result is hundreds of times more helium atoms than are present in a fission reactor. Those helium atoms look for somewhere to land — a place with low “embedding energy,” a measure that indicates how much energy it takes for a helium atom to be absorbed. As Li explains, “The helium atoms like to go to places with low helium embedding energy.” And in the metals used in fusion vacuum vessels, there are places with relatively low helium embedding energy — namely, naturally occurring openings called grain boundaries.

    Metals are made up of individual grains inside which atoms are lined up in an orderly fashion. Where the grains come together there are gaps where the atoms don’t line up as well. That open space has relatively low helium embedding energy, so the helium atoms congregate there. Worse still, helium atoms have a repellent interaction with other atoms, so the helium atoms basically push open the grain boundary.
    Over time, the opening grows into a continuous crack, and the vacuum vessel breaks.

    That congregation of helium atoms explains why the structure fails much sooner than expected based just on the number of helium atoms that are present. Li offers an analogy to illustrate: “Babylon is a city of a million people. But the claim is that 100 bad persons can destroy the whole city — if all those bad persons work at the city hall.” The solution? Give those bad persons other, more attractive places to go, ideally in their own villages.

    To Li, the problem and possible solution are the same in a fusion reactor. If many helium atoms go to the grain boundary at once, they can destroy the metal wall. The solution? Add a small amount of a material that has a helium embedding energy even lower than that of the grain boundary. And over the past two years, Li and his team have demonstrated — both theoretically and experimentally — that their diversionary tactic works. By adding nanoscale particles of a carefully selected second material to the metal wall, they’ve found they can keep the helium atoms that form from congregating in the structurally vulnerable grain boundaries in the metal.

    Looking for helium-absorbing compounds

    To test their idea, So Yeon Kim ScD ’23 of the Department of Materials Science and Engineering and Haowei Xu PhD ’23 of the Department of Nuclear Science and Engineering acquired a sample composed of two materials, or “phases,” one with a lower helium embedding energy than the other. They and their collaborators then implanted helium ions into the sample at a temperature similar to that in a fusion reactor and watched as bubbles of helium formed. Transmission electron microscope images confirmed that the helium bubbles occurred predominantly in the phase with the lower helium embedding energy.
    As Li notes, “All the damage is in that phase — evidence that it protected the phase with the higher embedding energy.”

    Having confirmed their approach, the researchers were ready to search for helium-absorbing compounds that would work well with iron, which is often the principal metal in vacuum vessel walls. “But calculating helium embedding energy for all sorts of different materials would be computationally demanding and expensive,” says Kim. “We wanted to find a metric that is easy to compute and a reliable indicator of helium embedding energy.”

    They found such a metric: the “atomic-scale free volume,” which is basically the maximum size of the internal vacant space available for helium atoms to potentially settle. “This is just the radius of the largest sphere that can fit into a given crystal structure,” explains Kim. “It is a simple calculation.” Examination of a series of possible helium-absorbing ceramic materials confirmed that atomic free volume correlates well with helium embedding energy. Moreover, many of the ceramics they investigated have higher free volume, and thus lower embedding energy, than the grain boundaries do.

    However, in order to identify options for the nuclear fusion application, the screening needed to include some other factors. For example, in addition to high atomic free volume, a good second phase must be mechanically robust (able to sustain a load); it must not get very radioactive with neutron exposure; and it must be compatible — but not too cozy — with the surrounding metal, so it disperses well but does not dissolve into the metal. “We want to disperse the ceramic phase uniformly in the bulk metal to ensure that all grain boundary regions are close to the dispersed ceramic phase so it can provide protection to those regions,” says Li.
    “The two phases need to coexist, so the ceramic won’t either clump together or totally dissolve in the iron.”

    Using their analytical tools, Kim and Xu examined about 50,000 compounds and identified 750 potential candidates. Of those, a good option for inclusion in a vacuum vessel wall made mainly of iron was iron silicate.

    Experimental testing

    The researchers were ready to examine samples in the lab. To make the composite material for proof-of-concept demonstrations, Kim and collaborators dispersed nanoscale particles of iron silicate into iron and implanted helium into that composite material. She took X-ray diffraction (XRD) images before and after implanting the helium and also computed the XRD patterns. The ratio between the implanted helium and the dispersed iron silicate was carefully controlled to allow a direct comparison between the experimental and computed XRD patterns. The measured XRD intensity changed with the helium implantation exactly as the calculations had predicted. “That agreement confirms that atomic helium is being stored within the bulk lattice of the iron silicate,” says Kim.

    To follow up, Kim directly counted the number of helium bubbles in the composite. In iron samples without the iron silicate added, grain boundaries were flanked by many helium bubbles. In contrast, in the iron samples with the iron silicate ceramic phase added, helium bubbles were spread throughout the material, with many fewer occurring along the grain boundaries.
    Thus, the iron silicate had provided sites with low helium embedding energy that lured the helium atoms away from the grain boundaries, protecting those vulnerable openings and preventing cracks from opening up and causing the vacuum vessel to fail catastrophically.

    The researchers conclude that adding just 1 percent (by volume) of iron silicate to the iron walls of the vacuum vessel will cut the number of helium bubbles in half and also reduce their diameter by 20 percent — “and having a lot of small bubbles is OK if they’re not in the grain boundaries,” explains Li.

    Next steps

    Thus far, Li and his team have gone from computational studies of the problem and a possible solution to experimental demonstrations that confirm their approach. And they’re well on their way to commercial fabrication of components. “We’ve made powders that are compatible with existing commercial 3D printers and are preloaded with helium-absorbing ceramics,” says Li. The helium-absorbing nanoparticles are well dispersed and should provide sufficient helium uptake to protect the vulnerable grain boundaries in the structural metals of the vessel walls. While Li confirms that there’s more scientific and engineering work to be done, he, along with Alexander O’Brien PhD ’23 of the Department of Nuclear Science and Engineering and Kang Pyo So, a former postdoc in the same department, has already developed a startup company that’s ready to 3D print structural materials that can meet all the challenges faced by the vacuum vessel inside a fusion reactor.

    This research was supported by Eni S.p.A. through the MIT Energy Initiative. Additional support was provided by a Kwajeong Scholarship; the U.S. Department of Energy (DOE) Laboratory Directed Research and Development program at Idaho National Laboratory; U.S. DOE Lawrence Livermore National Laboratory; and the Creative Materials Discovery Program through the National Research Foundation of Korea.
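Kim's "radius of the largest sphere that can fit" metric can be illustrated with a textbook geometry exercise for the two simplest metal lattices. This is a generic unit-cell calculation for ideal FCC and BCC crystals with touching hard-sphere atoms, not the screening code from the study: place the atoms of a unit cell, and for each candidate interstitial site measure how large a sphere fits before it touches the nearest atom.

```python
import math

# Largest interstitial ("free volume") sphere in ideal FCC and BCC lattices,
# computed from unit-cell geometry with lattice parameter a = 1.
# A textbook illustration of the metric, not the study's screening pipeline.

def largest_interstitial_ratio(atoms, sites, atom_radius):
    """Radius of the biggest sphere that fits at any candidate site,
    expressed as a fraction of the atomic radius."""
    def hole_radius(site):
        # Distance from the site to the nearest atom (searching neighboring
        # periodic images), minus the atomic radius, is the room that remains.
        d = min(
            math.dist(site, (ax + i, ay + j, az + k))
            for (ax, ay, az) in atoms
            for i in (-1, 0, 1) for j in (-1, 0, 1) for k in (-1, 0, 1)
        )
        return d - atom_radius
    return max(hole_radius(s) for s in sites) / atom_radius

# FCC: atoms at cell corners and face centers; atoms touch along a face diagonal.
fcc_atoms = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
fcc_sites = [(0.5, 0.5, 0.5), (0.25, 0.25, 0.25)]  # octahedral, tetrahedral
r_fcc = math.sqrt(2) / 4

# BCC: atoms at corners and body center; atoms touch along the body diagonal.
bcc_atoms = [(0, 0, 0), (0.5, 0.5, 0.5)]
bcc_sites = [(0.5, 0.5, 0), (0.5, 0.25, 0)]        # octahedral, tetrahedral
r_bcc = math.sqrt(3) / 4

print(round(largest_interstitial_ratio(fcc_atoms, fcc_sites, r_fcc), 3))  # 0.414
print(round(largest_interstitial_ratio(bcc_atoms, bcc_sites, r_bcc), 3))  # 0.291
```

These are the familiar interstitial-hole ratios (0.414 for the FCC octahedral hole, 0.291 for the BCC tetrahedral hole); the point of the metric is that a candidate ceramic with a larger such sphere than a grain boundary offers helium a lower-energy home.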