More stories

  • How climate change will impact outdoor activities in the US

    It can be hard to connect a certain amount of average global warming with one’s everyday experience, so researchers at MIT have devised a different approach to quantifying the direct impact of climate change. Instead of focusing on global averages, they came up with the concept of “outdoor days”: the number of days per year in a given location when the temperature is not too hot or cold to enjoy normal outdoor activities, such as going for a walk, playing sports, working in the garden, or dining outdoors.

    In a study published earlier this year, the researchers applied this method to compare the impact of global climate change on different countries around the world, showing that much of the global south would suffer major losses in the number of outdoor days, while some northern countries could see a slight increase. Now, they have applied the same approach to comparing the outcomes for different parts of the United States, dividing the country into nine climatic regions, and found similar results: Some states, especially Florida and other parts of the Southeast, should see a significant drop in outdoor days, while others, especially in the Northwest, should see a slight increase.

    The researchers also looked at correlations between economic activity, such as tourism trends, and changing climate conditions, and examined how changes in the number of outdoor days could have significant social and economic impacts. Florida’s economy, for example, is highly dependent on tourism and on people moving there for its pleasant climate; a major drop in days when it is comfortable to spend time outdoors could make the state less of a draw.

    The new findings were published this month in the journal Geophysical Research Letters, in a paper by researchers Yeon-Woo Choi and Muhammad Khalifa and professor of civil and environmental engineering Elfatih Eltahir.

    “This is something very new in our attempt to understand impacts of climate change, in addition to the changing extremes,” Choi says. It allows people to see how these global changes may affect them on a very personal level, as opposed to focusing on global temperature changes or on extreme events such as powerful hurricanes or increased wildfires. “To the best of my knowledge, nobody else takes this same approach” in quantifying the local impacts of climate change, he says. “I hope that many others will parallel our approach to better understand how climate may affect our daily lives.”

    The study looked at two different climate scenarios — one where maximum efforts are made to curb global emissions of greenhouse gases, and one “worst case” scenario where little is done and global warming continues to accelerate. The researchers ran these two scenarios with every available global climate model, 32 in all, and the results were broadly consistent across all 32 models.

    The reality may lie somewhere in between the two extremes that were modeled, Eltahir suggests. “I don’t think we’re going to act as aggressively” as the low-emissions scenario suggests, he says, “and we may not be as careless” as the high-emissions scenario. “Maybe the reality will emerge in the middle, toward the end of the century,” he says.

    The team looked at the difference in temperatures and other conditions over various ranges of decades. The data already showed some slight differences in outdoor days between the 1961-1990 period and 1991-2020. The researchers then compared these most recent 30 years with the last 30 years of this century, as projected by the models, and found much greater differences ahead for some regions. The strongest effects in the modeling were seen in the Southeastern states. “It seems like climate change is going to have a significant impact on the Southeast in terms of reducing the number of outdoor days,” Eltahir says, “with implications for the quality of life of the population, and also for the attractiveness of tourism and for people who want to retire there.”

    He adds that “surprisingly, one of the regions that would benefit a little bit is the Northwest.” But the gain there is modest: an increase of about 14 percent in outdoor days projected for the last three decades of this century, compared to the period from 1976 to 2005. The Southwestern U.S., by comparison, faces an average loss of 23 percent of its outdoor days.

    The study also digs into the relationship between climate and economic activity by looking at tourism trends from U.S. National Park Service visitation data, and how those aligned with differences in climate conditions. “Accounting for seasonal variations, we find a clear connection between the number of outdoor days and the number of tourist visits in the United States,” Choi says.

    For much of the country, there will be little overall change in the total number of annual outdoor days, the study found, but the seasonal pattern of those days could change significantly. While most parts of the country now see the most outdoor days in summertime, that will shift as summers get hotter, and spring and fall will become the preferred seasons for outdoor activity. In a way, Eltahir says, “what we are talking about that will happen in the future [for most of the country] is already happening in Florida.” There, he says, “the really enjoyable time of year is in the spring and fall, and summer is not the best time of year.”

    People’s level of comfort with temperatures varies somewhat among individuals and among regions, so the researchers designed a tool, now freely available online, that allows people to set their own definitions of the lowest and highest temperatures they consider suitable for outdoor activities, and then see what the climate models predict the change in the number of outdoor days would be for their location, using their own standards of comfort. For their study, the researchers used a widely accepted range of 10 degrees Celsius (50 degrees Fahrenheit) to 25 C (77 F), which is the “thermoneutral zone” in which the human body does not require either metabolic heat generation or evaporative cooling to maintain its core temperature — in other words, the range in which there is generally no need to either shiver or sweat.

    The model mainly focuses on temperature but also allows people to include humidity or precipitation in their definition of what constitutes a comfortable outdoor day. The model could be extended to incorporate other variables such as air quality, but the researchers say temperature tends to be the major determinant of comfort for most people. Using the software tool, “if you disagree with how we define an outdoor day, you could define one for yourself, and then you’ll see what the impacts of that are on your number of outdoor days and their seasonality,” Eltahir says.

    This work was inspired by the realization, he says, that “people’s understanding of climate change is based on the assumption that climate change is something that’s going to happen sometime in the future and going to happen to someone else. It’s not going to impact them directly. And I think that contributes to the fact that we are not doing enough.” Instead, the concept of outdoor days “brings the concept of climate change home, brings it to personal everyday activities,” he says. “I hope that people will find that useful to bridge that gap, and provide a better understanding and appreciation of the problem. And hopefully that would help lead to sound policies that are based on science, regarding climate change.”

    The research was based on work supported by Community Jameel, for the Jameel Observatory CREWSnet, and by the Abdul Latif Jameel Water and Food Systems Lab at MIT.
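    The metric itself is easy to compute from a daily temperature series. Below is a minimal sketch in Python of how an "outdoor days" count might work, using the study's 10-25 C thermoneutral band as adjustable defaults; the function name and toy data are illustrative, not the researchers' actual tool.

    ```python
    def outdoor_days(daily_temps_c, t_min=10.0, t_max=25.0):
        """Count days whose mean temperature falls in the comfort band.

        daily_temps_c : iterable of daily mean temperatures (deg C) for one year
        t_min, t_max  : comfort bounds; defaults are the 10-25 C thermoneutral
                        zone cited in the study, but can be personalized.
        """
        return sum(1 for t in daily_temps_c if t_min <= t <= t_max)

    # Example: compare a historical and a projected year for one location.
    historical = [12.0, 18.5, 27.0, 9.5, 22.1, 31.4]   # toy data, deg C
    projected  = [14.0, 21.0, 29.5, 12.5, 26.0, 33.8]  # toy data, deg C
    change = outdoor_days(projected) - outdoor_days(historical)
    print(f"Change in outdoor days: {change:+d}")
    ```

    A personalized comfort band, as the online tool allows, is just a different pair of bounds passed to the same count.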

  • The changing geography of “energy poverty”

    A growing portion of Americans who are struggling to pay for their household energy live in the South and Southwest, reflecting a climate-driven shift away from heating needs and toward air conditioning use, an MIT study finds.

    The newly published research also reveals that a major U.S. federal program that provides energy subsidies to households, by assigning block grants to states, does not yet fully match these recent trends.

    The work evaluates the “energy burden” on households, which reflects the percentage of income needed to pay for energy necessities, from 2015 to 2020. Households with an energy burden greater than 6 percent of income are considered to be in “energy poverty.” With climate change, rising temperatures are expected to add financial stress in the South, where air conditioning is increasingly needed. Meanwhile, milder winters are expected to reduce heating costs in some colder regions.

    “From 2015 to 2020, there is an increase in burden generally, and you do also see this southern shift,” says Christopher Knittel, an MIT energy economist and co-author of a new paper detailing the study’s results. About federal aid, he adds, “When you compare the distribution of the energy burden to where the money is going, it’s not aligned too well.”

    The paper, “U.S. federal resource allocations are inconsistent with concentrations of energy poverty,” is published today in Science Advances. The authors are Carlos Batlle, a professor at Comillas University in Spain and a senior lecturer with the MIT Energy Initiative; Peter Heller SM ’24, a recent graduate of the MIT Technology and Policy Program; Knittel, the George P. Shultz Professor at the MIT Sloan School of Management and associate dean for climate and sustainability at MIT; and Tim Schittekatte, a senior lecturer at MIT Sloan.

    A scorching decade

    The study, which grew out of graduate research that Heller conducted at MIT, deploys a machine-learning estimation technique that the scholars applied to U.S. energy use data. Specifically, the researchers took a sample of about 20,000 households from the U.S. Energy Information Administration’s Residential Energy Consumption Survey, which includes a wide variety of demographic characteristics about residents, along with building-type and geographic information. Then, using the U.S. Census Bureau’s American Community Survey data for 2015 and 2020, the research team estimated the average household energy burden for every census tract in the lower 48 states — 73,057 tracts in 2015, and 84,414 in 2020.

    That allowed the researchers to chart the changes in energy burden in recent years, including the shift toward a greater energy burden in southern states. In 2015, Maine, Mississippi, Arkansas, Vermont, and Alabama were the five states (ranked in descending order) with the highest energy burden across census tracts. By 2020, that had shifted somewhat, with Maine and Vermont dropping on the list and southern states increasingly bearing a larger energy burden. That year, the top five states in descending order were Mississippi, Arkansas, Alabama, West Virginia, and Maine.

    The data also reflect an urban-rural shift. In 2015, 23 percent of the census tracts where the average household is living in energy poverty were urban. That figure shrank to 14 percent by 2020.

    All told, the data are consistent with the picture of a warming world, in which milder winters in the North, Northwest, and Mountain West require less heating fuel, while more extreme summer temperatures in the South require more air conditioning. “Who’s going to be harmed most from climate change?” asks Knittel. “In the U.S., not surprisingly, it’s going to be the southern part of the U.S. And our study is confirming that, but also suggesting it’s the southern part of the U.S. that’s least able to respond. If you’re already burdened, the burden’s growing.”

    An evolution for LIHEAP?

    In addition to identifying the shift in energy needs during the last decade, the study also illuminates a longer-term change in U.S. household energy needs, dating back to the 1980s. The researchers compared the present-day geography of U.S. energy burden to the help currently provided by the federal Low Income Home Energy Assistance Program (LIHEAP), which dates to 1981. Federal aid for energy needs actually predates LIHEAP, but the current program was introduced in 1981, then updated in 1984 to include cooling needs such as air conditioning. When the formula was updated in 1984, two “hold harmless” clauses were also adopted, guaranteeing states a minimum amount of funding.

    Still, LIHEAP’s parameters predate the rise of temperatures over the last 40 years, and the current study shows that, compared to the current landscape of energy poverty, LIHEAP distributes relatively less of its funding to southern and southwestern states. “The way Congress uses formulas set in the 1980s keeps funding distributions nearly the same as they were in the 1980s,” Heller observes. “Our paper illustrates the shift in need that has occurred over the decades since then.”

    Currently, it would take a fourfold increase in LIHEAP funding to ensure that no U.S. household experiences energy poverty. But the researchers also tested a new funding design, which would help the worst-off households first, nationally, ensuring that no household would have an energy burden greater than 20.3 percent. “We think that’s probably the most equitable way to allocate the money, and by doing that, you now have a different amount of money that should go to each state, so that no one state is worse off than the others,” Knittel says.

    And while the new distribution concept would require a certain amount of subsidy reallocation among states, it would be with the goal of helping all households avoid a certain level of energy poverty, across the country, at a time of changing climate, warming weather, and shifting energy needs in the U.S. “We can optimize where we spend the money, and that optimization approach is an important thing to think about,” Knittel says.
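    To make the two key quantities concrete, here is a minimal sketch, with invented numbers, of how energy burden and a worst-off-first subsidy could be computed: burden is energy cost divided by income, the 6 percent line marks energy poverty, and the subsidy rule caps each household's burden at a target level in the spirit of the 20.3 percent threshold described above. This is illustrative only, not the authors' code.

    ```python
    ENERGY_POVERTY_LINE = 0.06  # burden above 6% of income = "energy poverty"

    def energy_burden(energy_cost, income):
        """Fraction of annual income spent on energy necessities."""
        return energy_cost / income

    def subsidy_to_cap(energy_cost, income, cap=0.203):
        """Subsidy needed so a household's burden does not exceed `cap`."""
        excess = energy_cost - cap * income
        return max(0.0, excess)

    # Toy example: three households as (annual energy cost, annual income).
    households = [(2400, 18000), (1800, 60000), (3100, 12000)]
    for cost, inc in households:
        b = energy_burden(cost, inc)
        s = subsidy_to_cap(cost, inc)
        print(f"burden={b:.1%}  energy_poverty={b > ENERGY_POVERTY_LINE}  subsidy=${s:,.0f}")
    ```

    Summing the per-household subsidies over every census tract is one simple way to price out an allocation rule like the one the researchers describe.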

  • Study finds mercury pollution from human activities is declining

    MIT researchers have some good environmental news: Mercury emissions from human activity have been declining over the past two decades, despite global emissions inventories that indicate otherwise.

    In a new study, the researchers analyzed measurements from all available monitoring stations in the Northern Hemisphere and found that atmospheric concentrations of mercury declined by about 10 percent between 2005 and 2020. They used two separate modeling methods to determine what is driving that trend. Both techniques pointed to a decline in mercury emissions from human activity as the most likely cause. Global inventories, on the other hand, have reported opposite trends. These inventories estimate atmospheric emissions using models that incorporate average emission rates of polluting activities and the scale of these activities worldwide.

    “Our work shows that it is very important to learn from actual, on-the-ground data to try and improve our models and these emissions estimates. This is very relevant for policy because, if we are not able to accurately estimate past mercury emissions, how are we going to predict how mercury pollution will evolve in the future?” says Ari Feinberg, a former postdoc in the Institute for Data, Systems, and Society (IDSS) and lead author of the study.

    The new results could help inform scientists who are embarking on a collaborative, global effort to evaluate pollution models and develop a more in-depth understanding of what drives global atmospheric concentrations of mercury. However, due to a lack of data from global monitoring stations and limitations in the scientific understanding of mercury pollution, the researchers couldn’t pinpoint a definitive reason for the mismatch between the inventories and the recorded measurements.

    “It seems like mercury emissions are moving in the right direction, and could continue to do so, which is heartening to see. But this was as far as we could get with mercury. We need to keep measuring and advancing the science,” adds co-author Noelle Selin, an MIT professor in the IDSS and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    Feinberg and Selin, his MIT postdoctoral advisor, are joined on the paper by an international team of researchers who contributed atmospheric mercury measurement data and statistical methods to the study. The research appears this week in the Proceedings of the National Academy of Sciences.

    Mercury mismatch

    The Minamata Convention is a global treaty that aims to cut human-caused emissions of mercury, a potent neurotoxin that enters the atmosphere from sources like coal-fired power plants and small-scale gold mining. The treaty, which was signed in 2013 and went into force in 2017, is evaluated every five years. The first meeting of its conference of parties coincided with disheartening news reports that global inventories of mercury emissions, compiled in part from information in national inventories, had increased despite international efforts to reduce them.

    This was puzzling news for environmental scientists like Selin. Data from monitoring stations showed atmospheric mercury concentrations declining during the same period. Bottom-up inventories combine emission factors, such as the amount of mercury that enters the atmosphere when coal mined in a certain region is burned, with estimates of pollution-causing activities, like how much of that coal is burned in power plants. “The big question we wanted to answer was: What is actually happening to mercury in the atmosphere and what does that say about anthropogenic emissions over time?” Selin says.

    Modeling mercury emissions is especially tricky. First, mercury is the only metal that is in liquid form at room temperature, so it has unique properties. Moreover, mercury that has been removed from the atmosphere by sinks like the ocean or land can be re-emitted later, making it hard to identify primary emission sources. At the same time, mercury is more difficult to study in laboratory settings than many other air pollutants, especially due to its toxicity, so scientists have limited understanding of all the chemical reactions mercury can undergo. There is also a much smaller network of mercury monitoring stations, compared to those for other polluting gases like methane and nitrous oxide.

    “One of the challenges of our study was to come up with statistical methods that can address those data gaps, because available measurements come from different time periods and different measurement networks,” Feinberg says.

    Multifaceted models

    The researchers compiled data from 51 stations in the Northern Hemisphere. They used statistical techniques to aggregate data from nearby stations, which helped them overcome data gaps and evaluate regional trends. By combining data from 11 regions, their analysis indicated that Northern Hemisphere atmospheric mercury concentrations declined by about 10 percent between 2005 and 2020.

    Then the researchers used two modeling methods — biogeochemical box modeling and chemical transport modeling — to explore possible causes of that decline. Box modeling was used to run hundreds of thousands of simulations to evaluate a wide array of emission scenarios. Chemical transport modeling is more computationally expensive but enables researchers to assess the impacts of meteorology and spatial variations on trends in selected scenarios. For instance, they tested the hypothesis that there may be an additional environmental sink that is removing more mercury from the atmosphere than previously thought; the models would indicate whether an unknown sink of that magnitude is feasible.

    “As we went through each hypothesis systematically, we were pretty surprised that we could really point to declines in anthropogenic emissions as being the most likely cause,” Selin says.

    Their work underscores the importance of long-term mercury monitoring stations, Feinberg adds. Many stations the researchers evaluated are no longer operational because of a lack of funding.

    While their analysis couldn’t zero in on exactly why the emissions inventories didn’t match the actual data, the researchers have a few hypotheses. One possibility is that global inventories are missing key information from certain countries. For instance, the researchers resolved some discrepancies when they used a more detailed regional inventory from China. But there was still a gap between observations and estimates. They also suspect the discrepancy might be the result of changes in two large sources of mercury that are particularly uncertain: emissions from small-scale gold mining and from mercury-containing products. Small-scale gold mining involves using mercury to extract gold from soil and is often performed in remote parts of developing countries, making it hard to estimate; yet it contributes about 40 percent of human-made emissions. In addition, it’s difficult to determine how long it takes the pollutant to be released into the atmosphere from discarded products like thermometers or scientific equipment. “We’re not there yet where we can really pinpoint which source is responsible for this discrepancy,” Feinberg says.

    In the future, researchers from multiple countries, including MIT, will collaborate to study and improve the models they use to estimate and evaluate emissions. This research will be influential in helping that project move the needle on monitoring mercury, he says.

    This research was funded by the Swiss National Science Foundation, the U.S. National Science Foundation, and the U.S. Environmental Protection Agency.
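    The core statistical step, pooling sparse and gappy station records into a regional trend, can be sketched simply: average the stations within a region, then fit a linear trend to the regional mean. The version below uses invented data and plain least squares; the study's actual methods are more sophisticated and account for differing networks and time periods.

    ```python
    import numpy as np

    def regional_trend(years, station_series):
        """Average several station time series (which may contain NaN gaps)
        into a regional mean, then fit a linear trend (units per year)."""
        regional = np.nanmean(np.vstack(station_series), axis=0)
        ok = ~np.isnan(regional)
        slope, intercept = np.polyfit(np.array(years)[ok], regional[ok], 1)
        return slope, intercept

    years = list(range(2005, 2021))
    # Toy data: two stations, concentrations in ng/m^3, one with a gap.
    s1 = np.array([1.6 - 0.01 * i + 0.02 * np.sin(i) for i in range(16)])
    s2 = s1 + 0.05
    s2[3:5] = np.nan  # a measurement gap
    slope, intercept = regional_trend(years, [s1, s2])
    start, end = intercept + slope * 2005, intercept + slope * 2020
    print(f"trend: {slope:+.4f} ng/m^3 per yr; change 2005-2020: {(end - start) / start:.1%}")
    ```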

  • Bubble findings could unlock better electrode and electrolyzer designs

    Industrial electrochemical processes that use electrodes to produce fuels and chemical products are hampered by the formation of bubbles that block parts of the electrode surface, reducing the area available for the active reaction. Such blockage reduces the performance of the electrodes by anywhere from 10 to 25 percent.

    But new research reveals a decades-long misunderstanding about the extent of that interference. The findings show exactly how the blocking effect works, and they could lead to new ways of designing electrode surfaces to minimize inefficiencies in these widely used electrochemical processes. It has long been assumed that the entire area of the electrode shadowed by each bubble would be effectively inactivated. But it turns out that a much smaller area — roughly the area where the bubble actually contacts the surface — is blocked from electrochemical activity. The new insights could lead directly to new ways of patterning the surfaces to minimize the contact area and improve overall efficiency.

    The findings are reported today in the journal Nanoscale, in a paper by recent MIT graduate Jack Lake PhD ’23, graduate student Simon Rufer, professor of mechanical engineering Kripa Varanasi, research scientist Ben Blaiszik, and six others at the University of Chicago and Argonne National Laboratory. The team has made available an open-source, AI-based software tool that engineers and scientists can now use to automatically recognize and quantify bubbles formed on a given surface, as a first step toward controlling the electrode material’s properties.

    Gas-evolving electrodes, often with catalytic surfaces that promote chemical reactions, are used in a wide variety of processes, including the production of “green” hydrogen without the use of fossil fuels, carbon-capture processes that can reduce greenhouse gas emissions, aluminum production, and the chlor-alkali process that is used to make widely used chemical products. These are very widespread processes. The chlor-alkali process alone accounts for 2 percent of all U.S. electricity usage; aluminum production accounts for 3 percent of global electricity; and both carbon capture and hydrogen production are likely to grow rapidly in coming years as the world strives to meet greenhouse-gas reduction targets. So, the new findings could make a real difference, Varanasi says.

    “Our work demonstrates that engineering the contact and growth of bubbles on electrodes can have dramatic effects” on how bubbles form and how they leave the surface, he says. “The knowledge that the area under bubbles can be significantly active ushers in a new set of design rules for high-performance electrodes to avoid the deleterious effects of bubbles.”

    “The broader literature built over the last couple of decades has suggested that not only that small area of contact but the entire area under the bubble is passivated,” Rufer says. The new study reveals “a significant difference between the two models because it changes how you would develop and design an electrode to minimize these losses.”

    To test and demonstrate the implications of this effect, the team produced different versions of electrode surfaces with patterns of dots that nucleated and trapped bubbles at different sizes and spacings. They were able to show that surfaces with widely spaced dots promoted large bubble sizes but only tiny areas of surface contact, which helped to make clear the difference between the expected and actual effects of bubble coverage.

    Developing the software to detect and quantify bubble formation was necessary for the team’s analysis, Rufer explains. “We wanted to collect a lot of data and look at a lot of different electrodes and different reactions and different bubbles, and they all look slightly different,” he says. Creating a program that could deal with different materials and different lighting and reliably identify and track the bubbles was a tricky process, and machine learning was key to making it work, he says. Using that tool, they were able to collect “really significant amounts of data about the bubbles on a surface, where they are, how big they are, how fast they’re growing, all these different things.” The tool is now freely available for anyone to use via the GitHub repository.

    By using that tool to correlate the visual measures of bubble formation and evolution with electrical measurements of the electrode’s performance, the researchers were able to disprove the accepted theory and to show that only the area of direct contact is affected. Videos further proved the point, revealing new bubbles actively evolving directly under parts of a larger bubble.

    The researchers developed a very general methodology that can be applied to characterize and understand the impact of bubbles on any electrode or catalyst surface. They were able to quantify the bubble passivation effects in a new performance metric they call BECSA (bubble-induced electrochemically active surface area), as opposed to the ECSA (electrochemically active surface area) metric used in the field. “The BECSA metric was a concept we defined in an earlier study but did not have an effective method to estimate until this work,” says Varanasi.

    The knowledge that the area under bubbles can be significantly active ushers in a new set of design rules for high-performance electrodes: designers should seek to minimize bubble contact area rather than simply bubble coverage, which can be achieved by controlling the morphology and chemistry of the electrodes. Surfaces engineered to control bubbles can not only improve the overall efficiency of the processes, and thus reduce energy use, but can also save on upfront materials costs. Many of these gas-evolving electrodes are coated with catalysts made of expensive metals like platinum or iridium, and the findings from this work can be used to engineer electrodes to reduce material wasted by reaction-blocking bubbles. Varanasi says that “the insights from this work could inspire new electrode architectures that not only reduce the usage of precious materials, but also improve the overall electrolyzer performance,” both of which would provide large-scale environmental benefits.

    The research team included Jim James, Nathan Pruyne, Aristana Scourtas, Marcus Schwarting, Aadit Ambalkar, Ian Foster, and Ben Blaiszik at the University of Chicago and Argonne National Laboratory. The work was supported by the U.S. Department of Energy under the ARPA-E program.
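    The gap between the two passivation models is easy to express numerically. The sketch below compares the blocked electrode fraction under the older assumption (a bubble's full projected shadow is inactive) with the new finding (only the small contact patch is inactive), using hypothetical bubble and contact radii; it illustrates the geometry only and is unrelated to the team's released software.

    ```python
    import math

    def blocked_fraction(bubble_radii_m, contact_radii_m, electrode_area_m2):
        """Fraction of electrode area blocked under two models:
        - 'shadow'  : full projected area of each bubble is inactive (old view)
        - 'contact' : only the bubble-surface contact patch is inactive (new view)
        """
        shadow = sum(math.pi * r**2 for r in bubble_radii_m)
        contact = sum(math.pi * r**2 for r in contact_radii_m)
        return shadow / electrode_area_m2, contact / electrode_area_m2

    # Toy numbers: 200 bubbles of 50 um radius, each touching the surface over
    # a 10 um contact patch, on a 1 cm^2 electrode.
    bubbles = [50e-6] * 200
    patches = [10e-6] * 200
    shadow_f, contact_f = blocked_fraction(bubbles, patches, 1e-4)
    print(f"shadow model blocks {shadow_f:.2%}; contact model blocks {contact_f:.2%}")
    ```

    With these invented numbers the shadow model predicts roughly 25 times more blocked area than the contact model, which is why the distinction matters for electrode design.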

  • Solar-powered desalination system requires no extra batteries

    MIT engineers have built a new desalination system that runs with the rhythms of the sun. The solar-powered system removes salt from water at a pace that closely follows changes in solar energy. As sunlight increases through the day, the system ramps up its desalting process and automatically adjusts to any sudden variation in sunlight, for example by dialing down in response to a passing cloud or revving up as the skies clear.

    Because the system can quickly react to subtle changes in sunlight, it maximizes the utility of solar energy, producing large quantities of clean water despite variations in sunlight throughout the day. In contrast to other solar-driven desalination designs, the MIT system requires no extra batteries for energy storage, nor a supplemental power supply, such as from the grid.

    The engineers tested a community-scale prototype on groundwater wells in New Mexico over six months, working in variable weather conditions and water types. The system harnessed on average over 94 percent of the electrical energy generated from the system’s solar panels to produce up to 5,000 liters of water per day despite large swings in weather and available sunlight.

    “Conventional desalination technologies require steady power and need battery storage to smooth out a variable power source like solar. By continually varying power consumption in sync with the sun, our technology directly and efficiently uses solar power to make water,” says Amos Winter, the Germeshausen Professor of Mechanical Engineering and director of the K. Lisa Yang Global Engineering and Research (GEAR) Center at MIT. “Being able to make drinking water with renewables, without requiring battery storage, is a massive grand challenge. And we’ve done it.”

    The system is geared toward desalinating brackish groundwater — a salty source of water that is found in underground reservoirs and is more prevalent than fresh groundwater resources. The researchers see brackish groundwater as a huge untapped source of potential drinking water, particularly as reserves of fresh water are stressed in parts of the world. They envision that the new renewable, battery-free system could provide much-needed drinking water at low cost, especially for inland communities where access to seawater and grid power are limited.

    “The majority of the population actually lives far enough from the coast that seawater desalination could never reach them. They consequently rely heavily on groundwater, especially in remote, low-income regions. And unfortunately, this groundwater is becoming more and more saline due to climate change,” says Jonathan Bessette, an MIT PhD student in mechanical engineering. “This technology could bring sustainable, affordable clean water to underreached places around the world.”

    The researchers report details of the new system in a paper appearing today in Nature Water. The study’s co-authors are Bessette, Winter, and staff engineer Shane Pratt.

    Pump and flow

    The new system builds on a previous design, which Winter and his colleagues, including former MIT postdoc Wei He, reported earlier this year. That system aimed to desalinate water through “flexible batch electrodialysis.” Electrodialysis and reverse osmosis are two of the main methods used to desalinate brackish groundwater. With reverse osmosis, pressure is used to pump salty water through a membrane and filter out salts. Electrodialysis uses an electric field to draw out salt ions as water is pumped through a stack of ion-exchange membranes.

    Scientists have looked to power both methods with renewable sources. But this has been especially challenging for reverse osmosis systems, which traditionally run at a steady power level that’s incompatible with naturally variable energy sources such as the sun. Winter, He, and their colleagues focused on electrodialysis, seeking ways to make a more flexible, “time-variant” system that would be responsive to variations in renewable solar power.

    In their previous design, the team built an electrodialysis system consisting of water pumps, an ion-exchange membrane stack, and a solar panel array. The innovation in this system was a model-based control system that used sensor readings from every part of the system to predict the optimal rate at which to pump water through the stack, and the voltage that should be applied to the stack, to maximize the amount of salt drawn out of the water.

    When the team tested this system in the field, it was able to vary its water production with the sun’s natural variations. On average, the system directly used 77 percent of the available electrical energy produced by the solar panels, which the team estimated was 91 percent more than traditionally designed solar-powered electrodialysis systems. Still, the researchers felt they could do better.

    “We could only calculate every three minutes, and in that time, a cloud could literally come by and block the sun,” Winter says. “The system could be saying, ‘I need to run at this high power.’ But some of that power has suddenly dropped because there’s now less sunlight. So, we had to make up that power with extra batteries.”

    Solar commands

    In their latest work, the researchers looked to eliminate the need for batteries by shaving the system’s response time to a fraction of a second. The new system is able to update its desalination rate three to five times per second. The faster response time enables the system to adjust to changes in sunlight throughout the day, without having to make up any lag in power with additional power supplies.

    The key to the nimbler desalting is a simpler control strategy, devised by Bessette and Pratt. The new strategy is one of “flow-commanded current control,” in which the system first senses the amount of solar power being produced by its solar panels. If the panels are generating more power than the system is using, the controller automatically “commands” the system to dial up its pumping, pushing more water through the electrodialysis stacks. Simultaneously, the system diverts some of the additional solar power by increasing the electrical current delivered to the stack, to drive more salt out of the faster-flowing water.

    “Let’s say the sun is rising every few seconds,” Winter explains. “So, three times a second, we’re looking at the solar panels and saying, ‘Oh, we have more power — let’s bump up our flow rate and current a little bit.’ When we look again and see there’s still more excess power, we’ll up it again. As we do that, we’re able to closely match our consumed power with available solar power really accurately, throughout the day. And the quicker we loop this, the less battery buffering we need.”

    The engineers incorporated the new control strategy into a fully automated system that they sized to desalinate brackish groundwater at a daily volume that would be enough to supply a small community of about 3,000 people. They operated the system for six months on several wells at the Brackish Groundwater National Desalination Research Facility in Alamogordo, New Mexico. Throughout the trial, the prototype operated under a wide range of solar conditions, harnessing over 94 percent of the solar panels’ electrical energy, on average, to directly power desalination.

    “Compared to how you would traditionally design a solar desal system, we cut our required battery capacity by almost 100 percent,” Winter says.

    The engineers plan to further test and scale up the system in hopes of supplying larger communities, and even whole municipalities, with low-cost, fully sun-driven drinking water. “While this is a major step forward, we’re still working diligently to continue developing lower cost, more sustainable desalination methods,” Bessette says. “Our focus now is on testing, maximizing reliability, and building out a product line that can provide desalinated water using renewables to multiple markets around the world,” Pratt adds. The team will be launching a company based on their technology in the coming months.

    This research was supported in part by the National Science Foundation, the Julia Burke Foundation, and the MIT Morningside Academy of Design. This work was additionally supported in kind by Veolia Water Technologies and Solutions and Xylem Goulds.
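    In outline, flow-commanded current control is a fast feedback loop: sample the available solar power several times per second, then nudge the pump flow and stack current together so that consumption tracks production. The sketch below is a deliberately simplified illustration; the sensor and actuator callbacks are hypothetical placeholders, and the real controller is tuned to the actual hardware.

    ```python
    import time

    GAIN = 0.05    # fraction of the power mismatch corrected per step
    STEP_HZ = 4    # control updates per second (the paper cites 3-5 per second)

    def control_loop(read_solar_w, read_consumed_w, set_flow, set_current,
                     flow_lps, current_a):
        """Track available solar power by co-adjusting flow rate and current.

        read_solar_w, read_consumed_w : sensor callbacks returning watts
        set_flow, set_current         : actuator callbacks (hypothetical)
        flow_lps, current_a           : initial flow (L/s) and current (A)
        """
        while True:  # runs continuously, like the field prototype
            surplus = read_solar_w() - read_consumed_w()
            # Scale both actuators in the same direction: more water through
            # the stack, and more current to pull salt out of the faster flow.
            adjust = 1.0 + GAIN * (surplus / max(read_solar_w(), 1.0))
            flow_lps = max(0.0, flow_lps * adjust)
            current_a = max(0.0, current_a * adjust)
            set_flow(flow_lps)
            set_current(current_a)
            time.sleep(1.0 / STEP_HZ)
    ```

    The design intuition is the one Winter describes: loop fast enough and each correction is tiny, so no battery is needed to buffer the mismatch between supply and demand.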

  • Study evaluates impacts of summer heat in U.S. prison environments

    When summer temperatures spike, so does our vulnerability to heat-related illness or even death. For the most part, people can take measures to reduce their heat exposure by opening a window, turning up the air conditioning, or simply getting a glass of water. But for people who are incarcerated, the freedom to take such measures is often not an option. Prison populations are therefore especially vulnerable to heat exposure, due to their conditions of confinement.

    A new study by MIT researchers examines summertime heat exposure in prisons across the United States and identifies characteristics within prison facilities that can further contribute to a population’s vulnerability to summer heat.

    The study’s authors used high-spatial-resolution air temperature data to determine the daily average outdoor temperature for each of 1,614 prisons in the U.S., for every summer between the years 1990 and 2023. They found that the prisons exposed to the most extreme heat are located in the southwestern U.S., while prisons with the biggest changes in summertime heat, compared to the historical record, are in the Pacific Northwest, the Northeast, and parts of the Midwest.

    Those findings are not entirely unique to prisons, as any non-prison facility or community in the same geographic locations would be exposed to similar outdoor air temperatures. But the team also looked at characteristics specific to prison facilities that could further exacerbate an incarcerated person’s vulnerability to heat exposure. They identified nine such facility-level characteristics, such as highly restricted movement, poor staffing, and inadequate mental health treatment. People living and working in prisons with any one of these characteristics may experience compounded risk from summertime heat. The team also looked at the demographics of 1,260 prisons in their study and found that the prisons with higher heat exposure on average also had higher proportions of non-white and Hispanic populations.

    The study, appearing today in the journal GeoHealth, provides policymakers and community leaders with ways to estimate, and take steps to address, a prison population’s heat risk, which the authors anticipate could worsen with climate change.

    “This isn’t a problem because of climate change. It’s becoming a worse problem because of climate change,” says study lead author Ufuoma Ovienmhada SM ’20, PhD ’24, a graduate of the MIT Media Lab, who recently completed her doctorate in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “A lot of these prisons were not built to be comfortable or humane in the first place. Climate change is just aggravating the fact that prisons are not designed to enable incarcerated populations to moderate their own exposure to environmental risk factors such as extreme heat.”

    The study’s co-authors include Danielle Wood, MIT associate professor of media arts and sciences, and of AeroAstro; and Brent Minchew, MIT associate professor of geophysics in the Department of Earth, Atmospheric and Planetary Sciences; along with Ahmed Diongue ’24, Mia Hines-Shanks of Grinnell College, and Michael Krisch of Columbia University.

    Environmental intersections

    The new study is an extension of work carried out at the Media Lab, where Wood leads the Space Enabled research group. The group aims to advance social and environmental justice issues through the use of satellite data and other space-enabled technologies. The group’s motivation to look at heat exposure in prisons came in 2020 when, as co-president of MIT’s Black Graduate Student Union, Ovienmhada took part in community organizing efforts following the murder of George Floyd by Minneapolis police.

    “We started to do more organizing on campus around policing and reimagining public safety. Through that lens I learned more about police and prisons as interconnected systems, and came across this intersection between prisons and environmental hazards,” says Ovienmhada, who is leading an effort to map the various environmental hazards that prisons, jails, and detention centers face. “In terms of environmental hazards, extreme heat causes some of the most acute impacts for incarcerated people.”

    She, Wood, and their colleagues set out to use Earth observation data to characterize U.S. prison populations’ vulnerability, or their risk of experiencing negative impacts, from heat.

    The team first looked through a database maintained by the U.S. Department of Homeland Security that lists the location and boundaries of carceral facilities in the U.S. From the database’s more than 6,000 prisons, jails, and detention centers, the researchers highlighted 1,614 prison-specific facilities, which together incarcerate nearly 1.4 million people and employ about 337,000 staff. They then looked to Daymet, a detailed weather and climate database that tracks daily temperatures across the United States at a 1-kilometer resolution. For each of the 1,614 prison locations, they mapped the daily outdoor temperature for every summer between 1990 and 2023, noting that the majority of current state and federal correctional facilities in the U.S. were built by 1990.

    The team also obtained U.S. Census data on each facility’s demographics and facility-level characteristics, such as prison labor activities and conditions of confinement. One limitation of the study that the researchers acknowledge is a lack of information regarding a prison’s climate control. “There’s no comprehensive public resource where you can look up whether a facility has air conditioning,” Ovienmhada notes. “Even in facilities with air conditioning, incarcerated people may not have regular access to those cooling systems, so our measurements of outdoor air temperature may not be far off from reality.”

    Heat factors

    From their analysis, the researchers found that more than 98 percent of all prisons in the U.S. experienced at least 10 days in the summer that were hotter than every previous summer, on average, for a given location. Their analysis also revealed that the most heat-exposed prisons, and the prisons that experienced the highest temperatures on average, were mostly in the Southwestern U.S. The researchers note that, with the exception of New Mexico, the Southwest is a region where there are no universal air conditioning regulations in state-operated prisons.

    “States run their own prison systems, and there is no uniformity of data collection or policy regarding air conditioning,” says Wood, who notes that there is some information on cooling systems in some states and individual prison facilities, but the data is sparse overall, and too inconsistent to include in the group’s nationwide study.

    While the researchers could not incorporate air conditioning data, they did consider other facility-level factors that could worsen the effects that outdoor heat triggers. They looked through the scientific literature on heat, health impacts, and prison conditions, and focused on 17 measurable facility-level variables that contribute to heat-related health problems. These include factors such as overcrowding and understaffing.

    “We know that whenever you’re in a room that has a lot of people, it’s going to feel hotter, even if there’s air conditioning in that environment,” Ovienmhada says. “Also, staffing is a huge factor. Facilities that don’t have air conditioning but still try to do heat risk-mitigation procedures might rely on staff to distribute ice or water every few hours. If that facility is understaffed or has neglectful staff, that may increase people’s susceptibility to hot days.”

    The study found that prisons with any of nine of the 17 variables showed statistically significant greater heat exposures than prisons without those variables. Additionally, if a prison exhibits any one of the nine variables, this could worsen people’s heat risk through the combination of elevated heat exposure and vulnerability. The variables, the researchers say, could help state regulators and activists identify prisons to prioritize for heat interventions.

    “The prison population is aging, and even if you’re not in a ‘hot state,’ every state has a responsibility to respond,” Wood emphasizes. “For instance, areas in the Northwest, where you might expect to be temperate overall, have experienced a number of days in recent years of increasing heat risk. A few days out of the year can still be dangerous, particularly for a population with reduced agency to regulate their own exposure to heat.”

    This work was supported, in part, by NASA, the MIT Media Lab, and MIT’s Institute for Data, Systems and Society’s Research Initiative on Combatting Systemic Racism.
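    A simplified version of the exposure calculation might look like the following: given daily summer temperatures for a facility across many years, count how many days in the most recent summer exceeded the mean of all earlier summers. The data layout and threshold logic here are illustrative simplifications of the paper's analysis, with invented numbers.

    ```python
    import numpy as np

    def hotter_than_history(summers):
        """summers: dict mapping year -> array of daily mean temps (deg C).
        Returns, for the latest year, the number of days hotter than the mean
        daily temperature of all previous summers combined."""
        latest = max(summers)
        history = np.concatenate([t for y, t in summers.items() if y < latest])
        baseline = history.mean()
        return int((summers[latest] > baseline).sum())

    # Toy example: three summers of 92 days each, with a warming trend.
    rng = np.random.default_rng(0)
    summers = {y: 24 + 0.4 * (y - 1990) + rng.normal(0, 3, 92)
               for y in (1990, 2005, 2023)}
    print(hotter_than_history(summers), "days in 2023 above the historical mean")
    ```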

  • New filtration material could remove long-lasting chemicals from water

    Water contamination by the chemicals used in today’s technology is a rapidly growing problem globally. A recent study by the U.S. Centers for Disease Control found that 98 percent of people tested had detectable levels of PFAS, a family of particularly long-lasting compounds also known as “forever chemicals,” in their bloodstream.

    A new filtration material developed by researchers at MIT might provide a nature-based solution to this stubborn contamination issue. The material, based on natural silk and cellulose, can remove a wide variety of these persistent chemicals, as well as heavy metals, and its antimicrobial properties can help keep the filters from fouling.

    The findings are described in the journal ACS Nano, in a paper by MIT postdoc Yilin Zhang, professor of civil and environmental engineering Benedetto Marelli, and four others from MIT.

    PFAS chemicals are present in a wide range of products, including cosmetics, food packaging, water-resistant clothing, firefighting foams, and antistick coatings for cookware. A recent study identified 57,000 sites contaminated by these chemicals in the U.S. alone. The U.S. Environmental Protection Agency has estimated that PFAS remediation will cost $1.5 billion per year in order to meet new regulations that call for limiting the compounds to less than 7 parts per trillion in drinking water.

    Contamination by PFAS and similar compounds “is actually a very big deal, and current solutions may only partially resolve this problem very efficiently or economically,” Zhang says. “That’s why we came up with this protein and cellulose-based, fully natural solution.”

    “We came to the project by chance,” Marelli notes. The initial technology that made the filtration material possible was developed by his group for a completely unrelated purpose — as a way to make a labeling system to counter the spread of counterfeit seeds, which are often of inferior quality. His team devised a way of processing silk proteins into uniform nanoscale crystals, or “nanofibrils,” through an environmentally benign, water-based drop-casting method at room temperature.

    Zhang suggested that their new nanofibrillar material might be effective at filtering contaminants, but initial attempts with the silk nanofibrils alone didn’t work. The team decided to try adding another material: cellulose, which is abundantly available and can be obtained from agricultural wood pulp waste. The researchers used a self-assembly method in which the silk fibroin protein is suspended in water and then templated into nanofibrils by inserting “seeds” of cellulose nanocrystals. This causes the previously disordered silk molecules to line up together along the seeds, forming the basis of a hybrid material with distinct new properties.

    By integrating cellulose into the silk-based fibrils, which could be formed into a thin membrane, and then tuning the electrical charge of the cellulose, the researchers produced a material that was highly effective at removing contaminants in lab tests.

    Image: An example of the filter. (Courtesy of the researchers)

    The electrical charge of the cellulose, they found, also gave the material strong antimicrobial properties. This is a significant advantage, since one of the primary causes of failure in filtration membranes is fouling by bacteria and fungi. The antimicrobial properties of this material should greatly reduce that fouling issue, the researchers say.

    “These materials can really compete with the current standard materials in water filtration when it comes to extracting metal ions and these emerging contaminants, and they can also outperform some of them currently,” Marelli says. In lab tests, the materials were able to extract orders of magnitude more of the contaminants from water than the currently used standard materials, activated carbon or granular activated carbon.

    While the new work serves as a proof of principle, Marelli says, the team plans to continue improving the material, especially in terms of durability and availability of source materials. While the silk proteins used can be obtained as a byproduct of the silk textile industry, if this material were to be scaled up to address the global need for water filtration, the supply might be insufficient. Alternative protein materials may also turn out to perform the same function at lower cost.

    Initially, the material would likely be used as a point-of-use filter, something that could be attached to a kitchen faucet, Zhang says. Eventually, it could be scaled up to provide filtration for municipal water supplies, but only after testing demonstrates that this would not pose any risk of introducing contamination into the water supply. One big advantage of the material, he says, is that both the silk and the cellulose constituents are considered food-grade substances, so any contamination is unlikely.

    “Most of the normal materials available today are focusing on one class of contaminants or solving single problems,” Zhang says. “I think we are among the first to address all of these simultaneously.”

    “What I love about this approach is that it is using only naturally grown materials like silk and cellulose to fight pollution,” says Hannes Schniepp, professor of applied science at the College of William and Mary, who was not associated with this work. “In competing approaches, synthetic materials are used — which usually require only more chemistry to fight some of the adverse outcomes that chemistry has produced. [This work] breaks this cycle! … If this can be mass-produced in an economically viable way, this could really have a major impact.”

    The research team included MIT postdocs Hui Sun and Meng Li, graduate student Maxwell Kalinowski, and recent graduate Yunteng Cao PhD ’22, now a postdoc at Yale University. The work was supported by the U.S. Office of Naval Research, the U.S. National Science Foundation, and the Singapore-MIT Alliance for Research and Technology.

  • Study: EV charging stations boost spending at nearby businesses

    Charging stations for electric vehicles are essential for cleaning up the transportation sector. A new study by MIT researchers suggests they’re good for business, too.

    The study found that, in California, opening a charging station boosted annual spending at each nearby business by an average of about $1,500 in 2019 and about $400 between January 2021 and June 2023. The spending bump amounts to thousands of extra dollars annually for nearby businesses, with the increase particularly pronounced for businesses in underresourced areas.

    The study’s authors hope the research paints a more holistic picture of the benefits of EV charging stations, beyond environmental factors. “These increases are equal to a significant chunk of the cost of installing an EV charger, and I hope this study sheds light on these economic benefits,” says lead author Yunhan Zheng MCP ’21, SM ’21, PhD ’24, a postdoc at the Singapore-MIT Alliance for Research and Technology (SMART). “The findings could also diversify the income stream for charger providers and site hosts, and lead to more informed business models for EV charging stations.”

    Zheng’s co-authors on the paper, which was published today in Nature Communications, are David Keith, a senior lecturer at the MIT Sloan School of Management; Jinhua Zhao, an MIT professor of cities and transportation; and alumni Shenhao Wang MCP ’17, SM ’17, PhD ’20 and Mi Diao MCP ’06, PhD ’10.

    Understanding the EV effect

    Increasing the number of electric vehicle charging stations is seen as a key prerequisite for the transition to a cleaner, electrified transportation sector. As such, the 2021 U.S. Infrastructure Investment and Jobs Act committed $7.5 billion to build a national network of public electric vehicle chargers across the U.S. But a large amount of private investment will also be needed to make charging stations ubiquitous. “The U.S. is investing a lot in EV chargers and really encouraging EV adoption, but many EV charging providers can’t make enough money at this stage, and getting to profitability is a major challenge,” Zheng says.

    EV advocates have long argued that the presence of charging stations brings economic benefits to surrounding communities, but Zheng says previous studies of their impact relied on surveys or were small-scale. Her team of collaborators wanted to make advocates’ claims more empirical.

    For their study, the researchers collected data from over 4,000 charging stations in California and 140,000 businesses, relying on anonymized credit and debit card transactions to measure changes in consumer spending. The researchers used data from 2019 through June of 2023, skipping the year 2020 to minimize the impact of the pandemic.

    To judge whether charging stations caused customer spending increases, the researchers compared data from businesses within 500 meters of new charging stations before and after their installation. They also analyzed transactions from similar businesses in the same time frame that weren’t near charging stations.

    Supercharging nearby businesses

    The researchers found that installing a charging station boosted annual spending at nearby establishments by an average of 1.4 percent in 2019 and 0.8 percent from January 2021 to June 2023. While that might sound like a small amount per business, it amounts to thousands of dollars in overall consumer spending increases. Specifically, those percentages translate to almost $23,000 in cumulative spending increases in 2019 and about $3,400 per year from 2021 through June 2023.

    Zheng says the decline in spending increases between the two time periods might be due to a saturation of EV chargers, leading to lower utilization, as well as an overall decrease in spending per business after the Covid-19 pandemic and a reduced number of businesses served by each EV charging station in the second period. Despite this decline, the annual impact of a charging station on all its surrounding businesses would still cover approximately 11.2 percent of the average infrastructure and installation cost of a standard charging station.

    Through both time frames, the spending increases were highest for businesses within about a football field’s distance from the new stations. They were also significant for businesses in disadvantaged and low-income areas, as designated by California and the Justice40 Initiative. “The positive impacts of EV charging stations on businesses are not constrained solely to some high-income neighborhoods,” Wang says. “It highlights the importance for policymakers to develop EV charging stations in marginalized areas, because they not only foster a cleaner environment, but also serve as a catalyst for enhancing economic vitality.”

    Zheng believes the findings hold a lesson for charging station developers seeking to improve the profitability of their projects. “The joint gas station and convenience store business model could also be adapted to EV charging stations,” Zheng says. “Traditionally, many gas stations are affiliated with retail store chains, which enables owners to both sell fuel and attract customers to diversify their revenue stream. EV charging providers could consider a similar approach to internalize the positive impact of EV charging stations.” Zheng also says the findings could support the creation of new funding models for charging stations, such as multiple businesses sharing the costs of construction so they can all benefit from the added spending.

    Those changes could accelerate the creation of charging networks, but Zheng cautions that further research is needed to understand how much the study’s findings can be extrapolated to other areas. She encourages other researchers to study the economic effects of charging stations and hopes future research includes states beyond California, and even other countries.

    “A huge number of studies have focused on retail sales effects from traditional transportation infrastructure, such as rail and subway stations, bus stops, and street configurations,” Zhao says. “This research provides evidence for an important, emerging piece of transportation infrastructure and shows a consistently positive effect on local businesses, paving the way for future research in this area.”

    The research was supported, in part, by the Singapore-MIT Alliance for Research and Technology (SMART) and the Singapore National Research Foundation. Diao was partially supported by the Natural Science Foundation of Shanghai and the Fundamental Research Funds for the Central Universities of China.
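    The comparison at the heart of the study is, in effect, a difference-in-differences design: the change in spending at businesses near new chargers, minus the contemporaneous change at comparable businesses that are not near chargers. A minimal sketch with invented figures:

    ```python
    def diff_in_diff(treated_before, treated_after, control_before, control_after):
        """Average treatment effect on spending under the parallel-trends
        assumption: (treated change) minus (control change)."""
        mean = lambda xs: sum(xs) / len(xs)
        return (mean(treated_after) - mean(treated_before)) - \
               (mean(control_after) - mean(control_before))

    # Toy annual spending (dollars) per business, before/after a station opens
    # within 500 m (treated) vs. no station nearby (control).
    treated_before = [101_000, 98_500, 110_200]
    treated_after  = [103_400, 100_100, 112_000]
    control_before = [99_800, 102_300, 95_600]
    control_after  = [100_200, 102_500, 96_100]
    effect = diff_in_diff(treated_before, treated_after, control_before, control_after)
    print(f"estimated effect: ${effect:,.0f} per business per year")
    ```

    Subtracting the control group's change is what separates the station's effect from background trends, such as the post-pandemic shifts in spending the authors mention.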