More stories

  • Minimizing the carbon footprint of bridges and other structures

    Awed as a young child by the majesty of the Golden Gate Bridge in San Francisco, civil engineer and MIT Morningside Academy for Design (MAD) Fellow Zane Schemmer has retained his fascination with bridges: what they look like, why they work, and how they’re designed and built.

    He weighed the choice between architecture and engineering when heading off to college, but, motivated by the why and how of structural engineering, selected the latter. Now he incorporates design as an iterative process in the writing of algorithms that balance the forces involved in discrete portions of a structure to create an overall design that optimizes function, minimizes carbon footprint, and still produces a manufacturable result.

    While this may sound like an obvious goal in structural design, it’s not. It’s new. It’s a more holistic way of looking at the design process that can optimize even down to the materials, angles, and number of elements in the nodes or joints that connect the larger components of a building, bridge, or tower.

    According to Schemmer, there hasn’t been much progress on optimizing structural design to minimize embodied carbon, and the work that exists often results in designs that are “too complex to be built in real life.” The embodied carbon of a structure is the total carbon dioxide emissions of its life cycle: from the extraction or manufacture of its materials to their transport and use and through the demolition of the structure and disposal of the materials. Schemmer, who works with Josephine V. Carstensen, the Gilbert W. Winslow Career Development Associate Professor of Civil and Environmental Engineering at MIT, is focusing on the portion of that cycle that runs through construction.

    In September, at the IASS 2024 symposium “Redefining the Art of Structural Design” in Zurich, Schemmer and Carstensen presented their work on discrete topology optimization algorithms that can minimize the embodied carbon in a bridge or other structure by up to 20 percent. This comes through materials selection that considers not only a material’s appearance and its ability to get the job done, but also the ease of procurement, its proximity to the building site, and the carbon embodied in its manufacture and transport.

    “The real novelty of our algorithm is its ability to consider multiple materials in a highly constrained solution space to produce manufacturable designs with a user-specified force flow,” Schemmer says. “Real-life problems are complex and often have many constraints associated with them. In traditional formulations, it can be difficult to have a long list of complicated constraints. Our goal is to incorporate these constraints to make it easier to take our designs out of the computer and create them in real life.”

    Take, for instance, a steel tower, which could be a “super lightweight, efficient design solution,” Schemmer explains. Because steel is so strong, you don’t need as much of it as you would concrete or timber to build a big structure. But steel is also very carbon-intensive to produce and transport. Shipping it across the country, or especially from a different continent, can sharply increase its embodied carbon price tag. Schemmer’s topology optimization will replace some of the steel with timber elements or decrease the amount of steel in other elements to create a hybrid structure that functions effectively while minimizing the carbon footprint. “This is why using the same steel in two different parts of the world can lead to two different optimized designs,” he explains.

    Schemmer, who grew up in the mountains of Utah, earned a BS and MS in civil and environmental engineering from the University of California at Berkeley, where his graduate work focused on seismic design. He describes that education as providing a “very traditional, super-strong engineering background that tackled some of the toughest engineering problems,” along with knowledge of structural engineering’s traditions and current methods.

    But at MIT, a lot of the work he sees “looks at removing the constraints of current societal conventions of doing things, and asks how could we do things if it was in a more ideal form; what are we looking at then? Which I think is really cool,” he says. “But I think sometimes too, there’s a jump between the most-perfect version of something and where we are now, and there needs to be a bridge between those two. And I feel like my education helps me see that bridge.”

    The bridge he’s referring to is the topology optimization algorithms that make good designs better in terms of decreased global warming potential.

    “That’s where the optimization algorithm comes in,” Schemmer says. “In contrast to a standard structure designed in the past, the algorithm can take the same design space and come up with a much more efficient material usage that still meets all the structural requirements, is up to code, and has everything we want from a safety standpoint.”

    That’s also where the MAD Design Fellowship comes in. The program provides yearlong fellowships with full financial support to graduate students from across the Institute, who network with each other, with the MAD faculty, and with outside speakers who use design in new ways in a surprising variety of fields. This helps the fellows gain a better understanding of how to use iterative design in their own work.

    “Usually people think of their own work like, ‘Oh, I had this background. I’ve been looking at this one way for a very long time.’ And when you look at it from an outside perspective, I think it opens your mind to be like, ‘Oh my God. I never would have thought about doing this that way. Maybe I should try that.’ And then we can move to new ideas, new inspiration for better work,” Schemmer says.

    He chose civil and structural engineering over architecture some seven years ago, but says that “100 years ago, I don’t think architecture and structural engineering were two separate professions. I think there was an understanding of how things looked and how things worked, and it was merged together. Maybe from an efficiency standpoint, it’s better to have things done separately. But I think there’s something to be said for having knowledge about how the whole system works, potentially more intermingling between the free-form architectural design and the mathematical design of a civil engineer. Merging it back together, I think, has a lot of benefits.”

    Which brings us back to the Golden Gate Bridge, Schemmer’s longtime favorite. You can still hear that excited 3-year-old in his voice when he talks about it.

    “It’s so iconic,” he says. “It’s connecting these two spits of land that just rise straight up out of the ocean. There’s this fog that comes in and out a lot of days. It’s a really magical place, from the size of the cable strands and everything. It’s just, ‘Wow.’ People built this over 100 years ago, before the existence of a lot of the computational tools that we have now. So, all the math, everything in the design, was all done by hand and from the mind. Nothing was computerized, which I think is crazy to think about.”

    As Schemmer continues work on his doctoral degree at MIT, the MAD fellowship will expose him to many more awe-inspiring ideas in other fields, leading him to incorporate some of these in some way with his engineering knowledge to design better ways of building bridges and other structures.
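    The per-element balancing of forces, materials, and embodied carbon that Schemmer describes can be sketched as a toy calculation. Everything below (the two materials, their strength and carbon figures, and the pick-the-lowest-carbon rule) is an illustrative assumption, not the actual Discrete Topology Optimization formulation:

    ```python
    # Toy sketch: per-element material choice to minimize embodied carbon.
    # Material properties are illustrative assumptions, not real design values.
    MATERIALS = {
        # strength (Pa), density (kg/m^3), embodied carbon (kgCO2e per kg)
        "steel":  {"strength": 350e6, "density": 7850.0, "carbon_per_kg": 2.0},
        "timber": {"strength": 20e6,  "density": 500.0,  "carbon_per_kg": 0.4},
    }

    def required_area(force_n, material):
        """Cross-section area so the axial stress F/A stays below strength."""
        return force_n / material["strength"]

    def element_carbon(force_n, length_m, material):
        """Embodied carbon of one bar sized for the given axial force."""
        mass = required_area(force_n, material) * length_m * material["density"]
        return mass * material["carbon_per_kg"]

    def pick_material(force_n, length_m, max_area_m2):
        """Lowest-carbon material whose required cross-section still fits."""
        feasible = [
            (element_carbon(force_n, length_m, mat), name)
            for name, mat in MATERIALS.items()
            if required_area(force_n, mat) <= max_area_m2
        ]
        carbon, name = min(feasible)
        return name, carbon

    # Two elements of a toy truss: a heavily loaded bar and a lightly loaded one.
    for force in (2.0e5, 5.0e3):
        name, carbon = pick_material(force, length_m=3.0, max_area_m2=0.005)
        print(f"{force:.0e} N -> {name} ({carbon:.2f} kgCO2e)")
    ```

    Even this crude rule reproduces the hybrid-structure outcome described above: the heavily loaded element falls back to steel because timber would need too large a cross-section, while the lightly loaded one switches to timber for its lower carbon. The real algorithm additionally couples all elements through a shared force flow and manufacturability constraints.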

  • An abundant phytoplankton feeds a global network of marine microbes

    One of the hardest-working organisms in the ocean is the tiny, emerald-tinged Prochlorococcus marinus. These single-celled “picoplankton,” which are smaller than a human red blood cell, can be found in staggering numbers throughout the ocean’s surface waters, making Prochlorococcus the most abundant photosynthesizing organism on the planet. (Collectively, Prochlorococcus fix as much carbon as all the crops on land.) Scientists continue to find new ways that the little green microbe is involved in the ocean’s cycling and storage of carbon.

    Now, MIT scientists have discovered a new ocean-regulating ability in the small but mighty microbes: cross-feeding of DNA building blocks. In a study appearing today in Science Advances, the team reports that Prochlorococcus shed these extra compounds into their surroundings, where they are then “cross-fed,” or taken up by other ocean organisms, either as nutrients or energy, or for regulating metabolism. Prochlorococcus’ rejects, then, are other microbes’ resources.

    What’s more, this cross-feeding occurs on a regular cycle: Prochlorococcus tend to shed their molecular baggage at night, when enterprising microbes quickly consume the cast-offs. For a microbe called SAR11, the most abundant bacterium in the ocean, the researchers found that the nighttime snack acts as a relaxant of sorts, forcing the bacteria to slow down their metabolism and effectively recharge for the next day.

    Through this cross-feeding interaction, Prochlorococcus could be helping many microbial communities to grow sustainably, simply by giving away what it doesn’t need. And they’re doing so in a way that could set the daily rhythms of microbes around the world.

    “The relationship between the two most abundant groups of microbes in ocean ecosystems has intrigued oceanographers for years,” says co-author and MIT Institute Professor Sallie “Penny” Chisholm, who played a role in the discovery of Prochlorococcus in 1986. “Now we have a glimpse of the finely tuned choreography that contributes to their growth and stability across vast regions of the oceans.”

    Given that Prochlorococcus and SAR11 suffuse the surface oceans, the team suspects that the exchange of molecules from one to the other could amount to one of the major cross-feeding relationships in the ocean, making it an important regulator of the ocean carbon cycle.

    “By looking at the details and diversity of cross-feeding processes, we can start to unearth important forces that are shaping the carbon cycle,” says the study’s lead author, Rogier Braakman, a research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    Other MIT co-authors include Brandon Satinsky, Tyler O’Keefe, Shane Hogle, Jamie Becker, Robert Li, Keven Dooley, and Aldo Arellano, along with Krista Longnecker, Melissa Soule, and Elizabeth Kujawinski of Woods Hole Oceanographic Institution (WHOI).

    Spotting castaways

    Cross-feeding occurs throughout the microbial world, though the process has mainly been studied in close-knit communities. In the human gut, for instance, microbes are in close proximity and can easily exchange and benefit from shared resources.

    By comparison, Prochlorococcus are free-floating microbes that are regularly tossed and mixed through the ocean’s surface layers. While scientists assume that the plankton are involved in some amount of cross-feeding, exactly how this occurs, and who would benefit, have historically been challenging to probe; anything that Prochlorococcus cast away would exist at vanishingly low concentrations and be exceedingly difficult to measure.

    But in work published in 2023, Braakman teamed up with scientists at WHOI, who pioneered ways to measure small organic compounds in seawater. In the lab, they grew various strains of Prochlorococcus under different conditions and characterized what the microbes released. They found that among the major “exudates,” or released molecules, were purines and pyridines, which are molecular building blocks of DNA. The molecules also happen to be nitrogen-rich — a fact that puzzled the team. Prochlorococcus are mainly found in ocean regions that are low in nitrogen, so it was assumed they’d want to retain any and all nitrogen-containing compounds they can. Why, then, were they instead throwing such compounds away?

    Global symphony

    In their new study, the researchers took a deep dive into the details of Prochlorococcus’ cross-feeding and how it influences various types of ocean microbes.

    They set out to study how Prochlorococcus use purine and pyridine in the first place, before expelling the compounds into their surroundings. They compared published genomes of the microbes, looking for genes that encode purine and pyridine metabolism. Tracing the genes forward through the genomes, the team found that once the compounds are produced, they are used to make DNA and replicate the microbes’ genome. Any leftover purine and pyridine is recycled and used again, though a fraction of it is ultimately released into the environment. Prochlorococcus appear to make the most of the compounds, then cast off what they can’t use.

    The team also looked at gene expression data and found that genes involved in recycling purine and pyridine peak several hours after the recognized peak in genome replication that occurs at dusk. The question then was: What could be benefiting from this nightly shedding?

    For this, the team looked at the genomes of more than 300 heterotrophic microbes — organisms that consume organic carbon rather than making it themselves through photosynthesis. They suspected that such carbon-feeders could be likely consumers of Prochlorococcus’ organic rejects. They found that most of the heterotrophs contained genes that take up either purine or pyridine, or in some cases both, suggesting microbes have evolved along different paths in terms of how they cross-feed.

    The group zeroed in on one purine-preferring microbe, SAR11, as it is the most abundant heterotrophic microbe in the ocean. When they compared the genes across different strains of SAR11, they found that various types use purines for different purposes, from simply taking them up and using them intact to breaking them down for their energy, carbon, or nitrogen. What could explain the diversity in how the microbes were using Prochlorococcus’ cast-offs?

    It turns out the local environment plays a big role. Braakman and his collaborators performed a metagenome analysis in which they compared the collectively sequenced genomes of all microbes in over 600 seawater samples from around the world, focusing on SAR11 bacteria. The metagenome sequences were collected alongside measurements of the environmental conditions and geographic locations in which they were found. This analysis showed that the bacteria gobble up purine for its nitrogen when the nitrogen in seawater is low, and for its carbon or energy when nitrogen is in surplus — revealing the selective pressures shaping these communities in different ocean regimes.

    “The work here suggests that microbes in the ocean have developed relationships that advance their growth potential in ways we don’t expect,” says co-author Kujawinski.

    Finally, the team carried out a simple experiment in the lab to see if they could directly observe a mechanism by which purine acts on SAR11. They grew the bacteria in cultures, exposed them to various concentrations of purine, and unexpectedly found that it causes them to slow down their normal metabolic activities and even their growth. However, when the researchers put these same cells under environmentally stressful conditions, the cells kept growing strong and healthy, as if the metabolic pausing induced by purines had primed them for growth and helped them avoid the effects of the stress.

    “When you think about the ocean, where you see this daily pulse of purines being released by Prochlorococcus, this provides a daily inhibition signal that could be causing a pause in SAR11 metabolism, so that the next day when the sun comes out, they are primed and ready,” Braakman says. “So we think Prochlorococcus is acting as a conductor in the daily symphony of ocean metabolism, and cross-feeding is creating a global synchronization among all these microbial cells.”

    This work was supported, in part, by the Simons Foundation and the National Science Foundation.

  • A nonflammable battery to power a safer, decarbonized future

    Lithium-ion batteries are the workhorses of home electronics and are powering an electric revolution in transportation. But they are not suitable for every application.

    A key drawback is their flammability and toxicity, which make large-scale lithium-ion energy storage a bad fit in densely populated city centers and near metal processing or chemical manufacturing plants.

    Now Alsym Energy has developed a nonflammable, nontoxic alternative to lithium-ion batteries to help renewables like wind and solar bridge the gap in a broader range of sectors. The company’s electrodes use relatively stable, abundant materials, and its electrolyte is primarily water with some nontoxic add-ons.

    “Renewables are intermittent, so you need storage, and to really solve the decarbonization problem, we need to be able to make these batteries anywhere at low cost,” says Alsym co-founder and MIT Professor Kripa Varanasi.

    The company believes its batteries, which are currently being tested by potential customers around the world, hold enormous potential to decarbonize the high-emissions industrial manufacturing sector, and it sees other applications ranging from mining to powering data centers, homes, and utilities.

    “We are enabling a decarbonization of markets that was not possible before,” Alsym co-founder and CEO Mukesh Chatter says. “No chemical or steel plant would dare put a lithium battery close to their premises because of the flammability, and industrial emissions are a much bigger problem than passenger cars. With this approach, we’re able to offer a new path.”

    Helping 1 billion people

    Chatter started a telecommunications company with serial entrepreneurs and longtime members of the MIT community Ray Stata ’57, SM ’58 and Alec Dingee ’52 in 1997. Since the company was acquired in 1999, Chatter and his wife have started other ventures and invested in some startups, but after losing his mother to cancer in 2012, Chatter decided he wanted to maximize his impact by working only on technologies that could reach 1 billion people or more.

    The problem Chatter decided to focus on was electricity access.

    “The intent was to light up the homes of at least 1 billion people around the world who either did not have electricity, or only got it part of the time, condemning them basically to a life of poverty in the 19th century,” Chatter says. “When you don’t have access to electricity, you also don’t have the internet, cell phones, education, etc.”

    To solve the problem, Chatter decided to fund research into a new kind of battery. The battery had to be cheap enough to be adopted in low-resource settings, safe enough to be deployed in crowded areas, and work well enough to support two light bulbs, a fan, a refrigerator, and an internet modem.

    At first, Chatter was surprised how few takers he had to start the research, even among researchers at the top universities in the world.

    “It’s a burning problem, but the risk of failure was so high that nobody wanted to take the chance,” Chatter recalls.

    He finally found his partners in Varanasi, Rensselaer Polytechnic Institute Professor Nikhil Koratkar, and Rensselaer researcher Rahul Mukherjee. Varanasi, who notes he’s been at MIT for 22 years, says the Institute’s culture gave him the confidence to tackle big problems.

    “My students, postdocs, and colleagues are inspirational to me,” he says. “The MIT ecosystem infuses us with this resolve to go after problems that look insurmountable.”

    Varanasi leads an interdisciplinary lab at MIT dedicated to understanding physicochemical and biological phenomena. His research has spurred the creation of materials, devices, products, and processes to tackle challenges in energy, agriculture, and other sectors, as well as startup companies to commercialize this work.

    “Working at the interfaces of matter has unlocked numerous new research pathways across various fields, and MIT has provided me the creative freedom to explore, discover, and learn, and apply that knowledge to solve critical challenges,” he says. “I was able to draw significantly from my learnings as we set out to develop the new battery technology.”

    Alsym’s founding team began by trying to design a battery from scratch based on new materials that could fit the parameters defined by Chatter. To make it nonflammable and nontoxic, the founders wanted to avoid lithium and cobalt.

    After evaluating many different chemistries, the founders settled on Alsym’s current approach, which was finalized in 2020. Although the full makeup of Alsym’s battery is still under wraps as the company waits to be granted patents, one of Alsym’s electrodes is made mostly of manganese oxide while the other is primarily made of a metal oxide. The electrolyte is primarily water.

    There are several advantages to Alsym’s new battery chemistry. Because the battery is inherently safer and more sustainable than lithium-ion, the company doesn’t need the same safety protections or cooling equipment, and it can pack its batteries close to each other without fear of fires or explosions. Varanasi also says the battery can be manufactured in any of today’s lithium-ion plants with minimal changes and at significantly lower operating cost.

    “We are very excited right now,” Chatter says. “We started out wanting to light up 1 billion people’s homes, and now in addition to the original goal we have a chance to impact the entire globe if we are successful at cutting back industrial emissions.”

    A new platform for energy storage

    Although the batteries don’t quite reach the energy density of lithium-ion batteries, Varanasi says Alsym is first among alternative chemistries at the system level. He says 20-foot containers of Alsym’s batteries can provide 1.7 megawatt-hours of electricity. The batteries can also fast-charge over four hours and can be configured to discharge over anywhere from two to 110 hours.

    “We’re highly configurable, and that’s important because depending on where you are, you can sometimes run on two cycles a day with solar, and in combination with wind, you could truly get 24/7 electricity,” Chatter says. “The need to do multiday or long-duration storage is a small part of the market, but we support that too.”

    Alsym has been manufacturing prototypes at a small facility in Woburn, Massachusetts, for the last two years, and early this year it expanded its capacity and began to send samples to customers for field testing.

    In addition to large utilities, the company is working with municipalities, generator manufacturers, and providers of behind-the-meter power for residential and commercial buildings. The company is also in discussions with large chemical manufacturers and metal processing plants to provide energy storage systems to reduce their carbon footprint, something they say was not feasible with lithium-ion batteries, due to their flammability, or with nonlithium batteries, due to their large space requirements.

    Another critical area is data centers. With the growth of AI, the demand for data centers — and their energy consumption — is set to surge.

    “We must power the AI and digitization revolution without compromising our planet,” says Varanasi, adding that lithium batteries are unsuitable for co-location with data centers due to flammability risks. “Alsym batteries are well-positioned to offer a safer, more sustainable alternative. Intermittency is also a key issue for electrolyzers used in green hydrogen production and other markets.”

    Varanasi sees Alsym as a platform company, and Chatter says Alsym is already working on other battery chemistries that have higher densities and maintain performance at even more extreme temperatures.

    “When you use a single material in any battery, and the whole world starts to use it, you run out of that material,” Varanasi says. “What we have is a platform that has enabled us to come up with not just one chemistry, but at least three or four chemistries targeted at different applications, so no one particular set of materials will be stressed in terms of supply.”
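    The configurability quoted above translates directly into a range of power ratings. A quick back-of-envelope check, using only the article’s figures of 1.7 MWh per 20-foot container and a 2-to-110-hour discharge window:

    ```python
    # Average discharge power implied by an energy capacity and a duration.
    # Uses only the 1.7 MWh and 2-110 hour figures quoted in the article.
    def avg_power_kw(energy_mwh: float, discharge_hours: float) -> float:
        """P = E / t, converted from megawatt-hours to kilowatts."""
        return energy_mwh * 1000.0 / discharge_hours

    for hours in (2, 4, 110):
        print(f"{hours:>3} h discharge -> {avg_power_kw(1.7, hours):7.1f} kW average")
    ```

    So the same container spans roughly 850 kW average output at the fastest quoted discharge down to about 15 kW when stretched over 110 hours, which is what makes one product serve both daily solar cycling and multiday storage.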

  • MIT engineers make converting CO2 into useful products more practical

    As the world struggles to reduce greenhouse gas emissions, researchers are seeking practical, economical ways to capture carbon dioxide and convert it into useful products, such as transportation fuels, chemical feedstocks, or even building materials. But so far, such attempts have struggled to reach economic viability.

    New research by engineers at MIT could lead to rapid improvements in a variety of electrochemical systems that are under development to convert carbon dioxide into a valuable commodity. The team developed a new design for the electrodes used in these systems, which increases the efficiency of the conversion process.

    The findings are reported today in the journal Nature Communications, in a paper by MIT doctoral student Simon Rufer, professor of mechanical engineering Kripa Varanasi, and three others.

    “The CO2 problem is a big challenge for our times, and we are using all kinds of levers to solve and address this problem,” Varanasi says. It will be essential to find practical ways of removing the gas, he says, either from sources such as power plant emissions, or straight out of the air or the oceans. But then, once the CO2 has been removed, it has to go somewhere.

    A wide variety of systems have been developed for converting that captured gas into a useful chemical product, Varanasi says. “It’s not that we can’t do it — we can do it. But the question is how can we make this efficient? How can we make this cost-effective?”

    In the new study, the team focused on the electrochemical conversion of CO2 to ethylene, a widely used chemical that can be made into a variety of plastics as well as fuels, and which today is made from petroleum. But the approach they developed could also be applied to producing other high-value chemical products, including methane, methanol, and carbon monoxide, the researchers say. Currently, ethylene sells for about $1,000 per ton, so the goal is to be able to meet or beat that price.

    The electrochemical process that converts CO2 into ethylene involves a water-based solution and a catalyst material, which come into contact, along with an electric current, in a device called a gas diffusion electrode.

    There are two competing characteristics of the gas diffusion electrode materials that affect their performance: They must be good electrical conductors so that the current that drives the process doesn’t get wasted through resistance heating, but they must also be “hydrophobic,” or water-repelling, so the water-based electrolyte solution doesn’t leak through and interfere with the reactions taking place at the electrode surface. Unfortunately, it’s a tradeoff: improving the conductivity reduces the hydrophobicity, and vice versa. Varanasi and his team set out to see if they could find a way around that conflict, and after many months of trying, they did just that.

    The solution, devised by Rufer and Varanasi, is elegant in its simplicity. They used a plastic material, PTFE (essentially Teflon), that is known to have good hydrophobic properties. However, PTFE’s lack of conductivity means that electrons must travel through a very thin catalyst layer, leading to a significant voltage drop with distance. To overcome this limitation, the researchers wove a series of conductive copper wires through the very thin sheet of PTFE.

    “This work really addressed this challenge, as we can now get both conductivity and hydrophobicity,” Varanasi says.

    Research on potential carbon conversion systems tends to be done on very small, lab-scale samples, typically less than 1-inch (2.5-centimeter) squares. To demonstrate the potential for scaling up, Varanasi’s team produced a sheet 10 times larger in area and demonstrated its effective performance.

    To get to that point, they had to do some basic tests that had apparently never been done before, running tests under identical conditions but using electrodes of different sizes to analyze the relationship between conductivity and electrode size. They found that conductivity dropped off dramatically with size, which would mean much more energy, and thus cost, would be needed to drive the reaction.

    “That’s exactly what we would expect, but it was something that nobody had really dedicatedly investigated before,” Rufer says. In addition, the larger sizes produced more unwanted chemical byproducts besides the intended ethylene.

    Real-world industrial applications would require electrodes that are perhaps 100 times larger than the lab versions, so adding the conductive wires will be necessary for making such systems practical, the researchers say. They also developed a model that captures the spatial variability in voltage and product distribution on electrodes due to ohmic losses. The model, along with the experimental data they collected, enabled them to calculate the optimal spacing for conductive wires to counteract the drop-off in conductivity.

    In effect, by weaving the wire through the material, the material is divided into smaller subsections determined by the spacing of the wires. “We split it into a bunch of little subsegments, each of which is effectively a smaller electrode,” Rufer says. “And as we’ve seen, small electrodes can work really well.”

    Because the copper wire is so much more conductive than the PTFE material, it acts as a kind of superhighway for electrons passing through, bridging the areas where they are confined to the substrate and face greater resistance.

    To demonstrate that their system is robust, the researchers ran a test electrode for 75 hours continuously, with little change in performance. Overall, Rufer says, their system “is the first PTFE-based electrode which has gone beyond the lab scale on the order of 5 centimeters or smaller. It’s the first work that has progressed into a much larger scale and has done so without sacrificing efficiency.”

    The weaving process for incorporating the wire can be easily integrated into existing manufacturing processes, even in a large-scale roll-to-roll process, he adds.

    “Our approach is very powerful because it doesn’t have anything to do with the actual catalyst being used,” Rufer says. “You can sew this micrometric copper wire into any gas diffusion electrode you want, independent of catalyst morphology or chemistry. So, this approach can be used to scale anybody’s electrode.”

    “Given that we will need to process gigatons of CO2 annually to combat the CO2 challenge, we really need to think about solutions that can scale,” Varanasi says. “Starting with this mindset enables us to identify critical bottlenecks and develop innovative approaches that can make a meaningful impact in solving the problem. Our hierarchically conductive electrode is a result of such thinking.”

    The research team included MIT graduate students Michael Nitzsche and Sanjay Garimella, as well as Jack Lake PhD ’23. The work was supported by Shell, through the MIT Energy Initiative.
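    The subsegment picture Rufer describes lends itself to a simple one-dimensional ohmic sketch: if current is drawn uniformly across a resistive catalyst layer and must flow laterally to the nearest copper wire, the voltage drop peaks midway between wires and grows with the square of the spacing. The code below is a minimal illustration under those assumptions, not the authors’ actual model, and all the numbers in it are invented:

    ```python
    import math

    # 1-D ohmic sketch of a resistive catalyst layer fed by parallel wires.
    # j: current density drawn uniformly from the layer (A/m^2)
    # sheet_res: sheet resistance of the thin catalyst layer (ohms per square)

    def midpoint_drop(j, sheet_res, spacing):
        """Peak ohmic drop at the midpoint between two wires: j*Rs*(s/2)^2 / 2."""
        return j * sheet_res * (spacing / 2.0) ** 2 / 2.0

    def max_spacing(j, sheet_res, tolerable_drop):
        """Largest wire spacing keeping the midpoint drop within tolerance
        (the inverse of midpoint_drop, solved for the spacing)."""
        return 2.0 * math.sqrt(2.0 * tolerable_drop / (j * sheet_res))

    j = 2000.0   # a representative current density (200 mA/cm^2), assumed
    rs = 10.0    # assumed sheet resistance, ohms per square
    s = max_spacing(j, rs, tolerable_drop=0.05)  # allow a 50 mV midpoint drop
    print(f"wire spacing for <=50 mV drop: {s * 1000:.1f} mm")
    ```

    The quadratic dependence on spacing is the key point: halving the wire spacing cuts the peak ohmic drop by a factor of four, which is why dividing a large electrode into small subsegments recovers small-electrode behavior.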

  • How climate change will impact outdoor activities in the US

    It can be hard to connect a certain amount of average global warming with one’s everyday experience, so researchers at MIT have devised a different approach to quantifying the direct impact of climate change. Instead of focusing on global averages, they came up with the concept of “outdoor days”: the number days per year in a given location when the temperature is not too hot or cold to enjoy normal outdoor activities, such as going for a walk, playing sports, working in the garden, or dining outdoors.In a study published earlier this year, the researchers applied this method to compare the impact of global climate change on different countries around the world, showing that much of the global south would suffer major losses in the number of outdoor days, while some northern countries could see a slight increase. Now, they have applied the same approach to comparing the outcomes for different parts of the United States, dividing the country into nine climatic regions, and finding similar results: Some states, especially Florida and other parts of the Southeast, should see a significant drop in outdoor days, while some, especially in the Northwest, should see a slight increase.The researchers also looked at correlations between economic activity, such as tourism trends, and changing climate conditions, and examined how numbers of outdoor days could result in significant social and economic impacts. 
    Florida’s economy, for example, is highly dependent on tourism and on people moving there for its pleasant climate; a major drop in days when it is comfortable to spend time outdoors could make the state less of a draw.

    The new findings were published this month in the journal Geophysical Research Letters, in a paper by researchers Yeon-Woo Choi and Muhammad Khalifa and professor of civil and environmental engineering Elfatih Eltahir.

    “This is something very new in our attempt to understand impacts of climate change, in addition to the changing extremes,” Choi says. It allows people to see how these global changes may impact them on a very personal level, as opposed to focusing on global temperature changes or on extreme events such as powerful hurricanes or increased wildfires. “To the best of my knowledge, nobody else takes this same approach” in quantifying the local impacts of climate change, he says. “I hope that many others will parallel our approach to better understand how climate may affect our daily lives.”

    The study looked at two different climate scenarios — one where maximum efforts are made to curb global emissions of greenhouse gases and one “worst case” scenario where little is done and global warming continues to accelerate. They used these two scenarios with every available global climate model, 32 in all, and the results were broadly consistent across all 32 models.

    The reality may lie somewhere in between the two extremes that were modeled, Eltahir suggests. “I don’t think we’re going to act as aggressively” as the low-emissions scenario suggests, he says, “and we may not be as careless” as the high-emissions scenario. “Maybe the reality will emerge in the middle, toward the end of the century,” he says.

    The team looked at the difference in temperatures and other conditions over various ranges of decades. The data already showed some slight differences in outdoor days from the 1961-1990 period compared to 1991-2020.
    The researchers then compared these most recent 30 years with the last 30 years of this century, as projected by the models, and found much greater differences ahead for some regions. The strongest effects in the modeling were seen in the Southeastern states. “It seems like climate change is going to have a significant impact on the Southeast in terms of reducing the number of outdoor days,” Eltahir says, “with implications for the quality of life of the population, and also for the attractiveness of tourism and for people who want to retire there.”

    He adds that “surprisingly, one of the regions that would benefit a little bit is the Northwest.” But the gain there is modest: an increase of about 14 percent in outdoor days projected for the last three decades of this century, compared to the period from 1976 to 2005. The Southwestern U.S., by comparison, faces an average loss of 23 percent of its outdoor days.

    The study also digs into the relationship between climate and economic activity by looking at tourism trends from U.S. National Park Service visitation data, and how those aligned with differences in climate conditions. “Accounting for seasonal variations, we find a clear connection between the number of outdoor days and the number of tourist visits in the United States,” Choi says.

    For much of the country, there will be little overall change in the total number of annual outdoor days, the study found, but the seasonal pattern of those days could change significantly.
    While most parts of the country now see the most outdoor days in summertime, that will shift as summers get hotter, and spring and fall will become the preferred seasons for outdoor activity.

    In a way, Eltahir says, “what we are talking about that will happen in the future [for most of the country] is already happening in Florida.” There, he says, “the really enjoyable time of year is in the spring and fall, and summer is not the best time of year.”

    People’s level of comfort with temperatures varies somewhat among individuals and among regions, so the researchers designed a tool, now freely available online, that allows people to set their own definitions of the lowest and highest temperatures they consider suitable for outdoor activities, and then see what the climate models predict would be the change in the number of outdoor days for their location, using their own standards of comfort. For their study, they used a widely accepted range of 10 degrees Celsius (50 degrees Fahrenheit) to 25 C (77 F), which is the “thermoneutral zone” in which the human body does not require either metabolic heat generation or evaporative cooling to maintain its core temperature — in other words, in that range there is generally no need to either shiver or sweat.

    The model mainly focuses on temperature but also allows people to include humidity or precipitation in their definition of what constitutes a comfortable outdoor day.
    The model could be extended to incorporate other variables such as air quality, but the researchers say temperature tends to be the major determinant of comfort for most people. Using their software tool, “If you disagree with how we define an outdoor day, you could define one for yourself, and then you’ll see what the impacts of that are on your number of outdoor days and their seasonality,” Eltahir says.

    This work was inspired by the realization, he says, that “people’s understanding of climate change is based on the assumption that climate change is something that’s going to happen sometime in the future and going to happen to someone else. It’s not going to impact them directly. And I think that contributes to the fact that we are not doing enough.”

    Instead, the concept of outdoor days “brings the concept of climate change home, brings it to personal everyday activities,” he says. “I hope that people will find that useful to bridge that gap, and provide a better understanding and appreciation of the problem. And hopefully that would help lead to sound policies that are based on science, regarding climate change.”

    The research was based on work supported by the Community Jameel for Jameel Observatory CREWSnet and Abdul Latif Jameel Water and Food Systems Lab at MIT.
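    The adjustable definition of an outdoor day is concrete enough to sketch in code. This is a minimal illustration, not the researchers' actual software: `count_outdoor_days` is a hypothetical helper, with the 10-25 C default thresholds taken from the thermoneutral zone the study used.

```python
import math

def count_outdoor_days(daily_temps_c, t_min=10.0, t_max=25.0):
    """Count days whose mean temperature falls within the comfort band.

    Defaults use the 10-25 C "thermoneutral zone" cited in the study;
    both thresholds are adjustable, as in the researchers' online tool.
    """
    return sum(1 for t in daily_temps_c if t_min <= t <= t_max)

# Toy example: a sinusoidal year of daily means, cold in winter,
# peaking in midsummer (not real climate-model output).
year = [15 + 15 * math.sin(2 * math.pi * (d - 80) / 365) for d in range(365)]
print(count_outdoor_days(year))  # days falling in the 10-25 C band
```

    Passing different `t_min`/`t_max` values mirrors the online tool's support for user-defined comfort ranges.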

  • Study evaluates impacts of summer heat in U.S. prison environments

    When summer temperatures spike, so does our vulnerability to heat-related illness or even death. For the most part, people can take measures to reduce their heat exposure by opening a window, turning up the air conditioning, or simply getting a glass of water. But for people who are incarcerated, freedom to take such measures is often not an option. Prison populations therefore are especially vulnerable to heat exposure, due to their conditions of confinement.

    A new study by MIT researchers examines summertime heat exposure in prisons across the United States and identifies characteristics within prison facilities that can further contribute to a population’s vulnerability to summer heat.

    The study’s authors used high-spatial-resolution air temperature data to determine the daily average outdoor temperature for each of 1,614 prisons in the U.S., for every summer between the years 1990 and 2023. They found that the prisons that are exposed to the most extreme heat are located in the southwestern U.S., while prisons with the biggest changes in summertime heat, compared to the historical record, are in the Pacific Northwest, the Northeast, and parts of the Midwest.

    Those findings are not entirely unique to prisons, as any non-prison facility or community in the same geographic locations would be exposed to similar outdoor air temperatures. But the team also looked at characteristics specific to prison facilities that could further exacerbate an incarcerated person’s vulnerability to heat exposure. They identified nine such facility-level characteristics, such as highly restricted movement, poor staffing, and inadequate mental health treatment. People living and working in prisons with any one of these characteristics may experience compounded risk to summertime heat.

    The team also looked at the demographics of 1,260 prisons in their study and found that the prisons with higher heat exposure on average also had higher proportions of non-white and Hispanic populations.
    The study, appearing today in the journal GeoHealth, provides policymakers and community leaders with ways to estimate, and take steps to address, a prison population’s heat risk, which they anticipate could worsen with climate change.

    “This isn’t a problem because of climate change. It’s becoming a worse problem because of climate change,” says study lead author Ufuoma Ovienmhada SM ’20, PhD ’24, a graduate of the MIT Media Lab, who recently completed her doctorate in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “A lot of these prisons were not built to be comfortable or humane in the first place. Climate change is just aggravating the fact that prisons are not designed to enable incarcerated populations to moderate their own exposure to environmental risk factors such as extreme heat.”

    The study’s co-authors include Danielle Wood, MIT associate professor of media arts and sciences, and of AeroAstro; and Brent Minchew, MIT associate professor of geophysics in the Department of Earth, Atmospheric and Planetary Sciences; along with Ahmed Diongue ’24, Mia Hines-Shanks of Grinnell College, and Michael Krisch of Columbia University.

    Environmental intersections

    The new study is an extension of work carried out at the Media Lab, where Wood leads the Space Enabled research group. The group aims to advance social and environmental justice issues through the use of satellite data and other space-enabled technologies.

    The group’s motivation to look at heat exposure in prisons came in 2020 when, as co-president of MIT’s Black Graduate Student Union, Ovienmhada took part in community organizing efforts following the murder of George Floyd by Minneapolis police.

    “We started to do more organizing on campus around policing and reimagining public safety.
    Through that lens I learned more about police and prisons as interconnected systems, and came across this intersection between prisons and environmental hazards,” says Ovienmhada, who is leading an effort to map the various environmental hazards that prisons, jails, and detention centers face. “In terms of environmental hazards, extreme heat causes some of the most acute impacts for incarcerated people.”

    She, Wood, and their colleagues set out to use Earth observation data to characterize U.S. prison populations’ vulnerability, or their risk of experiencing negative impacts, from heat.

    The team first looked through a database maintained by the U.S. Department of Homeland Security that lists the location and boundaries of carceral facilities in the U.S. From the database’s more than 6,000 prisons, jails, and detention centers, the researchers highlighted 1,614 prison-specific facilities, which together incarcerate nearly 1.4 million people and employ about 337,000 staff.

    They then looked to Daymet, a detailed weather and climate database that tracks daily temperatures across the United States at a 1-kilometer resolution. For each of the 1,614 prison locations, they mapped the daily outdoor temperature for every summer from 1990 to 2023, noting that the majority of current state and federal correctional facilities in the U.S. were built by 1990.

    The team also obtained U.S. Census data on each facility’s demographic and facility-level characteristics, such as prison labor activities and conditions of confinement. One limitation of the study that the researchers acknowledge is a lack of information regarding a prison’s climate control.

    “There’s no comprehensive public resource where you can look up whether a facility has air conditioning,” Ovienmhada notes.
    “Even in facilities with air conditioning, incarcerated people may not have regular access to those cooling systems, so our measurements of outdoor air temperature may not be far off from reality.”

    Heat factors

    From their analysis, the researchers found that more than 98 percent of all prisons in the U.S. experienced at least 10 days in the summer that were hotter than every previous summer, on average, for a given location. Their analysis also revealed that the most heat-exposed prisons, and the prisons that experienced the highest temperatures on average, were mostly in the Southwestern U.S. The researchers note that, with the exception of New Mexico, the Southwest is a region where there are no universal air conditioning regulations in state-operated prisons.

    “States run their own prison systems, and there is no uniformity of data collection or policy regarding air conditioning,” says Wood, who notes that there is some information on cooling systems in some states and individual prison facilities, but the data is sparse overall, and too inconsistent to include in the group’s nationwide study.

    While the researchers could not incorporate air conditioning data, they did consider other facility-level factors that could worsen the effects that outdoor heat triggers. They looked through the scientific literature on heat, health impacts, and prison conditions, and focused on 17 measurable facility-level variables that contribute to heat-related health problems. These include factors such as overcrowding and understaffing.

    “We know that whenever you’re in a room that has a lot of people, it’s going to feel hotter, even if there’s air conditioning in that environment,” Ovienmhada says. “Also, staffing is a huge factor. Facilities that don’t have air conditioning but still try to do heat risk-mitigation procedures might rely on staff to distribute ice or water every few hours.
    If that facility is understaffed or has neglectful staff, that may increase people’s susceptibility to hot days.”

    The study found that prisons with any of nine of the 17 variables showed statistically significantly greater heat exposure than prisons without those variables. Additionally, if a prison exhibits any one of the nine variables, this could worsen people’s heat risk through the combination of elevated heat exposure and vulnerability. The variables, they say, could help state regulators and activists identify prisons to prioritize for heat interventions.

    “The prison population is aging, and even if you’re not in a ‘hot state,’ every state has responsibility to respond,” Wood emphasizes. “For instance, areas in the Northwest, where you might expect to be temperate overall, have experienced a number of days in recent years of increasing heat risk. A few days out of the year can still be dangerous, particularly for a population with reduced agency to regulate their own exposure to heat.”

    This work was supported, in part, by NASA, the MIT Media Lab, and MIT’s Institute for Data, Systems, and Society’s Research Initiative on Combatting Systemic Racism.
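    The exposure metric at the heart of the analysis, counting summer days hotter than a location's historical baseline, can be approximated in a few lines. This is a simplified sketch rather than the authors' method: the baseline here is the mean over all historical summer days, a stand-in for the study's per-facility comparison, and the function and variable names are invented for illustration.

```python
import statistics

def extreme_heat_days(summer_temps_c, historical_summers_c):
    """Count days in one summer hotter than the site's historical mean.

    `historical_summers_c` is a list of past summers, each a list of
    daily average temperatures (e.g., derived from gridded data such
    as Daymet). The baseline is the mean over all historical days.
    """
    baseline = statistics.mean(
        t for summer in historical_summers_c for t in summer
    )
    return sum(1 for t in summer_temps_c if t > baseline)

# Hypothetical facility: two past summers averaging 28 C, then a hotter one.
history = [[27.0, 29.0, 28.0], [26.0, 30.0, 28.0]]
print(extreme_heat_days([30.5, 31.0, 27.5], history))  # 2 days above baseline
```

    Swapping the mean baseline for, say, the hottest previous summer's average would make the count stricter; the structure of the computation stays the same.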

  • 3 Questions: The past, present, and future of sustainability science

    It was 1978, over a decade before the word “sustainable” would infiltrate environmental nomenclature, and Ronald Prinn, MIT professor of atmospheric science, had just founded the Advanced Global Atmospheric Gases Experiment (AGAGE). Today, AGAGE provides real-time measurements for well over 50 environmentally harmful trace gases, enabling us to determine emissions at the country level, a key element in verifying national adherence to the Montreal Protocol and the Paris Accord. This, Prinn says, started him thinking about doing science that informed decision-making.

    Much like global interest in sustainability, Prinn’s interest and involvement continued to grow into what would become three decades’ worth of achievements in sustainability science. The Center for Global Change Science (CGCS) and the Joint Program on the Science and Policy of Global Change, respectively founded and co-founded by Prinn, have recently joined forces to create the MIT School of Science’s new Center for Sustainability Science and Strategy (CS3), led by former CGCS postdoc turned MIT professor, Noelle Selin.

    As he prepares to pass the torch, Prinn reflects on how far sustainability has come, and where it all began.

    Q: Tell us about the motivation for the MIT centers you helped to found around sustainability.

    A: In 1990, after I founded the Center for Global Change Science, I also co-founded the Joint Program on the Science and Policy of Global Change with a very important partner, [Henry] “Jake” Jacoby. He’s now retired, but at that point he was a professor in the MIT Sloan School of Management. Together, we determined that in order to answer questions related to what we now call sustainability of human activities, you need to combine the natural and social sciences involved in these processes.
    Based on this, we decided to make a joint program between the CGCS and a center that he directed, the Center for Energy and Environmental Policy Research (CEEPR).

    It was called the “joint program” and was joint for two reasons — not only were two centers joining, but two disciplines were joining. It was not about simply doing the same science. It was about bringing a team of people together that could tackle these coupled issues of environment, human development, and economy. We were the first group in the world to fully integrate these elements together.

    Q: What has been your most impactful contribution, and what effect did it have on the greater public’s overall understanding?

    A: Our biggest contribution is the development, and more importantly, the application of the Integrated Global System Model [IGSM] framework, looking at human development in both developing countries and developed countries, that had a significant impact on the way people thought about climate issues. With IGSM, we were able to look at the interactions among human and natural components, studying the feedbacks and impacts that climate change had on human systems; like how it would alter agriculture and other land activities, how it would alter things we derive from the ocean, and so on.

    Policies were being developed largely by economists or climate scientists working independently, and we started showing how the real answers and analysis required a coupling of all of these components. We showed, and I think convincingly, that what people used to study independently must be coupled together, because the impacts of climate change and air pollution affected so many things.

    To address the value of policy, despite the uncertainty in climate projections, we ran multiple runs of the IGSM with and without policy, with different choices for uncertain IGSM variables.
    For public communication, around 2005, we introduced our signature Greenhouse Gamble interactive visualization tools; these have been renewed over time as science and policies evolved.

    Q: What can MIT provide now at this critical juncture in understanding climate change and its impact?

    A: We need to further push the boundaries of integrated global system modeling to ensure full sustainability of human activity and all of its beneficial dimensions, which is the exciting focus that the CS3 is designed to address. We need to focus on sustainability as a central core element and use it to not just analyze existing policies but to propose new ones. Sustainability is not just climate or air pollution; it has to do with human impacts in general. Human health is central to sustainability, and equally important to equity. We need to expand the capability for credibly assessing the impact policies have not just on developed countries, but on developing countries, taking into account that many places around the world are at artisanal levels of their economies. They cannot be blamed for anything that is changing climate and causing air pollution and other detrimental things that are currently going on. They need our help. That’s what sustainability is in its full dimensions.

    Our capabilities are evolving toward a modeling system so detailed that we can find out detrimental things about policies even at local levels before investing in changing infrastructure. This is going to require collaboration among even more disciplines and creating a seamless connection between research and decision making; not just for policies enacted in the public sector, but also for decisions that are made in the private sector.

  • Study: Rocks from Mars’ Jezero Crater, which likely predate life on Earth, contain signs of water

    In a new study appearing today in the journal AGU Advances, scientists at MIT and NASA report that seven rock samples collected along the “fan front” of Mars’ Jezero Crater contain minerals that are typically formed in water. The findings suggest that the rocks were originally deposited by water, or may have formed in the presence of water.

    The seven samples were collected by NASA’s Perseverance rover in 2022 during its exploration of the crater’s western slope, where some rocks were hypothesized to have formed in what is now a dried-up ancient lake. Members of the Perseverance science team, including MIT scientists, have studied the rover’s images and chemical analyses of the samples, and confirmed that the rocks indeed contain signs of water, and that the crater was likely once a watery, habitable environment.

    Whether the crater was actually inhabited is yet unknown. The team found that the presence of organic matter — the starting material for life — cannot be confirmed, at least based on the rover’s measurements. But judging from the rocks’ mineral content, scientists believe the samples are their best chance of finding signs of ancient Martian life once the rocks are returned to Earth for more detailed analysis.

    “These rocks confirm the presence, at least temporarily, of habitable environments on Mars,” says the study’s lead author, Tanja Bosak, professor of geobiology in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “What we’ve found is that indeed there was a lot of water activity. For how long, we don’t know, but certainly for long enough to create these big sedimentary deposits.”

    What’s more, some of the collected samples may have originally been deposited in the ancient lake more than 3.5 billion years ago — before even the first signs of life on Earth.

    “These are the oldest rocks that may have been deposited by water, that we’ve ever laid hands or rover arms on,” says co-author Benjamin Weiss, the Robert R. Shrock Professor of Earth and Planetary Sciences at MIT. “That’s exciting, because it means these are the most promising rocks that may have preserved fossils, and signatures of life.”

    The study’s MIT co-authors include postdoc Eva Scheller and research scientist Elias Mansbach, along with members of the Perseverance science team.

    At the front

    NASA’s Perseverance rover collected rock samples from two locations seen in this image of Mars’ Jezero Crater: “Wildcat Ridge” (lower left) and “Skinner Ridge” (upper right).

    Credit: NASA/JPL-Caltech/ASU/MSSS


    The new rock samples were collected in 2022 as part of the rover’s Fan Front Campaign — an exploratory phase during which Perseverance traversed Jezero Crater’s western slope, where a fan-like region contains sedimentary, layered rocks. Scientists suspect that this “fan front” is an ancient delta that was created by sediment that flowed with a river and settled into a now bone-dry lakebed. If life existed on Mars, scientists believe that it could be preserved in the layers of sediment along the fan front.

    In the end, Perseverance collected seven samples from various locations along the fan front. The rover obtained each sample by drilling into the Martian bedrock and extracting a pencil-sized core, which it then sealed in a tube to one day be retrieved and returned to Earth for detailed analysis.

    Composed of multiple images from NASA’s Perseverance Mars rover, this mosaic shows a rocky outcrop called “Wildcat Ridge,” where the rover extracted two rock cores and abraded a circular patch to investigate the rock’s composition.

    Credit: NASA/JPL-Caltech/ASU/MSSS


    Prior to extracting the cores, the rover took images of the surrounding sediments at each of the seven locations. The science team then processed the imaging data to estimate a sediment’s average grain size and mineral composition. This analysis showed that all seven collected samples likely contain signs of water, suggesting that they were initially deposited by water. Specifically, Bosak and her colleagues found evidence of certain minerals in the sediments that are known to precipitate out of water.

    “We found lots of minerals like carbonates, which are what make reefs on Earth,” Bosak says. “And it’s really an ideal material that can preserve fossils of microbial life.”

    Interestingly, the researchers also identified sulfates in some samples that were collected at the base of the fan front. Sulfates are minerals that form in very salty water — another sign that water was present in the crater at one time — though very salty water, Bosak notes, “is not necessarily the best thing for life.” If the entire crater was once filled with very salty water, then it would be difficult for any form of life to thrive. But if only the bottom of the lake were briny, that could be an advantage, at least for preserving any signs of life that may have lived further up, in less salty layers, that eventually died and drifted down to the bottom.

    “However salty it was, if there were any organics present, it’s like pickling something in salt,” Bosak says. “If there was life that fell into the salty layer, it would be very well-preserved.”

    Fuzzy fingerprints

    But the team emphasizes that organic matter has not been confidently detected by the rover’s instruments. Organic matter can be signs of life, but can also be produced by certain geological processes that have nothing to do with living matter.
    Perseverance’s predecessor, the Curiosity rover, had detected organic matter throughout Mars’ Gale Crater, which scientists suspect may have come from asteroids that made impact with Mars in the past.

    And in a previous campaign, Perseverance detected what appeared to be organic molecules at multiple locations along Jezero Crater’s floor. These observations were taken by the rover’s Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals (SHERLOC) instrument, which uses ultraviolet light to scan the Martian surface. If organics are present, they can glow, similar to material under a blacklight. The wavelengths at which the material glows act as a sort of fingerprint for the kind of organic molecules that are present.

    In Perseverance’s previous exploration of the crater floor, SHERLOC appeared to pick up signs of organic molecules throughout the region, and later, at some locations along the fan front. But a careful analysis, led by MIT’s Eva Scheller, has found that while the particular wavelengths observed could be signs of organic matter, they could just as well be signatures of substances that have nothing to do with organic matter.

    “It turns out that cerium metals incorporated in minerals actually produce very similar signals as the organic matter,” Scheller says. “When investigated, the potential organic signals were strongly correlated with phosphate minerals, which always contain some cerium.”

    Scheller’s work shows that the rover’s measurements cannot be interpreted definitively as organic matter.

    “This is not bad news,” Bosak says. “It just tells us there is not very abundant organic matter. It’s still possible that it’s there.
    It’s just below the rover’s detection limit.”

    When the collected samples are finally sent back to Earth, Bosak says, laboratory instruments will have more than enough sensitivity to detect any organic matter that might lie within.

    “On Earth, once we have microscopes with nanometer-scale resolution, and various types of instruments that we cannot staff on one rover, then we can actually attempt to look for life,” she says.

    This work was supported, in part, by NASA.