More stories


    3Q: Why Europe is so vulnerable to heat waves

    This year saw high-temperature records shattered across much of Europe, as crops withered in the fields due to widespread drought. Is this a harbinger of things to come as the Earth’s climate steadily warms up?

    Elfatih Eltahir, MIT professor of civil and environmental engineering and H. M. King Bhumibol Professor of Hydrology and Climate, and former doctoral student Alexandre Tuel PhD ’20 recently published a piece in the Bulletin of the Atomic Scientists describing how their research helps explain this anomalous European weather. The findings are based in part on analyses described in their book “Future Climate of the Mediterranean and Europe,” published earlier this year. MIT News asked the two authors to describe the dynamics behind these extreme weather events.

    Q: Was the European heat wave this summer anticipated based on existing climate models?

    Eltahir: Climate models project increasingly dry summers over Europe. This is especially true for the second half of the 21st century, and for southern Europe. Extreme dryness is often associated with hot conditions and heat waves, since any reduction in evaporation heats the soil and the air above it. In general, models agree in making such projections about European summers. However, understanding the physical mechanisms responsible for these projections is an active area of research.

    The same models that project dry summers over southern Europe also project dry winters over the neighboring Mediterranean Sea. In fact, the Mediterranean Sea stands out as one of the most significantly impacted regions — a literal “hot spot” — for winter droughts triggered by climate change. Again, until recently, the association between the projections of summer dryness over Europe and dry winters over the Mediterranean was not understood.

    In recent MIT doctoral research, carried out in the Department of Civil and Environmental Engineering, a hypothesis was developed to explain why the Mediterranean stands out as a hot spot for winter droughts under climate change. Further, the same theory offers a mechanistic understanding that connects the projections of dry summers over southern Europe and dry winters over the Mediterranean.

    What is exciting about the climate observed over Europe last summer is that the drought started and developed with spatial and temporal patterns consistent with our proposed theory, in particular the connection to the dry conditions observed over the Mediterranean during the previous winter.

    Q: What is it about the area around the Mediterranean basin that produces such unusual weather extremes?

    Eltahir: Multiple factors come together to cause extreme heat waves such as the one that Europe has experienced this summer, as well as previously, in 2003, 2015, 2018, 2019, and 2020. Among these, however, mutual influences between atmospheric dynamics and surface conditions, known as land-atmosphere feedbacks, seem to play a very important role.

    In the current climate, southern Europe is located in the transition zone between the dry subtropics (the Sahara Desert in North Africa) and the relatively wet midlatitudes (with a climate similar to that of the Pacific Northwest). High summertime temperatures tend to make the precipitation that falls to the ground evaporate quickly, and as a consequence soil moisture during summer is very dependent on springtime precipitation. A dry spring in Europe (such as the 2022 one) causes dry soils in late spring and early summer. This lack of surface water in turn limits surface evaporation during summer. Two important consequences follow: First, incoming radiative energy from the sun preferentially goes into increasing air temperature rather than evaporating water; and second, the inflow of water into air layers near the surface decreases, which makes the air drier and precipitation less likely. Combined, these two influences increase the likelihood of heat waves and droughts.
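
    One compact way to summarize the feedback Eltahir describes is the surface energy balance, a standard textbook relation (not a formula quoted in the interview):

    ```latex
    R_n \;\approx\; H + \lambda E + G
    ```

    where R_n is the net radiation absorbed at the surface, H is the sensible heat flux that warms the near-surface air, λE is the latent heat flux associated with evaporation, and G is the heat conducted into the ground. When dry soils suppress λE, more of the absorbed energy must leave as H, so air temperatures climb and less moisture is returned to the atmosphere to support rainfall.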

    Tuel: Through land-atmosphere feedbacks, dry springs provide a favorable environment for persistent warm and dry summers but are of course not enough to directly cause heat waves. A spark is required to ignite the fuel. In Europe and elsewhere, this spark is provided by large-scale atmospheric dynamics. If an anticyclone settles over an area with very dry soils, surface temperature can quickly shoot up as land-atmosphere feedbacks come into play, developing into a heat wave that can persist for weeks.

    The sensitivity to springtime precipitation makes southern Europe and the Mediterranean particularly prone to persistent summer heat waves. This will play an increasingly important role in the future, as spring precipitation is expected to decline, making scorching summers even more likely in this corner of the world. The decline in spring precipitation, which originates as an anomalously dry winter around the Mediterranean, is very robust across climate projections. Southern Europe and the Mediterranean really stand out from most other land areas, where precipitation will on average increase with global warming.

    In our work, we showed that this Mediterranean winter decline was driven by two independent factors: on the one hand, trends in the large-scale circulation, notably stationary atmospheric waves, and on the other hand, reduced warming of the Mediterranean Sea relative to the surrounding continents — a well-known feature of global warming. Both factors lead to increased surface air pressure and reduced precipitation over the Mediterranean and Southern Europe.

    Q: What can we expect over the coming decades in terms of the frequency and severity of these kinds of droughts, floods, and other extremes in European weather?

    Tuel: Climate models have long shown that the frequency and intensity of heat waves were bound to increase as the global climate warms, and Europe is no exception. The reason is simple: As the global temperature rises, the temperature distribution shifts toward higher values, and heat waves become more intense and more frequent. Southern Europe and the Mediterranean, however, will be hit particularly hard. The reason for this is related to the land-atmosphere feedbacks we just discussed. Winter precipitation over the Mediterranean and spring precipitation over southern Europe will decline significantly, which will lead to a decrease in early summer soil moisture over southern Europe and will push average summer temperatures even higher; the region will become a true climate change hot spot. In that sense, 2022 may really be a taste of the future. The succession of recent heat waves in Europe, however, suggests that things may be going faster than climate model projections imply. Decadal variability or poorly understood trends in large-scale atmospheric dynamics may play a role here, though that is still debated. Another possibility is that climate models tend to underestimate the magnitude of land-atmosphere feedbacks and downplay the influence of dry soil moisture anomalies on summertime weather.

    Potential trends in floods are more difficult to assess because floods result from a multiplicity of factors, such as extreme precipitation, soil moisture levels, and land cover. Extreme precipitation is generally expected to increase in most regions, but very high uncertainties remain, notably because extreme precipitation is highly dependent on atmospheric dynamics about which models do not always agree. What is almost certain is that with warming, the water content of the atmosphere increases (following a law of thermodynamics known as the Clausius-Clapeyron relationship). Thus, if the dynamics are favorable to precipitation, a lot more of it may fall in a warmer climate. Last year’s floods in Germany, for example, were triggered by unprecedented heavy rainfall, which climate change made more likely.
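
    For reference, the Clausius-Clapeyron relationship mentioned above ties the atmosphere's maximum water vapor content to temperature. In standard form,

    ```latex
    \frac{1}{e_s}\frac{\mathrm{d}e_s}{\mathrm{d}T} \;=\; \frac{L_v}{R_v T^2} \;\approx\; 6\text{--}7\%\ \text{per}\ ^{\circ}\mathrm{C}
    ```

    where e_s is the saturation vapor pressure, L_v is the latent heat of vaporization, and R_v is the gas constant for water vapor. The roughly 6 to 7 percent increase per degree Celsius near typical surface temperatures is why a warmer atmosphere can deliver substantially heavier downpours whenever the dynamics favor rain.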


    Simulating neutron behavior in nuclear reactors

    Amelia Trainer applied to MIT because she lost a bet.

    As part of what the fourth-year nuclear science and engineering (NSE) doctoral student labels her “teenage rebellious phase,” Trainer was quite convinced she would just be wasting the application fee were she to submit an application. She wasn’t even “super sure” she wanted to go to college. But a high-school friend was convinced Trainer would get into a “top school” if she only applied. A bet followed: If Trainer lost, she would have to apply to MIT. Trainer lost — and is glad she did.

    Trainer grew up in Daytona Beach, Florida, where good grades were her thing. Seeing friends participate in interschool math competitions, she decided she would tag along and soon found she loved them. She remembers being adept at reading the room: If teams were especially struggling over a problem, Trainer figured the answer had to be something easy, like zero or one. “The hardest problems would usually have the most goofball answers,” she laughs.

    Simulating neutron behavior

    As a doctoral student, hard problems in math, specifically computational reactor physics, continue to be Trainer’s forte.

    Her research, under the guidance of Professor Benoit Forget in MIT NSE’s Computational Reactor Physics Group (CRPG), focuses on modeling complicated neutron behavior in reactors. Simulation helps forecast the behavior of reactors before millions of dollars are sunk into developing a potentially uneconomical unit. Using simulations, Trainer can see “where the neutrons are going, how much heat is being produced, and how much power the reactor can generate.” Her research helps form the foundation for the next generation of nuclear power plants.

    To simulate neutron behavior inside of a nuclear reactor, you first need to know how neutrons will interact with the various materials inside the system. These neutrons can have wildly different energies, thereby making them susceptible to different physical phenomena. For the entirety of her graduate studies, Trainer has been primarily interested in the physics regarding slow-moving neutrons and their scattering behavior.

    When a slow neutron scatters off of a material, it can induce or cancel out molecular vibrations between the material’s atoms. The effect that material vibrations can have on neutron energies, and thereby on reactor behavior, has been heavily approximated over the years. Trainer is primarily interested in chipping away at these approximations by creating scattering data for materials that have historically been misrepresented and by exploring new techniques for preparing slow-neutron scattering data.

    Trainer remembers waiting for a simulation to complete in the early days of the Covid-19 pandemic, when she discovered a way to predict neutron behavior with limited input data. Traditionally, “people have to store large tables of what neutrons will do under specific circumstances,” she says. “I’m really happy about it because it’s this really cool method of sampling what your neutron does from very little information.”

    Amelia Trainer — Modeling complicated neutron behavior in nuclear reactors

    As part of her research, Trainer often works closely with two software packages: OpenMC and NJOY. OpenMC is a Monte Carlo neutron transport simulation code that was developed in the CRPG and is used to simulate neutron behavior in reactor systems. NJOY is a nuclear data processing tool, and is used to create, augment, and prepare material data that is fed into tools like OpenMC. By editing both these codes to her specifications, Trainer is able to observe the effect that “upstream” material data has on the “downstream” reactor calculations. Through this, she hopes to identify additional problems: approximations that could lead to a noticeable misrepresentation of the physics.
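
    To give a concrete sense of how such tools are driven in practice, here is a minimal, hypothetical sketch using the OpenMC Python API (not Trainer’s research code): a single fuel pin in water, where the add_s_alpha_beta call attaches a thermal scattering library of the kind prepared with NJOY, so that slow neutrons see molecular vibrations in the water rather than free hydrogen atoms. Material compositions, dimensions, and settings below are illustrative assumptions.

    ```python
    # Hypothetical pin-cell sketch using the OpenMC Python API (illustrative only).
    import openmc

    # Fuel: low-enriched uranium dioxide (illustrative composition)
    fuel = openmc.Material(name="UO2 fuel")
    fuel.add_nuclide("U235", 0.04)
    fuel.add_nuclide("U238", 0.96)
    fuel.add_element("O", 2.0)
    fuel.set_density("g/cm3", 10.4)

    # Moderator: light water with a thermal scattering, S(alpha, beta), library,
    # which captures how slow neutrons exchange energy with molecular vibrations.
    water = openmc.Material(name="light water")
    water.add_element("H", 2.0)
    water.add_element("O", 1.0)
    water.set_density("g/cm3", 0.74)
    water.add_s_alpha_beta("c_H_in_H2O")

    # Geometry: a fuel cylinder surrounded by water, closed by a reflective boundary
    fuel_surface = openmc.ZCylinder(r=0.39)
    outer = openmc.ZCylinder(r=0.70, boundary_type="reflective")
    fuel_cell = openmc.Cell(fill=fuel, region=-fuel_surface)
    water_cell = openmc.Cell(fill=water, region=+fuel_surface & -outer)
    root = openmc.Universe(cells=[fuel_cell, water_cell])
    geometry = openmc.Geometry(root)

    # Monte Carlo settings: batches of simulated neutron histories
    settings = openmc.Settings()
    settings.batches = 100
    settings.inactive = 10
    settings.particles = 10_000

    model = openmc.model.Model(geometry=geometry,
                               materials=openmc.Materials([fuel, water]),
                               settings=settings)
    # model.run()  # requires an installed cross-section library (itself processed with NJOY)
    ```

    Everything upstream of a script like this, such as which temperatures and scattering laws appear in the data libraries, is exactly the kind of input Trainer tailors with NJOY before the transport calculation ever runs.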

    A love of geometry and poetry

    Trainer discovered the coolness of science as a child. Her mother, who cares for indoor plants and runs multiple greenhouses, and her father, a blacksmith and farrier, who explored materials science through his craft, were self-taught inspirations.

    Trainer’s father urged his daughter to learn and pursue any topics that she found exciting and encouraged her to read poems from “Calvin and Hobbes” out loud when she struggled with a speech impediment in early childhood. Reading the same passages every day helped her memorize them. “The natural manifestation of that extended into [a love of] poetry,” Trainer says.

    A love of poetry, combined with Trainer’s propensity for fun, led her to compose an ode to pi as part of an MIT-sponsored event for alumni. “I was really only in it for the cupcake,” she laughs. (Participants received an indulgent treat).

    Video: “MIT Matters: A Love Poem to Pi”

    Computations and nuclear science

    After being accepted at MIT, Trainer knew she wanted to study in a field that would meet her skills where they were — “my math skills were pretty underdeveloped in the grand scheme of things,” she says. An open-house weekend at MIT, where she met with faculty from the NSE department, and the opportunity to contribute to a discipline working toward clean energy, cemented Trainer’s decision to join NSE.

    As a high schooler, Trainer won a scholarship to Embry-Riddle Aeronautical University to learn computer coding and knew computational physics might be more aligned with her interests. After she joined MIT as an undergraduate student in 2014, she realized that the CRPG, with its focus on coding and modeling, might be a good fit. Fortunately, a graduate student from Forget’s team welcomed Trainer’s enthusiasm for research even as an undergraduate first-year. She has stayed with the lab ever since. 

    Research internships at Los Alamos National Laboratory, the creators of NJOY, have furthered Trainer’s enthusiasm for modeling and computational physics. She met a Los Alamos scientist after he presented a talk at MIT, and the connection snowballed into a collaboration where she could work on parts of the NJOY code. “It became a really cool collaboration which led me into a deep dive into physics and data preparation techniques, which was just so fulfilling,” Trainer says. As for what’s next, Trainer was awarded the Rickover fellowship in nuclear engineering by the Department of Energy’s Naval Reactors Division and will join the program in Pittsburgh after she graduates.

    For many years, Trainer’s cats, Jacques and Monster, have been constant companions. “Neutrons, computers, and cats, that’s my personality,” she laughs. Work continues to fuel her passion. To borrow a favorite phrase from Spaceman Spiff, her preferred “Calvin” avatar, Trainer’s approach to research has invariably been: “Another day, another mind-boggling adventure.”


    New process could enable more efficient plastics recycling

    The accumulation of plastic waste in the oceans, soil, and even in our bodies is one of the major pollution issues of modern times, with over 5 billion tons disposed of so far. Despite major efforts to recycle plastic products, actually making use of that motley mix of materials has remained a challenging issue.

    A key problem is that plastics come in so many different varieties, and chemical processes for breaking them down into a form that can be reused in some way tend to be very specific to each type of plastic. Sorting the hodgepodge of waste material, from soda bottles to detergent jugs to plastic toys, is impractical at large scale. Today, much of the plastic material gathered through recycling programs ends up in landfills anyway. Surely there’s a better way.

    According to new research from MIT and elsewhere, it appears there may indeed be a much better way. A chemical process using a catalyst based on cobalt has been found to be very effective at breaking down a variety of plastics, such as polyethylene (PE) and polypropylene (PP), the two most widely produced forms of plastic, into a single product, propane. Propane can then be used as a fuel for stoves, heaters, and vehicles, or as a feedstock for the production of a wide variety of products — including new plastics, thus potentially providing at least a partial closed-loop recycling system.

    The finding is described today in the open access journal JACS Au, in a paper by MIT professor of chemical engineering Yuriy Román-Leshkov, postdoc Guido Zichitella, and seven others at MIT, the SLAC National Accelerator Laboratory, and the National Renewable Energy Laboratory.

    Recycling plastics has been a thorny problem, Román-Leshkov explains, because the long-chain molecules in plastics are held together by carbon bonds, which are “very stable and difficult to break apart.” Existing techniques for breaking these bonds tend to produce a random mix of different molecules, which would then require complex refining methods to separate out into usable specific compounds. “The problem is,” he says, “there’s no way to control where in the carbon chain you break the molecule.”

    But to the surprise of the researchers, a catalyst made of a microporous material called a zeolite that contains cobalt nanoparticles can selectively break down various plastic polymer molecules and turn more than 80 percent of them into propane.

    Because zeolites are riddled with tiny pores less than a nanometer wide (corresponding to the width of the polymer chains), a logical assumption had been that there would be little interaction at all between the zeolite and the polymers. Surprisingly, however, the opposite turned out to be the case: Not only do the polymer chains enter the pores, but the synergistic action of the cobalt and the acid sites in the zeolite can break the chain at the same point. That cleavage site turned out to correspond to chopping off exactly one propane molecule without generating unwanted methane, leaving the rest of the longer hydrocarbons ready to undergo the process, again and again.
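
    As a rough picture of the overall chemistry (an idealized stoichiometry given here for illustration, neglecting chain ends and minor byproducts), the reaction clips the polyethylene backbone into three-carbon pieces and caps each cut with hydrogen:

    ```latex
    \mathrm{-(CH_2)_{3n}-} \;+\; n\,\mathrm{H_2} \;\longrightarrow\; n\,\mathrm{C_3H_8}
    ```

    In this idealized picture, roughly one molecule of hydrogen is consumed for every molecule of propane produced, which is one reason the hydrogen supply discussed below matters for the overall economics and carbon footprint of the process.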

    “Once you have this one compound, propane, you lessen the burden on downstream separations,” Román-Leshkov says. “That’s the essence of why we think this is quite important. We’re not only breaking the bonds, but we’re generating mainly a single product” that can be used for many different products and processes.

    The materials needed for the process, zeolites and cobalt, “are both quite cheap” and widely available, he says, although today most cobalt comes from troubled areas in the Democratic Republic of Congo. Some new production is being developed in Canada, Cuba, and other places. The other material needed for the process is hydrogen, which today is mostly produced from fossil fuels but can easily be made other ways, including electrolysis of water using carbon-free electricity such as solar or wind power.

    The researchers tested their system on a real example of mixed recycled plastic, producing promising results. But more testing will be needed on a greater variety of mixed waste streams to determine how much fouling takes place from various contaminants in the material — such as inks, glues, and labels attached to the plastic containers, or other nonplastic materials that get mixed in with the waste — and how that affects the long-term stability of the process.

    Together with collaborators at NREL, the MIT team is also continuing to study the economics of the system, and analyzing how it can fit into today’s systems for handling plastic and mixed waste streams. “We don’t have all the answers yet,” Román-Leshkov says, but preliminary analysis looks promising.

    The research team included Amani Ebrahim and Simone Bare at the SLAC National Accelerator Laboratory; Jie Zhu, Anna Brenner, Griffin Drake and Julie Rorrer at MIT; and Greg Beckham at the National Renewable Energy Laboratory. The work was supported by the U.S. Department of Energy (DoE), the Swiss National Science Foundation, and the DoE’s Office of Energy Efficiency and Renewable Energy, Advanced Manufacturing Office (AMO), and Bioenergy Technologies Office (BETO), as part of the Bio-Optimized Technologies to keep Thermoplastics out of Landfills and the Environment (BOTTLE) Consortium.


    Scientists chart how exercise affects the body

    Exercise is well-known to help people lose weight and avoid gaining it. However, identifying the cellular mechanisms that underlie this process has proven difficult because so many cells and tissues are involved.

    In a new study in mice that expands researchers’ understanding of how exercise and diet affect the body, MIT and Harvard Medical School researchers have mapped out many of the cells, genes, and cellular pathways that are modified by exercise or high-fat diet. The findings could offer potential targets for drugs that could help to enhance or mimic the benefits of exercise, the researchers say.

    “It is extremely important to understand the molecular mechanisms that are drivers of the beneficial effects of exercise and the detrimental effects of a high-fat diet, so that we can understand how we can intervene, and develop drugs that mimic the impact of exercise across multiple tissues,” says Manolis Kellis, a professor of computer science in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Broad Institute of MIT and Harvard.

    The researchers studied mice on high-fat or normal diets that were either sedentary or given the opportunity to exercise whenever they wanted. Using single-cell RNA sequencing, the researchers cataloged the responses of 53 types of cells found in skeletal muscle and two types of fatty tissue.

    “One of the general points that we found in our study, which is overwhelmingly clear, is how high-fat diets push all of these cells and systems in one way, and exercise seems to be pushing them nearly all in the opposite way,” Kellis says. “It says that exercise can really have a major effect throughout the body.”

    Kellis and Laurie Goodyear, a professor of medicine at Harvard Medical School and senior investigator at the Joslin Diabetes Center, are the senior authors of the study, which appears today in the journal Cell Metabolism. Jiekun Yang, a research scientist in MIT CSAIL; Maria Vamvini, an instructor of medicine at the Joslin Diabetes Center; and Pasquale Nigro, an instructor of medicine at the Joslin Diabetes Center, are the lead authors of the paper.

    The risks of obesity

    Obesity is a growing health problem around the world. In the United States, more than 40 percent of the population is considered obese, and nearly 75 percent is overweight. Being overweight is a risk factor for many diseases, including heart disease, cancer, Alzheimer’s disease, and even infectious diseases such as Covid-19.

    “Obesity, along with aging, is a global factor that contributes to every aspect of human health,” Kellis says.

    Several years ago, his lab performed a study on the FTO gene region, which has been strongly linked to obesity risk. In that 2015 study, the research team found that genes in this region control a pathway that prompts immature fat cells called progenitor adipocytes to either become fat-burning cells or fat-storing cells.

    That finding, which demonstrated a clear genetic component to obesity, motivated Kellis to begin looking at how exercise, a well-known behavioral intervention that can prevent obesity, might act on progenitor adipocytes at the cellular level.

    To explore that question, Kellis and his colleagues decided to perform single-cell RNA sequencing of three types of tissue — skeletal muscle, visceral white adipose tissue (found packed around internal organs, where it stores fat), and subcutaneous white adipose tissue (which is found under the skin and primarily burns fat).

    These tissues came from mice from four different experimental groups. For three weeks, two groups of mice were fed either a normal diet or a high-fat diet. For the next three weeks, each of those two groups was further divided into a sedentary group and an exercise group, which had continuous access to a treadmill.

    By analyzing tissues from those mice, the researchers were able to comprehensively catalog the genes that were activated or suppressed by exercise in 53 different cell types.
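
    For readers curious what such a catalog looks like computationally, the sketch below shows a typical single-cell RNA-sequencing differential-expression pass using the open-source scanpy toolkit. It is a generic illustration, not the authors’ actual pipeline; the file name, column names, and group labels are hypothetical.

    ```python
    # Generic single-cell differential-expression sketch with scanpy (illustrative only).
    import scanpy as sc

    # Load an annotated cells-by-genes matrix; metadata such as cell type and
    # experimental condition is assumed to live in adata.obs (hypothetical file).
    adata = sc.read_h5ad("tissue_atlas.h5ad")

    # Standard normalization and log transform
    sc.pp.normalize_total(adata, target_sum=1e4)
    sc.pp.log1p(adata)

    # For each annotated cell type, rank genes up- or down-regulated in
    # exercised animals relative to sedentary ones.
    results = {}
    for cell_type in adata.obs["cell_type"].unique():
        subset = adata[adata.obs["cell_type"] == cell_type].copy()
        sc.tl.rank_genes_groups(subset, groupby="condition",
                                groups=["exercise"], reference="sedentary",
                                method="wilcoxon")
        results[cell_type] = sc.get.rank_genes_groups_df(subset, group="exercise")
    ```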

    The researchers found that in all three tissue types, mesenchymal stem cells (MSCs) appeared to control many of the diet and exercise-induced effects that they observed. MSCs are stem cells that can differentiate into other cell types, including fat cells and fibroblasts. In adipose tissue, the researchers found that a high-fat diet modulated MSCs’ capacity to differentiate into fat-storing cells, while exercise reversed this effect.

    In addition to promoting fat storage, the researchers found that a high-fat diet also stimulated MSCs to secrete factors that remodel the extracellular matrix (ECM) — a network of proteins and other molecules that surround and support cells and tissues in the body. This ECM remodeling helps provide structure for enlarged fat-storing cells and also creates a more inflammatory environment.

    “As the adipocytes become overloaded with lipids, there’s an extreme amount of stress, and that causes low-grade inflammation, which is systemic and preserved for a long time,” Kellis says. “That is one of the factors that is contributing to many of the adverse effects of obesity.”

    Circadian effects

    The researchers also found that high-fat diets and exercise had opposing effects on cellular pathways that control circadian rhythms — the 24-hour cycles that govern many functions, from sleep to body temperature, hormone release, and digestion. The study revealed that exercise boosts the expression of genes that regulate these rhythms, while a high-fat diet suppresses them.

    “There have been a lot of studies showing that when you eat during the day is extremely important in how you absorb the calories,” Kellis says. “The circadian rhythm connection is a very important one, and shows how obesity and exercise are in fact directly impacting that circadian rhythm in peripheral organs, which could act systemically on distal clocks and regulate stem cell functions and immunity.”

    The researchers then compared their results to a database of human genes that have been linked with metabolic traits. They found that two of the circadian rhythm genes they identified in this study, known as DBP and CDKN1A, have genetic variants that have been associated with a higher risk of obesity in humans.

    “These results help us see the translational values of these targets, and how we could potentially target specific biological processes in specific cell types,” Yang says.

    The researchers are now analyzing samples of small intestine, liver, and brain tissue from the mice in this study, to explore the effects of exercise and high-fat diets on those tissues. They are also conducting work with human volunteers to sample blood and biopsies and study similarities and differences between human and mouse physiology. They hope that their findings will help guide drug developers in designing drugs that might mimic some of the beneficial effects of exercise.

    “The message for everyone should be, eat a healthy diet and exercise if possible,” Kellis says. “For those for whom this is not possible, due to low access to healthy foods, or due to disabilities or other factors that prevent exercise, or simply lack of time to have a healthy diet or a healthy lifestyle, what this study says is that we now have a better handle on the pathways, the specific genes, and the specific molecular and cellular processes that we should be manipulating therapeutically.”

    The research was funded by the National Institutes of Health and the Novo Nordisk Research Center in Seattle.


    Processing waste biomass to reduce airborne emissions

    To prepare fields for planting, farmers the world over often burn corn stalks, rice husks, hay, straw, and other waste left behind from the previous harvest. In many places, the practice creates huge seasonal clouds of smog, contributing to air pollution that kills 7 million people globally a year, according to the World Health Organization.

    Annually, $120 billion worth of crop and forest residues are burned in the open worldwide — a major waste of resources in an energy-starved world, says Kevin Kung SM ’13, PhD ’17. Kung is working to transform this waste biomass into marketable products — and capitalize on a billion-dollar global market — through his MIT spinoff company, Takachar.

    Founded in 2015, Takachar develops small-scale, low-cost, portable equipment to convert waste biomass into solid fuel using a variety of thermochemical treatments, including one known as oxygen-lean torrefaction. The technology emerged from Kung’s PhD project in the lab of Ahmed Ghoniem, the Ronald C. Crane (1972) Professor of Mechanical Engineering at MIT.

    Biomass fuels, including wood, peat, and animal dung, are a major source of carbon emissions — but billions of people rely on such fuels for cooking, heating, and other household needs. “Currently, burning biomass generates 10 percent of the primary energy used worldwide, and the process is used largely in rural, energy-poor communities. We’re not going to change that overnight. There are places with no other sources of energy,” Ghoniem says.

    What Takachar’s technology provides is a way to use biomass more cleanly and efficiently by concentrating the fuel and eliminating contaminants such as moisture and dirt, thus creating a “clean-burning” fuel — one that generates less smoke. “In rural communities where biomass is used extensively as a primary energy source, torrefaction will address air pollution head-on,” Ghoniem says.

    Thermochemical treatment densifies biomass at elevated temperatures, converting plant materials that are typically loose, wet, and bulky into compact charcoal. Centralized processing plants exist, but collection and transportation present major barriers to utilization, Kung says. Takachar’s solution moves processing into the field: To date, Takachar has worked with about 5,500 farmers to process 9,000 metric tons of crops.

    Takachar estimates its technology has the potential to reduce carbon dioxide equivalent emissions by gigatons per year at scale. (“Carbon dioxide equivalent” is a measure used to gauge global warming potential.) In recognition, in 2021 Takachar won the first-ever Earthshot Prize in the clean air category, a £1 million prize funded by Prince William and Princess Kate’s Royal Foundation.

    Roots in Kenya

    As Kung tells the story, Takachar emerged from a class project that took him to Kenya — which explains the company’s name, a combination of takataka, which means “trash” in Swahili, and char, for the charcoal end product.

    It was 2011, and Kung was at MIT as a biological engineering grad student focused on cancer research. But “MIT gives students big latitude for exploration, and I took courses outside my department,” he says. In spring 2011, he signed up for a class known as 15.966 (Global Health Delivery Lab) in the MIT Sloan School of Management. The class brought Kung to Kenya to work with a nongovernmental organization in Nairobi’s Kibera, the largest urban slum in Africa.

    “We interviewed slum households for their views on health, and that’s when I noticed the charcoal problem,” Kung says. The problem, as Kung describes it, was that charcoal was everywhere in Kibera — piled up outside, traded by the road, and used as the primary fuel, even indoors. Its creation contributed to deforestation, and its smoke presented a serious health hazard.

    Eager to address this challenge, Kung secured fellowship support from the MIT International Development Initiative and the Priscilla King Gray Public Service Center to conduct more research in Kenya. In 2012, he formed Takachar as a team and received seed money from the MIT IDEAS Global Challenge, MIT Legatum Center for Development and Entrepreneurship, and D-Lab to produce charcoal from household organic waste. (This work also led to a fertilizer company, Safi Organics, that Kung founded in 2016 with the help of MIT IDEAS. But that is another story.)

    Meanwhile, Kung had another top priority: finding a topic for his PhD dissertation. Back at MIT, he met Alexander Slocum, the Walter M. May and A. Hazel May Professor of Mechanical Engineering, who on a long walk-and-talk along the Charles River suggested he turn his Kenya work into a thesis. Slocum connected him with Robert Stoner, deputy director for science and technology at the MIT Energy Initiative (MITEI) and founding director of MITEI’s Tata Center for Technology and Design. Stoner in turn introduced Kung to Ghoniem, who became his PhD advisor, while Slocum and Stoner joined his doctoral committee.

    Roots in MIT lab

    Ghoniem’s telling of the Takachar story begins, not surprisingly, in the lab. Back in 2010, he had a master’s student interested in renewable energy, and he suggested the student investigate biomass. That student, Richard Bates ’10, SM ’12, PhD ’16, began exploring the science of converting biomass to more clean-burning charcoal through torrefaction.

    Most torrefaction (also known as low-temperature pyrolysis) systems use external heating sources, but the lab’s goal, Ghoniem explains, was to develop an efficient, self-sustained reactor that would generate fewer emissions. “We needed to understand the chemistry and physics of the process, and develop fundamental scaling models, before going to the lab to build the device,” he says.

    By the time Kung joined the lab in 2013, Ghoniem was working with the Tata Center to identify technology suitable for developing countries and largely based on renewable energy. Kung was able to secure a Tata Fellowship and — building on Bates’ research — develop the small-scale, practical device for biomass thermochemical conversion in the field that launched Takachar.

    This device, which was patented by MIT with inventors Kung, Ghoniem, Stoner, MIT research scientist Santosh Shanbhogue, and Slocum, is self-contained and scalable. It burns a little of the biomass to generate heat; this heat bakes the rest of the biomass, releasing gases; the system then introduces air to enable these gases to combust, which burns off the volatiles and generates more heat, keeping the thermochemical reaction going.

    “The trick is how to introduce the right amount of air at the right location to sustain the process,” Ghoniem explains. “If you put in more air, that will burn the biomass. If you put in less, there won’t be enough heat to produce the charcoal. That will stop the reaction.”

    About 10 percent of the biomass is used as fuel to support the reaction, Kung says, adding that “90 percent is densified into a form that’s easier to handle and utilize.” He notes that the research received financial support from the Abdul Latif Jameel Water and Food Systems Lab and the Deshpande Center for Technological Innovation, both at MIT. Sonal Thengane, another postdoc in Ghoniem’s lab, participated in the effort to scale up the technology at the MIT Bates Lab (no relation to Richard Bates).
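
    A rough, back-of-the-envelope calculation helps show why burning on the order of 10 percent of the feed can be enough to drive the process. The numbers below are generic, textbook-scale assumptions for crop residue, not Takachar design data.

    ```python
    # Back-of-the-envelope autothermal check (all parameter values are rough,
    # illustrative assumptions, not measured Takachar figures).
    biomass_in_kg = 100.0        # feed per batch (hypothetical)
    fraction_burned = 0.10       # share of feed combusted to drive the process
    lhv_mj_per_kg = 15.0         # assumed lower heating value of crop residue
    moisture_fraction = 0.15     # assumed moisture content of the feed

    heat_released_mj = fraction_burned * biomass_in_kg * lhv_mj_per_kg

    # Energy demand: evaporate the moisture, then heat the remaining solids
    # from ambient (~25 C) to a torrefaction temperature (~275 C).
    latent_heat_mj_per_kg = 2.26       # water, near 100 C
    cp_biomass_mj_per_kg_k = 0.0015    # about 1.5 kJ/kg-K
    solids_kg = biomass_in_kg * (1 - fraction_burned)
    heat_needed_mj = (solids_kg * moisture_fraction * latent_heat_mj_per_kg
                      + solids_kg * (1 - moisture_fraction)
                        * cp_biomass_mj_per_kg_k * (275 - 25))

    print(f"heat released: {heat_released_mj:.0f} MJ, heat needed: {heat_needed_mj:.0f} MJ")
    # With these assumptions, about 150 MJ is released versus roughly 59 MJ needed
    # (ignoring heat losses), so a self-sustained reaction is plausible, consistent
    # with the roughly 10 percent figure quoted above.
    ```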

    The charcoal produced is more valuable per ton and easier to transport and sell than biomass, reducing transportation costs by two-thirds and giving farmers an additional income opportunity — and an incentive not to burn agricultural waste, Kung says. “There’s more income for farmers, and you get better air quality.”

    Roots in India

    When Kung became a Tata Fellow, he joined a program founded to take on the biggest challenges of the developing world, with a focus on India. According to Stoner, Tata Fellows, including Kung, typically visit India twice a year and spend six to eight weeks meeting stakeholders in industry, the government, and in communities to gain perspective on their areas of study.

    “A unique part of Tata is that you’re considering the ecosystem as a whole,” says Kung, who interviewed hundreds of smallholder farmers, met with truck drivers, and visited existing biomass processing plants during his Tata trips to India. (Along the way, he also connected with Indian engineer Vidyut Mohan, who became Takachar’s co-founder.)

    “It was very important for Kevin to be there walking about, experimenting, and interviewing farmers,” Stoner says. “He learned about the lives of farmers.”

    These experiences helped instill in Kung an appreciation for small farmers that still drives him today as Takachar rolls out its first pilot programs, tinkers with the technology, grows its team (now up to 10), and endeavors to build a revenue stream. So, while Takachar has gotten a lot of attention and accolades — from the IDEAS award to the Earthshot Prize — Kung says what motivates him is the prospect of improving people’s lives.

    The dream, he says, is to empower communities to help both the planet and themselves. “We’re excited about the environmental justice perspective,” he says. “Our work brings production and carbon removal or avoidance to rural communities — providing them with a way to convert waste, make money, and reduce air pollution.”

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative.


    Ocean scientists measure sediment plume stirred up by deep-sea-mining vehicle

    What will be the impact to the ocean if humans are to mine the deep sea? It’s a question that’s gaining urgency as interest in marine minerals has grown.

    The ocean’s deep-sea bed is scattered with ancient, potato-sized rocks called “polymetallic nodules” that contain nickel and cobalt — minerals that are in high demand for manufacturing batteries, such as those used to power electric vehicles and store renewable energy, with demand rising further in response to factors such as increasing urbanization. The deep ocean contains vast quantities of mineral-laden nodules, but the impact of mining the ocean floor is both unknown and highly contested.

    Now MIT ocean scientists have shed some light on the topic, with a new study on the cloud of sediment that a collector vehicle would stir up as it picks up nodules from the seafloor.

    The study, appearing today in Science Advances, reports the results of a 2021 research cruise to a region of the Pacific Ocean known as the Clarion Clipperton Zone (CCZ), where polymetallic nodules abound. There, researchers equipped a pre-prototype collector vehicle with instruments to monitor sediment plume disturbances as the vehicle maneuvered across the seafloor, 4,500 meters below the ocean’s surface. Through a sequence of carefully conceived maneuvers, the MIT scientists used the vehicle to monitor its own sediment cloud and measure its properties.

    Their measurements showed that the vehicle created a dense plume of sediment in its wake, which spread under its own weight, in a phenomenon known in fluid dynamics as a “turbidity current.” As it gradually dispersed, the plume remained relatively low, staying within 2 meters of the seafloor, as opposed to immediately lofting higher into the water column as had been postulated.

    “It’s quite a different picture of what these plumes look like, compared to some of the conjecture,” says study co-author Thomas Peacock, professor of mechanical engineering at MIT. “Modeling efforts of deep-sea mining plumes will have to account for these processes that we identified, in order to assess their extent.”

    The study’s co-authors include lead author Carlos Muñoz-Royo, Raphael Ouillon, and Souha El Mousadik of MIT; and Matthew Alford of the Scripps Institution of Oceanography.

    Deep-sea maneuvers

    To collect polymetallic nodules, some mining companies are proposing to deploy tractor-sized vehicles to the bottom of the ocean. The vehicles would vacuum up the nodules, along with some sediment, as they drive along the seafloor. The nodules and sediment would then be separated inside the vehicle, with the nodules sent up through a riser pipe to a surface vessel, while most of the sediment would be discharged immediately behind the vehicle.

    Peacock and his group have previously studied the dynamics of the sediment plume that associated surface operation vessels may pump back into the ocean. In their current study, they focused on the opposite end of the operation, to measure the sediment cloud created by the collectors themselves.

    In April 2021, the team joined an expedition led by Global Sea Mineral Resources NV (GSR), a Belgian marine engineering contractor that is exploring the CCZ for ways to extract metal-rich nodules. A European-based science team, Mining Impacts 2, also conducted separate studies in parallel. The cruise was the first in over 40 years to test a “pre-prototype” collector vehicle in the CCZ. The machine, called Patania II, stands about 3 meters high, spans 4 meters wide, and is about one-third the size of what a commercial-scale vehicle is expected to be.

    While the contractor tested the vehicle’s nodule-collecting performance, the MIT scientists monitored the sediment cloud created in the vehicle’s wake. They did so using two maneuvers that the vehicle was programmed to take: a “selfie,” and a “drive-by.”

    Both maneuvers began in the same way, with the vehicle setting out in a straight line, all its suction systems turned on. The researchers let the vehicle drive along for 100 meters, collecting any nodules in its path. Then, in the “selfie” maneuver, they directed the vehicle to turn off its suction systems and double back around to drive through the cloud of sediment it had just created. The vehicle’s installed sensors measured the concentration of sediment during this “selfie” maneuver, allowing the scientists to monitor the cloud within minutes of the vehicle stirring it up.

    Video: The Patania II pre-prototype collector vehicle entering, driving through, and leaving the low-lying turbidity current plume as part of a selfie operation. For scale, the instrumentation post attached to the front of the vehicle reaches about 3 meters above the seabed. The movie is sped up by a factor of 20. Credit: Global Sea Mineral Resources

    For the “drive-by” maneuver, the researchers placed a sensor-laden mooring 50 to 100 meters from the vehicle’s planned tracks. As the vehicle drove along collecting nodules, it created a plume that eventually spread past the mooring after an hour or two. This “drive-by” maneuver enabled the team to monitor the sediment cloud over a longer timescale of several hours, capturing the plume evolution.

    Out of steam

    Over multiple vehicle runs, Peacock and his team were able to measure and track the evolution of the sediment plume created by the deep-sea-mining vehicle.

    “We saw that the vehicle would be driving in clear water, seeing the nodules on the seabed,” Peacock says. “And then suddenly there’s this very sharp sediment cloud coming through when the vehicle enters the plume.”

    From the selfie views, the team observed a behavior that was predicted by some of their previous modeling studies: The vehicle stirred up a heavy amount of sediment that was dense enough that, even after some mixing with the surrounding water, it generated a plume that behaved almost as a separate fluid, spreading under its own weight in what’s known as a turbidity current.

    “The turbidity current spreads under its own weight for some time, tens of minutes, but as it does so, it’s depositing sediment on the seabed and eventually running out of steam,” Peacock says. “After that, the ocean currents get stronger than the natural spreading, and the sediment transitions to being carried by the ocean currents.”
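
    For a rough sense of the physics, the front of such a gravity-driven turbidity current spreads at a speed commonly estimated with a buoyancy-velocity scaling; this is a standard fluid-dynamics relation, not a result taken from the new paper:

    ```latex
    u \;\sim\; \sqrt{g' h}, \qquad g' = g\,\frac{\rho_{\mathrm{plume}} - \rho_{\mathrm{seawater}}}{\rho_{\mathrm{seawater}}}
    ```

    Here h is the thickness of the sediment-laden layer and g' is the reduced gravity set by the plume’s excess density. As sediment settles out, the excess density and therefore g' shrink, the current slows, and eventually the ambient ocean currents dominate, which is the handoff Peacock describes.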

    By the time the sediment drifted past the mooring, the researchers estimate that 92 to 98 percent of the sediment either settled back down or remained within 2 meters of the seafloor as a low-lying cloud. There is, however, no guarantee that the sediment always stays there rather than drifting further up in the water column. Recent and future studies by the research team are looking into this question, with the goal of consolidating understanding for deep-sea mining sediment plumes.

    “Our study clarifies the reality of what the initial sediment disturbance looks like when you have a certain type of nodule mining operation,” Peacock says. “The big takeaway is that there are complex processes like turbidity currents that take place when you do this kind of collection. So, any effort to model a deep-sea-mining operation’s impact will have to capture these processes.”

    “Sediment plumes produced by deep-seabed mining are a major concern with regards to environmental impact, as they will spread over potentially large areas beyond the actual site of mining and affect deep-sea life,” says Henko de Stigter, a marine geologist at the Royal Netherlands Institute for Sea Research, who was not involved in the research. “The current paper provides essential insight in the initial development of these plumes.”

    This research was supported, in part, by the National Science Foundation, ARPA-E, the 11th Hour Project, the Benioff Ocean Initiative, and Global Sea Mineral Resources. The funders had no role in any aspects of the research analysis, the research team states.


    3 Questions: Janelle Knox-Hayes on producing renewable energy that communities want

    Wind power accounted for 8 percent of U.S. electricity consumption in 2020, and is growing rapidly in the country’s energy portfolio. But some projects, like the now-defunct Cape Wind proposal for offshore power in Massachusetts, have run aground due to local opposition. Are there ways to avoid this in the future?

    MIT professors Janelle Knox-Hayes and Donald Sadoway think so. In a perspective piece published today in the journal Joule, they and eight other professors call for a new approach to wind-power deployment, one that engages communities in a process of “co-design” and adapts solutions to local needs. That process, they say, could spur additional creativity in renewable energy engineering, while making communities more amenable to existing technologies. In addition to Knox-Hayes and Sadoway, the paper’s co-authors are Michael J. Aziz of Harvard University; Dennice F. Gayme of Johns Hopkins University; Kathryn Johnson of the Colorado School of Mines; Perry Li of the University of Minnesota; Eric Loth of the University of Virginia; Lucy Y. Pao of the University of Colorado; Jessica Smith of the Colorado School of Mines; and Sonya Smith of Howard University.

    Knox-Hayes is the Lister Brothers Associate Professor of Economic Geography and Planning in MIT’s Department of Urban Studies and Planning, and an expert on the social and political context of renewable energy adoption; Sadoway is the John F. Elliott Professor of Materials Chemistry in MIT’s Department of Materials Science and Engineering, and a leading global expert on developing new forms of energy storage. MIT News spoke with Knox-Hayes about the topic.

    Q: What is the core problem you are addressing in this article?

    A: It is problematic to act as if technology can only be engineered in a silo and then delivered to society. To solve problems like climate change, we need to see technology as a socio-technical system, which is integrated from its inception into society. From a design standpoint, that begins with conversations, values assessments, and understanding what communities need.  If we can do that, we will have a much easier time delivering the technology in the end.

    What we have seen in the Northeast, in trying to meet our climate objectives and energy efficiency targets, is that we need a lot of offshore wind, and a lot of projects have stalled because a community was saying “no.” And part of the reason communities refuse projects is that they feel they’ve never been properly consulted. What form does the technology take, and how would it operate within a community? That conversation can push the boundaries of engineering.

    Q: The new paper makes the case for a new practice of “co-design” in the field of renewable energy. You call this the “STEP” process, standing for all the socio-technical-political-economic issues that an engineering project might encounter. How would you describe the STEP idea? And to what extent would industry be open to new attempts to design an established technology?

    A: The idea is to bring together all these elements in an interdisciplinary process, and engage stakeholders. The process could start with a series of community forums where we bring everyone together, and do a needs assessment, which is a common practice in planning. We might see that offshore wind energy needs to be considered in tandem with the local fishing industry, or servicing the installations, or providing local workforce training. The STEP process allows us to take a step back, and start with planners, policymakers, and community members on the ground.

    It is also about changing the nature of research and practice and teaching, so that students are not just in classrooms, they are also learning to work with communities. I think formalizing that piece is important. We are starting now to really feel the impacts of climate change, so we have to confront the reality of breaking through political boundaries, even in the United States. That is the only way to make this successful, and that comes back to how can technology be co-designed.

    At MIT, innovation is the spirit of the endeavor, and that is why MIT has so many industry partners engaged in initiatives like MITEI [the MIT Energy Initiative] and the Climate Consortium. The value of the partnership is that MIT pushes the boundaries of what is possible. It is the idea that we can advance and we can do something incredible, we can innovate the future. What we are suggesting with this work is that innovation isn’t something that happens exclusively in a laboratory, but something that is very much built in partnership with communities and other stakeholders.

    Q: How much does this approach also apply to solar power, as the other leading type of renewable energy? It seems like communities also wrestle with where to locate solar arrays, or how to compensate homeowners, communities, and other solar hosts for the power they generate.

    A: I would not say solar has the same set of challenges, but rather that renewable technologies face similar challenges. With solar, there are also questions of access and siting. Another big challenge is to create financing models that provide value and opportunity at different scales. For example, is solar viable for tenants in multi-family units who want to engage with clean energy? This is a similar question for micro-wind opportunities for buildings. With offshore wind, a restriction is that if it is within sightlines, it might be problematic. But there are exciting technologies that have enabled deep wind, or the establishment of floating turbines up to 50 kilometers offshore. Storage solutions such as hydro-pneumatic energy storage, gravity energy storage or buoyancy storage can help maintain the transmission rate while reducing the number of transmission lines needed.

    In a lot of communities, the reality of renewables is that if you can generate your own energy, you can establish a level of security and resilience that feeds other benefits. 

    Nevertheless, as demonstrated in the Cape Wind case, technology [may be rejected] unless a community is involved from the beginning. Community involvement also creates other opportunities. Suppose, for example, that high school students are working as interns on renewable energy projects with engineers at great universities from the region. This provides a point of access for families and allows them to take pride in the systems they create.  It gives a further sense of purpose to the technology system, and vests the community in the system’s success. It is the difference between, “It was delivered to me,” and “I built it.” For researchers the article is a reminder that engineering and design are more successful if they are inclusive. Engineering and design processes are also meant to be accessible and fun.


    Passive cooling system could benefit off-grid locations

    As the world gets warmer, the use of power-hungry air conditioning systems is projected to increase significantly, putting a strain on existing power grids and bypassing many locations with little or no reliable electric power. Now, an innovative system developed at MIT offers a way to use passive cooling to preserve food crops and supplement conventional air conditioners in buildings, with no need for power and only a small need for water.

    The system, which combines radiative cooling, evaporative cooling, and thermal insulation in a slim package that could resemble existing solar panels, can provide up to about 9.3 degrees Celsius (roughly 17 degrees Fahrenheit) of cooling below the ambient temperature, enough to permit safe food storage for about 40 percent longer under very humid conditions. It could triple the safe storage time under drier conditions.

    The findings are reported today in the journal Cell Reports Physical Science, in a paper by MIT postdoc Zhengmao Lu, Arny Leroy PhD ’21, professors Jeffrey Grossman and Evelyn Wang, and two others. While more research is needed in order to bring down the cost of one key component of the system, the researchers say that eventually such a system could play a significant role in meeting the cooling needs of many parts of the world where a lack of electricity or water limits the use of conventional cooling systems.

    The system cleverly combines previous standalone cooling designs that each provide limited amounts of cooling power, in order to produce significantly more cooling overall — enough to help reduce food losses from spoilage in parts of the world that are already suffering from limited food supplies. In recognition of that potential, the research team has been partly supported by MIT’s Abdul Latif Jameel Water and Food Systems Lab.

    “This technology combines some of the good features of previous technologies such as evaporative cooling and radiative cooling,” Lu says. By using this combination, he says, “we show that you can achieve significant food life extension, even in areas where you have high humidity,” which limits the capabilities of conventional evaporative or radiative cooling systems.

    In places that do have existing air conditioning systems in buildings, the new system could be used to significantly reduce the load on these systems by sending cool water to the hottest part of the system, the condenser. “By lowering the condenser temperature, you can effectively increase the air conditioner efficiency, so that way you can potentially save energy,” Lu says.

    Other groups have also been pursuing passive cooling technologies, he says, but “by combining those features in a synergistic way, we are now able to achieve high cooling performance, even in high-humidity areas where previous technology generally cannot perform well.”

    The system consists of three layers of material, which together provide cooling as water and heat pass through the device. In practice, the device could resemble a conventional solar panel, but instead of putting out electricity, it would directly provide cooling, for example by acting as the roof of a food storage container. Or, it could be used to send chilled water through pipes to cool parts of an existing air conditioning system and improve its efficiency. The only maintenance required is adding water for the evaporation, but the consumption is so low that this need only be done about once every four days in the hottest, driest areas, and only once a month in wetter areas.

    The top layer is an aerogel, a material consisting mostly of air enclosed in the cavities of a sponge-like structure made of polyethylene. The material is highly insulating but freely allows both water vapor and infrared radiation to pass through. The evaporation of water (rising up from the layer below) provides some of the cooling power, while the infrared radiation, taking advantage of the extreme transparency of Earth’s atmosphere at those wavelengths, radiates some of the heat straight up through the air and into space — unlike air conditioners, which spew hot air into the immediate surrounding environment.

    Below the aerogel is a layer of hydrogel — another sponge-like material, but one whose pore spaces are filled with water rather than air. It’s similar to material currently used commercially for products such as cooling pads or wound dressings. This provides the water source for evaporative cooling, as water vapor forms at its surface and the vapor passes up through the aerogel layer and out to the environment.

    Below that, a mirror-like layer reflects any incoming sunlight that has reached it, sending it back up through the device rather than letting it heat up the materials and thus reducing their thermal load. And the top layer of aerogel, being a good insulator, is also highly solar-reflecting, limiting the amount of solar heating of the device, even under strong direct sunlight.
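
    One simple way to see why combining the three layers matters is the device’s steady-state cooling budget, written here as a generic energy balance rather than a formula from the paper:

    ```latex
    P_{\mathrm{cool}} \;=\; P_{\mathrm{rad}} + P_{\mathrm{evap}} \;-\; P_{\mathrm{solar}} - P_{\mathrm{atm}} - P_{\mathrm{parasitic}}
    ```

    Here P_rad is the thermal radiation emitted out through the infrared sky window, P_evap is the latent heat carried away by evaporating water, P_solar is the absorbed sunlight (kept small by the reflective layers), P_atm is the downwelling atmospheric radiation that is absorbed, and P_parasitic is the heat conducted and convected in from the warmer surroundings (suppressed by the insulating aerogel). Stacking the layers raises the favorable terms while holding the penalties down, which is what allows net cooling even in humid conditions where any single mechanism falls short.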

    “The novelty here is really just bringing together the radiative cooling feature, the evaporative cooling feature, and also the thermal insulation feature all together in one architecture,” Lu explains. The system was tested using a small version, just 4 inches across, on the rooftop of a building at MIT, proving its effectiveness even during suboptimal weather conditions, Lu says, and achieving about 9.3 degrees Celsius (roughly 17 degrees Fahrenheit) of cooling.

    “The challenge previously was that evaporative materials often do not deal with solar absorption well,” Lu says. “With these other materials, usually when they’re under the sun, they get heated, so they are unable to get to high cooling power at the ambient temperature.”

    The aerogel material’s properties are a key to the system’s overall efficiency, but that material at present is expensive to produce, as it requires special equipment for critical point drying (CPD) to remove solvents slowly from the delicate porous structure without damaging it. The key parameter that needs to be controlled to provide the desired properties is the size of the pores in the aerogel, which is made by mixing the polyethylene material with solvents, allowing it to set like a bowl of Jell-O, and then getting the solvents out of it. The research team is currently exploring ways of either making this drying process less expensive, such as by using freeze-drying, or finding alternative materials that can provide the same insulating function at lower cost, such as membranes separated by an air gap.

    While the other materials used in the system are readily available and relatively inexpensive, Lu says, “the aerogel is the only material that’s a product from the lab that requires further development in terms of mass production.” And it’s impossible to predict how long that development might take before this system can be made practical for widespread use, he says.

    The research team included Lenan Zhang of MIT’s Department of Mechanical Engineering and Jatin Patil of the Department of Materials Science and Engineering.