More stories

  • 3 Questions: Blue hydrogen and the world’s energy systems

    In the past several years, hydrogen energy has become an increasingly central part of the clean energy transition. Hydrogen can provide clean, on-demand energy that could complement variable renewable energy sources such as wind and solar power. Even so, pathways for deploying hydrogen at scale have yet to be fully explored, and the optimal form of hydrogen production remains in question.

    MIT Energy Initiative Research Scientist Emre Gençer and researchers from a wide range of global academic and research institutions recently published “On the climate impacts of blue hydrogen production,” a comprehensive life-cycle assessment analysis of blue hydrogen, a term referring to natural gas-based hydrogen production with carbon capture and storage. Here, Gençer describes blue hydrogen and the role that hydrogen will play more broadly in decarbonizing the world’s energy systems.

    Q: What are the differences between gray, green, and blue hydrogen?

    A: Though hydrogen does not generate any emissions directly when it is used, hydrogen production can have a huge environmental impact. Colors of hydrogen are increasingly used to distinguish different production methods and as a proxy to represent the associated environmental impact. Today, close to 95 percent of hydrogen production comes from fossil resources. As a result, the carbon dioxide (CO2) emissions from hydrogen production are quite high. Gray, black, and brown hydrogen refer to fossil-based production. Gray is the most common form of production and comes from natural gas, or methane, via steam methane reforming without capturing the CO2.

    There are two ways to move toward cleaner hydrogen production. One is applying carbon capture and storage to the fossil fuel-based hydrogen production processes. Natural gas-based hydrogen production with carbon capture and storage is referred to as blue hydrogen. If substantial amounts of CO2 from natural gas reforming are captured and permanently stored, such hydrogen could be a low-carbon energy carrier. The second way to produce cleaner hydrogen is by using electricity to produce hydrogen via electrolysis. In this case, the source of the electricity determines the environmental impact of the hydrogen, with the lowest impact being achieved when electricity is generated from renewable sources, such as wind and solar. This is known as green hydrogen.

    Q: What insights have you gleaned with a life cycle assessment (LCA) of blue hydrogen and other low-carbon energy systems?

    A: Mitigating climate change requires significant decarbonization of the global economy. Accurate estimation of cumulative greenhouse gas (GHG) emissions and their reduction pathways is critical, irrespective of the source of emissions. An LCA approach quantifies the environmental impact of a commercial product, process, or service across all of its life-cycle stages (cradle to grave). LCA-based comparison of alternative energy pathways, fuel options, and so on provides an apples-to-apples comparison of low-carbon energy choices. In the context of low-carbon hydrogen, it is essential to understand the GHG impact of supply chain options. Depending on the production method, the contribution of each life-cycle stage to total emissions can vary. For example, with natural gas–based hydrogen production, emissions associated with the production and transport of natural gas can be a significant contributor, depending on leakage and flaring rates. If these rates are not precisely accounted for, the environmental impact of blue hydrogen can be underestimated. The same rationale holds for electricity-based hydrogen production: if the electricity is not supplied from low-carbon sources such as wind, solar, or nuclear, the carbon intensity of hydrogen can be significantly underestimated. In the case of nuclear, there are also other environmental impact considerations.

    An LCA approach — if performed with consistent system boundaries — can provide an accurate environmental impact comparison. It should also be noted that these estimations can only be as good as the assumptions and correlations used unless they are supported by measurements. 

    Q: What conditions are needed to make blue hydrogen production most effective, and how can it complement other decarbonization pathways?

    A: Hydrogen is considered one of the key vectors for the decarbonization of hard-to-abate sectors such as heavy-duty transportation. Currently, more than 95 percent of global hydrogen production is fossil-fuel based. In the next decade, massive amounts of hydrogen must be produced to meet this anticipated demand. It is very hard, if not impossible, to meet this demand without leveraging existing production assets. The immediate and relatively cost-effective option is to retrofit existing plants with carbon capture and storage (blue hydrogen).

    The environmental impact of blue hydrogen can vary over a wide range, but it depends on only a few key parameters: the methane emission rate of the natural gas supply chain, the CO2 removal rate at the hydrogen production plant, and the global warming metric applied. State-of-the-art reforming with high CO2 capture rates, combined with a natural gas supply that has low methane emissions, substantially reduces GHG emissions compared to conventional natural gas reforming. Under these conditions, blue hydrogen is compatible with low-carbon economies and exhibits climate change impacts at the upper end of the range of those caused by hydrogen production from renewable-based electricity. However, neither current blue nor green hydrogen production pathways render fully “net-zero” hydrogen without additional CO2 removal.
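
    The study itself rests on a detailed life-cycle model, but the way those three parameters interact can be pictured with a rough back-of-envelope calculation. The sketch below is only a hedged illustration: every numerical value (natural gas use per kilogram of hydrogen, baseline plant CO2, leakage and capture rates, GWP factors) is a placeholder assumption chosen for plausibility, not a figure taken from the paper.

    ```python
    # Back-of-envelope life-cycle GHG estimate for blue hydrogen.
    # All numbers below are illustrative assumptions, NOT values from the study.

    def blue_h2_co2e_per_kg(capture_rate, ch4_leak_rate, gwp_ch4):
        """Rough kg CO2-equivalent per kg H2 for natural gas reforming with CCS."""
        plant_co2 = 9.0   # kg CO2 at the reformer per kg H2 before capture (assumed)
        gas_use = 3.5     # kg natural gas consumed per kg H2 (assumed)
        direct = plant_co2 * (1.0 - capture_rate)      # CO2 that escapes capture
        upstream_ch4 = gas_use * ch4_leak_rate         # methane lost in the supply chain
        return direct + upstream_ch4 * gwp_ch4

    # Compare a "good" and a "poor" configuration under two warming metrics.
    for label, capture, leak in [("high capture, low leakage", 0.93, 0.002),
                                 ("low capture, high leakage", 0.55, 0.03)]:
        for metric, gwp in [("GWP100 ~30", 30.0), ("GWP20 ~83", 83.0)]:
            co2e = blue_h2_co2e_per_kg(capture, leak, gwp)
            print(f"{label:28s} {metric:11s} {co2e:5.1f} kg CO2e / kg H2")
    ```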

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Simulating neutron behavior in nuclear reactors

    Amelia Trainer applied to MIT because she lost a bet.

    As part of what the fourth-year nuclear science and engineering (NSE) doctoral student labels her “teenage rebellious phase,” Trainer was quite convinced she would just be wasting the application fee were she to submit an application. She wasn’t even “super sure” she wanted to go to college. But a high-school friend was convinced Trainer would get into a “top school” if she only applied. A bet followed: If Trainer lost, she would have to apply to MIT. Trainer lost — and is glad she did.

    Growing up in Daytona Beach, Florida, good grades were Trainer’s thing. Seeing friends participate in interschool math competitions, Trainer decided she would tag along and soon found she loved them. She remembers being adept at reading the room: If teams were especially struggling over a problem, Trainer figured the answer had to be something easy, like zero or one. “The hardest problems would usually have the most goofball answers,” she laughs.

    Simulating neutron behavior

    As a doctoral student, hard problems in math, specifically computational reactor physics, continue to be Trainer’s forte.

    Her research, under the guidance of Professor Benoit Forget in MIT NSE’s Computational Reactor Physics Group (CRPG), focuses on modeling complicated neutron behavior in reactors. Simulation helps forecast the behavior of reactors before millions of dollars are sunk into developing a potentially uneconomical unit. Using simulations, Trainer can see “where the neutrons are going, how much heat is being produced, and how much power the reactor can generate.” Her research helps form the foundation for the next generation of nuclear power plants.

    To simulate neutron behavior inside of a nuclear reactor, you first need to know how neutrons will interact with the various materials inside the system. These neutrons can have wildly different energies, thereby making them susceptible to different physical phenomena. For the entirety of her graduate studies, Trainer has been primarily interested in the physics regarding slow-moving neutrons and their scattering behavior.

    When a slow neutron scatters off of a material, it can induce or cancel out molecular vibrations between the material’s atoms. The effect that material vibrations can have on neutron energies, and thereby on reactor behavior, has been heavily approximated over the years. Trainer is primarily interested in chipping away at these approximations by creating scattering data for materials that have historically been misrepresented and by exploring new techniques for preparing slow-neutron scattering data.

    Trainer remembers waiting for a simulation to complete in the early days of the Covid-19 pandemic when she discovered a way to predict neutron behavior with limited input data. Traditionally, “people have to store large tables of what neutrons will do under specific circumstances,” she says. “I’m really happy about it because it’s this really cool method of sampling what your neutron does from very little information.”

    Amelia Trainer — Modeling complicated neutron behavior in nuclear reactors

    As part of her research, Trainer often works closely with two software packages: OpenMC and NJOY. OpenMC is a Monte Carlo neutron transport simulation code that was developed in the CRPG and is used to simulate neutron behavior in reactor systems. NJOY is a nuclear data processing tool, and is used to create, augment, and prepare material data that is fed into tools like OpenMC. By editing both these codes to her specifications, Trainer is able to observe the effect that “upstream” material data has on the “downstream” reactor calculations. Through this, she hopes to identify additional problems: approximations that could lead to a noticeable misrepresentation of the physics.
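
    For readers unfamiliar with these tools, the fragment below sketches roughly how NJOY-processed material data, including a thermal scattering (S(α,β)) library that captures the slow-neutron physics described above, is fed into an OpenMC simulation. It is a minimal, generic pin-cell example based on OpenMC’s public Python API, not code from Trainer’s research; the materials, dimensions, and library name are illustrative assumptions, and it presumes a recent OpenMC installation with cross-section data configured.

    ```python
    import openmc

    # Fuel: low-enriched UO2 (illustrative composition and density).
    fuel = openmc.Material(name="UO2")
    fuel.add_element("U", 1.0, enrichment=3.0)
    fuel.add_element("O", 2.0)
    fuel.set_density("g/cm3", 10.4)

    # Moderator: light water, with a thermal scattering table for hydrogen.
    # This S(alpha, beta) library, prepared upstream by tools such as NJOY,
    # carries the slow-neutron / molecular-vibration physics.
    water = openmc.Material(name="water")
    water.add_element("H", 2.0)
    water.add_element("O", 1.0)
    water.set_density("g/cm3", 1.0)
    water.add_s_alpha_beta("c_H_in_H2O")

    materials = openmc.Materials([fuel, water])

    # Simple 2-D pin cell: fuel cylinder inside a reflected square of water.
    pin = openmc.ZCylinder(r=0.4)
    xmin = openmc.XPlane(x0=-0.65, boundary_type="reflective")
    xmax = openmc.XPlane(x0=+0.65, boundary_type="reflective")
    ymin = openmc.YPlane(y0=-0.65, boundary_type="reflective")
    ymax = openmc.YPlane(y0=+0.65, boundary_type="reflective")

    fuel_cell = openmc.Cell(fill=fuel, region=-pin)
    water_cell = openmc.Cell(fill=water, region=+pin & +xmin & -xmax & +ymin & -ymax)
    geometry = openmc.Geometry(openmc.Universe(cells=[fuel_cell, water_cell]))

    settings = openmc.Settings()
    settings.batches = 60
    settings.inactive = 10
    settings.particles = 10_000

    materials.export_to_xml()
    geometry.export_to_xml()
    settings.export_to_xml()
    openmc.run()  # Monte Carlo eigenvalue run; k-effective appears in the output
    ```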

    A love of geometry and poetry

    Trainer discovered the coolness of science as a child. Her mother, who cares for indoor plants and runs multiple greenhouses, and her father, a blacksmith and farrier, who explored materials science through his craft, were self-taught inspirations.

    Trainer’s father urged his daughter to learn and pursue any topics that she found exciting and encouraged her to read poems from “Calvin and Hobbes” out loud when she struggled with a speech impediment in early childhood. Reading the same passages every day helped her memorize them. “The natural manifestation of that extended into [a love of] poetry,” Trainer says.

    A love of poetry, combined with Trainer’s propensity for fun, led her to compose an ode to pi as part of an MIT-sponsored event for alumni. “I was really only in it for the cupcake,” she laughs. (Participants received an indulgent treat).


    MIT Matters: A Love Poem to Pi

    Computations and nuclear science

    After being accepted at MIT, Trainer knew she wanted to study in a field that would meet her skills where they were — “my math skills were pretty underdeveloped in the grand scheme of things,” she says. An open-house weekend at MIT, where she met with faculty from the NSE department, and the opportunity to contribute to a discipline working toward clean energy, cemented Trainer’s decision to join NSE.

    As a high schooler, Trainer won a scholarship to Embry-Riddle Aeronautical University to learn computer coding and knew computational physics might be more aligned with her interests. After she joined MIT as an undergraduate student in 2014, she realized that the CRPG, with its focus on coding and modeling, might be a good fit. Fortunately, a graduate student from Forget’s team welcomed Trainer’s enthusiasm for research even as an undergraduate first-year. She has stayed with the lab ever since. 

    Research internships at Los Alamos National Laboratory, the creators of NJOY, have furthered Trainer’s enthusiasm for modeling and computational physics. She met a Los Alamos scientist after he presented a talk at MIT, and the connection snowballed into a collaboration in which she could work on parts of the NJOY code. “It became a really cool collaboration which led me into a deep dive into physics and data preparation techniques, which was just so fulfilling,” Trainer says. As for what’s next, Trainer was awarded the Rickover Fellowship in nuclear engineering by the Department of Energy’s Naval Reactors Division and will join the program in Pittsburgh after she graduates.

    For many years, Trainer’s cats, Jacques and Monster, have been constant companions. “Neutrons, computers, and cats, that’s my personality,” she laughs. Work continues to fuel her passion. To borrow a favorite phrase from Spaceman Spiff, her favorite “Calvin” avatar, Trainer’s approach to research has invariably been: “Another day, another mind-boggling adventure.”

  • New process could enable more efficient plastics recycling

    The accumulation of plastic waste in the oceans, soil, and even in our bodies is one of the major pollution issues of modern times, with over 5 billion tons disposed of so far. Despite major efforts to recycle plastic products, actually making use of that motley mix of materials has remained a challenging issue.

    A key problem is that plastics come in so many different varieties, and chemical processes for breaking them down into a form that can be reused in some way tend to be very specific to each type of plastic. Sorting the hodgepodge of waste material, from soda bottles to detergent jugs to plastic toys, is impractical at large scale. Today, much of the plastic material gathered through recycling programs ends up in landfills anyway. Surely there’s a better way.

    According to new research from MIT and elsewhere, it appears there may indeed be a much better way. A chemical process using a catalyst based on cobalt has been found to be very effective at breaking down a variety of plastics, such as polyethylene (PE) and polypropylene (PP), the two most widely produced forms of plastic, into a single product, propane. Propane can then be used as a fuel for stoves, heaters, and vehicles, or as a feedstock for the production of a wide variety of products — including new plastics, thus potentially providing at least a partial closed-loop recycling system.

    The finding is described today in the open-access journal JACS Au, in a paper by MIT professor of chemical engineering Yuriy Román-Leshkov, postdoc Guido Zichitella, and seven others at MIT, the SLAC National Accelerator Laboratory, and the National Renewable Energy Laboratory.

    Recycling plastics has been a thorny problem, Román-Leshkov explains, because the long-chain molecules in plastics are held together by carbon bonds, which are “very stable and difficult to break apart.” Existing techniques for breaking these bonds tend to produce a random mix of different molecules, which would then require complex refining methods to separate out into usable specific compounds. “The problem is,” he says, “there’s no way to control where in the carbon chain you break the molecule.”

    But to the surprise of the researchers, a catalyst made of a microporous material called a zeolite that contains cobalt nanoparticles can selectively break down various plastic polymer molecules and turn more than 80 percent of them into propane.

    Because zeolites are riddled with tiny pores less than a nanometer wide, roughly matching the width of the polymer chains, a logical assumption had been that there would be little interaction at all between the zeolite and the polymers. Surprisingly, however, the opposite turned out to be the case: Not only do the polymer chains enter the pores, but the cobalt and the acid sites in the zeolite work together to break the chain at the same point each time. That cleavage site turned out to correspond to chopping off exactly one propane molecule without generating unwanted methane, leaving the rest of the longer hydrocarbons ready to undergo the process, again and again.
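
    As a hedged, toy-level illustration of that “again and again” cleavage, the sketch below walks a single idealized polyethylene chain through repeated three-carbon cuts and tallies the approximate hydrogen demand (roughly one H2 per propane formed, since turning a (CH2)3 segment into C3H8 requires one H2). It deliberately ignores side reactions and end effects, which is why an idealized run approaches 100 percent carbon-to-propane conversion while the experiment reports selectivity above 80 percent.

    ```python
    # Toy model of repeated terminal C3 cleavage on an idealized polyethylene chain.
    # Simplifications are deliberate: no side products, no branching, no methane.

    def chop_into_propane(chain_carbons: int):
        """Clip three-carbon units (propane) off one chain until it is too short."""
        remaining = chain_carbons
        propane = 0
        while remaining >= 6:     # keep cutting while a C3 cut leaves a viable fragment
            remaining -= 3
            propane += 1
        leftover = remaining      # a short (3-5 carbon) end fragment the model ignores
        h2_needed = propane       # ~1 H2 consumed per C-C cleavage / propane formed
        return propane, leftover, h2_needed

    chain = 10_000                # carbons in one illustrative HDPE-like chain (assumed)
    propane, leftover, h2 = chop_into_propane(chain)
    print(f"propane molecules produced : {propane}")
    print(f"carbon ending up as propane: {3 * propane / chain:.2%}")
    print(f"leftover fragment carbons  : {leftover}")
    print(f"approx. H2 molecules used  : {h2}")
    ```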

    “Once you have this one compound, propane, you lessen the burden on downstream separations,” Román-Leshkov says. “That’s the essence of why we think this is quite important. We’re not only breaking the bonds, but we’re generating mainly a single product” that can be used for many different products and processes.

    The materials needed for the process, zeolites and cobalt, “are both quite cheap” and widely available, he says, although today most cobalt comes from troubled areas in the Democratic Republic of Congo. Some new production is being developed in Canada, Cuba, and other places. The other material needed for the process is hydrogen, which today is mostly produced from fossil fuels but can easily be made other ways, including electrolysis of water using carbon-free electricity such as solar or wind power.

    The researchers tested their system on a real example of mixed recycled plastic, producing promising results. But more testing will be needed on a greater variety of mixed waste streams to determine how much fouling takes place from various contaminants in the material — such as inks, glues, and labels attached to the plastic containers, or other nonplastic materials that get mixed in with the waste — and how that affects the long-term stability of the process.

    Together with collaborators at NREL, the MIT team is also continuing to study the economics of the system, and analyzing how it can fit into today’s systems for handling plastic and mixed waste streams. “We don’t have all the answers yet,” Román-Leshkov says, but preliminary analysis looks promising.

    The research team included Amani Ebrahim and Simone Bare at the SLAC National Accelerator Laboratory; Jie Zhu, Anna Brenner, Griffin Drake, and Julie Rorrer at MIT; and Greg Beckham at the National Renewable Energy Laboratory. The work was supported by the U.S. Department of Energy (DoE), the Swiss National Science Foundation, and the DoE’s Office of Energy Efficiency and Renewable Energy, Advanced Manufacturing Office (AMO), and Bioenergy Technologies Office (BETO), as part of the Bio-Optimized Technologies to keep Thermoplastics out of Landfills and the Environment (BOTTLE) Consortium.

  • Professor Emeritus Richard “Dick” Eckaus, who specialized in development economics, dies at 96

    Richard “Dick” Eckaus, Ford Foundation International Professor of Economics, emeritus, in the Department of Economics, died on Sept. 11 in Boston. He was 96 years old.

    Eckaus was born in Kansas City, Missouri, on April 30, 1926, the youngest of three children of parents who had emigrated from Lithuania. His father, Julius Eckaus, was a tailor, and his mother, Bessie (Finkelstein) Eckaus, helped run the business. The family struggled to make ends meet, but academic success offered Eckaus a way forward.

    He graduated from Westport High School, joined the United States Navy, and was awarded a college scholarship via the V-12 Navy College Training Program during World War II to study electrical engineering at Iowa State University. After graduating in 1944, Eckaus served on a base in New York State until he was discharged in 1946 as lieutenant junior grade.

    He attended Washington University in St. Louis, Missouri, on the GI Bill, graduating in 1948 with a master’s degree in economics, before relocating to Boston and serving as instructor of economics at Babson Institute, and then assistant and associate professor of economics at Brandeis University from 1951 to 1962. He concurrently earned a PhD in economics from MIT in 1954.

    The following year, the American Economic Review published “The Factor Proportions Problem in Economic Development,” a paper written by Eckaus that remained part of the macroeconomics canon for decades. He returned to MIT in 1962 and went on to teach development economics to generations of MIT students, serving as head of the department from 1986 to 1990 and continuing to work there for the remainder of his career.

    The development economist Paul Rosenstein-Rodan (1902-85), Eckaus’ mentor at MIT, took him to live and work first in Italy in 1954 and then in India in 1961. These stints helping governments abroad solidified Eckaus’ commitment to not only excelling in the field, but also creating opportunities for colleagues and students to contribute as well — occasionally in conjunction with the World Bank.

    Longtime colleague Abhijit Banerjee, a Nobel laureate, Ford Foundation International Professor of Economics, and director of the Abdul Latif Jameel Poverty Action Lab at MIT, recalls reading a reprint of Eckaus’ 1955 paper as an undergraduate in India. When he subsequently arrived at MIT as a doctoral candidate, he remembers “trying to tread lightly and not to take up too much space” around the senior economist. “In fact, he made me feel so welcome,” Banerjee says. “He was both an outstanding scholar and someone who had the modesty and generosity to make younger scholars feel valued and heard.”

    The field of development economics provided Eckaus with a broad, powerful platform to work with governments in developing countries — including India, Egypt, Bhutan, Mexico, and Portugal — to set up economic systems. His development planning models helped governments to forecast where their economies were headed and how public policies could be implemented to shift or accelerate the direction.

    The Government of Portugal awarded Eckaus the Grand Cross of the Order of Prince Henry the Navigator after he brought teams from MIT to assist the country in its peaceful transition to democracy following the 1974 Carnation Revolution. The effort, initiated at the request of the Portuguese central bank, involved graduate students who went on to become some of the most prominent economists of their generation in America, including Paul Krugman, Andrew Abel, Jeremy I. Bulow, and Kenneth Rogoff.

    His colleague for five decades, Paul Joskow, the Elizabeth and James Killian Professor of Economics at MIT, says that’s no surprise. “He was a real rock of the economics department. He deeply cared about the graduate students and younger faculty. He was a very supportive person.”

    Eckaus was also deeply interested in the economic aspects of energy and the environment, and in 1991 he was instrumental in the formation of the MIT Joint Program on the Science and Policy of Global Change, a program that integrates the natural and social sciences in the analysis of the global climate threat. As Joint Program co-founder Henry Jacoby observes, “Dick provided crucial ideas as to how that kind of interdisciplinary work might be done at MIT. He was already 65 at the time, and continued for three decades to be active in guiding the research and analysis.”

    Although Eckaus retired officially in 1996, he continued to attend weekly faculty lunches, conduct research, mentor colleagues, and write papers related to climate change and the energy crisis. He leaves behind a trove of more than 100 published papers and eight authored and co-authored books.

    “He was continuously retooling himself and creating new interests. I was impressed by his agility of mind and his willingness to shift to new areas,” says his oldest living friend and peer, Jagdish Bhagwati, Columbia University professor of economics, law, and international relations, emeritus, and director of the Raj Center on Indian Economic Policies. “In their early career, economists usually write short theoretical articles that make large points, and Dick did that with two seminal articles in the leading professional journals of the time, the Quarterly Journal of Economics and the American Economic Review. Then, he shifted his focus to building large computable models. He also diversified by working in an advisory capacity in countries as diverse as Portugal and India. He was a ‘complete’ economist who straddled all styles of economics with distinction.” 

    Eckaus is survived by his beloved wife of 32 years Patricia Leahy Meaney of Brookline, Massachusetts. The two traveled the world, hiked the Alps, and collected pre-Columbian and contemporary art. He is lovingly remembered by his daughter Susan Miller; his step-son James Meaney (Bruna); step-daughter Caitlin Meaney Burrows (Lee); and four grandchildren, Chloe Burrows, Finley Burrows, Brandon Meaney, and Maria Sophia Meaney.

    In lieu of flowers, please consider a donation in Eckaus’ name to MIT Economics (77 Massachusetts Ave., Building E52-300, Cambridge, MA 02139). A memorial in his honor will be held later this year.

  • Processing waste biomass to reduce airborne emissions

    To prepare fields for planting, farmers the world over often burn corn stalks, rice husks, hay, straw, and other waste left behind from the previous harvest. In many places, the practice creates huge seasonal clouds of smog, contributing to air pollution that kills 7 million people globally a year, according to the World Health Organization.

    Annually, $120 billion worth of crop and forest residues are burned in the open worldwide — a major waste of resources in an energy-starved world, says Kevin Kung SM ’13, PhD ’17. Kung is working to transform this waste biomass into marketable products — and capitalize on a billion-dollar global market — through his MIT spinoff company, Takachar.

    Founded in 2015, Takachar develops small-scale, low-cost, portable equipment to convert waste biomass into solid fuel using a variety of thermochemical treatments, including one known as oxygen-lean torrefaction. The technology emerged from Kung’s PhD project in the lab of Ahmed Ghoniem, the Ronald C. Crane (1972) Professor of Mechanical Engineering at MIT.

    Biomass fuels, including wood, peat, and animal dung, are a major source of carbon emissions — but billions of people rely on such fuels for cooking, heating, and other household needs. “Currently, burning biomass generates 10 percent of the primary energy used worldwide, and the process is used largely in rural, energy-poor communities. We’re not going to change that overnight. There are places with no other sources of energy,” Ghoniem says.

    What Takachar’s technology provides is a way to use biomass more cleanly and efficiently by concentrating the fuel and eliminating contaminants such as moisture and dirt, thus creating a “clean-burning” fuel — one that generates less smoke. “In rural communities where biomass is used extensively as a primary energy source, torrefaction will address air pollution head-on,” Ghoniem says.

    Thermochemical treatment densifies biomass at elevated temperatures, converting plant materials that are typically loose, wet, and bulky into compact charcoal. Centralized processing plants exist, but collection and transportation present major barriers to utilization, Kung says. Takachar’s solution moves processing into the field: To date, Takachar has worked with about 5,500 farmers to process 9,000 metric tons of crops.

    Takachar estimates its technology has the potential to reduce carbon dioxide equivalent emissions by gigatons per year at scale. (“Carbon dioxide equivalent” is a measure used to gauge global warming potential.) In recognition, in 2021 Takachar won the first-ever Earthshot Prize in the clean air category, a £1 million prize funded by Prince William and Princess Kate’s Royal Foundation.

    Roots in Kenya

    As Kung tells the story, Takachar emerged from a class project that took him to Kenya — which explains the company’s name, a combination of takataka, which means “trash” in Swahili, and char, for the charcoal end product.

    It was 2011, and Kung was at MIT as a biological engineering grad student focused on cancer research. But “MIT gives students big latitude for exploration, and I took courses outside my department,” he says. In spring 2011, he signed up for a class known as 15.966 (Global Health Delivery Lab) in the MIT Sloan School of Management. The class brought Kung to Kenya to work with a nongovernmental organization in Nairobi’s Kibera, the largest urban slum in Africa.

    “We interviewed slum households for their views on health, and that’s when I noticed the charcoal problem,” Kung says. The problem, as Kung describes it, was that charcoal was everywhere in Kibera — piled up outside, traded by the road, and used as the primary fuel, even indoors. Its creation contributed to deforestation, and its smoke presented a serious health hazard.

    Eager to address this challenge, Kung secured fellowship support from the MIT International Development Initiative and the Priscilla King Gray Public Service Center to conduct more research in Kenya. In 2012, he formed Takachar as a team and received seed money from the MIT IDEAS Global Challenge, MIT Legatum Center for Development and Entrepreneurship, and D-Lab to produce charcoal from household organic waste. (This work also led to a fertilizer company, Safi Organics, that Kung founded in 2016 with the help of MIT IDEAS. But that is another story.)

    Meanwhile, Kung had another top priority: finding a topic for his PhD dissertation. Back at MIT, he met Alexander Slocum, the Walter M. May and A. Hazel May Professor of Mechanical Engineering, who on a long walk-and-talk along the Charles River suggested he turn his Kenya work into a thesis. Slocum connected him with Robert Stoner, deputy director for science and technology at the MIT Energy Initiative (MITEI) and founding director of MITEI’s Tata Center for Technology and Design. Stoner in turn introduced Kung to Ghoniem, who became his PhD advisor, while Slocum and Stoner joined his doctoral committee.

    Roots in MIT lab

    Ghoniem’s telling of the Takachar story begins, not surprisingly, in the lab. Back in 2010, he had a master’s student interested in renewable energy, and he suggested the student investigate biomass. That student, Richard Bates ’10, SM ’12, PhD ’16, began exploring the science of converting biomass to more clean-burning charcoal through torrefaction.

    Most torrefaction (also known as low-temperature pyrolysis) systems use external heating sources, but the lab’s goal, Ghoniem explains, was to develop an efficient, self-sustained reactor that would generate fewer emissions. “We needed to understand the chemistry and physics of the process, and develop fundamental scaling models, before going to the lab to build the device,” he says.

    By the time Kung joined the lab in 2013, Ghoniem was working with the Tata Center to identify technology suitable for developing countries and largely based on renewable energy. Kung was able to secure a Tata Fellowship and — building on Bates’ research — develop the small-scale, practical device for biomass thermochemical conversion in the field that launched Takachar.

    This device, which was patented by MIT with inventors Kung, Ghoniem, Stoner, MIT research scientist Santosh Shanbhogue, and Slocum, is self-contained and scalable. It burns a little of the biomass to generate heat; this heat bakes the rest of the biomass, releasing gases; the system then introduces air to enable these gases to combust, which burns off the volatiles and generates more heat, keeping the thermochemical reaction going.

    “The trick is how to introduce the right amount of air at the right location to sustain the process,” Ghoniem explains. “If you put in more air, that will burn the biomass. If you put in less, there won’t be enough heat to produce the charcoal. That will stop the reaction.”

    About 10 percent of the biomass is used as fuel to support the reaction, Kung says, adding that “90 percent is densified into a form that’s easier to handle and utilize.” He notes that the research received financial support from the Abdul Latif Jameel Water and Food Systems Lab and the Deshpande Center for Technological Innovation, both at MIT. Sonal Thengane, another postdoc in Ghoniem’s lab, participated in the effort to scale up the technology at the MIT Bates Lab (no relation to Richard Bates).
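
    A rough energy balance makes the “about 10 percent” figure plausible. The sketch below is only a hedged sanity check: the heating value, moisture content, temperature rise, and heat-transfer efficiency are placeholder assumptions, not Takachar or MIT lab values.

    ```python
    # Back-of-envelope check: what fraction of the biomass must burn to drive torrefaction?
    # All values below are illustrative assumptions, not Takachar/MIT figures.

    lhv_biomass = 15.0e6   # J/kg, lower heating value of crop residue (assumed)
    cp_biomass = 1500.0    # J/(kg*K), specific heat of biomass (assumed)
    delta_t = 250.0        # K, heating from ambient to roughly torrefaction temperature (assumed)
    moisture = 0.10        # kg water per kg feedstock (assumed)
    h_evap = 2.26e6        # J/kg, latent heat of water evaporation
    heat_efficiency = 0.5  # fraction of combustion heat usefully transferred (assumed)

    # Heat needed to warm and dry 1 kg of feedstock.
    heat_needed = cp_biomass * delta_t + moisture * h_evap

    # Fraction of the feedstock that must be burned to supply that heat.
    burn_fraction = heat_needed / (heat_efficiency * lhv_biomass)
    print(f"heat needed per kg feedstock: {heat_needed / 1e6:.2f} MJ")
    print(f"estimated burn fraction: {burn_fraction:.0%}")  # comes out near the ~10 percent figure
    ```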

    The charcoal produced is more valuable per ton and easier to transport and sell than biomass, reducing transportation costs by two-thirds and giving farmers an additional income opportunity — and an incentive not to burn agricultural waste, Kung says. “There’s more income for farmers, and you get better air quality.”

    Roots in India

    When Kung became a Tata Fellow, he joined a program founded to take on the biggest challenges of the developing world, with a focus on India. According to Stoner, Tata Fellows, including Kung, typically visit India twice a year and spend six to eight weeks meeting stakeholders in industry, the government, and in communities to gain perspective on their areas of study.

    “A unique part of Tata is that you’re considering the ecosystem as a whole,” says Kung, who interviewed hundreds of smallholder farmers, met with truck drivers, and visited existing biomass processing plants during his Tata trips to India. (Along the way, he also connected with Indian engineer Vidyut Mohan, who became Takachar’s co-founder.)

    “It was very important for Kevin to be there walking about, experimenting, and interviewing farmers,” Stoner says. “He learned about the lives of farmers.”

    These experiences helped instill in Kung an appreciation for small farmers that still drives him today as Takachar rolls out its first pilot programs, tinkers with the technology, grows its team (now up to 10), and endeavors to build a revenue stream. So, while Takachar has gotten a lot of attention and accolades — from the IDEAS award to the Earthshot Prize — Kung says what motivates him is the prospect of improving people’s lives.

    The dream, he says, is to empower communities to help both the planet and themselves. “We’re excited about the environmental justice perspective,” he says. “Our work brings production and carbon removal or avoidance to rural communities — providing them with a way to convert waste, make money, and reduce air pollution.”

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Ocean scientists measure sediment plume stirred up by deep-sea-mining vehicle

    What will be the impact on the ocean if humans mine the deep sea? It’s a question that is gaining urgency as interest in marine minerals grows.

    The ocean’s deep seabed is scattered with ancient, potato-sized rocks called “polymetallic nodules” that contain nickel and cobalt — minerals that are in high demand for manufacturing batteries, such as those that power electric vehicles and store renewable energy, with demand driven in part by factors such as increasing urbanization. The deep ocean contains vast quantities of mineral-laden nodules, but the impact of mining the ocean floor is both unknown and highly contested.

    Now MIT ocean scientists have shed some light on the topic, with a new study on the cloud of sediment that a collector vehicle would stir up as it picks up nodules from the seafloor.

    The study, appearing today in Science Advances, reports the results of a 2021 research cruise to a region of the Pacific Ocean known as the Clarion Clipperton Zone (CCZ), where polymetallic nodules abound. There, researchers equipped a pre-prototype collector vehicle with instruments to monitor sediment plume disturbances as the vehicle maneuvered across the seafloor, 4,500 meters below the ocean’s surface. Through a sequence of carefully conceived maneuvers, the MIT scientists used the vehicle to monitor its own sediment cloud and measure its properties.

    Their measurements showed that the vehicle created a dense plume of sediment in its wake, which spread under its own weight, in a phenomenon known in fluid dynamics as a “turbidity current.” As it gradually dispersed, the plume remained relatively low, staying within 2 meters of the seafloor, as opposed to immediately lofting higher into the water column as had been postulated.

    “It’s quite a different picture of what these plumes look like, compared to some of the conjecture,” says study co-author Thomas Peacock, professor of mechanical engineering at MIT. “Modeling efforts of deep-sea mining plumes will have to account for these processes that we identified, in order to assess their extent.”

    The study’s co-authors include lead author Carlos Muñoz-Royo, Raphael Ouillon, and Souha El Mousadik of MIT; and Matthew Alford of the Scripps Institution of Oceanography.

    Deep-sea maneuvers

    To collect polymetallic nodules, some mining companies are proposing to deploy tractor-sized vehicles to the bottom of the ocean. The vehicles would vacuum up the nodules along with some sediment along their path. The nodules and sediment would then be separated inside of the vehicle, with the nodules sent up through a riser pipe to a surface vessel, while most of the sediment would be discharged immediately behind the vehicle.

    Peacock and his group have previously studied the dynamics of the sediment plume that associated surface operation vessels may pump back into the ocean. In their current study, they focused on the opposite end of the operation, to measure the sediment cloud created by the collectors themselves.

    In April 2021, the team joined an expedition led by Global Sea Mineral Resources NV (GSR), a Belgian marine engineering contractor that is exploring the CCZ for ways to extract metal-rich nodules. A European-based science team, Mining Impacts 2, also conducted separate studies in parallel. The cruise was the first in over 40 years to test a “pre-prototype” collector vehicle in the CCZ. The machine, called Patania II, stands about 3 meters high, spans 4 meters wide, and is about one-third the size of what a commercial-scale vehicle is expected to be.

    While the contractor tested the vehicle’s nodule-collecting performance, the MIT scientists monitored the sediment cloud created in the vehicle’s wake. They did so using two maneuvers that the vehicle was programmed to take: a “selfie,” and a “drive-by.”

    Both maneuvers began in the same way, with the vehicle setting out in a straight line, all its suction systems turned on. The researchers let the vehicle drive along for 100 meters, collecting any nodules in its path. Then, in the “selfie” maneuver, they directed the vehicle to turn off its suction systems and double back around to drive through the cloud of sediment it had just created. The vehicle’s installed sensors measured the concentration of sediment during this “selfie” maneuver, allowing the scientists to monitor the cloud within minutes of the vehicle stirring it up.


    A movie of the Patania II pre-prototype collector vehicle entering, driving through, and leaving the low-lying turbidity current plume as part of a selfie operation. For scale, the instrumentation post attached to the front of the vehicle reaches about 3m above the seabed. The movie is sped up by a factor of 20. Credit: Global Sea Mineral Resources

    For the “drive-by” maneuver, the researchers placed a sensor-laden mooring 50 to 100 meters from the vehicle’s planned tracks. As the vehicle drove along collecting nodules, it created a plume that eventually spread past the mooring after an hour or two. This “drive-by” maneuver enabled the team to monitor the sediment cloud over a longer timescale of several hours, capturing the plume evolution.

    Out of steam

    Over multiple vehicle runs, Peacock and his team were able to measure and track the evolution of the sediment plume created by the deep-sea-mining vehicle.

    “We saw that the vehicle would be driving in clear water, seeing the nodules on the seabed,” Peacock says. “And then suddenly there’s this very sharp sediment cloud coming through when the vehicle enters the plume.”

    From the selfie views, the team observed a behavior that was predicted by some of their previous modeling studies: The vehicle stirred up a heavy amount of sediment that was dense enough that, even after some mixing with the surrounding water, it generated a plume that behaved almost as a separate fluid, spreading under its own weight in what’s known as a turbidity current.

    “The turbidity current spreads under its own weight for some time, tens of minutes, but as it does so, it’s depositing sediment on the seabed and eventually running out of steam,” Peacock says. “After that, the ocean currents get stronger than the natural spreading, and the sediment transitions to being carried by the ocean currents.”
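
    The transition Peacock describes can be pictured with a textbook gravity-current scaling, sketched below with placeholder numbers. The front of a dense plume advances at roughly u_f = Fr * sqrt(g' * h), where g' is the reduced gravity set by the excess sediment density and h is the plume thickness; once u_f falls below the ambient bottom-current speed, the ocean currents take over. This is a generic fluid-dynamics estimate, not the model or the measured values from the study.

    ```python
    import math

    # Generic gravity-current estimate with placeholder values (not the study's data).
    rho_water = 1025.0      # kg/m^3, seawater
    rho_sediment = 2650.0   # kg/m^3, typical mineral grain density
    g = 9.81                # m/s^2
    froude = 1.0            # order-one front Froude number (assumed)
    ambient_current = 0.04  # m/s, assumed bottom-current speed
    plume_height = 2.0      # m, plume stays low, per the observations

    def front_speed(concentration_kg_m3: float) -> float:
        """Gravity-current front speed for a given suspended-sediment concentration."""
        excess_density = concentration_kg_m3 * (rho_sediment - rho_water) / rho_sediment
        g_reduced = g * excess_density / rho_water
        return froude * math.sqrt(g_reduced * plume_height)

    # As the plume dilutes and deposits sediment, its front slows down.
    for c in [5.0, 1.0, 0.1, 0.01]:  # kg of sediment per m^3 of plume (assumed)
        u = front_speed(c)
        regime = "self-spreading" if u > ambient_current else "carried by ocean currents"
        print(f"c = {c:5.2f} kg/m^3 -> front speed {u * 100:5.1f} cm/s  ({regime})")
    ```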

    By the time the sediment drifted past the mooring, the researchers estimate that 92 to 98 percent of the sediment either settled back down or remained within 2 meters of the seafloor as a low-lying cloud. There is, however, no guarantee that the sediment always stays there rather than drifting further up in the water column. Recent and future studies by the research team are looking into this question, with the goal of consolidating understanding for deep-sea mining sediment plumes.

    “Our study clarifies the reality of what the initial sediment disturbance looks like when you have a certain type of nodule mining operation,” Peacock says. “The big takeaway is that there are complex processes like turbidity currents that take place when you do this kind of collection. So, any effort to model a deep-sea-mining operation’s impact will have to capture these processes.”

    “Sediment plumes produced by deep-seabed mining are a major concern with regards to environmental impact, as they will spread over potentially large areas beyond the actual site of mining and affect deep-sea life,” says Henko de Stigter, a marine geologist at the Royal Netherlands Institute for Sea Research, who was not involved in the research. “The current paper provides essential insight in the initial development of these plumes.”

    This research was supported, in part, by the National Science Foundation, ARPA-E, the 11th Hour Project, the Benioff Ocean Initiative, and Global Sea Mineral Resources. The funders had no role in any aspects of the research analysis, the research team states.

  • 3 Questions: Janelle Knox-Hayes on producing renewable energy that communities want

    Wind power accounted for 8 percent of U.S. electricity consumption in 2020, and is growing rapidly in the country’s energy portfolio. But some projects, like the now-defunct Cape Wind proposal for offshore power in Massachusetts, have run aground due to local opposition. Are there ways to avoid this in the future?

    MIT professors Janelle Knox-Hayes and Donald Sadoway think so. In a perspective piece published today in the journal Joule, they and eight other professors call for a new approach to wind-power deployment, one that engages communities in a process of “co-design” and adapts solutions to local needs. That process, they say, could spur additional creativity in renewable energy engineering, while making communities more amenable to existing technologies. In addition to Knox-Hayes and Sadoway, the paper’s co-authors are Michael J. Aziz of Harvard University; Dennice F. Gayme of Johns Hopkins University; Kathryn Johnson of the Colorado School of Mines; Perry Li of the University of Minnesota; Eric Loth of the University of Virginia; Lucy Y. Pao of the University of Colorado; Jessica Smith of the Colorado School of Mines; and Sonya Smith of Howard University.

    Knox-Hayes is the Lister Brothers Associate Professor of Economic Geography and Planning in MIT’s Department of Urban Studies and Planning, and an expert on the social and political context of renewable energy adoption; Sadoway is the John F. Elliott Professor of Materials Chemistry in MIT’s Department of Materials Science and Engineering, and a leading global expert on developing new forms of energy storage. MIT News spoke with Knox-Hayes about the topic.

    Q: What is the core problem you are addressing in this article?

    A: It is problematic to act as if technology can only be engineered in a silo and then delivered to society. To solve problems like climate change, we need to see technology as a socio-technical system, which is integrated from its inception into society. From a design standpoint, that begins with conversations, values assessments, and understanding what communities need.  If we can do that, we will have a much easier time delivering the technology in the end.

    What we have seen in the Northeast, in trying to meet our climate objectives and energy efficiency targets, is that we need a lot of offshore wind, and a lot of projects have stalled because a community was saying “no.” And part of the reason communities refuse projects is that they feel they’ve never been properly consulted. What form does the technology take, and how would it operate within a community? That conversation can push the boundaries of engineering.

    Q: The new paper makes the case for a new practice of “co-design” in the field of renewable energy. You call this the “STEP” process, standing for all the socio-technical-political-economic issues that an engineering project might encounter. How would you describe the STEP idea? And to what extent would industry be open to new attempts to design an established technology?

    A: The idea is to bring together all these elements in an interdisciplinary process, and engage stakeholders. The process could start with a series of community forums where we bring everyone together, and do a needs assessment, which is a common practice in planning. We might see that offshore wind energy needs to be considered in tandem with the local fishing industry, or servicing the installations, or providing local workforce training. The STEP process allows us to take a step back, and start with planners, policymakers, and community members on the ground.

    It is also about changing the nature of research and practice and teaching, so that students are not just in classrooms, they are also learning to work with communities. I think formalizing that piece is important. We are starting now to really feel the impacts of climate change, so we have to confront the reality of breaking through political boundaries, even in the United States. That is the only way to make this successful, and that comes back to how can technology be co-designed.

    At MIT, innovation is the spirit of the endeavor, and that is why MIT has so many industry partners engaged in initiatives like MITEI [the MIT Energy Initiative] and the Climate Consortium. The value of the partnership is that MIT pushes the boundaries of what is possible. It is the idea that we can advance and we can do something incredible, we can innovate the future. What we are suggesting with this work is that innovation isn’t something that happens exclusively in a laboratory, but something that is very much built in partnership with communities and other stakeholders.

    Q: How much does this approach also apply to solar power, as the other leading type of renewable energy? It seems like communities also wrestle with where to locate solar arrays, or how to compensate homeowners, communities, and other solar hosts for the power they generate.

    A: I would not say solar has the same set of challenges, but rather that renewable technologies face similar challenges. With solar, there are also questions of access and siting. Another big challenge is to create financing models that provide value and opportunity at different scales. For example, is solar viable for tenants in multi-family units who want to engage with clean energy? This is a similar question for micro-wind opportunities for buildings. With offshore wind, a restriction is that if it is within sightlines, it might be problematic. But there are exciting technologies that have enabled deep wind, or the establishment of floating turbines up to 50 kilometers offshore. Storage solutions such as hydro-pneumatic energy storage, gravity energy storage or buoyancy storage can help maintain the transmission rate while reducing the number of transmission lines needed.

    In a lot of communities, the reality of renewables is that if you can generate your own energy, you can establish a level of security and resilience that feeds other benefits. 

    Nevertheless, as demonstrated in the Cape Wind case, technology [may be rejected] unless a community is involved from the beginning. Community involvement also creates other opportunities. Suppose, for example, that high school students are working as interns on renewable energy projects with engineers at great universities from the region. This provides a point of access for families and allows them to take pride in the systems they create. It gives a further sense of purpose to the technology system, and vests the community in the system’s success. It is the difference between, “It was delivered to me,” and “I built it.” For researchers, the article is a reminder that engineering and design are more successful if they are inclusive. Engineering and design processes are also meant to be accessible and fun.

  • Passive cooling system could benefit off-grid locations

    As the world gets warmer, the use of power-hungry air conditioning systems is projected to increase significantly, putting a strain on existing power grids and bypassing many locations with little or no reliable electric power. Now, an innovative system developed at MIT offers a way to use passive cooling to preserve food crops and supplement conventional air conditioners in buildings, with no need for power and only a small need for water.

    The system, which combines radiative cooling, evaporative cooling, and thermal insulation in a slim package that could resemble existing solar panels, can provide up to about 9.3 degrees Celsius (roughly 17 degrees Fahrenheit) of cooling below the ambient temperature, enough to permit safe food storage for about 40 percent longer under very humid conditions. It could triple the safe storage time under drier conditions.

    The findings are reported today in the journal Cell Reports Physical Science, in a paper by MIT postdoc Zhengmao Lu, Arny Leroy PhD ’21, professors Jeffrey Grossman and Evelyn Wang, and two others. While more research is needed in order to bring down the cost of one key component of the system, the researchers say that eventually such a system could play a significant role in meeting the cooling needs of many parts of the world where a lack of electricity or water limits the use of conventional cooling systems.

    The system cleverly combines previous standalone cooling designs that each provide limited amounts of cooling power, in order to produce significantly more cooling overall — enough to help reduce food losses from spoilage in parts of the world that are already suffering from limited food supplies. In recognition of that potential, the research team has been partly supported by MIT’s Abdul Latif Jameel Water and Food Systems Lab.

    “This technology combines some of the good features of previous technologies such as evaporative cooling and radiative cooling,” Lu says. By using this combination, he says, “we show that you can achieve significant food life extension, even in areas where you have high humidity,” which limits the capabilities of conventional evaporative or radiative cooling systems.

    In places that do have existing air conditioning systems in buildings, the new system could be used to significantly reduce the load on these systems by sending cool water to the hottest part of the system, the condenser. “By lowering the condenser temperature, you can effectively increase the air conditioner efficiency, so that way you can potentially save energy,” Lu says.

    Other groups have also been pursuing passive cooling technologies, he says, but “by combining those features in a synergistic way, we are now able to achieve high cooling performance, even in high-humidity areas where previous technology generally cannot perform well.”

    The system consists of three layers of material, which together provide cooling as water and heat pass through the device. In practice, the device could resemble a conventional solar panel, but instead of putting out electricity, it would directly provide cooling, for example by acting as the roof of a food storage container. Or, it could be used to send chilled water through pipes to cool parts of an existing air conditioning system and improve its efficiency. The only maintenance required is adding water for the evaporation, but the consumption is so low that this need only be done about once every four days in the hottest, driest areas, and only once a month in wetter areas.

    The top layer is an aerogel, a material consisting mostly of air enclosed in the cavities of a sponge-like structure made of polyethylene. The material is highly insulating but freely allows both water vapor and infrared radiation to pass through. The evaporation of water (rising up from the layer below) provides some of the cooling power, while the infrared radiation, taking advantage of the extreme transparency of Earth’s atmosphere at those wavelengths, radiates some of the heat straight up through the air and into space — unlike air conditioners, which spew hot air into the immediate surrounding environment.
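
    A rough flux budget shows why stacking the two mechanisms matters. The sketch below adds an idealized net radiative term, epsilon * sigma * (T_ambient^4 - T_sky^4), to an evaporative term, evaporation rate times latent heat, using placeholder values for emissivity, sky temperature, and evaporation rate; none of the numbers are from the paper, and real performance also depends on humidity and on transmission losses through the aerogel.

    ```python
    # Idealized cooling-flux budget for a combined radiative + evaporative panel.
    # All parameter values are placeholder assumptions, not measured results.

    sigma = 5.67e-8           # W/(m^2*K^4), Stefan-Boltzmann constant
    emissivity = 0.9          # effective emissivity in the atmospheric window (assumed)
    t_ambient = 300.0         # K, about 27 C
    t_sky = 285.0             # K, effective clear-sky temperature (assumed)

    evap_rate = 100.0 / 3600  # g/(m^2*s): 100 g of water per m^2 per hour (assumed)
    h_fg = 2450.0             # J/g, latent heat of vaporization near ambient temperature

    radiative = emissivity * sigma * (t_ambient**4 - t_sky**4)  # W/m^2, net sky radiation
    evaporative = evap_rate * h_fg                              # W/m^2, latent heat carried off

    print(f"radiative cooling  : {radiative:6.1f} W/m^2")
    print(f"evaporative cooling: {evaporative:6.1f} W/m^2")
    print(f"combined           : {radiative + evaporative:6.1f} W/m^2")

    # Water-use check: at the assumed 100 g/(m^2*h), a 1 m^2 panel evaporates
    # roughly 2.4 liters per day, so the hydrogel layer needs only occasional refilling.
    ```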

    Below the aerogel is a layer of hydrogel — another sponge-like material, but one whose pore spaces are filled with water rather than air. It’s similar to material currently used commercially for products such as cooling pads or wound dressings. This layer provides the water source for evaporative cooling: water vapor forms at its surface, passes up through the aerogel layer, and escapes to the environment.

    Below that, a mirror-like layer reflects any incoming sunlight that has reached it, sending it back up through the device rather than letting it heat up the materials and thus reducing their thermal load. And the top layer of aerogel, being a good insulator, is also highly solar-reflecting, limiting the amount of solar heating of the device, even under strong direct sunlight.

    “The novelty here is really just bringing together the radiative cooling feature, the evaporative cooling feature, and also the thermal insulation feature all together in one architecture,” Lu explains. The system was tested, using a small version just 4 inches across, on the rooftop of a building at MIT, proving its effectiveness even during suboptimal weather conditions, Lu says, and achieving 9.3 degrees Celsius (about 17 degrees Fahrenheit) of cooling.

    “The challenge previously was that evaporative materials often do not deal with solar absorption well,” Lu says. “With these other materials, usually when they’re under the sun, they get heated, so they are unable to get to high cooling power at the ambient temperature.”

    The aerogel material’s properties are key to the system’s overall efficiency, but that material is currently expensive to produce, as it requires special equipment for critical point drying (CPD) to remove solvents slowly from the delicate porous structure without damaging it. The key property that must be controlled is the size of the pores in the aerogel, which is made by mixing the polyethylene material with solvents, allowing it to set like a bowl of Jell-O, and then getting the solvents out of it. The research team is currently exploring ways of making this drying process less expensive, such as by using freeze-drying, or of finding alternative materials that can provide the same insulating function at lower cost, such as membranes separated by an air gap.

    While the other materials used in the system are readily available and relatively inexpensive, Lu says, “the aerogel is the only material that’s a product from the lab that requires further development in terms of mass production.” And it’s impossible to predict how long that development might take before this system can be made practical for widespread use, he says.

    The research team included Lenan Zhang of MIT’s Department of Mechanical Engineering and Jatin Patil of the Department of Materials Science and Engineering.