More stories

•

    To decarbonize the chemical industry, electrify it

    The chemical industry is the world’s largest industrial energy consumer and the third-largest source of industrial emissions, according to the International Energy Agency. In 2019, the industrial sector as a whole was responsible for 24 percent of global greenhouse gas emissions. And yet, as the world races to find pathways to decarbonization, the chemical industry has been largely untouched.

    “When it comes to climate action and dealing with the emissions that come from the chemical sector, the slow pace of progress is partly technical and partly driven by the hesitation on behalf of policymakers to overly impact the economic competitiveness of the sector,” says Dharik Mallapragada, a principal research scientist at the MIT Energy Initiative.

    With so many of the items we interact with in our daily lives — from soap to baking soda to fertilizer — deriving from products of the chemical industry, the sector has become a major source of economic activity and employment for many nations, including the United States and China. But as the global demand for chemical products continues to grow, so do the industry’s emissions.

New sustainable chemical production methods need to be developed and deployed, and current emission-intensive chemical production technologies reconsidered, urge the authors of a new paper published in Joule. Researchers from DC-MUSE, a multi-institution research initiative, argue that electrification powered by low-carbon sources should be viewed more broadly as a viable decarbonization pathway for the chemical industry. In this paper, they shine a light on several potential methods for doing just that.

    “Generally, the perception is that electrification can play a role in this sector — in a very narrow sense — in that it can replace fossil fuel combustion by providing the heat that the combustion is providing,” says Mallapragada, a member of DC-MUSE. “What we argue is that electrification could be much more than that.”

    The researchers outline four technological pathways — ranging from more mature, near-term options to less technologically mature options in need of research investment — and present the opportunities and challenges associated with each.

    The first two pathways directly replace fossil fuel-produced heat (which facilitates the reactions inherent in chemical production) with electricity or electrochemically generated hydrogen. The researchers suggest that both options could be deployed now and potentially be used to retrofit existing facilities. Electrolytic hydrogen is also highlighted as an opportunity to replace fossil fuel-produced hydrogen (a process that emits carbon dioxide) as a critical chemical feedstock. In 2020, fossil-based hydrogen supplied nearly all hydrogen demand (90 megatons) in the chemical and refining industries — hydrogen’s largest consumers.
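For reference, the chemistry behind that swap is ordinary water electrolysis; the reaction and standard thermodynamic values below are textbook figures, not numbers from the Joule paper.

```latex
% Water electrolysis: overall reaction and standard thermodynamics
% (textbook values, not figures from the paper)
\begin{aligned}
2\,\mathrm{H_2O\,(l)} &\longrightarrow 2\,\mathrm{H_2\,(g)} + \mathrm{O_2\,(g)}\\
E^{\circ}_{\mathrm{cell}} &= 1.23\ \mathrm{V}, \qquad
\Delta G^{\circ} \approx +237\ \mathrm{kJ\ per\ mol\ H_2}
\end{aligned}
```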

    The researchers note that increasing the role of electricity in decarbonizing the chemical industry will directly affect the decarbonization of the power grid. They stress that to successfully implement these technologies, their operation must coordinate with the power grid in a mutually beneficial manner to avoid overburdening it. “If we’re going to be serious about decarbonizing the sector and relying on electricity for that, we have to be creative in how we use it,” says Mallapragada. “Otherwise we run the risk of having addressed one problem, while creating a massive problem for the grid in the process.”

    Electrified processes have the potential to be much more flexible than conventional fossil fuel-driven processes. This can reduce the cost of chemical production by allowing producers to shift electricity consumption to times when the cost of electricity is low. “Process flexibility is particularly impactful during stressed power grid conditions and can help better accommodate renewable generation resources, which are intermittent and are often poorly correlated with daily power grid cycles,” says Yury Dvorkin, an associate research professor at the Johns Hopkins Ralph O’Connor Sustainable Energy Institute. “It’s beneficial for potential adopters because it can help them avoid consuming electricity during high-price periods.”
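As a rough illustration of that flexibility argument (a toy sketch with invented prices, not a model from the paper), a process that can run in any eight hours of the day pays much less for electricity than one locked to a fixed daytime shift:

```python
# Toy sketch of flexible load scheduling (hypothetical prices, not from
# the paper): a process that needs 8 hours of electricity per day runs
# in the cheapest hours instead of a fixed daytime block.

hourly_price = [  # $/MWh for hours 0-23, invented for illustration
    42, 38, 35, 33, 31, 34, 48, 62, 70, 66, 58, 52,
    47, 45, 50, 61, 78, 95, 88, 72, 60, 55, 49, 44,
]
hours_needed = 8

cheapest_hours = sorted(range(24), key=lambda h: hourly_price[h])[:hours_needed]
flexible_cost = sum(hourly_price[h] for h in cheapest_hours)
fixed_cost = sum(hourly_price[h] for h in range(8, 16))  # 8 a.m. to 4 p.m.

print(f"fixed daytime schedule: ${fixed_cost} per MW of load")   # $449
print(f"flexible schedule:      ${flexible_cost} per MW of load")  # $302
```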

    Dvorkin adds that some intermediate energy carriers, such as hydrogen, can potentially be used as highly efficient energy storage for day-to-day operations and as long-term energy storage. This would help support the power grid during extreme events when traditional and renewable generators may be unavailable. “The application of long-duration storage is of particular interest as this is a key enabler of a low-emissions society, yet not widespread beyond pumped hydro units,” he says. “However, as we envision electrified chemical manufacturing, it is important to ensure that the supplied electricity is sourced from low-emission generators to prevent emissions leakages from the chemical to power sector.” 

    The next two pathways introduced — utilizing electrochemistry and plasma — are less technologically mature but have the potential to replace energy- and carbon-intensive thermochemical processes currently used in the industry. By adopting electrochemical processes or plasma-driven reactions instead, chemical transformations can occur at lower temperatures and pressures, potentially enhancing efficiency. “These reaction pathways also have the potential to enable more flexible, grid-responsive plants and the deployment of modular manufacturing plants that leverage distributed chemical feedstocks such as biomass waste — further enhancing sustainability in chemical manufacturing,” says Miguel Modestino, the director of the Sustainable Engineering Initiative at the New York University Tandon School of Engineering.

    A large barrier to deep decarbonization of chemical manufacturing relates to its complex, multi-product nature. But, according to the researchers, each of these electricity-driven pathways supports chemical industry decarbonization for various feedstock choices and end-of-life disposal decisions. Each should be evaluated in comprehensive techno-economic and environmental life cycle assessments to weigh trade-offs and establish suitable cost and performance metrics.

    Regardless of the pathway chosen, the researchers stress the need for active research and development and deployment of these technologies. They also emphasize the importance of workforce training and development running in parallel to technology development. As André Taylor, the director of DC-MUSE, explains, “There is a healthy skepticism in the industry regarding electrification and adoption of these technologies, as it involves processing chemicals in a new way.” The workforce at different levels of the industry hasn’t necessarily been exposed to ideas related to the grid, electrochemistry, or plasma. The researchers say that workforce training at all levels will help build greater confidence in these different solutions and support customer-driven industry adoption.

    “There’s no silver bullet, which is kind of the standard line with all climate change solutions,” says Mallapragada. “Each option has pros and cons, as well as unique advantages. But being aware of the portfolio of options in which you can use electricity allows us to have a better chance of success and of reducing emissions — and doing so in a way that supports grid decarbonization.”

This work was supported, in part, by the Alfred P. Sloan Foundation.

•

    Chess players face a tough foe: air pollution

    Here’s something else chess players need to keep in check: air pollution.

    That’s the bottom line of a newly published study co-authored by an MIT researcher, showing that chess players perform objectively worse and make more suboptimal moves, as measured by a computerized analysis of their games, when there is more fine particulate matter in the air.

    More specifically, given a modest increase in fine particulate matter, the probability that chess players will make an error increases by 2.1 percentage points, and the magnitude of those errors increases by 10.8 percent. In this setting, at least, cleaner air leads to clearer heads and sharper thinking.

“We find that when individuals are exposed to higher levels of air pollution, they make more mistakes, and they make larger mistakes,” says Juan Palacios, an economist in MIT’s Sustainable Urbanization Lab, and co-author of a newly published paper detailing the study’s findings.

    The paper, “Indoor Air Quality and Strategic Decision-Making,” appears today in advance online form in the journal Management Science. The authors are Steffen Künn, an associate professor in the School of Business and Economics at Maastricht University, the Netherlands; Palacios, who is head of research in the Sustainable Urbanization Lab, in MIT’s Department of Urban Studies and Planning (DUSP); and Nico Pestel, an associate professor in the School of Business and Economics at Maastricht University.

    The toughest foe yet?

Fine particulate matter refers to tiny particles 2.5 microns or less in diameter, notated as PM2.5. These particles are typically produced by burning matter — whether in internal combustion engines in autos, coal-fired power plants, forest fires, or indoor cooking over open fires. The World Health Organization estimates that air pollution leads to over 4 million premature deaths worldwide every year, due to cancer, cardiovascular problems, and other illnesses.

    Scholars have produced many studies exploring the effects of air pollution on cognition. The current study adds to that literature by analyzing the subject in a particularly controlled setting. The researchers studied the performance of 121 chess players in three seven-round tournaments in Germany in 2017, 2018, and 2019, comprising more than 30,000 chess moves. The scholars used three web-connected sensors inside the tournament venue to measure carbon dioxide, PM2.5 concentrations, and temperature, all of which can be affected by external conditions, even in an indoor setting. Because each tournament lasted eight weeks, it was possible to examine how air-quality changes related to changes in player performance.

    In a replication exercise, the authors found the same impacts of air pollution on some of the strongest players in the history of chess using data from 20 years of games from the first division of the German chess league. 

To evaluate player performance, meanwhile, the scholars used software programs that assess each move made in each chess match, identify optimal decisions, and flag significant errors.

During the tournaments, PM2.5 concentrations ranged from 14 to 70 micrograms per cubic meter of air, levels of exposure commonly found in cities in the U.S. and elsewhere. The researchers examined and ruled out alternate potential explanations for the dip in player performance, such as increased noise. They also found that carbon dioxide and temperature changes did not correspond to performance changes. Using the standardized ratings chess players earn, the scholars also accounted for the quality of opponents each player faced. Ultimately, an analysis exploiting the plausibly random variation in pollution driven by changes in wind direction confirmed that the findings reflect direct exposure to air particles.

    “It’s pure random exposure to air pollution that is driving these people’s performance,” Palacios says. “Against comparable opponents in the same tournament round, being exposed to different levels of air quality makes a difference for move quality and decision quality.”
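For readers curious what such an analysis looks like in practice, here is a schematic of the kind of within-player regression the paper describes. The data file, column names, and specification are hypothetical; this is not the authors' code.

```python
# Schematic of a within-player panel regression of move errors on air
# quality (hypothetical data and column names; not the authors' code).
import pandas as pd
import statsmodels.formula.api as smf

# one row per move: player_id, pm25, co2, temperature, opponent_elo,
# and an engine-flagged error indicator (0/1)
moves = pd.read_csv("tournament_moves.csv")

# Linear probability model with player fixed effects: does PM2.5 predict
# errors once player ability, opponent strength, and room climate are held fixed?
model = smf.ols(
    "error ~ pm25 + co2 + temperature + opponent_elo + C(player_id)",
    data=moves,
).fit(cov_type="cluster", cov_kwds={"groups": moves["player_id"]})

print(model.summary())
```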

The researchers also found that when air pollution was worse, the chess players performed even more poorly when under time constraints. The tournament rules mandated that 40 moves had to be made within 110 minutes; for moves 31-40 in all the matches, an air pollution increase of 10 micrograms per cubic meter led to an increased probability of error of 3.2 percentage points, with the magnitude of those errors increasing by 17.3 percent.

    “We find it interesting that those mistakes especially occur in the phase of the game where players are facing time pressure,” Palacios says. “When these players do not have the ability to compensate [for] lower cognitive performance with greater deliberation, [that] is where we are observing the largest impacts.”

    “You can live miles away and be affected”

    Palacios emphasizes that, as the study indicates, air pollution may affect people in settings where they might not think it makes a difference.

    “It’s not like you have to live next to a power plant,” Palacios says. “You can live miles away and be affected.”

    And while the focus of this particular study is tightly focused on chess players, the authors write in the paper that the findings have “strong implications for high-skilled office workers,” who might also be faced with tricky cognitive tasks in conditions of variable air pollution. In this sense, Palacios says, “The idea is to provide accurate estimates to policymakers who are making difficult decisions about cleaning up the environment.”

    Indeed, Palacios observes, the fact that even chess players — who spend untold hours preparing themselves for all kinds of scenarios they may face in matches — can perform worse when air pollution rises suggests that a similar problem could affect people cognitively in many other settings.

    “There are more and more papers showing that there is a cost with air pollution, and there is a cost for more and more people,” Palacios says. “And this is just one example showing that even for these very [excellent] chess players, who think they can beat everything — well, it seems that with air pollution, they have an enemy who harms them.”

Support for the study was provided, in part, by the Graduate School of Business and Economics at Maastricht, and the Institute for Labor Economics in Bonn, Germany.

•

    Computers that power self-driving cars could be a huge driver of global carbon emissions

    In the future, the energy needed to run the powerful computers on board a global fleet of autonomous vehicles could generate as many greenhouse gas emissions as all the data centers in the world today.

    That is one key finding of a new study from MIT researchers that explored the potential energy consumption and related carbon emissions if autonomous vehicles are widely adopted.

    The data centers that house the physical computing infrastructure used for running applications are widely known for their large carbon footprint: They currently account for about 0.3 percent of global greenhouse gas emissions, or about as much carbon as the country of Argentina produces annually, according to the International Energy Agency. Realizing that less attention has been paid to the potential footprint of autonomous vehicles, the MIT researchers built a statistical model to study the problem. They determined that 1 billion autonomous vehicles, each driving for one hour per day with a computer consuming 840 watts, would consume enough energy to generate about the same amount of emissions as data centers currently do.

    The researchers also found that in over 90 percent of modeled scenarios, to keep autonomous vehicle emissions from zooming past current data center emissions, each vehicle must use less than 1.2 kilowatts of power for computing, which would require more efficient hardware. In one scenario — where 95 percent of the global fleet of vehicles is autonomous in 2050, computational workloads double every three years, and the world continues to decarbonize at the current rate — they found that hardware efficiency would need to double faster than every 1.1 years to keep emissions under those levels.

    “If we just keep the business-as-usual trends in decarbonization and the current rate of hardware efficiency improvements, it doesn’t seem like it is going to be enough to constrain the emissions from computing onboard autonomous vehicles. This has the potential to become an enormous problem. But if we get ahead of it, we could design more efficient autonomous vehicles that have a smaller carbon footprint from the start,” says first author Soumya Sudhakar, a graduate student in aeronautics and astronautics.

    Sudhakar wrote the paper with her co-advisors Vivienne Sze, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE); and Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems (LIDS). The research appears today in the January-February issue of IEEE Micro.

    Modeling emissions

    The researchers built a framework to explore the operational emissions from computers on board a global fleet of electric vehicles that are fully autonomous, meaning they don’t require a back-up human driver.

    The model is a function of the number of vehicles in the global fleet, the power of each computer on each vehicle, the hours driven by each vehicle, and the carbon intensity of the electricity powering each computer.
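In code, the whole model reduces to one multiplication. The fleet size, driving hours, and 840-watt computer below are the article's scenario; the grid carbon intensity is an assumed illustrative value.

```python
# The paper's emissions identity, sketched with the article's scenario
# numbers; the grid carbon intensity is an assumed illustrative value.

fleet_size = 1_000_000_000   # autonomous vehicles
hours_per_day = 1.0          # driving hours per vehicle per day
computer_power_kw = 0.840    # 840-watt onboard computer
carbon_intensity = 0.5       # kg CO2-eq per kWh (assumed global grid average)

energy_twh_per_year = fleet_size * hours_per_day * computer_power_kw * 365 / 1e9
emissions_mt_per_year = energy_twh_per_year * carbon_intensity

print(f"~{energy_twh_per_year:.0f} TWh/yr, ~{emissions_mt_per_year:.0f} Mt CO2-eq/yr")
# ~307 TWh/yr and ~153 Mt CO2-eq/yr -- on the order of what data centers
# (roughly 0.3 percent of global emissions) produce today
```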

    “On its own, that looks like a deceptively simple equation. But each of those variables contains a lot of uncertainty because we are considering an emerging application that is not here yet,” Sudhakar says.

    For instance, some research suggests that the amount of time driven in autonomous vehicles might increase because people can multitask while driving and the young and the elderly could drive more. But other research suggests that time spent driving might decrease because algorithms could find optimal routes that get people to their destinations faster.

    In addition to considering these uncertainties, the researchers also needed to model advanced computing hardware and software that doesn’t exist yet.

    To accomplish that, they modeled the workload of a popular algorithm for autonomous vehicles, known as a multitask deep neural network because it can perform many tasks at once. They explored how much energy this deep neural network would consume if it were processing many high-resolution inputs from many cameras with high frame rates, simultaneously.

    When they used the probabilistic model to explore different scenarios, Sudhakar was surprised by how quickly the algorithms’ workload added up.

    For example, if an autonomous vehicle has 10 deep neural networks processing images from 10 cameras, and that vehicle drives for one hour a day, it will make 21.6 million inferences each day. One billion vehicles would make 21.6 quadrillion inferences. To put that into perspective, all of Facebook’s data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion).
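Those totals can be sanity-checked with simple arithmetic. The camera frame rate below is an assumption, chosen because it makes the article's totals line up; the article itself reports only the totals.

```python
# Back-of-envelope check of the inference counts; the 60 fps frame rate
# is an assumption, since the article reports only the totals.
networks = 10            # deep neural networks per vehicle
cameras = 10             # camera streams, each feeding every network
fps = 60                 # assumed frames per second per camera
driving_seconds = 3600   # one hour of driving per day

per_vehicle = networks * cameras * fps * driving_seconds
fleet_total = per_vehicle * 1_000_000_000

print(f"{per_vehicle:,} inferences per vehicle per day")   # 21,600,000
print(f"{fleet_total:.2e} inferences per day fleet-wide")  # 2.16e+16, i.e., 21.6 quadrillion
```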

    “After seeing the results, this makes a lot of sense, but it is not something that is on a lot of people’s radar. These vehicles could actually be using a ton of computer power. They have a 360-degree view of the world, so while we have two eyes, they may have 20 eyes, looking all over the place and trying to understand all the things that are happening at the same time,” Karaman says.

    Autonomous vehicles would be used for moving goods, as well as people, so there could be a massive amount of computing power distributed along global supply chains, he says. And their model only considers computing — it doesn’t take into account the energy consumed by vehicle sensors or the emissions generated during manufacturing.

    Keeping emissions in check

To keep emissions from spiraling out of control, the researchers found that each autonomous vehicle needs to use less than 1.2 kilowatts of power for computing. For that to be possible, computing hardware must become more efficient at a significantly faster pace, doubling in efficiency about every 1.1 years.

    One way to boost that efficiency could be to use more specialized hardware, which is designed to run specific driving algorithms. Because researchers know the navigation and perception tasks required for autonomous driving, it could be easier to design specialized hardware for those tasks, Sudhakar says. But vehicles tend to have 10- or 20-year lifespans, so one challenge in developing specialized hardware would be to “future-proof” it so it can run new algorithms.

    In the future, researchers could also make the algorithms more efficient, so they would need less computing power. However, this is also challenging because trading off some accuracy for more efficiency could hamper vehicle safety.

    Now that they have demonstrated this framework, the researchers want to continue exploring hardware efficiency and algorithm improvements. In addition, they say their model can be enhanced by characterizing embodied carbon from autonomous vehicles — the carbon emissions generated when a car is manufactured — and emissions from a vehicle’s sensors.

    While there are still many scenarios to explore, the researchers hope that this work sheds light on a potential problem people may not have considered.

    “We are hoping that people will think of emissions and carbon efficiency as important metrics to consider in their designs. The energy consumption of an autonomous vehicle is really critical, not just for extending the battery life, but also for sustainability,” says Sze.

This research was funded, in part, by the National Science Foundation and the MIT-Accenture Fellowship.

•

    Moving water and earth

As a river cuts through a landscape, it can operate like a conveyor belt, moving truckloads of sediment over time. Knowing how quickly or slowly this sediment flows can help engineers plan for the downstream impact of restoring a river or removing a dam. But the models currently used to estimate sediment flow can be off by a wide margin.

    An MIT team has come up with a better formula to calculate how much sediment a fluid can push across a granular bed — a process known as bed load transport. The key to the new formula comes down to the shape of the sediment grains.

    It may seem intuitive: A smooth, round stone should skip across a river bed faster than an angular pebble. But flowing water also pushes harder on the angular pebble, which could erase the round stone’s advantage. Which effect wins? Existing sediment transport models surprisingly don’t offer an answer, mainly because the problem of measuring grain shape is too unwieldy: How do you quantify a pebble’s contours?

    The MIT researchers found that instead of considering a grain’s exact shape, they could boil the concept of shape down to two related properties: friction and drag. A grain’s drag, or resistance to fluid flow, relative to its internal friction, the resistance to sliding past other grains, can provide an easy way to gauge the effects of a grain’s shape.

    When they incorporated this new mathematical measure of grain shape into a standard model for bed load transport, the new formula made predictions that matched experiments that the team performed in the lab.

    “Sediment transport is a part of life on Earth’s surface, from the impact of storms on beaches to the gravel nests in mountain streams where salmon lay their eggs,” the team writes of their new study, appearing today in Nature. “Damming and sea level rise have already impacted many such terrains and pose ongoing threats. A good understanding of bed load transport is crucial to our ability to maintain these landscapes or restore them to their natural states.”

    The study’s authors are Eric Deal, Santiago Benavides, Qiong Zhang, Ken Kamrin, and Taylor Perron of MIT, and Jeremy Venditti and Ryan Bradley of Simon Fraser University in Canada.

    Figuring flow

[Video: Glass spheres (top) and natural river gravel (bottom) undergoing bed load transport in a laboratory flume, slowed down 17x relative to real time; average grain diameter is about 5 mm. The video shows how rolling and tumbling natural grains interact with one another in a way that is not possible for spheres. What can’t be seen so easily is that natural grains also experience higher drag forces from the flowing water than spheres do. Credit: Courtesy of the researchers]

Bed load transport is the process by which a fluid such as air or water drags grains across a bed of sediment, causing the grains to hop, skip, and roll along the surface as the fluid flows over it. This movement of sediment in a current is what drives rocks to migrate down a river and sand grains to skip across a desert.

    Being able to estimate bed load transport can help scientists prepare for situations such as urban flooding and coastal erosion. Since the 1930s, one formula has been the go-to model for calculating bed load transport; it’s based on a quantity known as the Shields parameter, after the American engineer who originally derived it. This formula sets a relationship between the force of a fluid pushing on a bed of sediment, and how fast the sediment moves in response. Albert Shields incorporated certain variables into this formula, including the average size and density of a sediment’s grains — but not their shape.
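For reference, the classical Shields parameter takes the textbook form below; the MIT team's contribution, described next, is a shape correction to this quantity built from grain drag and friction.

```latex
% Classical Shields parameter (textbook form)
\tau_* = \frac{\tau_b}{(\rho_s - \rho_f)\, g\, D}
% \tau_b: fluid shear stress on the bed; \rho_s, \rho_f: sediment and
% fluid densities; g: gravitational acceleration; D: median grain diameter
```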

    “People may have backed away from accounting for shape because it’s one of these very scary degrees of freedom,” says Kamrin, a professor of mechanical engineering at MIT. “Shape is not a single number.”

    And yet, the existing model has been known to be off by a factor of 10 in its predictions of sediment flow. The team wondered whether grain shape could be a missing ingredient, and if so, how the nebulous property could be mathematically represented.

    “The trick was to focus on characterizing the effect that shape has on sediment transport dynamics, rather than on characterizing the shape itself,” says Deal.

    “It took some thinking to figure that out,” says Perron, a professor of geology in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “But we went back to derive the Shields parameter, and when you do the math, this ratio of drag to friction falls out.”

    Drag and drop

    Their work showed that the Shields parameter — which predicts how much sediment is transported — can be modified to include not just size and density, but also grain shape, and furthermore, that a grain’s shape can be simply represented by a measure of the grain’s drag and its internal friction. The math seemed to make sense. But could the new formula predict how sediment actually flows?

    To answer this, the researchers ran a series of flume experiments, in which they pumped a current of water through an inclined tank with a floor covered in sediment. They ran tests with sediment of various grain shapes, including beds of round glass beads, smooth glass chips, rectangular prisms, and natural gravel. They measured the amount of sediment that was transported through the tank in a fixed amount of time. They then determined the effect of each sediment type’s grain shape by measuring the grains’ drag and friction.

    For drag, the researchers simply dropped individual grains down through a tank of water and gathered statistics for the time it took the grains of each sediment type to reach the bottom. For instance, a flatter grain type takes a longer time on average, and therefore has greater drag, than a round grain type of the same size and density.

    To measure friction, the team poured grains through a funnel and onto a circular tray, then measured the resulting pile’s angle, or slope — an indication of the grains’ friction, or ability to grip onto each other.
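The textbook force balances behind those two measurements (standard relations, not formulas quoted from the Nature paper): a grain of nominal diameter D settling at terminal velocity v_t yields its drag coefficient, and the angle of repose of the poured pile yields the friction coefficient.

```latex
% Terminal settling (weight minus buoyancy balances drag) gives drag:
C_d = \frac{4\,(\rho_s - \rho_f)\,g\,D}{3\,\rho_f\,v_t^{2}}
% The angle of repose \theta_r of the poured pile gives internal friction:
\mu = \tan\theta_r
```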

For each sediment type, they then worked the corresponding shape’s drag and friction into the new formula, and found that it could indeed predict the bed load transport, or the amount of moving sediment that the researchers measured in their experiments.

    The team says the new model more accurately represents sediment flow. Going forward, scientists and engineers can use the model to better gauge how a river bed will respond to scenarios such as sudden flooding from severe weather or the removal of a dam.

    “If you were trying to make a prediction of how fast all that sediment will get evacuated after taking a dam out, and you’re wrong by a factor of three or five, that’s pretty bad,” Perron says. “Now we can do a lot better.”

This research was supported, in part, by the U.S. Army Research Laboratory.

•

    Looking to the past to prepare for an uncertain future

    Aviva Intveld, an MIT senior majoring in Earth, atmospheric, and planetary sciences, is accustomed to city life. But despite hailing from metropolitan Los Angeles, she has always maintained a love for the outdoors.

    “Growing up in L.A., you just have a wealth of resources when it comes to beautiful environments,” she says, “but you’re also constantly living connected to the environment.” She developed a profound respect for the natural world and its effects on people, from the earthquakes that shook the ground to the wildfires that displaced inhabitants.

    “I liked the lifestyle that environmental science afforded,” Intveld recalls. “I liked the idea that you can make a career out of spending a huge amount of time in the field and exploring different parts of the world.”

    From the moment she arrived at MIT, Intveld threw herself into research on and off campus. During her first semester, she joined Terrascope, a program that encourages first-year students to tackle complex, real-world problems. Intveld and her cohort developed proposals to make recovery from major storms in Puerto Rico faster, more sustainable, and more equitable.

    Intveld also spent a semester studying drought stress in the lab of Assistant Professor David Des Marais, worked as a research assistant at a mineral sciences research lab back in L.A., and interned at the World Wildlife Fund. Most of her work focused on contemporary issues like food insecurity and climate change. “I was really interested in questions about today,” Intveld says.

    Her focus began to shift to the past when she interned as a research assistant at the Marine Geoarchaeology and Micropaleontology Lab at the University of Haifa. For weeks, she would spend eight hours a day hunched over a microscope, using a paintbrush to sort through grains of sand from the coastal town of Caesarea. She was looking for tiny spiral-shaped fossils of foraminifera, an organism that resides in seafloor sediments.

    These microfossils can reveal a lot about the environment in which they originated, including extreme weather events. By cataloging diverse species of foraminifera, Intveld was helping to settle a rather niche debate in the field of geoarchaeology: Did tsunamis destroy the harbor of Caesarea during the time of the ancient Romans?

    But in addition to figuring out if and when these natural disasters occurred, Intveld was interested in understanding how ancient communities prepared for and recovered from them. What methods did they use? Could those same methods be used today?

    Intveld’s research at the University of Haifa was part of the Onward Israel program, which offers young Jewish people the chance to participate in internships, academic study, and fellowships in Israel. Intveld describes the experience as a great opportunity to learn about the culture, history, and diversity of the Israeli community. The trip was also an excellent lesson in dealing with challenging situations.

    Intveld suffers from claustrophobia, but she overcame her fears to climb through the Bar Kokhba caves, and despite a cat allergy, she grew to adore the many stray cats that roam the streets of Haifa. “Sometimes you can’t let your physical limitations stop you from doing what you love,” she quips.

    Over the course of her research, Intveld has often found herself in difficult and even downright dangerous situations, all of which she looks back on with good humor. As part of an internship with the National Oceanic and Atmospheric Administration, she spent three months investigating groundwater in Homer, Alaska. While she was there, she learned to avoid poisonous plants out in the field, got lost bushwhacking, and was twice charged by a moose.

    These days, Intveld spends less time in the field and more time thinking about the ancient past. She works in the lab of Associate Professor David McGee, where her undergraduate thesis research focuses on reconstructing the paleoclimate and paleoecology of northeastern Mexico during the Early Holocene. To get an idea of what the Mexican climate looked like thousands of years ago, Intveld analyzes stable isotopes and trace elements in stalagmites taken from Mexican caves. By analyzing the isotopes of carbon and oxygen present in these stalagmites, which were formed over thousands of years from countless droplets of mineral-rich rainwater, Intveld can estimate the amount of rainfall and average temperature in a given time period.
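The isotope measurements Intveld works with are conventionally reported in delta notation, whose standard definition is:

```latex
% Standard delta notation for oxygen isotope ratios, reported in per mil:
\delta^{18}\mathrm{O} =
\left(
  \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}
       {\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{standard}}}
  - 1
\right) \times 1000
```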

    Intveld is primarily interested in how the area’s climate may have influenced human migration. “It’s very interesting to learn about the history of human motivation, what drives us to do what we do,” she explains. “What causes humans to move, and what causes us to stay?” So far, it seems the Mexican climate during the Early Holocene was quite inconsistent, with oscillating periods of wet and dry, but Intveld needs to conduct more research before drawing any definitive conclusions.

    Recent research has linked periods of drought in the geological record to periods of violence in the archaeological one, suggesting ancient humans often fought over access to water. “I think you can easily see the connections to stuff that we deal with today,” Intveld says, pointing out the parallels between paleolithic migration and today’s climate refugees. “We have to answer a lot of difficult questions, and one way that we can do so is by looking to see what earlier human communities did and what we can learn from them.”

    Intveld recognizes the impact of the past on our present and future in many other areas. She works as a tour guide for the List Visual Arts Center, where she educates people about public art on the MIT campus. “[Art] interested me as a way to experience history and learn about the story of different communities and people over time,” she says.

    Intveld is also unafraid to acknowledge the history of discrimination and exclusion in science. “Earth science has a big problem when it comes to inclusion and diversity,” she says. As a member of the EAPS Diversity, Equity and Inclusion Committee, she aims to make earth science more accessible.

    “Aviva has a clear drive to be at the front lines of geoscience research, connecting her work to the urgent environmental issues we’re all facing,” says McGee. “She also understands the critical need for our field to include more voices, more perspectives — ultimately making for better science.”

    After MIT, Intveld hopes to pursue an advanced degree in the field of sustainable mining. This past spring, she studied abroad at Imperial College London, where she took courses within the Royal School of Mines. As Intveld explains, mining is becoming crucial to sustainable energy. The rise of electric vehicles in places like California has increased the need for energy-critical elements like lithium and cobalt, but mining for these elements often does more harm than good. “The current mining complex is very environmentally destructive,” Intveld says.

But Intveld hopes to take the same approach to mining she does with her other endeavors — acknowledging the destructive past to make way for a better future.

•

    Sustainable supply chains put the customer first

When we consider the supply chain, we typically think of factories, ships, trucks, and warehouses. Yet the customer side is equally important, especially in efforts to make our distribution networks more sustainable. Customers are an untapped resource in building sustainability, says Josué C. Velázquez Martínez, a research scientist at the MIT Center for Transportation and Logistics.

Velázquez Martínez, who is director of MIT’s Sustainable Supply Chain Lab, investigates how customer-facing supply chains can be made more environmentally and socially sustainable. One example is the Green Button project, which explores how to optimize e-commerce delivery schedules to reduce carbon emissions and persuade customers to choose less carbon-intensive four- or five-day shipping options instead of one- or two-day delivery. Velázquez Martínez has also launched the MIT Low Income Firms Transformation (LIFT) Lab, which is researching ways to improve micro-retailer supply chains in the developing world to provide owners with the tools they need to survive.

    “The definition of sustainable supply chain keeps evolving because things that were sustainable 20 to 30 years ago are not as sustainable now,” says Velázquez Martínez. “Today, there are more companies that are capturing information to build strategies for environmental, economic, and social sustainability. They are investing in alternative energy and other solutions to make the supply chain more environmentally friendly and are tracking their suppliers and identifying key vulnerabilities. A big part of this is an attempt to create fairer conditions for people who work in supply chains or are dependent on them.”


The move toward sustainable supply chains is being driven as much by people as by companies, whether those people are acting as selective consumers or voting citizens. The consumer aspect is often overlooked, says Velázquez Martínez. “Consumers are the ones who move the supply chain. We are looking at how companies can provide transparency to involve customers in their sustainability strategy.”

    Proposed solutions for sustainability are not always as effective as promised. Some fashion rental schemes fall into this category, says Velázquez Martínez. “There are many new rental companies that are trying to get more use out of clothes to offset the emissions associated with production. We recently researched the environmental impact of monthly subscription models where consumers pay a fee to receive clothes for a month before returning them, as well as peer-to-peer sharing models.” 

    The researchers found that while rental services generally have a lower carbon footprint than retail sales, hidden emissions from logistics played a surprisingly large role. “First, you need to deliver the clothes and pick them up, and there are high return rates,” says Velázquez Martínez. “When you factor in dry cleaning and packaging emissions, the rental models in some cases have a worse carbon footprint than buying new clothes.” Peer-to-peer sharing could be better, he adds, but that depends on how far the consumers travel to meet-up points. 

    Typically, says Velázquez Martínez, garment types that are frequently used are not well suited to rental models. “But for specialty clothes such as wedding dresses or prom dresses, it is better to rent.” 

    Waiting a few days to save the planet 

    Even before the pandemic, online retailing gained a second wind due to low-cost same- and next-day delivery options. While e-commerce may have its drawbacks as a contributor to social isolation and reduced competition, it has proven itself to be far more eco-friendly than brick-and-mortar shopping, not to mention a lot more convenient. Yet rapid deliveries are cutting into online-shopping’s carbon-cutting advantage.

In 2019, MIT’s Sustainable Supply Chain Lab launched the Green Button project to study the rapid delivery phenomenon. The project has been “testing whether consumers would be willing to delay their e-commerce deliveries to reduce the environmental impact of fast shipping,” says Velázquez Martínez. “Many companies such as Walmart and Target have followed Amazon’s 2019 strategy of moving from two-day to same-day delivery. Instead of sending a fully loaded truck to a neighborhood every few days, they now send multiple trucks to that neighborhood every day, and there are more days when trucks are targeting each neighborhood. All this increases carbon emissions and makes it hard for shippers to consolidate.”

    Working with Coppel, one of Mexico’s largest retailers, the Green Button project inspired a related Consolidation Ecommerce Project that built a large-scale mathematical model to provide a strategy for consolidation. The model determined what delivery time window each neighborhood demands and then calculated the best day to deliver to each neighborhood to meet the desired window while minimizing carbon emissions. 
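The actual Coppel model is a large-scale optimization, but the consolidation logic can be sketched with a toy heuristic: hold each neighborhood's orders and dispatch a truck only when the most urgent pending order comes due, so later orders ride along. Everything below (neighborhood names, order data) is invented for illustration.

```python
# Toy consolidation heuristic in the spirit described (not the authors'
# model): deliver to a neighborhood only on the day its most urgent
# pending order is due, so later orders share the same truck.
from collections import defaultdict

# (neighborhood, day order was placed, latest acceptable delivery day)
orders = [
    ("centro", 1, 5), ("centro", 2, 4), ("centro", 3, 8),
    ("norte", 1, 2), ("norte", 2, 6), ("norte", 5, 7),
]

pending = defaultdict(list)
trips = []  # (day, neighborhood, number of orders on the truck)

for day in range(1, 11):
    # new orders placed today join their neighborhood's pending pool
    for hood, placed, due in orders:
        if placed == day:
            pending[hood].append(due)
    # dispatch a truck only when a pending order would otherwise be late
    for hood, dues in pending.items():
        if dues and min(dues) == day:
            trips.append((day, hood, len(dues)))
            pending[hood] = []

print(trips)  # 3 trips cover 6 orders, all delivered on time
```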

    No matter what mixture of delivery times was used, the consolidation model helped retailers schedule deliveries more efficiently. Yet, the biggest cuts in emissions emerged when customers were willing to wait several days.


“When we ran a month-long simulation comparing our model for four-to-five-day delivery with Coppel’s existing model for one- or two-day delivery, we saw savings in fuel consumption of over 50 percent on certain routes,” says Velázquez Martínez. “This is huge compared to other strategies for squeezing more efficiency from the last-mile supply chain, such as routing optimization, where savings are close to 5 percent. The optimal solution depends on factors such as the capacity for consolidation, the frequency of delivery, the store capacity, and the impact on inbound operations.”

The researchers next set out to determine if customers could be persuaded to wait longer for deliveries. Considering that the price differential is low or nonexistent, this was a considerable challenge. Yet the same-day habit is only a few years old, and some consumers have come to realize they don’t always need rapid deliveries. “Some consumers who order by rapid delivery find they are too busy to open the packages right away,” says Velázquez Martínez.

    Trees beat kilograms of CO2

    The researchers set out to find if consumers would be willing to sacrifice a bit of convenience if they knew they were helping to reduce climate change. The Green Button project tested different public outreach strategies. For one test group, they reported the carbon impact of delivery times in kilograms of carbon dioxide (CO2). Another group received the information expressed in terms of the energy required to recycle a certain amount of garbage. A third group learned about emissions in terms of the number of trees required to trap the carbon. “Explaining the impact in terms of trees led to almost 90 percent willing to wait another day or two,” says Velázquez Martínez. “This is compared to less than 40 percent for the group that received the data in kilograms of CO2.” 
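The tree framing is just a unit conversion on top of the same emissions number. A rough sketch of the idea (the per-tree uptake figure below is a commonly cited approximation, not a number from the study):

```python
# Rough framing conversion of the kind the Green Button test used.
# The ~21 kg of CO2 absorbed per mature tree per year is a commonly
# cited approximation, not a figure from the study.
KG_CO2_PER_TREE_YEAR = 21.0

def as_tree_years(kg_co2: float) -> float:
    """Express an emissions quantity as tree-years of carbon uptake."""
    return kg_co2 / KG_CO2_PER_TREE_YEAR

print(f"a 3.5 kg CO2 delivery saving ~= {as_tree_years(3.5):.2f} tree-years")
```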

    Another surprise was that there was no difference in response based on income, gender, or age. “Most studies of green consumers suggest they are predominantly high income, female, highly educated, or younger,” says Velázquez Martínez. “However, our results show that the differences were the same between low and high income, women and men, and younger and older people. We have shown that disclosing emissions transparently and making the consumer a part of the strategy can be a new opportunity for more consumer-driven logistics sustainability.” 

    The researchers are now developing similar models for business-to-business (B2B) e-commerce. “We found that B2B supply chain emissions are often high because many shipping companies require strict delivery windows,” says Velázquez Martínez.  

    The B2B models drill down to examine the Corporate Value Chain (Scope 3) emissions of suppliers. “Although some shipping companies are now asking their suppliers to review emissions, it is a challenge to create a transparent supply chain,” says Velázquez Martínez.  “Technological innovations have made it easier, starting with RFID [radio frequency identification], and then real-time GPS mapping and blockchain. But these technologies need to be more accessible and affordable, and we need more companies willing to use them.” 

Some companies have been hesitant to dig too deeply into their supply chains, fearing they might uncover a scandal that could damage their reputation, says Velázquez Martínez. Other organizations are forced to look at the issue when nongovernmental organizations research sustainability issues such as social injustice in sweatshops and conflict mineral mines.

    One challenge to building a transparent supply chain is that “in many companies, the sustainability teams are separate from the rest of the company,” says Velázquez Martínez. “Even if the CEOs receive information on sustainability issues, it often doesn’t filter down because the information does not belong to the planners or managers. We are pushing companies to not only account for sustainability factors in supply chain network design but also examine daily operations that affect sustainability. This is a big topic now: How can we translate sustainability information into something that everybody can understand and use?” 

    LIFT Lab lifts micro-retailers  

    In 2016, Velázquez Martínez launched the MIT GeneSys project to gain insights into micro and small enterprises (MSEs) in developing countries. The project released a GeneSys mobile app, which was used by more than 500 students throughout Latin America to collect data on more than 800 microfirms. In 2022, he launched the LIFT Lab, which focuses more specifically on studying and improving the supply chain for MSEs.  

    Worldwide, some 90 percent of companies have fewer than 10 employees. In Latin America and the Caribbean, companies with fewer than 50 employees represent 99 percent of all companies and 47 percent of employment. 

    Although MSEs represent much of the world’s economy, they are poorly understood, notes Velázquez Martínez. “Those tiny businesses are driving a lot of the economy and serve as important customers for the large companies working in developing countries. They range from small businesses down to people trying to get some money to eat by selling cakes or tacos through their windows.”  

    The MIT LIFT Lab researchers investigated whether MSE supply chain issues could help shed light on why many Latin American countries have been limited to marginal increases in gross domestic product. “Large companies from the developed world that are operating in Latin America, such as Unilever, Walmart, and Coca-Cola, have huge growth there, in some cases higher than they have in the developed world,” says Velázquez Martínez. “Yet, the countries are not developing as fast as we would expect.” 

    The LIFT Lab data showed that while the multinationals are thriving in Latin America, the local MSEs are decreasing in productivity. The study also found the trend has worsened with Covid-19.  

    The LIFT Lab’s first big project, which is sponsored by Mexican beverage and retail company FEMSA, is studying supply chains in Mexico. The study spans 200,000 micro-retailers and 300,000 consumers. In a collaboration with Tecnológico de Monterrey, hundreds of students are helping with a field study.  

    “We are looking at supply chain management and business capabilities and identifying the challenges to adoption of technology and digitalization,” says Velázquez Martínez. “We want to find the best ways for micro-firms to work with suppliers and consumers by identifying the consumers who access this market, as well as the products and services that can best help the micro-firms drive growth.” 

    Based on the earlier research by GeneSys, Velázquez Martínez has developed some hypotheses for potential improvements for micro-retailer supply chain, starting with payment terms. “We found that the micro-firms often get the worst purchasing deals. Owners without credit cards and with limited cash often buy in smaller amounts at much higher prices than retailers like Walmart. The big suppliers are squeezing them.” 

While large retailers usually get 60 to 120 days to pay, micro-retailers “either pay at the moment of the transaction or in advance,” says Velázquez Martínez. “In a study of 500 micro-retailers in five countries in Latin America, we found the average payment time was minus seven days, meaning payment a week in advance. These terms reduce cash availability and often lead to bankruptcy.”

    LIFT Lab is working with suppliers to persuade them to offer a minimum payment time of two weeks. “We can show the suppliers that the change in terms will let them move more product and increase sales,” says Velázquez Martínez. “Meanwhile, the micro-retailers gain higher profits and become more stable, even if they may pay a bit more.” 

    LIFT Lab is also looking at ways that micro-retailers can leverage smartphones for digitalization and planning. “Some of these companies are keeping records on napkins,” says Velázquez Martínez. “By using a cellphone, they can charge orders to suppliers and communicate with consumers. We are testing different dashboards for mobile apps to help with planning and financial performance. We are also recommending services the stores can provide, such as paying electricity or water bills. The idea is to build more capabilities and knowledge and increase business competencies for the supply chain that are tailored for micro-retailers.” 

    From a financial perspective, micro-retailers are not always the most efficient way to move products. Yet they also play an important role in building social cohesion within neighborhoods. By offering more services, the corner bodega can bring people together in ways that are impossible with e-commerce and big-box stores.  

Whether the consumers are micro-firms buying from suppliers or e-commerce customers waiting for packages, “transparency is key to building a sustainable supply chain,” says Velázquez Martínez. “To change consumer habits, consumers need to be better educated on the impacts of their behaviors. With consumer-facing logistics, ‘The last shall be first, and the first last.’”

•

    Manufacturing a cleaner future

    Manufacturing had a big summer. The CHIPS and Science Act, signed into law in August, represents a massive investment in U.S. domestic manufacturing. The act aims to drastically expand the U.S. semiconductor industry, strengthen supply chains, and invest in R&D for new technological breakthroughs. According to John Hart, professor of mechanical engineering and director of the Laboratory for Manufacturing and Productivity at MIT, the CHIPS Act is just the latest example of significantly increased interest in manufacturing in recent years.

    “You have multiple forces working together: reflections from the pandemic’s impact on supply chains, the geopolitical situation around the world, and the urgency and importance of sustainability,” says Hart. “This has now aligned incentives among government, industry, and the investment community to accelerate innovation in manufacturing and industrial technology.”

    Hand-in-hand with this increased focus on manufacturing is a need to prioritize sustainability.

    Roughly one-quarter of greenhouse gas emissions came from industry and manufacturing in 2020. Factories and plants can also deplete local water reserves and generate vast amounts of waste, some of which can be toxic.

    To address these issues and drive the transition to a low-carbon economy, new products and industrial processes must be developed alongside sustainable manufacturing technologies. Hart sees mechanical engineers as playing a crucial role in this transition.

    “Mechanical engineers can uniquely solve critical problems that require next-generation hardware technologies, and know how to bring their solutions to scale,” says Hart.

    Several fast-growing companies founded by faculty and alumni from MIT’s Department of Mechanical Engineering offer solutions for manufacturing’s environmental problem, paving the path for a more sustainable future.

    Gradiant: Cleantech water solutions

    Manufacturing requires water, and lots of it. A medium-sized semiconductor fabrication plant uses upward of 10 million gallons of water a day. In a world increasingly plagued by droughts, this dependence on water poses a major challenge.

    Gradiant offers a solution to this water problem. Co-founded by Anurag Bajpayee SM ’08, PhD ’12 and Prakash Govindan PhD ’12, the company is a pioneer in sustainable — or “cleantech” — water projects.

    As doctoral students in the Rohsenow Kendall Heat Transfer Laboratory, Bajpayee and Govindan shared a pragmatism and penchant for action. They both worked on desalination research — Bajpayee with Professor Gang Chen and Govindan with Professor John Lienhard.

    Inspired by a childhood spent during a severe drought in Chennai, India, Govindan developed for his PhD a humidification-dehumidification technology that mimicked natural rainfall cycles. It was with this piece of technology, which they named Carrier Gas Extraction (CGE), that the duo founded Gradiant in 2013.

    The key to CGE lies in a proprietary algorithm that accounts for variability in the quality and quantity in wastewater feed. At the heart of the algorithm is a nondimensional number, which Govindan proposes one day be called the “Lienhard Number,” after his doctoral advisor.

    “When the water quality varies in the system, our technology automatically sends a signal to motors within the plant to adjust the flow rates to bring back the nondimensional number to a value of one. Once it’s brought back to a value of one, you’re running in optimal condition,” explains Govindan, who serves as chief operating officer of Gradiant.
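The actual algorithm is proprietary, but the behavior Govindan describes is a classic feedback loop. Here is a minimal sketch in that spirit, with an invented plant model and gain:

```python
# Minimal sketch of the control idea Govindan describes: a feedback loop
# that nudges flow rate until a nondimensional operating number returns
# to 1. The plant model and gain are invented for illustration; the
# actual CGE algorithm is proprietary.

def nondimensional_number(flow_rate: float, feed_quality: float) -> float:
    """Hypothetical stand-in: the operating number rises with feed
    quality and falls as flow rate increases."""
    return feed_quality / flow_rate

flow, gain = 1.0, 0.5
feed_quality = 1.3  # a disturbance: the wastewater feed suddenly gets dirtier

for step in range(50):
    n = nondimensional_number(flow, feed_quality)
    error = n - 1.0
    if abs(error) < 1e-3:
        break
    flow += gain * error  # proportional correction back toward n = 1

print(f"settled at flow={flow:.3f}, "
      f"n={nondimensional_number(flow, feed_quality):.3f}")
```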

    This system can treat and clean the wastewater produced by a manufacturing plant for reuse, ultimately conserving millions of gallons of water each year.

    As the company has grown, the Gradiant team has added new technologies to their arsenal, including Selective Contaminant Extraction, a cost-efficient method that removes only specific contaminants, and a brine-concentration method called Counter-Flow Reverse Osmosis. They now offer a full technology stack of water and wastewater treatment solutions to clients in industries including pharmaceuticals, energy, mining, food and beverage, and the ever-growing semiconductor industry.

    “We are an end-to-end water solutions provider. We have a portfolio of proprietary technologies and will pick and choose from our ‘quiver’ depending on a customer’s needs,” says Bajpayee, who serves as CEO of Gradiant. “Customers look at us as their water partner. We can take care of their water problem end-to-end so they can focus on their core business.”

    Gradiant has seen explosive growth over the past decade. With 450 water and wastewater treatment plants built to date, they treat the equivalent of 5 million households’ worth of water each day. Recent acquisitions saw their total employees rise to above 500.

    The diversity of Gradiant’s solutions is reflected in their clients, who include Pfizer, AB InBev, and Coca-Cola. They also count semiconductor giants like Micron Technology, GlobalFoundries, Intel, and TSMC among their customers.

    “Over the last few years, we have really developed our capabilities and reputation serving semiconductor wastewater and semiconductor ultrapure water,” says Bajpayee.

    Semiconductor manufacturers require ultrapure water for fabrication. Unlike drinking water, which has a total dissolved solids range in the parts per million, water used to manufacture microchips has a range in the parts per billion or quadrillion.

    Currently, the average recycling rate at semiconductor fabrication plants — or fabs — in Singapore is only 43 percent. Using Gradiant’s technologies, these fabs can recycle 98-99 percent of the 10 million gallons of water they require daily. This reused water is pure enough to be put back into the manufacturing process.

    “What we’ve done is eliminated the discharge of this contaminated water and nearly eliminated the dependence of the semiconductor fab on the public water supply,” adds Bajpayee.

    With new regulations being introduced, pressure is increasing for fabs to improve their water use, making sustainability even more important to brand owners and their stakeholders.

    As the domestic semiconductor industry expands in light of the CHIPS and Science Act, Gradiant sees an opportunity to bring their semiconductor water treatment technologies to more factories in the United States.

    Via Separations: Efficient chemical filtration

    Like Bajpayee and Govindan, Shreya Dave ’09, SM ’12, PhD ’16 focused on desalination for her doctoral thesis. Under the guidance of her advisor Jeffrey Grossman, professor of materials science and engineering, Dave built a membrane that could enable more efficient and cheaper desalination.

    A thorough cost and market analysis led Dave to conclude that the desalination membrane she had developed would not make it to commercialization.

    “The current technologies are just really good at what they do. They’re low-cost, mass produced, and they worked. There was no room in the market for our technology,” says Dave.

    Shortly after defending her thesis, she read a commentary article in the journal Nature that changed everything. The article outlined a problem: the chemical separations at the heart of many manufacturing processes consume a huge amount of energy, and industry needed more efficient, cheaper membranes. Dave thought she might have a solution.

    After determining there was an economic opportunity, Dave, Grossman, and Brent Keller PhD ’16 founded Via Separations in 2017. Shortly thereafter, they were chosen as one of the first companies to receive funding from MIT’s venture firm, The Engine.

    Currently, industrial separation is done by heating chemicals to very high temperatures to separate compounds. Dave likens it to making pasta by boiling off all of the water until only the noodles remain. In manufacturing, this method of chemical separation is extremely energy-intensive and inefficient.

    Via Separations has created the chemical equivalent of a “pasta strainer.” Rather than using heat to separate, their membranes “strain” chemical compounds. This method of chemical filtration uses 90 percent less energy than standard methods.
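
    To see why “straining” saves so much energy, compare the underlying physics: boiling must supply water’s latent heat of vaporization, while a membrane only has to pump liquid against a pressure difference. The back-of-envelope sketch below uses illustrative numbers (the 50-bar driving pressure is an assumption, not a Via Separations specification); real evaporator trains recover much of their heat, which is why practical savings land near the 90 percent figure rather than the idealized ratio.

    ```python
    # Back-of-envelope comparison with illustrative numbers, not
    # Via Separations' actual operating data.

    LATENT_HEAT_WATER = 2.26e6  # J per kg of water evaporated
    DRIVING_PRESSURE = 5.0e6    # Pa; an assumed 50-bar membrane pressure
    WATER_DENSITY = 1000.0      # kg per cubic meter

    def thermal_energy(kg_water: float) -> float:
        """Heat needed to boil off the water (no heat recovery assumed)."""
        return LATENT_HEAT_WATER * kg_water

    def membrane_energy(kg_water: float) -> float:
        """Ideal pumping work: pressure times volume pushed through."""
        return DRIVING_PRESSURE * (kg_water / WATER_DENSITY)

    savings = 1 - membrane_energy(1.0) / thermal_energy(1.0)
    print(f"Idealized savings: {savings:.1%}")  # ~99.8% before real losses
    ```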

    While most membranes are made of polymers, Via Separations’ membranes are made with graphene oxide, which can withstand high temperatures and harsh conditions. The membrane is calibrated to the customer’s needs by altering the pore size and tuning the surface chemistry.

    Currently, Dave and her team are focusing on the pulp and paper industry as their beachhead market. They have developed a system that makes the recovery of a substance known as “black liquor” more energy efficient.

    “When a tree becomes paper, only one-third of the biomass is used for the paper. Currently the most valuable use for the remaining two-thirds not needed for paper is to take it from a pretty dilute stream to a pretty concentrated stream using evaporators by boiling off the water,” says Dave.

    This black liquor is then burned. Most of the resulting energy is used to power the evaporation process.

    “This closed-loop system accounts for an enormous amount of energy consumption in the U.S. We can make that process 84 percent more efficient by putting the ‘pasta strainer’ in front of the boiler,” adds Dave.

    VulcanForms: Additive manufacturing at industrial scale

    The first semester that John Hart, professor of mechanical engineering, taught at MIT was a fruitful one. He offered a course on 3D printing, broadly known as additive manufacturing (AM). While it wasn’t his main research focus at the time, he found the topic fascinating. So did many of the students in the class, including Martin Feldmann MEng ’14.

    After graduating with his MEng in advanced manufacturing, Feldmann joined Hart’s research group full time. There, they bonded over their shared interest in AM. They saw an opportunity to innovate with an established metal AM technology, known as laser powder bed fusion, and came up with a concept to realize metal AM at an industrial scale.

    The pair co-founded VulcanForms in 2015.

    “We have developed a machine architecture for metal AM that can build parts with exceptional quality and productivity,” says Hart. “And, we have integrated our machines in a fully digital production system, combining AM, postprocessing, and precision machining.”

    Unlike other companies that sell 3D printers for others to produce parts, VulcanForms makes and sells parts for their customers using their fleet of industrial machines. VulcanForms has grown to nearly 400 employees. Last year, the team opened their first production factory, known as “VulcanOne,” in Devens, Massachusetts.

    The quality and precision with which VulcanForms produces parts are critical for products like medical implants, heat exchangers, and aircraft engines. Their machines can print layers of metal thinner than a human hair.

    “We’re producing components that are difficult, or in some cases impossible to manufacture otherwise,” adds Hart, who sits on the company’s board of directors.

    The technologies developed at VulcanForms may help lead to a more sustainable way to manufacture parts and products, both directly through the additive process and indirectly through more efficient, agile supply chains.

    One way that VulcanForms, and AM in general, promotes sustainability is through material savings.

    Many of the materials VulcanForms uses, such as titanium alloys, require a great deal of energy to produce. When titanium parts are 3D-printed, substantially less of the material is used than in a traditional machining process. This material efficiency is where Hart sees AM making a large impact in terms of energy savings.

    Hart also points out that AM can accelerate innovation in clean energy technologies, ranging from more efficient jet engines to future fusion reactors.

    “Companies seeking to de-risk and scale clean energy technologies require know-how and access to advanced manufacturing capability, and industrial additive manufacturing is transformative in this regard,” Hart adds.

    LiquiGlide: Reducing waste by removing friction

    There is an unlikely culprit when it comes to waste in manufacturing and consumer products: friction. Kripa Varanasi, professor of mechanical engineering, and the team at LiquiGlide are on a mission to create a frictionless future, and substantially reduce waste in the process.

    Founded in 2012 by Varanasi and alum David Smith SM ’11, LiquiGlide designs custom coatings that enable liquids to “glide” on surfaces. Every last drop of a product can be used, whether it’s being squeezed out of a tube of toothpaste or drained from a 500-liter tank at a manufacturing plant. Making containers frictionless dramatically reduces wasted product and eliminates the need to clean a container before recycling or reusing it.

    Since launching, the company has found great success in consumer products. Colgate used LiquiGlide’s technology in the design of the Colgate Elixir toothpaste bottle, which has won several industry design awards. In a collaboration with world-renowned designer Yves Béhar, LiquiGlide is applying their technology to beauty and personal care product packaging. Meanwhile, the U.S. Food and Drug Administration has granted the company a Device Master File, opening up opportunities for the technology to be used in medical devices, drug delivery, and biopharmaceuticals.

    In 2016, the company developed a system to make manufacturing containers frictionless. Called CleanTanX, the technology treats the surfaces of tanks, funnels, and hoppers, preventing materials from sticking to the sides. The system can reduce material waste by up to 99 percent.

    “This could really change the game. It saves wasted product, reduces wastewater generated from cleaning tanks, and can help make the manufacturing process zero-waste,” says Varanasi, who serves as chair at LiquiGlide.

    LiquiGlide works by creating a coating, made of a textured solid and a liquid lubricant, on the container’s surface. When applied, the lubricant remains infused within the texture; capillary forces stabilize it and let it spread, creating a continuously lubricated surface that any viscous material can slide right down. The company uses a thermodynamic algorithm to determine safe combinations of solids and liquids for a given product, whether it’s toothpaste or paint.
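
    LiquiGlide’s selection algorithm is proprietary, but one classical ingredient in designing liquid-infused surfaces is the spreading coefficient: the lubricant can only form the stable, continuous film described above if it spontaneously spreads over the solid. The sketch below shows that single check with hypothetical surface-tension values; a real screening would also weigh safety, product compatibility, and the texture geometry.

    ```python
    # One classical wetting criterion, shown with hypothetical values;
    # LiquiGlide's actual thermodynamic algorithm is proprietary and
    # considers far more than this single inequality.

    def spreading_coefficient(gamma_solid: float, gamma_lubricant: float,
                              gamma_interface: float) -> float:
        """Spreading coefficient S in mN/m. S >= 0 means the lubricant
        spontaneously spreads over the solid, a prerequisite for a
        stable, continuously lubricated film."""
        return gamma_solid - gamma_lubricant - gamma_interface

    # Hypothetical tensions for a candidate solid/lubricant pairing.
    S = spreading_coefficient(gamma_solid=40.0, gamma_lubricant=20.0,
                              gamma_interface=15.0)
    print(f"S = {S} mN/m -> {'spreads' if S >= 0 else 'beads up'}")
    ```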

    The company has built a robotic spraying system that can treat large vats and tanks at manufacturing plants on site. In addition to saving companies millions of dollars in wasted product, LiquiGlide drastically reduces the amount of water needed to regularly clean these containers, which normally have product stuck to the sides.

    “Normally when you empty everything out of a tank, you still have residue that needs to be cleaned with a tremendous amount of water. In agrochemicals, for example, there are strict regulations about how to deal with the resulting wastewater, which is toxic. All of that can be eliminated with LiquiGlide,” says Varanasi.

    While the closure of many manufacturing facilities early in the pandemic slowed down the rollout of CleanTanX pilots at plants, things have picked up in recent months. As manufacturing ramps up both globally and domestically, Varanasi sees a growing need for LiquiGlide’s technologies, especially for liquids like semiconductor slurry.

    Companies like Gradiant, Via Separations, VulcanForms, and LiquiGlide demonstrate that an expansion in manufacturing industries does not need to come at a steep environmental cost. It is possible for manufacturing to be scaled up in a sustainable way.

    “Manufacturing has always been the backbone of what we do as mechanical engineers. At MIT in particular, there is always a drive to make manufacturing sustainable,” says Evelyn Wang, Ford Professor of Engineering and former head of the Department of Mechanical Engineering. “It’s amazing to see how startups that have an origin in our department are looking at every aspect of the manufacturing process and figuring out how to improve it for the health of our planet.”

    As legislation like the CHIPS and Science Act fuels growth in manufacturing, there will be an increased need for startups and companies that develop solutions to mitigate the environmental impact, bringing us closer to a more sustainable future.

    Pursuing a practical approach to research

    Koroush Shirvan, the John Clark Hardwick Career Development Professor in the Department of Nuclear Science and Engineering (NSE), knows that the nuclear industry has traditionally been wary of innovations until their utility is proven. As a result, he has relentlessly focused on practical applications in his research, work that earned him the 2022 Reactor Technology Award from the American Nuclear Society. “The award has usually recognized practical contributions to the field of reactor design and has not often gone to academia,” Shirvan says.

    One of these “practical contributions” is in the field of accident-tolerant fuels, a program launched by the U.S. Department of Energy in the wake of the 2011 Fukushima Daiichi accident. The goal of the program, says Shirvan, is to develop new forms of nuclear fuel that can better withstand the extreme heat of an accident. His team, with students from more than 16 countries, is working on numerous candidates that range in composition and method of production.

    Another aspect of Shirvan’s research focuses on how radiation impacts heat transfer mechanisms in the reactor; the team found fuel corrosion to be the driving force behind these effects. “[The research] informs how nuclear fuels perform in the reactor, from a practical point of view,” Shirvan says.

    Optimizing nuclear reactor design

    A summer internship during Shirvan’s undergraduate years at the University of Florida in Gainesville seeded his drive to focus on practical applications in his studies. A nearby nuclear utility was losing millions because of crud accumulating on fuel rods; the company had been working around the problem by loading more fresh fuel before extracting all the life from earlier batches.

    Placement of fuel rods in a nuclear reactor is a complex problem, with many factors — the life of the fuel, the location of hot spots — affecting outcomes. Every 18 to 24 months, a reactor’s fuel is reconfigured across roughly 200 to 800 assemblies while satisfying some 15 to 20 constraints. The mind-boggling scale of the problem means that plants have to rely on experienced engineers.

    During his internship, Shirvan optimized the program used to place fuel rods in the reactor. He found that certain rods within assemblies were more prone to crud deposits, and he reworked the configurations to optimize those rods’ performance rather than adding fresh assemblies.

    In recent years, Shirvan has applied a branch of artificial intelligence — reinforcement learning — to the configuration problem and created a software program used by the largest U.S. nuclear utility. “This program gives even a layperson the ability to reconfigure the fuels and the reactor without having expert knowledge,” Shirvan says.
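
    The production software is proprietary, but the flavor of the approach (treat each fuel shuffle as an action, score the resulting core, and keep the moves that satisfy the constraints) can be illustrated with a toy search loop. Everything in the sketch below is a hypothetical stand-in: the 16-assembly “core,” the reactivity proxy, and a single peaking-style objective in place of the 15 to 20 real constraints.

    ```python
    # Toy sketch only: the real problem couples hundreds of assemblies
    # with 15-20 constraints evaluated by physics codes; this
    # epsilon-greedy swap search is a minimal stand-in for the loop.
    import random

    random.seed(0)
    N = 16  # toy core size; real cores hold roughly 200-800 assemblies
    reactivity = [random.random() for _ in range(N)]  # fresh fuel = high

    def score(layout):
        """Hypothetical objective: penalize putting two highly reactive
        (fresh) assemblies side by side, a stand-in for limiting local
        power peaking."""
        worst = max(reactivity[layout[i]] + reactivity[layout[(i + 1) % N]]
                    for i in range(N))
        return -worst

    def shuffle_search(steps: int = 5000, epsilon: float = 0.1):
        """Epsilon-greedy search over pairwise swaps of positions."""
        layout = list(range(N))
        best, best_s = layout[:], score(layout)
        for _ in range(steps):
            i, j = random.sample(range(N), 2)
            layout[i], layout[j] = layout[j], layout[i]      # propose swap
            s = score(layout)
            if s > best_s:                                   # keep gains
                best, best_s = layout[:], s
            elif random.random() >= epsilon:                 # usually...
                layout[i], layout[j] = layout[j], layout[i]  # ...revert
        return best, best_s

    layout, s = shuffle_search()
    print(f"Best peaking proxy after search: {-s:.3f}")
    ```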

    From advanced math to counting jelly beans

    Shirvan’s own expertise in nuclear science and engineering developed quite organically. He grew up in Tehran, Iran, and when he was 14 the family moved to Gainesville, where his aunt and her family lived. He remembers an awkward couple of years at his new high school, where he was grouped with newly arrived international students and placed in entry-level classes. “I went from doing advanced mathematics in Iran to counting jelly beans,” he laughs.

    Shirvan applied to the University of Florida for his undergraduate studies because it made economic sense: the school gave full scholarships to Floridian students who hit a certain minimum SAT score, and Shirvan qualified. His uncle, then a professor in the nuclear engineering department, encouraged Shirvan to take classes in the department. Under his uncle’s mentorship, the courses Shirvan took and his internship cemented his love of the interdisciplinary approach the field demands.

    Having always known that he wanted to teach — he remembers finishing his math tests early in Tehran so he could earn the reward of being class monitor — Shirvan knew graduate school was next. His uncle encouraged him to apply to MIT and to the University of Michigan, both home to reputable programs in the field. Shirvan chose MIT because “only at MIT was there a program on nuclear design. There were faculty dedicated to designing new reactors, looking at multiple disciplines, and putting all of that together.” He went on to pursue his master’s and doctoral studies at NSE under the supervision of Professor Mujid Kazimi, focusing on compact pressurized and boiling water reactor designs. When Kazimi passed away suddenly in 2015, Shirvan, then a research scientist, moved to the tenure track to guide the professor’s team.

    Another project that Shirvan took on in 2015 was leading MIT’s course on nuclear reactor technology for utility executives. Offered only by the Institute, the program is an introduction to nuclear engineering and safety for personnel who might not have much background in the area. “It’s a great course because you get to see what the real problems are in the energy sector … like grid stability,” Shirvan says.

    A multipronged approach to savings

    Another very real problem nuclear utilities face is cost. Contrary to what one hears on the news, one of the biggest stumbling blocks to building new nuclear facilities in the United States is cost, which today can run up to three times that of renewables, Shirvan says. While many approaches, such as advanced manufacturing, have been tried, Shirvan believes the solution to decreasing expenditures lies in designing more compact reactors.

    His team has developed an open-source tool for estimating the costs of advanced nuclear plants and has focused on two designs: a small water reactor using compact steam technology and a horizontal gas reactor. Compactness also means making fuels more efficient, as Shirvan’s work does, and improving the heat-exchange equipment. It all comes back to basics and to bringing “commercially viable arguments in with your research,” Shirvan explains.

    Shirvan is excited about the future of the U.S. nuclear industry, and about the fact that the 2022 Inflation Reduction Act grants nuclear the same subsidies as renewables. Even on this newly level playing field, advanced nuclear still has a long way to go on affordability, he admits. “It’s time to push forward with cost-effective design,” Shirvan says. “I look forward to supporting this by continuing to guide these efforts with research from my team.”