More stories

  • Toward sustainable decarbonization of aviation in Latin America

    According to the International Energy Agency, aviation accounts for about 2 percent of global carbon dioxide emissions, and aviation emissions are expected to double by mid-century as demand for domestic and international air travel rises. To sharply reduce emissions in alignment with the Paris Agreement’s long-term goal to keep global warming below 1.5 degrees Celsius, the International Air Transport Association (IATA) has set a goal to achieve net-zero carbon emissions by 2050. That raises the question: Are there technologically feasible and economically viable strategies to reach that goal within the next 25 years?

    To begin to address that question, a team of researchers at the MIT Center for Sustainability Science and Strategy (CS3) and the MIT Laboratory for Aviation and the Environment has spent the past year analyzing aviation decarbonization options in Latin America, where air travel is expected to more than triple by 2050 and thereby double today’s aviation-related emissions in the region.

    Chief among those options is the development and deployment of sustainable aviation fuel (SAF). Currently produced from low- and zero-carbon feedstocks, including municipal waste and non-food crops, and requiring practically no alteration of aircraft systems or refueling infrastructure, SAF has the potential to perform just as well as petroleum-based jet fuel with as little as 20 percent of its carbon footprint.

    Focused on Brazil, Chile, Colombia, Ecuador, Mexico, and Peru, the researchers assessed SAF feedstock availability, the costs of corresponding SAF pathways, and how SAF deployment would likely affect fuel use, prices, emissions, and aviation demand in each country. They also explored how efficiency improvements and market-based mechanisms could help the region reach decarbonization targets. The team’s findings appear in a CS3 Special Report.

    SAF emissions, costs, and sources

    Under an ambitious emissions mitigation scenario designed to cap global warming at 1.5 C and raise the rate of SAF use in Latin America to 65 percent by 2050, the researchers projected that aviation emissions would fall by about 60 percent in 2050 compared to a scenario in which existing climate policies are not strengthened. To achieve net-zero emissions by 2050, other measures would be required, such as improvements in operational and air traffic efficiencies, airplane fleet renewal, alternative forms of propulsion, and carbon offsets and removals.

    As of 2024, jet fuel prices in Latin America are around $0.70 per liter. Based on the current availability of feedstocks, the researchers projected SAF costs within the six countries studied to range from $1.11 to $2.86 per liter. They cautioned that increased fuel prices could affect the operating costs of the aviation sector and overall aviation demand unless strategies to manage the increases are implemented.

    Under the 1.5 C scenario, the total cumulative capital investment required to build new SAF production plants between 2025 and 2050 was estimated at $204 billion for the six countries (ranging from $5 billion in Ecuador to $84 billion in Brazil).
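    To get a feel for what those per-liter figures imply, a rough blended-price calculation can be run with the report’s numbers. This is a minimal illustrative sketch: the simple linear blend below is our assumption for exposition, not the report’s methodology.

    ```python
    # Illustrative blended fuel price under SAF blending, using figures
    # quoted above: jet fuel ~$0.70/L, SAF $1.11-$2.86/L, 65% SAF by 2050.
    # A simple linear blend is assumed; the CS3 report's modeling is richer.

    JET_A_PRICE = 0.70                 # $/liter (Latin America, 2024)
    SAF_PRICE_RANGE = (1.11, 2.86)     # $/liter, projected range
    SAF_SHARE_2050 = 0.65              # SAF share under the 1.5 C scenario

    def blended_price(saf_price: float, saf_share: float) -> float:
        """Average fuel price for a given SAF blending share."""
        return saf_share * saf_price + (1 - saf_share) * JET_A_PRICE

    for saf_price in SAF_PRICE_RANGE:
        blend = blended_price(saf_price, SAF_SHARE_2050)
        premium = 100 * (blend / JET_A_PRICE - 1)
        print(f"SAF at ${saf_price:.2f}/L -> blend ${blend:.2f}/L "
              f"(+{premium:.0f}% vs. conventional)")
    ```

    Even at the low end of the projected SAF cost range, a 65 percent blend implies a fuel price roughly 40 percent above today’s, which is why the report emphasizes strategies to manage price increases.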
    The researchers identified sugarcane- and corn-based ethanol-to-jet fuel, along with palm oil- and soybean-based hydro-processed esters and fatty acids, as the most promising feedstock sources for near-term SAF production in Latin America.

    “Our findings show that SAF offers a significant decarbonization pathway, which must be combined with an economy-wide emissions mitigation policy that uses market-based mechanisms to offset the remaining emissions,” says Sergey Paltsev, lead author of the report, MIT CS3 deputy director, and senior research scientist at the MIT Energy Initiative.

    Recommendations

    The researchers concluded the report with recommendations for national policymakers and aviation industry leaders in Latin America.

    They stressed that government policy and regulatory mechanisms will be needed to create conditions sufficient to attract SAF investment in the region and to make SAF commercially viable as the aviation industry decarbonizes its operations. Without appropriate policy frameworks, SAF requirements will raise the cost of air travel. For fuel producers, stable, long-term-oriented policies and regulations will be needed to create robust supply chains, build the demand needed to establish economies of scale, and develop innovative pathways for producing SAF.

    Finally, the research team recommended region-wide collaboration in designing SAF policies. A unified decarbonization strategy among all countries in the region would help ensure competitiveness, economies of scale, and achievement of long-term carbon emissions-reduction goals.

    “Regional feedstock availability and costs make Latin America a potential major player in SAF production,” says Angelo Gurgel, a principal research scientist at MIT CS3 and co-author of the study. “SAF requirements, combined with government support mechanisms, will ensure sustainable decarbonization while enhancing the region’s connectivity and the ability of disadvantaged communities to access air transport.”

    Financial support for this study was provided by LATAM Airlines and Airbus.

  • The multifaceted challenge of powering AI

    Artificial intelligence has become vital in business and financial dealings, medical care, technology development, research, and much more. Without realizing it, consumers rely on AI when they stream a video, do online banking, or perform an online search. Behind these capabilities are more than 10,000 data centers globally, each one a huge warehouse containing thousands of computer servers and other infrastructure for storing, managing, and processing data. There are now over 5,000 data centers in the United States, and new ones are being built every day — in the U.S. and worldwide. Often dozens are clustered together right near where people live, attracted by policies that provide tax breaks and other incentives, and by what looks like abundant electricity.

    And data centers do consume huge amounts of electricity. U.S. data centers consumed more than 4 percent of the country’s total electricity in 2023, and by 2030 that fraction could rise to 9 percent, according to the Electric Power Research Institute. A single large data center can consume as much electricity as 50,000 homes.

    The sudden need for so many data centers presents a massive challenge to the technology and energy industries, government policymakers, and everyday consumers. Research scientists and faculty members at the MIT Energy Initiative (MITEI) are exploring multiple facets of this problem — from sourcing power to grid improvement to analytical tools that increase efficiency, and more. Data centers have quickly become the energy issue of our day.

    Unexpected demand brings unexpected solutions

    Several companies that use data centers to provide cloud computing and data management services are announcing some surprising steps to deliver all that electricity. Proposals include building their own small nuclear plants near their data centers and even restarting one of the undamaged nuclear reactors at Three Mile Island, which has been shuttered since 2019. (A different reactor at that plant partially melted down in 1979, causing the nation’s worst nuclear power accident.) Already the need to power AI is causing delays in the planned shutdown of some coal-fired power plants and raising prices for residential consumers. Meeting the needs of data centers is not only stressing power grids but also setting back the transition to clean energy needed to stop climate change.

    There are many aspects to the data center problem from a power perspective. Here are some that MIT researchers are focusing on, and why they’re important.

    An unprecedented surge in the demand for electricity

    “In the past, computing was not a significant user of electricity,” says William H. Green, director of MITEI and the Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering. “Electricity was used for running industrial processes and powering household devices such as air conditioners and lights, and more recently for powering heat pumps and charging electric cars. But now all of a sudden, electricity used for computing in general, and by data centers in particular, is becoming a gigantic new demand that no one anticipated.”

    Why the lack of foresight? Usually, demand for electric power increases by roughly half a percent per year, and utilities bring in new power generators and make other investments as needed to meet the expected new demand. But the data centers now coming online are creating unprecedented leaps in demand that operators didn’t see coming.
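    The scale of that mismatch can be sketched from the figures above. A minimal back-of-the-envelope comparison (our arithmetic, using the EPRI projection quoted earlier):

    ```python
    # Compare typical electricity demand growth (~0.5%/year) with the growth
    # implied by EPRI's data-center figures: 4% of U.S. electricity in 2023,
    # possibly 9% by 2030. Back-of-the-envelope arithmetic only.

    TYPICAL_DEMAND_GROWTH = 0.005        # ~0.5% per year
    share_2023, share_2030 = 0.04, 0.09  # data-center share of U.S. electricity
    years = 2030 - 2023

    # Compound annual growth rate implied for the data-center share
    implied_cagr = (share_2030 / share_2023) ** (1 / years) - 1
    print(f"Implied data-center share growth: {implied_cagr:.1%}/year")
    print(f"Typical total demand growth:      {TYPICAL_DEMAND_GROWTH:.1%}/year")
    # -> roughly 12%/year vs. 0.5%/year: the unprecedented leap described above
    ```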
    In addition, the new demand is constant. It’s critical that a data center provide its services all day, every day. There can be no interruptions in processing large datasets, accessing stored data, and running the cooling equipment needed to keep all the packed-together computers churning away without overheating.

    Moreover, even if enough electricity is generated, getting it to where it’s needed may be a problem, explains Deepjyoti Deka, a MITEI research scientist. “A grid is a network-wide operation, and the grid operator may have sufficient generation at another location or even elsewhere in the country, but the wires may not have sufficient capacity to carry the electricity to where it’s wanted.” So transmission capacity must be expanded — and, says Deka, that’s a slow process.

    Then there’s the “interconnection queue.” Sometimes, adding either a new user (a “load”) or a new generator to an existing grid can cause instabilities or other problems for everyone else already on the grid. In that situation, bringing a new data center online may be delayed. Enough delays can result in new loads or generators having to stand in line and wait for their turn. Right now, much of the interconnection queue is already filled up with new solar and wind projects, and the delay is now about five years. Meeting the demand from newly installed data centers while ensuring that the quality of service elsewhere is not hampered is a problem that needs to be addressed.

    Finding clean electricity sources

    To further complicate the challenge, many companies — including so-called “hyperscalers” such as Google, Microsoft, and Amazon — have made public commitments to reach net-zero carbon emissions within the next 10 years. Many have been making strides toward their clean-energy goals by buying “power purchase agreements”: they sign a contract to buy electricity from, say, a solar or wind facility, sometimes providing funding for the facility to be built. But that approach to accessing clean energy has its limits when faced with the extreme electricity demand of a data center.

    Meanwhile, soaring power consumption is delaying coal plant closures in many states. There are simply not enough sources of renewable energy to serve both the hyperscalers and existing users, including individual consumers. As a result, conventional plants fired by fossil fuels such as coal are needed more than ever.

    As the hyperscalers look for sources of clean energy for their data centers, one option could be to build their own wind and solar installations. But such facilities would generate electricity only intermittently. Given the need for uninterrupted power, a data center would have to maintain energy storage units, which are expensive. It could instead rely on natural gas or diesel generators for backup power — but those devices would need to be coupled with equipment to capture the carbon emissions, plus a nearby site for permanently disposing of the captured carbon.

    Because of such complications, several of the hyperscalers are turning to nuclear power. As Green notes, “Nuclear energy is well matched to the demand of data centers, because nuclear plants can generate lots of power reliably, without interruption.”

    In a much-publicized move in September, Microsoft signed a deal to buy power for 20 years after Constellation Energy reopens one of the undamaged reactors at its now-shuttered plant at Three Mile Island, the site of the infamous 1979 nuclear accident.
    If approved by regulators, Constellation will bring that reactor online by 2028, with Microsoft buying all of the power it produces. Amazon also reached a deal to purchase power produced by another nuclear plant threatened with closure due to financial troubles. And in early December, Meta released a request for proposals to identify nuclear energy developers to help the company meet its AI needs and its sustainability goals.

    Other nuclear news focuses on small modular nuclear reactors (SMRs), factory-built, modular power plants that could be installed near data centers, potentially without the cost overruns and delays often experienced in building large plants. Google recently ordered a fleet of SMRs to generate the power needed by its data centers, with the first slated to be completed by 2030 and the remainder by 2035.

    Some hyperscalers are betting on new technologies. For example, Google is pursuing next-generation geothermal projects, and Microsoft has signed a contract to purchase electricity from a startup’s fusion power plant beginning in 2028 — even though the fusion technology hasn’t yet been demonstrated.

    Reducing electricity demand

    Other approaches to providing sufficient clean electricity focus on making the data center and the operations it houses more energy-efficient, so as to perform the same computing tasks using less power. Using faster computer chips and optimizing algorithms to use less energy are already helping to reduce the load, and also the heat generated.

    Another idea being tried involves shifting computing tasks to times and places where carbon-free energy is available on the grid. Deka explains: “If a task doesn’t have to be completed immediately, but rather by a certain deadline, can it be delayed or moved to a data center elsewhere in the U.S. or overseas where electricity is more abundant, cheaper, and/or cleaner? This approach is known as ‘carbon-aware computing.’” (A minimal sketch of this scheduling idea appears at the end of this story.) We’re not yet sure whether every task can be moved or delayed easily, says Deka. “If you think of a generative AI-based task, can it easily be separated into small tasks that can be taken to different parts of the country, solved using clean energy, and then be brought back together? What is the cost of doing this kind of division of tasks?”

    That approach is, of course, limited by the problem of the interconnection queue. It’s difficult to access clean energy in another region or state. But efforts are under way to ease the regulatory framework to make sure that critical interconnections can be developed more quickly and easily.

    What about the neighbors?

    A major concern running through all the options for powering data centers is the impact on residential energy consumers. When a data center comes into a neighborhood, there are not only aesthetic concerns but also more practical worries. Will the local electricity service become less reliable? Where will the new transmission lines be located? And who will pay for the new generators, upgrades to existing equipment, and so on? When new manufacturing facilities or industrial plants go into a neighborhood, the downsides are generally offset by the availability of new jobs. Not so with a data center, which may require just a couple dozen employees.

    There are standard rules about how maintenance and upgrade costs are shared and allocated. But the presence of a new data center changes the situation entirely.
    As a result, utilities now need to rethink their traditional rate structures so as not to place an undue burden on residents to pay for the infrastructure changes needed to host data centers.

    MIT’s contributions

    At MIT, researchers are thinking about and exploring a range of options for tackling the problem of providing clean power to data centers. For example, they are investigating architectural designs that will use natural ventilation to facilitate cooling, equipment layouts that will permit better airflow and power distribution, and highly energy-efficient air conditioning systems based on novel materials. They are creating new analytical tools for evaluating the impact of data center deployments on the U.S. power system and for finding the most efficient ways to provide the facilities with clean energy. Other work looks at how to match the output of small nuclear reactors to the needs of a data center, and how to speed up the construction of such reactors.

    MIT teams also focus on determining the best sources of backup power and long-duration storage, and on developing decision support systems for siting proposed new data centers, taking into account the availability of electric power and water, regulatory considerations, and even the potential for putting to use the significant waste heat such facilities generate, for example, to warm nearby buildings. Technology development projects include designing faster, more efficient computer chips and more energy-efficient computing algorithms.

    In addition to providing leadership and funding for many research projects, MITEI is acting as a convenor, bringing together companies and stakeholders to address this issue. At MITEI’s 2024 Annual Research Conference, a panel of representatives from two hyperscalers and two companies that design and construct data centers discussed their challenges, possible solutions, and where MIT research could be most beneficial.

    As data centers continue to be built, and computing continues to create an unprecedented increase in demand for electricity, Green says, scientists and engineers are in a race to provide the ideas, innovations, and technologies that can meet this need, and at the same time continue to advance the transition to a decarbonized energy system.
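    To make the “carbon-aware computing” idea described above concrete, here is a minimal sketch of a scheduler that, given a deadline, picks the lowest-carbon hour and region in which to run a deferrable job. The regions, intensity numbers, and greedy strategy are illustrative assumptions, not a description of any production system.

    ```python
    # Minimal carbon-aware scheduling sketch: pick when and where to run a
    # deferrable job so grid carbon intensity is lowest before its deadline.
    # All numbers are invented for illustration.

    # Forecast carbon intensity (gCO2/kWh) per region for the next few hours.
    forecast = {
        "region_a": [420, 380, 350, 300],  # e.g., wind ramping up overnight
        "region_b": [250, 260, 310, 400],  # e.g., solar fading after midday
    }

    def best_slot(forecast, deadline_hours):
        """Return (region, hour, intensity) with the lowest carbon intensity
        among all slots that start before the deadline."""
        candidates = [
            (region, hour, series[hour])
            for region, series in forecast.items()
            for hour in range(min(deadline_hours, len(series)))
        ]
        return min(candidates, key=lambda c: c[2])

    region, hour, intensity = best_slot(forecast, deadline_hours=4)
    print(f"Run in {region} at hour {hour} ({intensity} gCO2/kWh)")
    # -> Run in region_b at hour 0 (250 gCO2/kWh)
    ```

    A real scheduler would also have to weigh the data-movement and latency costs Deka raises, which can offset the carbon savings for tasks that are hard to split.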

  • For clean ammonia, MIT engineers propose going underground

    Ammonia is the most widely produced chemical in the world today, used primarily as a source of nitrogen fertilizer. Its production is also a major source of greenhouse gas emissions — the highest in the whole chemical industry.

    Now, a team of researchers at MIT has developed an innovative way of making ammonia without the usual fossil-fuel-powered chemical plants that require high heat and pressure. Instead, they have found a way to use the Earth itself as a geochemical reactor, producing ammonia underground. The process uses Earth’s naturally occurring heat and pressure, provided free of charge and free of emissions, as well as the reactivity of minerals already present in the ground.

    The trick the team devised is to inject water underground, into an area of iron-rich subsurface rock. The water carries with it a source of nitrogen and particles of a metal catalyst, allowing the water to react with the iron to generate clean hydrogen, which in turn reacts with the nitrogen to make ammonia. A second well is then used to pump that ammonia up to the surface.

    The process, which has been demonstrated in the lab but not yet in a natural setting, is described today in the journal Joule. The paper’s co-authors are MIT professors of materials science and engineering Iwnetim Abate and Ju Li, graduate student Yifan Gao, and five others at MIT.

    “When I first produced ammonia from rock in the lab, I was so excited,” Gao recalls. “I realized this represented an entirely new and never-reported approach to ammonia synthesis.”

    The standard method for making ammonia is called the Haber-Bosch process, which was developed in Germany in the early 20th century to replace natural sources of nitrogen fertilizer such as mined deposits of bat guano, which were becoming depleted. But the Haber-Bosch process is very energy-intensive: It requires temperatures of 400 degrees Celsius and pressures of 200 atmospheres, which means it needs huge installations in order to be efficient. Some areas of the world, such as sub-Saharan Africa and Southeast Asia, have few or no such plants in operation. As a result, the shortage or extremely high cost of fertilizer in these regions has limited their agricultural production.

    The Haber-Bosch process “is good. It works,” Abate says. “Without it, we wouldn’t have been able to feed 2 out of the total 8 billion people in the world right now,” he says, referring to the portion of the world’s population whose food is grown with ammonia-based fertilizers. But because of the emissions and energy demands, a better process is needed, he says.

    Burning fuel to generate heat is responsible for about 20 percent of the greenhouse gases emitted from plants using the Haber-Bosch process; making hydrogen accounts for the remaining 80 percent. But ammonia, the molecule NH3, is made up only of nitrogen and hydrogen. There’s no carbon in the formula, so where do the carbon emissions come from? The standard way of producing the needed hydrogen is by processing methane gas with steam, breaking down the gas into pure hydrogen, which gets used, and carbon dioxide gas that gets released into the air.

    Other processes exist for making low- or no-emissions hydrogen, such as using solar- or wind-generated electricity to split water into oxygen and hydrogen, but that process can be expensive. That’s why Abate and his team worked on developing a system to produce what they call geological hydrogen.
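    The chemistry involved can be summarized by two textbook reactions: iron(II)-rich minerals oxidize and reduce water to hydrogen, and that hydrogen combines with nitrogen according to the Haber-Bosch stoichiometry. The fayalite reaction below is a standard representation of hydrogen-generating water-rock chemistry, not necessarily the exact mineral system in the team’s Joule paper.

    ```latex
    % Hydrogen generation from an iron(II) silicate (fayalite shown as a
    % textbook example of water-rock reactions that oxidize iron):
    \[
    3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \longrightarrow
    2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}
    \]
    % Ammonia synthesis from that hydrogen (Haber-Bosch stoichiometry),
    % here driven by subsurface heat and pressure:
    \[
    \mathrm{N_2} + 3\,\mathrm{H_2} \longrightarrow 2\,\mathrm{NH_3}
    \]
    ```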
    Some places in the world, including some in Africa, have been found to naturally generate hydrogen underground through chemical reactions between water and iron-rich rocks. These pockets of naturally occurring hydrogen can be mined, just like natural methane reservoirs, but the extent and locations of such deposits are still relatively unexplored.

    Abate realized this process could be created or enhanced by pumping water, laced with copper and nickel catalyst particles to speed up the reaction, into the ground in places where such iron-rich rocks were already present. “We can use the Earth as a factory to produce clean flows of hydrogen,” he says.

    He recalls thinking about the problem of the emissions from hydrogen production for ammonia: “The ‘aha!’ moment for me was thinking, how about we link this process of geological hydrogen production with the process of making Haber-Bosch ammonia?”

    That would solve the biggest problem of the underground hydrogen production process, which is how to capture and store the gas once it’s produced. Hydrogen is a very tiny molecule — the smallest of them all — and hard to contain. But by implementing the entire Haber-Bosch process underground, the only material that would need to be sent to the surface would be the ammonia itself, which is easy to capture, store, and transport.

    The only extra ingredient needed to complete the process was the addition of a source of nitrogen, such as nitrate or nitrogen gas, to the water-catalyst mixture being injected into the ground. Then, as the hydrogen is released from water molecules after interacting with the iron-rich rocks, it can immediately bond with the nitrogen atoms also carried in the water, with the deep underground environment providing the high temperatures and pressures required by the Haber-Bosch process. A second well near the injection well then pumps the ammonia out and into tanks on the surface.

    “We call this geological ammonia,” Abate says, “because we are using subsurface temperature, pressure, chemistry, and geologically existing rocks to produce ammonia directly.”

    Whereas transporting hydrogen requires expensive equipment to cool and liquefy it, and virtually no pipelines exist for its transport (except near oil refinery sites), transporting ammonia is easier and cheaper. It’s about one-sixth the cost of transporting hydrogen, and there are already more than 5,000 miles of ammonia pipelines and 10,000 terminals in place in the U.S. alone. What’s more, Abate explains, ammonia, unlike hydrogen, already has a substantial commercial market in place, with production volume projected to grow by two to three times by 2050, as it is used not only for fertilizer but also as feedstock for a wide variety of chemical processes.

    For example, ammonia can be burned directly in gas turbines, engines, and industrial furnaces, providing a carbon-free alternative to fossil fuels. It is being explored as an alternative fuel for maritime shipping and aviation, and as a possible space propellant.

    Another upside to geological ammonia is that untreated wastewater, including agricultural runoff, which tends to be rich in nitrogen already, could serve as the water source and be treated in the process.
    “We can tackle the problem of treating wastewater, while also making something of value out of this waste,” Abate says.

    Gao adds that this process “involves no direct carbon emissions, presenting a potential pathway to reduce global CO2 emissions by up to 1 percent.” To arrive at this point, he says, the team “overcame numerous challenges and learned from many failed attempts. For example, we tested a wide range of conditions and catalysts before identifying the most effective one.”

    The project was seed-funded under a flagship project of MIT’s Climate Grand Challenges program, the Center for the Electrification and Decarbonization of Industry. Professor Yet-Ming Chiang, co-director of the center, says, “I don’t think there’s been any previous example of deliberately using the Earth as a chemical reactor. That’s one of the key novel points of this approach.” Chiang emphasizes that even though it is a geological process, it happens very fast, not on geological timescales. “The reaction is fundamentally over in a matter of hours,” he says. “The reaction is so fast that this answers one of the key questions: Do you have to wait for geological times? And the answer is absolutely no.”

    Professor Elsa Olivetti, a mission director of the newly established Climate Project at MIT, says, “The creative thinking by this team is invaluable to MIT’s ability to have impact at scale. Coupling these exciting results with, for example, advanced understanding of the geology surrounding hydrogen accumulations represent the whole-of-Institute efforts the Climate Project aims to support.”

    “This is a significant breakthrough for the future of sustainable development,” says Geoffrey Ellis, a geologist at the U.S. Geological Survey, who was not associated with this work. He adds, “While there is clearly more work that needs to be done to validate this at the pilot stage and to get this to the commercial scale, the concept that has been demonstrated is truly transformative. The approach of engineering a system to optimize the natural process of nitrate reduction by Fe2+ is ingenious and will likely lead to further innovations along these lines.”

    The initial work on the process has been done in the laboratory, so the next step will be to prove the process using a real underground site. “We think that kind of experiment can be done within the next one to two years,” Abate says. This could open doors to using a similar approach for other chemical production processes, he adds.

    The team has applied for a patent and aims to work toward bringing the process to market.

    “Moving forward,” Gao says, “our focus will be on optimizing the process conditions and scaling up tests, with the goal of enabling practical applications for geological ammonia in the near future.”

    The research team also included Ming Lei, Bachu Sravan Kumar, Hugh Smith, Seok Hee Han, and Lokesh Sangabattula, all at MIT. Additional funding was provided by the National Science Foundation, and the work was carried out, in part, through the use of MIT.nano facilities.

  • Explained: Generative AI’s environmental impact

    In a two-part series, MIT News explores the environmental implications of generative AI. In this article, we look at why this technology is so resource-intensive. A second piece will investigate what experts are doing to reduce genAI’s carbon footprint and other impacts.

    The excitement surrounding potential benefits of generative AI, from improving worker productivity to advancing scientific research, is hard to ignore. While the explosive growth of this new technology has enabled rapid deployment of powerful models in many industries, the environmental consequences of this generative AI “gold rush” remain difficult to pin down, let alone mitigate.

    The computational power required to train generative AI models that often have billions of parameters, such as OpenAI’s GPT-4, can demand a staggering amount of electricity, which leads to increased carbon dioxide emissions and pressures on the electric grid.

    Furthermore, deploying these models in real-world applications, enabling millions to use generative AI in their daily lives, and then fine-tuning the models to improve their performance draws large amounts of energy long after a model has been developed.

    Beyond electricity demands, a great deal of water is needed to cool the hardware used for training, deploying, and fine-tuning generative AI models, which can strain municipal water supplies and disrupt local ecosystems. The increasing number of generative AI applications has also spurred demand for high-performance computing hardware, adding indirect environmental impacts from its manufacture and transport.

    “When we think about the environmental impact of generative AI, it is not just the electricity you consume when you plug the computer in. There are much broader consequences that go out to a system level and persist based on actions that we take,” says Elsa A. Olivetti, professor in the Department of Materials Science and Engineering and the lead of the Decarbonization Mission of MIT’s new Climate Project.

    Olivetti is senior author of a 2024 paper, “The Climate and Sustainability Implications of Generative AI,” co-authored by MIT colleagues in response to an Institute-wide call for papers that explore the transformative potential of generative AI, in both positive and negative directions for society.

    Demanding data centers

    The electricity demands of data centers are one major factor contributing to the environmental impacts of generative AI, since data centers are used to train and run the deep learning models behind popular tools like ChatGPT and DALL-E.

    A data center is a temperature-controlled building that houses computing infrastructure, such as servers, data storage drives, and network equipment. For instance, Amazon has more than 100 data centers worldwide, each of which has about 50,000 servers that the company uses to support cloud computing services.

    While data centers have been around since the 1940s (the first was built at the University of Pennsylvania in 1945 to support the first general-purpose digital computer, the ENIAC), the rise of generative AI has dramatically increased the pace of data center construction.

    “What is different about generative AI is the power density it requires.
    Fundamentally, it is just computing, but a generative AI training cluster might consume seven or eight times more energy than a typical computing workload,” says Noman Bashir, lead author of the impact paper, who is a Computing and Climate Impact Fellow at the MIT Climate and Sustainability Consortium (MCSC) and a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

    Scientists have estimated that the power requirements of data centers in North America increased from 2,688 megawatts at the end of 2022 to 5,341 megawatts at the end of 2023, partly driven by the demands of generative AI. Globally, the electricity consumption of data centers rose to 460 terawatt-hours in 2022. This would have made data centers the 11th-largest electricity consumer in the world, between the nations of Saudi Arabia (371 terawatt-hours) and France (463 terawatt-hours), according to the Organization for Economic Co-operation and Development.

    By 2026, the electricity consumption of data centers is expected to approach 1,050 terawatt-hours (which would bump data centers up to fifth place on the global list, between Japan and Russia).

    While not all data center computation involves generative AI, the technology has been a major driver of increasing energy demands.

    “The demand for new data centers cannot be met in a sustainable way. The pace at which companies are building new data centers means the bulk of the electricity to power them must come from fossil fuel-based power plants,” says Bashir.

    The power needed to train and deploy a model like OpenAI’s GPT-3 is difficult to ascertain. In a 2021 research paper, scientists from Google and the University of California at Berkeley estimated the training process alone consumed 1,287 megawatt-hours of electricity (enough to power about 120 average U.S. homes for a year), generating about 552 tons of carbon dioxide.

    While all machine-learning models must be trained, one issue unique to generative AI is the rapid fluctuation in energy use that occurs over different phases of the training process, Bashir explains. Power grid operators must have a way to absorb those fluctuations to protect the grid, and they usually employ diesel-based generators for that task.

    Increasing impacts from inference

    Once a generative AI model is trained, the energy demands don’t disappear. Each time a model is used, perhaps by an individual asking ChatGPT to summarize an email, the computing hardware that performs those operations consumes energy. Researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple web search.

    “But an everyday user doesn’t think too much about that,” says Bashir. “The ease of use of generative AI interfaces and the lack of information about the environmental impacts of my actions means that, as a user, I don’t have much incentive to cut back on my use of generative AI.”

    With traditional AI, the energy usage is split fairly evenly between data processing, model training, and inference, which is the process of using a trained model to make predictions on new data. However, Bashir expects the electricity demands of generative AI inference to eventually dominate, since these models are becoming ubiquitous in so many applications, and the electricity needed for inference will increase as future versions of the models become larger and more complex.

    Plus, generative AI models have an especially short shelf-life, driven by rising demand for new AI applications.
    Companies release new models every few weeks, so the energy used to train prior versions goes to waste, Bashir adds. New models often consume more energy for training, since they usually have more parameters than their predecessors.

    While the electricity demands of data centers may be getting the most attention in the research literature, the amount of water consumed by these facilities has environmental impacts as well.

    Chilled water is used to cool a data center by absorbing heat from computing equipment. It has been estimated that, for each kilowatt-hour of energy a data center consumes, it needs about two liters of water for cooling, says Bashir.

    “Just because this is called ‘cloud computing’ doesn’t mean the hardware lives in the cloud. Data centers are present in our physical world, and because of their water usage they have direct and indirect implications for biodiversity,” he says.

    The computing hardware inside data centers brings its own, less direct environmental impacts. While it is difficult to estimate how much power is needed to manufacture a GPU, a type of powerful processor that can handle intensive generative AI workloads, it would be more than what is needed to produce a simpler CPU because the fabrication process is more complex. A GPU’s carbon footprint is compounded by the emissions related to material and product transport. There are also environmental implications of obtaining the raw materials used to fabricate GPUs, which can involve dirty mining procedures and the use of toxic chemicals for processing.

    Market research firm TechInsights estimates that the three major producers (NVIDIA, AMD, and Intel) shipped 3.85 million GPUs to data centers in 2023, up from about 2.67 million in 2022. That number is expected to have increased by an even greater percentage in 2024.

    The industry is on an unsustainable path, but there are ways to encourage responsible development of generative AI that supports environmental objectives, Bashir says. He, Olivetti, and their MIT colleagues argue that this will require a comprehensive consideration of all the environmental and societal costs of generative AI, as well as a detailed assessment of the value in its perceived benefits.

    “We need a more contextual way of systematically and comprehensively understanding the implications of new developments in this space. Due to the speed at which there have been improvements, we haven’t had a chance to catch up with our abilities to measure and understand the tradeoffs,” Olivetti says.
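    Bashir’s two-liters-per-kilowatt-hour rule of thumb can be combined with the GPT-3 training estimate quoted earlier to get a rough sense of scale. A minimal sketch using only the figures from this article (the combination is our arithmetic, not a published estimate):

    ```python
    # Rough cooling-water estimate from figures quoted in this article:
    # ~2 liters of cooling water per kWh consumed, ~1,287 MWh to train GPT-3.

    LITERS_PER_KWH = 2          # Bashir's rule of thumb
    GPT3_TRAINING_MWH = 1_287   # 2021 Google / UC Berkeley estimate

    training_kwh = GPT3_TRAINING_MWH * 1_000
    water_liters = training_kwh * LITERS_PER_KWH
    print(f"~{water_liters / 1e6:.1f} million liters of cooling water")
    # -> ~2.6 million liters, roughly the volume of an Olympic swimming pool
    ```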

  • Designing tiny filters to solve big problems

    For many industrial processes, the typical way to separate gases, liquids, or ions is with heat, using slight differences in boiling points to purify mixtures. These thermal processes account for roughly 10 percent of the energy use in the United States.

    MIT chemical engineer Zachary Smith wants to reduce costs and carbon footprints by replacing these energy-intensive processes with highly efficient filters that can separate gases, liquids, and ions at room temperature.

    In his lab at MIT, Smith is designing membranes with tiny pores that can filter tiny molecules based on their size. These membranes could be useful for purifying biogas, capturing carbon dioxide from power plant emissions, or generating hydrogen fuel.

    “We’re taking materials that have unique capabilities for separating molecules and ions with precision, and applying them to applications where the current processes are not efficient, and where there’s an enormous carbon footprint,” says Smith, an associate professor of chemical engineering.

    Smith and several former students have founded a company called Osmoses that is working toward developing these materials for large-scale use in gas purification. Removing the need for high temperatures in these widespread industrial processes could have a significant impact on energy consumption, potentially reducing it by as much as 90 percent.

    “I would love to see a world where we could eliminate thermal separations, and where heat is no longer a problem in creating the things that we need and producing the energy that we need,” Smith says.

    Hooked on research

    As a high school student, Smith was drawn to engineering but didn’t have many engineering role models. Both of his parents were physicians, and they always encouraged him to work hard in school.

    “I grew up without knowing many engineers, and certainly no chemical engineers. But I knew that I really liked seeing how the world worked. I was always fascinated by chemistry and seeing how mathematics helped to explain this area of science,” recalls Smith, who grew up near Harrisburg, Pennsylvania. “Chemical engineering seemed to have all those things built into it, but I really had no idea what it was.”

    At Penn State University, Smith worked with a professor named Henry “Hank” Foley on a research project designing carbon-based materials to create a “molecular sieve” for gas separation. Through a time-consuming and iterative layering process, he created a sieve that could purify oxygen and nitrogen from air.

    “I kept adding more and more coatings of a special material that I could subsequently carbonize, and eventually I started to get selectivity. In the end, I had made a membrane that could sieve molecules that only differed by 0.18 angstrom in size,” he says. “I got hooked on research at that point, and that’s what led me to do more things in the area of membranes.”

    After graduating from college in 2008, Smith pursued graduate studies in chemical engineering at the University of Texas at Austin. There, he continued developing membranes for gas separation, this time using a different class of materials — polymers. By controlling polymer structure, he was able to create films with pores that filter out specific molecules, such as carbon dioxide or other gases.

    “Polymers are a type of material that you can actually form into big devices that can integrate into world-class chemical plants.
    So, it was exciting to see that there was a scalable class of materials that could have a real impact on addressing questions related to CO2 and other energy-efficient separations,” Smith says.

    After finishing his PhD, he decided he wanted to learn more chemistry, which led him to a postdoctoral fellowship at the University of California at Berkeley. “I wanted to learn how to make my own molecules and materials. I wanted to run my own reactions and do it in a more systematic way,” he says.

    At Berkeley, he learned how to make compounds called metal-organic frameworks (MOFs) — cage-like molecules that have potential applications in gas separation and many other fields. He also realized that while he enjoyed chemistry, he was definitely a chemical engineer at heart.

    “I learned a ton when I was there, but I also learned a lot about myself,” he says. “As much as I love chemistry, work with chemists, and advise chemists in my own group, I’m definitely a chemical engineer, really focused on the process and application.”

    Solving global problems

    While interviewing for faculty jobs, Smith found himself drawn to MIT because of the mindset of the people he met. “I began to realize not only how talented the faculty and the students were, but the way they thought was very different than other places I had been,” he says. “It wasn’t just about doing something that would move their field a little bit forward. They were actually creating new fields. There was something inspirational about the type of people that ended up at MIT who wanted to solve global problems.”

    In his lab at MIT, Smith is now tackling some of those global problems, including water purification, critical element recovery, renewable energy, battery development, and carbon sequestration.

    In a close collaboration with Yan Xia, a professor at Stanford University, Smith recently developed gas separation membranes that incorporate a novel type of polymer known as “ladder polymers,” which are currently being scaled for deployment at his startup. Historically, using polymers for gas separation has been limited by a tradeoff between permeability and selectivity — that is, membranes that permit a faster flow of gases through the membrane tend to be less selective, allowing impurities to get through. Using ladder polymers, which consist of double strands connected by rung-like bonds, the researchers were able to create gas separation membranes that are both highly permeable and very selective. The boost in permeability — a 100- to 1,000-fold improvement over earlier materials — could enable membranes to replace some of the high-energy techniques now used to separate gases, Smith says. (A toy flux-and-selectivity calculation appears at the end of this story.)

    “This allows you to envision large-scale industrial problems solved with miniaturized devices,” he says. “If you can really shrink down the system, then the solutions we’re developing in the lab could easily be applied to big industries like the chemicals industry.”

    These developments and others have been part of a number of advancements made by collaborators, students, postdocs, and researchers who are part of Smith’s team. “I have a great research team of talented and hard-working students and postdocs, and I get to teach on topics that have been instrumental in my own professional career,” Smith says. “MIT has been a playground to explore and learn new things. I am excited for what my team will discover next, and grateful for an opportunity to help solve many important global problems.”
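    The permeability-selectivity tradeoff mentioned above can be made concrete with a toy permeance calculation. The numbers below are illustrative placeholders, not values from Smith’s papers; the point is only that, at fixed selectivity, a 100-fold permeability gain cuts the membrane area needed for a given separation by the same factor.

    ```python
    # Toy membrane calculation: flux = permeability * pressure drop / thickness.
    # Selectivity is the ratio of two gases' permeabilities.
    # Illustrative numbers only; permeability is in Barrer, where
    # 1 Barrer = 1e-10 cm3(STP)*cm / (cm2 * s * cmHg).

    BARRER = 1e-10

    def flux(permeability_barrer: float, dp_cmhg: float, thickness_cm: float) -> float:
        """Gas flux in cm3(STP) per cm2 per second."""
        return permeability_barrer * BARRER * dp_cmhg / thickness_cm

    p_co2, p_ch4 = 1000.0, 25.0   # hypothetical permeabilities, Barrer
    dp = 760.0                    # ~1 atm partial-pressure difference, cmHg
    thickness = 1e-4              # 1-micron selective layer, cm

    print(f"CO2 flux: {flux(p_co2, dp, thickness):.2e} cm3/(cm2*s)")
    print(f"CO2/CH4 selectivity: {p_co2 / p_ch4:.0f}")
    # A polymer 100x more permeable at the same selectivity would need
    # ~100x less membrane area for the same separation job.
    ```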

  • Helping students bring about decarbonization, from benchtop to global energy marketplace

    MIT students are adept at producing research and innovations at the cutting edge of their fields. But addressing a problem as large as climate change requires understanding the world’s energy landscape, as well as the ways energy technologies evolve over time.

    Since 2010, the course IDS.521/IDS.065 (Energy Systems for Climate Change Mitigation) has equipped students with the skills they need to evaluate the various energy decarbonization pathways available to the world. The work is designed to help them maximize their impact on the world’s emissions by making better decisions along their respective career paths.

    “The question guiding my teaching and research is how do we solve big societal challenges with technology, and how can we be more deliberate in developing and supporting technologies to get us there?” says Professor Jessika Trancik, who started the course to help fill a gap in knowledge about the ways technologies evolve and scale over time.

    Since its inception, the course has attracted graduate students from across MIT’s five schools. It has also recently opened to undergraduate students and been adapted into an online course for professionals.

    Class sessions alternate between lectures and student discussions that lead up to semester-long projects in which groups of students explore specific strategies and technologies for reducing global emissions. This year’s projects span several topics, including how quickly transmission infrastructure is expanding, the relationship between carbon emissions and human development, and how to decarbonize the production of key chemicals.

    The curriculum is designed to help students identify the most promising ways to mitigate climate change whether they plan to be scientists, engineers, policymakers, investors, urban planners, or just more informed citizens.

    “We’re coming at this issue from both sides,” explains Trancik, who is part of MIT’s Institute for Data, Systems, and Society. “Engineers are used to designing a technology to work as well as possible here and now, but not always thinking over a longer time horizon about a technology evolving and succeeding in the global marketplace. On the flip side, for students at the macro level, often studies in policy and economics of technological change don’t fully account for the physical and engineering constraints of rates of improvement. But all of that information allows you to make better decisions.”

    Bridging the gap

    As a young researcher working on low-carbon polymers and electrode materials for solar cells, Trancik always wondered how the materials she worked on would scale in the real world. They might achieve promising performance benchmarks in the lab, but would they actually make a difference in mitigating climate change? Later, she began focusing increasingly on developing methods for predicting how technologies might evolve.

    “I’ve always been interested in both the macro and the micro, or even nano, scales,” Trancik says. “I wanted to know how to bridge these new technologies we’re working on with the big picture of where we want to go.”

    Trancik described her technology-grounded approach to decarbonization in a paper that formed the basis for IDS.065.
    In the paper, she presented a way to evaluate energy technologies against climate-change mitigation goals while focusing on the technology’s evolution.

    “That was a departure from previous approaches, which said, given these technologies with fixed characteristics and assumptions about their rates of change, how do I choose the best combination?” Trancik explains. “Instead we asked: Given a goal, how do we develop the best technologies to meet that goal? That inverts the problem in a way that’s useful to engineers developing these technologies, but also to policymakers and investors that want to use the evolution of technologies as a tool for achieving their objectives.”

    This past semester, the class took place every Tuesday and Thursday in a classroom on the first floor of the Stata Center. Students regularly led discussions where they reflected on the week’s readings and offered their own insights.

    “Students always share their takeaways and get to ask open questions of the class,” says Megan Herrington, a PhD candidate in the Department of Chemical Engineering. “It helps you understand the readings on a deeper level because people with different backgrounds get to share their perspectives on the same questions and problems. Everybody comes to class with their own lens, and the class is set up to highlight those differences.”

    The semester begins with an overview of climate science, the origins of emissions-reduction goals, and technology’s role in achieving those goals. Students then learn how to evaluate technologies against decarbonization goals. But technologies aren’t static, and neither is the world. Later lessons help students account for the change of technologies over time, identifying the mechanisms for that change and even forecasting rates of change.

    Students also learn about the role of government policy. This year, Trancik shared her experience traveling to the COP29 United Nations Climate Change Conference. “It’s not just about technology,” Trancik says. “It’s also about the behaviors that we engage in and the choices we make. But technology plays a major role in determining what set of choices we can make.”

    From the classroom to the world

    Students in the class say it has given them a new perspective on climate change mitigation. “I have really enjoyed getting to see beyond the research people are doing at the benchtop,” says Herrington. “It’s interesting to see how certain materials or technologies that aren’t scalable yet may fit into a larger transformation in energy delivery and consumption. It’s also been interesting to pull back the curtain on energy systems analysis to understand where the metrics we cite in energy-related research originate from, and to anticipate trajectories of emerging technologies.”

    Onur Talu, a first-year master’s student in the Technology and Policy Program, says the class has made him more hopeful. “I came into this fairly pessimistic about the climate,” says Talu, who has worked for clean technology startups in the past. “This class has taught me different ways to look at the problem of climate change mitigation and developing renewable technologies. It’s also helped put into perspective how much we’ve accomplished so far.”

    Several student projects from the class over the years have been developed into papers published in peer-reviewed journals.
    They have also been turned into tools, like carboncounter.com, which plots the emissions and costs of cars and has been featured in The New York Times.

    Former class students have also launched startups; Joel Jean SM ’13, PhD ’17, for example, started Swift Solar. Others have drawn on the course material to develop impactful careers in government and academia, such as Patrick Brown PhD ’16 at the National Renewable Energy Laboratory and Leah Stokes SM ’15, PhD ’15 at the University of California at Santa Barbara.

    Overall, students say the course helps them take a more informed approach to applying their skills toward addressing climate change. “It’s not enough to just know how bad climate change could be,” says Yu Tong, a first-year master’s student in civil and environmental engineering. “It’s also important to understand how technology can work to mitigate climate change from both a technological and market perspective. It’s about employing technology to solve these issues rather than just working in a vacuum.”
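    The course’s theme of forecasting rates of technological change is often made quantitative in this field with experience curves such as Wright’s law, in which unit costs fall by a constant fraction with every doubling of cumulative production. The sketch below is a generic illustration with hypothetical numbers, not material from the course itself.

    ```python
    # Wright's law (experience curve): unit cost falls by a fixed
    # "learning rate" with each doubling of cumulative production.
    # Hypothetical numbers for illustration only.

    import math

    def wrights_law_cost(c0: float, cum_prod: float, base_prod: float,
                         learning_rate: float) -> float:
        """Unit cost after cumulative production grows from base_prod to cum_prod."""
        doublings = math.log2(cum_prod / base_prod)
        return c0 * (1 - learning_rate) ** doublings

    # e.g., a technology starting at $100/unit with a 20% learning rate:
    c0, base = 100.0, 1_000.0   # starting cost and cumulative production
    for cum in (2_000, 8_000, 64_000):
        print(f"cumulative {cum:>6}: ${wrights_law_cost(c0, cum, base, 0.20):.0f}/unit")
    # 1 doubling -> $80, 3 doublings -> $51, 6 doublings -> $26
    ```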

  • In a unique research collaboration, students make the case for less e-waste

    Brought together as part of the Social and Ethical Responsibilities of Computing (SERC) initiative within the MIT Schwarzman College of Computing, a community of students known as SERC Scholars is collaborating to examine the most urgent problems humans face in the digital landscape.

    Each semester, students at all levels from across MIT are invited to join a different topical working group led by a SERC postdoctoral associate. Each group delves into a specific issue — such as surveillance or data ownership — culminating in a final project presented at the end of the term.

    Typically, students complete the program with hands-on experience conducting research in a new cross-disciplinary field. However, one group of undergraduate and graduate students recently had the unique opportunity to enhance their resumes by becoming published authors of a case study about the environmental and climate justice implications of the electronics hardware life cycle. Although it’s not uncommon for graduate students to co-author case studies, it’s unusual for undergraduates to earn this opportunity — and for their audience to be other undergraduates around the world.

    “Our team was insanely interdisciplinary,” says Anastasia Dunca, a junior studying computer science and one of the co-authors. “I joined the SERC Scholars Program because I liked the idea of being part of a cohort from across MIT working on a project that utilized all of our skillsets. It also helps [undergraduates] learn the ins and outs of computing ethics research.”

    Case study co-author Jasmin Liu, an MBA student in the MIT Sloan School of Management, sees the program as a platform to learn about the intersection of technology, society, and ethics: “I met team members spanning computer science, urban planning, to art/culture/technology. I was excited to work with a diverse team because I know complex problems must be approached with many different perspectives. Combining my background in humanities and business with the expertise of others allowed us to be more innovative and comprehensive.”

    Christopher Rabe, a former SERC postdoc who facilitated the group, says, “I let the students take the lead on identifying the topic and conducting the research.” His goal for the group was to challenge students across disciplines to develop a working definition of climate justice.

    From mining to e-waste

    The SERC Scholars’ case study, “From Mining to E-waste: The Environmental and Climate Justice Implications of the Electronics Hardware Life Cycle,” was published in the MIT Case Studies in Social and Ethical Responsibilities of Computing series. The ongoing series, which releases new issues twice a year on an open-source platform, enables undergraduate instructors worldwide to incorporate research-based education materials on computing ethics into their existing class syllabi.

    This particular case study broke down the electronics life cycle, from mining to manufacturing, usage, and disposal, and offered an in-depth look at how this cycle promotes inequity in the Global South. Mining for the roughly 60 minerals that power everyday devices leads to illegal deforestation, compromises air quality in the Amazon, and triggers armed conflict in Congo. Manufacturing leads to proven health risks for both formal and informal workers, some of whom are child laborers.

    Life cycle assessment and circular economy are proposed as mechanisms for analyzing environmental and climate justice issues in the electronics life cycle.
    Rather than posing solutions, the case study offers readers entry points for further discussion and for assessing their own individual responsibility as producers of e-waste.

    Crufting and crafting a case study

    Dunca joined Rabe’s working group, intrigued by the invitation to conduct a rigorous literature review examining issues like data center resource and energy use, manufacturing waste, ethical issues with AI, and climate change. Rabe quickly realized that a common thread among all participants was an interest in understanding and reducing e-waste and its impact on the environment.

    “I came in with the idea of us co-authoring a case study,” Rabe said. However, the writing-intensive process was initially daunting to students who were used to conducting applied research. Once Rabe created sub-groups with discrete tasks, the steps for researching, writing, and iterating a case study became more approachable.

    For Ellie Bultena, an undergraduate student studying linguistics and philosophy and a contributor to the study, that meant conducting field research on the loading dock of MIT’s Stata Center, where students and faculty go “crufting” through piles of clunky printers, broken computers, and used lab equipment discarded by the Institute’s labs, departments, and individual users. Although not a formally sanctioned activity on campus, “crufting” is the act of gleaning usable parts from these junk piles to be repurposed into new equipment or art. Bultena’s respondents, who opted to be anonymous, said that MIT could do better when it comes to the amount of e-waste generated and suggested that formal strategies could be implemented to encourage community members to repair equipment more easily or recycle more formally.

    Rabe, now an education program director at the MIT Environmental Solutions Initiative, is hopeful that through the Zero-Carbon Campus Initiative, which commits MIT to eliminating all direct emissions by 2050, MIT will ultimately become a model for other higher education institutions.

    Although the group lacked the time and resources to travel to the Global South communities profiled in their case study, members leaned into exhaustive secondary research, collecting data on how some countries are irresponsibly dumping e-waste while others have developed alternative solutions that can be duplicated elsewhere and scaled.

    “We source materials, manufacture them, and then throw them away,” says Lelia Hampton, a PhD candidate in electrical engineering and computer science and another co-author. Hampton jumped at the opportunity to serve in a writing role, bringing together the sub-groups’ research findings. “I’d never written a case study, and it was exciting. Now I want to write 10 more.”

    The content directly informed Hampton’s dissertation research, which “looks at applying machine learning to climate justice issues such as urban heat islands.” She said that writing a case study that is accessible to general audiences upskilled her for the nonprofit organization she’s determined to start.
“It’s going to provide communities with free resources and data needed to understand how they are impacted by climate change and begin to advocate against injustice,” Hampton explains.

Dunca, Liu, Rabe, Bultena, and Hampton are joined on the case study by fellow authors Mrinalini Singha, a graduate student in the Art, Culture, and Technology program; Sungmoon Lim, a graduate student in urban studies and planning and EECS; Lauren Higgins, an undergraduate majoring in political science; and Madeline Schlegal, a Northeastern University co-op student.

Taking the case study to classrooms around the world

Although PhD candidates have contributed to previous case studies in the series, this publication is the first to be co-authored with MIT undergraduates. As with any peer-reviewed publication, the SERC Scholars’ case study was anonymously reviewed before publication by senior scholars drawn from various fields.

The series editor, David Kaiser, also served as one of SERC’s inaugural associate deans and helped shape the program. “The case studies, by design, are short, easy to read, and don’t take up lots of time,” Kaiser explains. “They are gateways for students to explore, and instructors can cover a topic that has likely already been on their mind.”

This semester, Kaiser, the Germeshausen Professor of the History of Science and a professor of physics, is teaching STS.004 (Intersections: Science, Technology, and the World), an undergraduate introduction to the field of science, technology, and society. The last month of the semester has been dedicated wholly to SERC case studies, one of which is “From Mining to E-waste.”

Hampton was visibly moved to hear that the case study is being used not only at MIT but also by some of the 250,000 visitors to the SERC platform, many of whom are based in the Global South and directly impacted by the issues she and her cohort researched. “Many students are focused on climate, whether through computer science, data science, or mechanical engineering. I hope that this case study educates them on the environmental and climate aspects of e-waste and computing.”


    Enabling a circular economy in the built environment

The amount of waste generated by the construction sector underscores an urgent need to embrace circularity — a sustainable model that aims to minimize waste and maximize material efficiency through recovery and reuse — in the built environment: 600 million tons of construction and demolition waste were produced in the United States alone in 2018, with 820 million tons reported in the European Union and in excess of 2 billion tons annually in China.

This significant resource loss embedded in our current industrial ecosystem marks a linear economy that operates on a “take-make-dispose” model of construction; in contrast, the “make-use-reuse” approach of a circular economy offers an important opportunity to reduce environmental impacts.

A team of MIT researchers has begun to assess what may be needed to spur widespread circular transition within the built environment in a new open-access study that aims to understand stakeholders’ current perceptions of circularity and quantify their willingness to pay.

“This paper acts as an initial endeavor into understanding what the industry may be motivated by, and how integration of stakeholder motivations could lead to greater adoption,” says lead author Juliana Berglund-Brown, a PhD student in the Department of Architecture at MIT.

Considering stakeholders’ perceptions

Three stakeholder groups from North America, Europe, and Asia — material suppliers, design and construction teams, and real estate developers — were surveyed by the research team, which also comprises Akrisht Pandey ’23; Fabio Duarte, associate director of the MIT Senseable City Lab; Raquel Ganitsky, fellow in the Sustainable Real Estate Development Action Program; Randolph Kirchain, co-director of the MIT Concrete Sustainability Hub; and Siqi Zheng, the STL Champion Professor of Urban and Real Estate Sustainability in the Department of Urban Studies and Planning.

Despite growing awareness of reuse practices among construction industry stakeholders, circular practices have yet to be implemented at scale, owing to the many factors that shape the intersection of construction needs with government regulations and the economic interests of real estate developers.

The study notes that perceived barriers to circular adoption differ by industry role: design and construction teams identify a lack of client interest and of standardized structural assessment methods as their primary concerns; material suppliers cite logistics complexity and supply uncertainty as the largest deterrents; and real estate developers are chiefly concerned with higher costs and structural assessment. Yet encouragingly, respondents expressed willingness to absorb higher costs: developers indicated readiness to pay an average of 9.6 percent more in construction costs for a minimum 52.9 percent reduction in embodied carbon, and all stakeholders strongly favored incentives such as tax exemptions to offset cost premiums.

Next steps to encourage circularity

The findings highlight the need for further conversation between design teams and developers, as well as for additional exploration of potential solutions to practical challenges. “The thing about circularity is that there is opportunity for a lot of value creation, and subsequently profit,” says Berglund-Brown.
“If people are motivated by cost, let’s provide a cost incentive, or establish strategies that have one.”

When it comes to motivations for adopting circularity practices, the study also found trends emerging by industry role: future net-zero goals influence developers as well as design and construction teams, with government regulation the third-most frequently named reason across all respondent types.

“The construction industry needs a market driver to embrace circularity,” says Berglund-Brown. “Be it carrots or sticks, stakeholders require incentives for adoption.”

The effect of policy in motivating change cannot be overstated: major strides in low operational carbon building design followed the introduction of emissions-restricting policies such as Local Law 97 in New York City and the Building Emissions Reduction and Disclosure Ordinance in Boston. These pieces of policy, and their results, can serve as models for embodied carbon reduction policy elsewhere.

Berglund-Brown suggests that municipalities might initiate ordinances requiring buildings to be deconstructed rather than demolished, allowing components to be reused instead of wasted. Such top-down ordinances could be one way to trigger a supply-chain shift toward reprocessing building materials that are typically deemed “end-of-life.”

The study also identifies other challenges to implementing circularity at scale, including the risk associated with reusing materials in new buildings and the disruption of status quo design practices.

“Understanding the best way to motivate transition despite uncertainty is where our work comes in,” says Berglund-Brown. “Beyond that, researchers can continue to do a lot to alleviate risk — like developing standards for reuse.”

Innovations that challenge the status quo

Disrupting the status quo is not unusual for MIT researchers; other visionary work in construction circularity pioneered at MIT includes a “smart kit of parts” called Pixelframe. This system for modular concrete reuse allows building elements to be disassembled and rebuilt several times, aiding deconstruction and reuse while maintaining material efficiency and versatility.

Developed by the research team of Caitlin Mueller, associate director of the MIT Climate and Sustainability Consortium, Pixelframe is designed to accommodate a wide range of applications, from housing to warehouses, with each of its interlocking precast concrete modules, called Pixels, assigned a material passport to enable tracking through its many life cycles (an illustrative sketch of such a passport appears at the end of this article).

Mueller’s work demonstrates that circularity can work technically and logistically at the scale of the built environment — by designing specifically for disassembly, configuration, versatility, and upfront carbon and cost efficiency.

“This can be built today. This is building code-compliant today,” said Mueller of Pixelframe in a keynote speech at the recent MCSC Annual Symposium, which brought industry representatives and members of the MIT community together to discuss scalable solutions to climate and sustainability problems. “We currently have the potential for high-impact carbon reduction as a compelling alternative to the business-as-usual construction methods we are used to.”

Pixelframe was recently awarded a grant by the Massachusetts Clean Energy Center (MassCEC) to pursue commercialization, an important next step toward integrating innovations like this into a circular economy in practice.
“It’s MassCEC’s job to make sure that these climate leaders have the resources they need to turn their technologies into successful businesses that make a difference around the world,” said MassCEC CEO Emily Reichart in a press release.

Additional support for circular innovation has emerged thanks to a historic piece of climate legislation from the Biden administration: the Environmental Protection Agency recently awarded a federal grant on advancing steel reuse to Berglund-Brown — whose PhD thesis focuses on scaling the reuse of structural heavy-section steel — and John Ochsendorf, the Class of 1942 Professor of Civil and Environmental Engineering and Architecture at MIT.

“There is a lot of exciting upcoming work on this topic,” says Berglund-Brown. “To any practitioners reading this who are interested in getting involved — please reach out.”

The study is supported in part by the MIT Climate and Sustainability Consortium.
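As a coda, here is the illustrative sketch of a material passport referenced above: a minimal, hypothetical picture of how such a record might be structured in software. The class and field names, the example values, and the carbon-amortization method are all assumptions made for illustration, not Pixelframe’s actual data model.

```python
# A hypothetical sketch of a "material passport" for a reusable building
# component such as a Pixelframe concrete module ("Pixel"). Field names,
# values, and the amortization logic are illustrative assumptions only.
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class UseCycle:
    building: str               # project where the component was installed
    installed: int              # year of installation
    removed: int | None = None  # year of disassembly, if no longer in place

@dataclass
class MaterialPassport:
    component_id: str               # unique identifier for tracking
    material: str                   # e.g., "precast concrete"
    embodied_carbon_kg_co2e: float  # incurred once, at first manufacture
    history: list[UseCycle] = field(default_factory=list)

    def effective_carbon_per_use(self) -> float:
        """Each additional documented reuse amortizes the upfront carbon."""
        return self.embodied_carbon_kg_co2e / max(1, len(self.history))

# Example: one module reused across two buildings.
pixel = MaterialPassport("PX-0001", "precast concrete", 250.0)
pixel.history.append(UseCycle("Warehouse A", installed=2026, removed=2040))
pixel.history.append(UseCycle("Housing B", installed=2041))
print(f"{pixel.effective_carbon_per_use():.0f} kg CO2e per use cycle")
```

The design intuition: because a component’s embodied carbon is paid once at manufacture, every additional documented use cycle lowers its effective footprint per use, which is precisely the accounting that a material passport makes possible.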