More stories

  • MIT Climate and Energy Ventures class spins out entrepreneurs — and successful companies

    In 2014, a team of MIT students in course 15.366 (Climate and Energy Ventures) developed a plan to commercialize MIT research on how to move information between chips with light instead of electricity, reducing energy usage. After completing the class, which challenges students to identify early customers and pitch their business plan to investors, the team went on to win both grand prizes at the MIT Clean Energy Prize. Today the company, Ayar Labs, has raised a total of $370 million from a group including chip leaders AMD, Intel, and NVIDIA, to scale the manufacturing of its optical chip interconnects.

    Ayar Labs is one of many companies whose roots can be traced back to 15.366. In fact, more than 150 companies have been founded by alumni of the class since its founding in 2007.

    In the class, student teams select a technology or idea and determine the best path for its commercialization. The semester-long project, which is accompanied by lectures and mentoring, equips students with real-world experience in launching a business.

    “The goal is to educate entrepreneurs on how to start companies in the climate and energy space,” says Senior Lecturer Tod Hynes, who co-founded the course and has been teaching it since 2008. “We do that through hands-on experience. We require students to engage with customers, talk to potential suppliers, partners, investors, and to practice their pitches to learn from that feedback.”

    The class attracts hundreds of student applications each year. As one of the catalysts for MIT spinoffs, it is also one reason a 2015 report found that MIT alumni-founded companies had generated roughly $1.9 trillion in annual revenues. If MIT were a country, that figure would make it the 10th-largest economy in the world, according to the report.

    “‘Mens et manus’ (‘mind and hand’) is MIT’s motto, and the hands-on experience we try to provide in this class is hard to beat,” Hynes says. “When you actually go through the process of commercialization in the real world, you learn more and you’re in a better spot. That experiential learning approach really aligns with MIT’s approach.”

    Simulating a startup

    The course was started by Bill Aulet, a professor of the practice at the MIT Sloan School of Management and the managing director of the Martin Trust Center for MIT Entrepreneurship. After serving as an advisor the first year and helping Aulet launch the class, Hynes began teaching it with Aulet in the fall of 2008. The pair also launched the Climate and Energy Prize around the same time; the competition continues today and recently received over 150 applications from teams around the world.

    A core feature of the class is connecting students across different academic fields. Each year, organizers aim to enroll students with backgrounds in science, engineering, business, and policy.

    “The class is meant to be accessible to anybody at MIT,” Hynes says, noting the course has since also opened to students from Harvard University. “We’re trying to pull across disciplines.”

    The class quickly grew in popularity around campus. Over the last few years, the course has had about 150 students apply for 50 spots.

    “I mentioned Climate and Energy Ventures in my application to MIT,” says Chris Johnson, a second-year graduate student in the Leaders for Global Operations (LGO) program. “Coming into MIT, I was very interested in sustainability, and energy in particular, and also in startups. I had heard great things about the class, and I waited until my last semester to apply.”

    The course’s organizers select mostly graduate students, whom they prefer to be in the final year of their program so they can more easily continue working on the venture after the class is finished.

    “Whether or not students stick with the project from the class, it’s a great experience that will serve them in their careers,” says Jennifer Turliuk, the practice leader for climate and energy artificial intelligence at the Martin Trust Center for MIT Entrepreneurship, who helped teach the class this fall.

    Hynes describes the course as a venture-building simulation. Before it begins, organizers select up to 30 technologies and ideas that are at the right stage for commercialization. Students can also come into the class with ideas or technologies they want to work on.

    After a few weeks of introductions and lectures, students form multidisciplinary teams of about five and begin working through each of the 24 steps of building a startup described in Aulet’s book “Disciplined Entrepreneurship,” which include steps such as engaging with potential early customers, quantifying a value proposition, and establishing a business model. Everything builds toward a one-hour final presentation designed to simulate a pitch to investors or government officials.

    “It’s a lot of work, and because it’s a team-based project, your grade is highly dependent on your team,” Hynes says. “You also get graded by your team; that’s about 10 percent of your grade. We try to encourage people to be proactive and supportive teammates.”

    Students say the process is fast-paced but rewarding.

    “It’s definitely demanding,” says Sofie Netteberg, a graduate student who is also in the LGO program at MIT. “Depending on where you’re at with your technology, you can be moving very quickly. That’s the stage that I was in, which I found really engaging. We basically just had a lab technology, and it was like, ‘What do we do next?’ You also get a ton of support from the professors.”

    From the classroom to the world

    This fall’s final presentations took place at the headquarters of the MIT-affiliated venture firm The Engine, in front of an audience of professors, investors, members of foundations supporting entrepreneurship, and more.

    “We got to hear feedback from people who would be the real next step for the technology if the startup gets up and running,” says Johnson, whose team was commercializing a method for storing energy in concrete. “That was really valuable. We know that these are not only people we might see in the next month or the next funding rounds, but they’re also exactly the type of people that are going to give us the questions we should be thinking about. It was clarifying.”

    Throughout the semester, students treated the project like a real venture they’d be working on well beyond the length of the class.

    “No one’s really thinking about this class for the grade; it’s about the learning,” says Netteberg, whose team was encouraged to keep working on their electrolyzer technology designed to more efficiently produce green hydrogen. “We’re not stressed about getting an A. If we want to keep working on this, we want real feedback: What do you think we did well? What do we need to keep working on?”

    Hynes says several investors expressed interest in supporting the businesses coming out of the class. Moving forward, he hopes students embrace the test-bed environment his team has created for them and try bold new things.

    “People have been very pragmatic over the years, which is good, but also potentially limiting,” Hynes says. “This is also an opportunity to do something that’s a little further out there — something that has really big potential impact if it comes together. This is the time where students get to experiment, so why not try something big?”

  • Building resiliency

    Several years ago, the residents of a manufactured-home neighborhood in southeast suburban Houston, not far from the Buffalo Bayou, took a major step in dealing with climate problems: They bought the land under their homes. Then they installed better drainage and developed strategies to share expertise and tools for home repairs. The result? The neighborhood made it through Hurricane Harvey in 2017 and a winter freeze in 2021 without major damage.

    The neighborhood is part of a U.S. movement toward the Resident Owned Community (ROC) model for manufactured-home parks. Many people in manufactured homes — mobile homes — do not own the land under them. But if the residents of a manufactured-home park can form an ROC, they can take action to adapt to climate risks — and ease the threat of eviction. With an ROC, manufactured-home residents can be there to stay.

    That speaks to a larger issue: In cities, lower-income residents are often especially vulnerable to natural hazards such as flooding, extreme heat, and wildfire. But efforts aimed at helping cities as a whole withstand these disasters can lead to interventions that displace already-disadvantaged residents — by turning a low-lying neighborhood into a storm buffer, for instance.

    “The global climate crisis has very differential effects on cities, and neighborhoods within cities,” says Lawrence Vale, a professor of urban studies at MIT and co-author of a new book on the subject, “The Equitably Resilient City,” published by the MIT Press and co-authored with Zachary B. Lamb PhD ’18, an assistant professor at the University of California at Berkeley.

    In the book, the scholars delve into 12 case studies from around the globe that, they believe, have it both ways: Low- and middle-income communities have driven climate progress through tangible built projects, while also keeping people from being displaced, and indeed helping them participate in local governance and neighborhood decision-making.

    “We can either dive into despair about climate issues, or think they’re solvable and ask what it takes to succeed in a more equitable way,” says Vale, who is the Ford Professor of Urban Design and Planning at MIT. “This book is asking how people look at problems more holistically — to show how environmental impacts are integrated with their livelihoods, with feeling they can have security from displacement and feeling they’re not going to be displaced, with being empowered to share in the governance where they live.”

    As Lamb notes, “Pursuing equitable urban climate adaptation requires both changes in the physical built environment of cities and innovations in institutions and governance practices to address deep-seated causes of inequality.”

    Twelve projects, four elements

    Research for “The Equitably Resilient City” began with exploration of about 200 potential cases and ultimately focused on 12 projects from around the globe, including in the U.S., Brazil, Thailand, and France. Vale and Lamb, coordinating with locally based research teams, visited these diverse sites and conducted interviews in nine languages.

    All 12 projects work on multiple levels at once: They are steps toward environmental progress that also help local communities in civic and economic terms. The book uses the acronym LEGS (“livelihood, environment, governance, and security”) to encapsulate this need to make equitable progress on four different fronts.

    “Doing one of those things well is worth recognition, and doing all of them well is exciting,” Vale says. “It’s important to understand not just what these communities did, but how they did it and whose views were involved. These 12 cases are not a random sample. The book looks for people who are partially succeeding at difficult things in difficult circumstances.”

    One case study is set in São Paulo, Brazil, where low-income residents of a hilly favela benefited from new housing in the area on undeveloped land that is less prone to slides. In San Juan, Puerto Rico, residents of low-lying neighborhoods abutting a water channel formed a durable set of community groups to create a fairer solution to flooding: Although the channel needed to be re-widened, the local coalition insisted on limiting displacement, supporting local livelihoods, and improving environmental conditions and public space.

    “There is a backlash to older practices,” Vale says, referring to the large-scale urban planning and infrastructure projects of the mid-20th century, which often ignored community input. “People saw what happened during the urban renewal era and said, ‘You’re not going to do that to us again.’”

    Indeed, one through-line in “The Equitably Resilient City” is that cities, like all places, can be contested political terrain. Often, solid solutions emerge when local groups organize, advocate for new solutions, and eventually gain enough traction to enact them.

    “Every one of our examples and cases has probably 15 or 20 years of activity behind it, as well as engagements with a much deeper history,” Vale says. “They’re all rooted in a very often troubled [political] context. And yet these are places that have made progress possible.”

    Think locally, adapt anywhere

    Another motif of “The Equitably Resilient City” is that local progress matters greatly, for a few reasons — including the value of having communities develop projects that meet their own needs, based on their own input. Vale and Lamb are interested in projects even when they are very small in scale, and they devote one chapter of the book to the Paris OASIS program, which has developed a series of cleverly designed, heavily tree-dotted school playgrounds across Paris. These projects provide environmental education opportunities and help mitigate flooding and urban heat while adding CO2-harnessing greenery to the cityscape.

    An individual park, by itself, can only do so much, but the concept behind it can be adopted by anyone.

    “This book is mostly centered on local projects rather than national schemes,” Vale says. “The hope is they serve as an inspiration for people to adapt to their own situations.”

    After all, the urban geography and governance of places such as Paris or São Paulo differ widely. But efforts to make improvements to public open space or to well-located inexpensive housing stock apply in cities across the world.

    Similarly, the authors devote a chapter to work in the Cully neighborhood in Portland, Oregon, where community leaders have instituted a raft of urban environmental improvements while creating and preserving more affordable housing. The idea in the Cully area, as in all these cases, is to make places more resistant to climate change while enhancing them as good places to live for those already there.

    “Climate adaptation is going to mobilize enormous public and private resources to reshape cities across the globe,” Lamb notes. “These cases suggest pathways where those resources can make cities both more resilient in the face of climate change and more equitable. In fact, these projects show how making cities more equitable can be part of making them more resilient.”

    Other scholars have praised the book. Eric Klinenberg, director of New York University’s Institute for Public Knowledge, has called it “at once scholarly, constructive, and uplifting, a reminder that better, more just cities remain within our reach.”

    Vale also teaches some of the book’s concepts in his classes, finding that MIT students, wherever they are from, enjoy the idea of thinking creatively about climate resilience.

    “At MIT, students want to find ways of applying technical skills to urgent global challenges,” Vale says. “I do think there are many opportunities, especially at a time of climate crisis. We try to highlight some of the solutions that are out there. Give us an opportunity, and we’ll show you what a place can be.”

  • Toward sustainable decarbonization of aviation in Latin America

    According to the International Energy Agency, aviation accounts for about 2 percent of global carbon dioxide emissions, and aviation emissions are expected to double by mid-century as demand for domestic and international air travel rises. To sharply reduce emissions in alignment with the Paris Agreement’s long-term goal of keeping global warming below 1.5 degrees Celsius, the International Air Transport Association (IATA) has set a goal of achieving net-zero carbon emissions by 2050. Which raises the question: Are there technologically feasible and economically viable strategies to reach that goal within the next 25 years?

    To begin to address that question, a team of researchers at the MIT Center for Sustainability Science and Strategy (CS3) and the MIT Laboratory for Aviation and the Environment has spent the past year analyzing aviation decarbonization options in Latin America, where air travel is expected to more than triple by 2050 and thereby double today’s aviation-related emissions in the region.

    Chief among those options is the development and deployment of sustainable aviation fuel (SAF). Currently produced from low- and zero-carbon feedstocks, including municipal waste and non-food crops, and requiring practically no alteration of aircraft systems or refueling infrastructure, SAF has the potential to perform just as well as petroleum-based jet fuel with as little as 20 percent of its carbon footprint.

    Focused on Brazil, Chile, Colombia, Ecuador, Mexico, and Peru, the researchers assessed SAF feedstock availability, the costs of corresponding SAF pathways, and how SAF deployment would likely impact fuel use, prices, emissions, and aviation demand in each country. They also explored how efficiency improvements and market-based mechanisms could help the region reach its decarbonization targets. The team’s findings appear in a CS3 Special Report.

    SAF emissions, costs, and sources

    Under an ambitious emissions-mitigation scenario designed to cap global warming at 1.5 C and raise the rate of SAF use in Latin America to 65 percent by 2050, the researchers projected that aviation emissions would be reduced by about 60 percent in 2050 compared to a scenario in which existing climate policies are not strengthened. To achieve net-zero emissions by 2050, other measures would be required, such as improvements in operational and air traffic efficiencies, airplane fleet renewal, alternative forms of propulsion, and carbon offsets and removals.

    As of 2024, jet fuel prices in Latin America are around $0.70 per liter. Based on the current availability of feedstocks, the researchers projected SAF costs within the six countries studied to range from $1.11 to $2.86 per liter. They cautioned that increased fuel prices could affect the operating costs of the aviation sector and overall aviation demand unless strategies to manage price increases are implemented.

    Under the 1.5 C scenario, the total cumulative capital investment required to build new SAF-producing plants between 2025 and 2050 was estimated at $204 billion for the six countries (ranging from $5 billion in Ecuador to $84 billion in Brazil). The researchers identified sugarcane- and corn-based ethanol-to-jet fuel and palm oil- and soybean-based hydro-processed esters and fatty acids as the most promising feedstock sources in the near term for SAF production in Latin America.

    “Our findings show that SAF offers a significant decarbonization pathway, which must be combined with an economy-wide emissions mitigation policy that uses market-based mechanisms to offset the remaining emissions,” says Sergey Paltsev, lead author of the report, MIT CS3 deputy director, and senior research scientist at the MIT Energy Initiative.

    Recommendations

    The researchers concluded the report with recommendations for national policymakers and aviation industry leaders in Latin America.

    They stressed that government policy and regulatory mechanisms will be needed to create sufficient conditions to attract SAF investments in the region and make SAF commercially viable as the aviation industry decarbonizes operations. Without appropriate policy frameworks, SAF requirements will affect the cost of air travel. For fuel producers, stable, long-term-oriented policies and regulations will be needed to create robust supply chains, build demand for establishing economies of scale, and develop innovative pathways for producing SAF.

    Finally, the research team recommended region-wide collaboration in designing SAF policies. A unified decarbonization strategy among all countries in the region will help ensure competitiveness, economies of scale, and achievement of long-term carbon emissions-reduction goals.

    “Regional feedstock availability and costs make Latin America a potential major player in SAF production,” says Angelo Gurgel, a principal research scientist at MIT CS3 and co-author of the study. “SAF requirements, combined with government support mechanisms, will ensure sustainable decarbonization while enhancing the region’s connectivity and the ability of disadvantaged communities to access air transport.”

    Financial support for this study was provided by LATAM Airlines and Airbus.
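    The reported price figures imply a cost premium for SAF over conventional jet fuel that is easy to check with back-of-envelope arithmetic (the bounds below are derived from the report's numbers, not stated in it directly):

```latex
% SAF price premium relative to $0.70/L conventional jet fuel
\text{premium}_{\min} = \frac{\$1.11/\text{L}}{\$0.70/\text{L}} \approx 1.6\times,
\qquad
\text{premium}_{\max} = \frac{\$2.86/\text{L}}{\$0.70/\text{L}} \approx 4.1\times
```

    That 1.6x to 4.1x range is what makes the policy and market-based mechanisms discussed above central to SAF viability.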

  • The multifaceted challenge of powering AI

    Artificial intelligence has become vital in business and financial dealings, medical care, technology development, research, and much more. Without realizing it, consumers rely on AI when they stream a video, do online banking, or perform an online search. Behind these capabilities are more than 10,000 data centers globally, each one a huge warehouse containing thousands of computer servers and other infrastructure for storing, managing, and processing data. There are now over 5,000 data centers in the United States, and new ones are being built every day — in the U.S. and worldwide. Often dozens are clustered together right near where people live, attracted by policies that provide tax breaks and other incentives, and by what looks like abundant electricity.

    And data centers do consume huge amounts of electricity. U.S. data centers consumed more than 4 percent of the country’s total electricity in 2023, and by 2030 that fraction could rise to 9 percent, according to the Electric Power Research Institute. A single large data center can consume as much electricity as 50,000 homes.

    The sudden need for so many data centers presents a massive challenge to the technology and energy industries, government policymakers, and everyday consumers. Research scientists and faculty members at the MIT Energy Initiative (MITEI) are exploring multiple facets of this problem — from sourcing power to grid improvement to analytical tools that increase efficiency, and more. Data centers have quickly become the energy issue of our day.

    Unexpected demand brings unexpected solutions

    Several companies that use data centers to provide cloud computing and data management services are announcing some surprising steps to deliver all that electricity. Proposals include building their own small nuclear plants near their data centers and even restarting one of the undamaged nuclear reactors at Three Mile Island, which has been shuttered since 2019. (A different reactor at that plant partially melted down in 1979, causing the nation’s worst nuclear power accident.) Already the need to power AI is causing delays in the planned shutdown of some coal-fired power plants and raising prices for residential consumers. Meeting the needs of data centers is not only stressing power grids but also setting back the transition to clean energy needed to stop climate change.

    There are many aspects to the data center problem from a power perspective. Here are some that MIT researchers are focusing on, and why they’re important.

    An unprecedented surge in the demand for electricity

    “In the past, computing was not a significant user of electricity,” says William H. Green, director of MITEI and the Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering. “Electricity was used for running industrial processes and powering household devices such as air conditioners and lights, and more recently for powering heat pumps and charging electric cars. But now all of a sudden, electricity used for computing in general, and by data centers in particular, is becoming a gigantic new demand that no one anticipated.”

    Why the lack of foresight? Usually, demand for electric power increases by roughly half a percent per year, and utilities bring in new power generators and make other investments as needed to meet the expected new demand. But the data centers now coming online are creating unprecedented leaps in demand that operators didn’t see coming. In addition, the new demand is constant. It’s critical that a data center provide its services all day, every day. There can be no interruptions in processing large datasets, accessing stored data, and running the cooling equipment needed to keep all the packed-together computers churning away without overheating.

    Moreover, even if enough electricity is generated, getting it to where it’s needed may be a problem, explains Deepjyoti Deka, a MITEI research scientist. “A grid is a network-wide operation, and the grid operator may have sufficient generation at another location or even elsewhere in the country, but the wires may not have sufficient capacity to carry the electricity to where it’s wanted.” So transmission capacity must be expanded — and, says Deka, that’s a slow process.

    Then there’s the “interconnection queue.” Sometimes, adding either a new user (a “load”) or a new generator to an existing grid can cause instabilities or other problems for everyone else already on the grid. In that situation, bringing a new data center online may be delayed. Enough delays can result in new loads or generators having to stand in line and wait for their turn. Right now, much of the interconnection queue is already filled up with new solar and wind projects; the delay is now about five years. Meeting the demand from newly installed data centers while ensuring that the quality of service elsewhere is not hampered is a problem that needs to be addressed.

    Finding clean electricity sources

    To further complicate the challenge, many companies — including so-called “hyperscalers” such as Google, Microsoft, and Amazon — have made public commitments to achieve net-zero carbon emissions within the next 10 years. Many have been making strides toward their clean-energy goals by buying “power purchase agreements”: They sign a contract to buy electricity from, say, a solar or wind facility, sometimes providing funding for the facility to be built. But that approach to accessing clean energy has its limits when faced with the extreme electricity demand of a data center.

    Meanwhile, soaring power consumption is delaying coal plant closures in many states. There are simply not enough sources of renewable energy to serve both the hyperscalers and the existing users, including individual consumers. As a result, conventional plants fired by fossil fuels such as coal are needed more than ever.

    As the hyperscalers look for sources of clean energy for their data centers, one option could be to build their own wind and solar installations. But such facilities would generate electricity only intermittently. Given the need for uninterrupted power, a data center would have to maintain energy storage units, which are expensive. It could instead rely on natural gas or diesel generators for backup power — but those devices would need to be coupled with equipment to capture the carbon emissions, plus a nearby site for permanently disposing of the captured carbon.

    Because of such complications, several of the hyperscalers are turning to nuclear power. As Green notes, “Nuclear energy is well matched to the demand of data centers, because nuclear plants can generate lots of power reliably, without interruption.”

    In a much-publicized move in September, Microsoft signed a deal to buy power for 20 years after Constellation Energy reopens one of the undamaged reactors at its now-shuttered nuclear plant at Three Mile Island, the site of the 1979 accident. If approved by regulators, Constellation will bring that reactor online by 2028, with Microsoft buying all of the power it produces. Amazon also reached a deal to purchase power produced by another nuclear plant threatened with closure due to financial troubles. And in early December, Meta released a request for proposals to identify nuclear energy developers to help the company meet its AI needs and its sustainability goals.

    Other nuclear news focuses on small modular nuclear reactors (SMRs): factory-built, modular power plants that could be installed near data centers, potentially without the cost overruns and delays often experienced in building large plants. Google recently ordered a fleet of SMRs to generate the power needed by its data centers. The first one will be completed by 2030 and the remainder by 2035.

    Some hyperscalers are betting on new technologies. For example, Google is pursuing next-generation geothermal projects, and Microsoft has signed a contract to purchase electricity from a startup’s fusion power plant beginning in 2028 — even though the fusion technology hasn’t yet been demonstrated.

    Reducing electricity demand

    Other approaches to providing sufficient clean electricity focus on making the data center and the operations it houses more energy-efficient, so as to perform the same computing tasks using less power. Using faster computer chips and optimizing algorithms to use less energy are already helping to reduce the load, and also the heat generated.

    Another idea being tried involves shifting computing tasks to times and places where carbon-free energy is available on the grid. Deka explains: “If a task doesn’t have to be completed immediately, but rather by a certain deadline, can it be delayed or moved to a data center elsewhere in the U.S. or overseas where electricity is more abundant, cheaper, and/or cleaner? This approach is known as ‘carbon-aware computing.’” We’re not yet sure whether every task can be moved or delayed easily, says Deka. “If you think of a generative AI-based task, can it easily be separated into small tasks that can be taken to different parts of the country, solved using clean energy, and then be brought back together? What is the cost of doing this kind of division of tasks?”

    That approach is, of course, limited by the problem of the interconnection queue. It’s difficult to access clean energy in another region or state. But efforts are under way to ease the regulatory framework to make sure that critical interconnections can be developed more quickly and easily.

    What about the neighbors?

    A major concern running through all the options for powering data centers is the impact on residential energy consumers. When a data center comes into a neighborhood, there are not only aesthetic concerns but also more practical worries. Will the local electricity service become less reliable? Where will the new transmission lines be located? And who will pay for the new generators, upgrades to existing equipment, and so on? When new manufacturing facilities or industrial plants go into a neighborhood, the downsides are generally offset by the availability of new jobs. Not so with a data center, which may require just a couple dozen employees.

    There are standard rules about how maintenance and upgrade costs are shared and allocated. But the presence of a new data center changes the situation entirely. As a result, utilities now need to rethink their traditional rate structures so as not to place an undue burden on residents to pay for the infrastructure changes needed to host data centers.

    MIT’s contributions

    At MIT, researchers are thinking about and exploring a range of options for tackling the problem of providing clean power to data centers. For example, they are investigating architectural designs that will use natural ventilation to facilitate cooling, equipment layouts that will permit better airflow and power distribution, and highly energy-efficient air conditioning systems based on novel materials. They are creating new analytical tools for evaluating the impact of data center deployments on the U.S. power system and for finding the most efficient ways to provide the facilities with clean energy. Other work looks at how to match the output of small nuclear reactors to the needs of a data center, and how to speed up the construction of such reactors.

    MIT teams also focus on determining the best sources of backup power and long-duration storage, and on developing decision support systems for locating proposed new data centers, taking into account the availability of electric power and water, regulatory considerations, and even the potential for using what can be significant waste heat (for example, for heating nearby buildings). Technology development projects include designing faster, more efficient computer chips and more energy-efficient computing algorithms.

    In addition to providing leadership and funding for many research projects, MITEI is acting as a convener, bringing together companies and stakeholders to address this issue. At MITEI’s 2024 Annual Research Conference, a panel of representatives from two hyperscalers and two companies that design and construct data centers discussed their challenges, possible solutions, and where MIT research could be most beneficial.

    As data centers continue to be built, and computing continues to create an unprecedented increase in demand for electricity, Green says, scientists and engineers are in a race to provide the ideas, innovations, and technologies that can meet this need, and at the same time continue to advance the transition to a decarbonized energy system.
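    To put the "50,000 homes" comparison above into grid terms, a rough estimate (assuming the U.S. Energy Information Administration's figure of roughly 10,500 kWh of electricity per household per year, which is not stated in the article) gives the average power draw of such a facility:

```latex
% Annual energy: 50,000 homes at ~10,500 kWh/yr each
E \approx 50{,}000 \times 10{,}500\ \text{kWh/yr} \approx 5.25 \times 10^{8}\ \text{kWh/yr}

% Average power: divide by 8,760 hours per year
P_{\text{avg}} \approx \frac{5.25 \times 10^{8}\ \text{kWh}}{8{,}760\ \text{h}} \approx 60\ \text{MW}
```

    A continuous load on the order of tens of megawatts, running around the clock, is why a single facility can strain local transmission capacity.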

  • in

    For clean ammonia, MIT engineers propose going underground

    Ammonia is the most widely produced chemical in the world today, used primarily as a source for nitrogen fertilizer. Its production is also a major source of greenhouse gas emissions — the highest in the whole chemical industry.

    Now, a team of researchers at MIT has developed an innovative way of making ammonia without the usual fossil-fuel-powered chemical plants that require high heat and pressure. Instead, they have found a way to use the Earth itself as a geochemical reactor, producing ammonia underground. The process uses Earth’s naturally occurring heat and pressure, provided free of charge and free of emissions, as well as the reactivity of minerals already present in the ground.

    The trick the team devised is to inject water underground, into an area of iron-rich subsurface rock. The water carries with it a source of nitrogen and particles of a metal catalyst, allowing the water to react with the iron to generate clean hydrogen, which in turn reacts with the nitrogen to make ammonia. A second well is then used to pump that ammonia up to the surface.

    The process, which has been demonstrated in the lab but not yet in a natural setting, is described today in the journal Joule. The paper’s co-authors are MIT professors of materials science and engineering Iwnetim Abate and Ju Li, graduate student Yifan Gao, and five others at MIT.

    “When I first produced ammonia from rock in the lab, I was so excited,” Gao recalls. “I realized this represented an entirely new and never-reported approach to ammonia synthesis.”

    The standard method for making ammonia is called the Haber-Bosch process, which was developed in Germany in the early 20th century to replace natural sources of nitrogen fertilizer such as mined deposits of bat guano, which were becoming depleted. But the Haber-Bosch process is very energy intensive: It requires temperatures of 400 degrees Celsius and pressures of 200 atmospheres, and this means it needs huge installations in order to be efficient.
    Some areas of the world, such as sub-Saharan Africa and Southeast Asia, have few or no such plants in operation. As a result, the shortage or extremely high cost of fertilizer in these regions has limited their agricultural production.

    The Haber-Bosch process “is good. It works,” Abate says. “Without it, we wouldn’t have been able to feed 2 out of the total 8 billion people in the world right now,” he says, referring to the portion of the world’s population whose food is grown with ammonia-based fertilizers. But because of the emissions and energy demands, a better process is needed, he says.

    Burning fuel to generate heat is responsible for about 20 percent of the greenhouse gases emitted from plants using the Haber-Bosch process. Making hydrogen accounts for the remaining 80 percent. But ammonia, the molecule NH3, is made up only of nitrogen and hydrogen. There’s no carbon in the formula, so where do the carbon emissions come from? The standard way of producing the needed hydrogen is by processing methane gas with steam, breaking down the gas into pure hydrogen, which gets used, and carbon dioxide gas that gets released into the air.

    Other processes exist for making low- or no-emissions hydrogen, such as by using solar or wind-generated electricity to split water into oxygen and hydrogen, but that process can be expensive. That’s why Abate and his team worked on developing a system to produce what they call geological hydrogen. Some places in the world, including some in Africa, have been found to naturally generate hydrogen underground through chemical reactions between water and iron-rich rocks.
    These pockets of naturally occurring hydrogen can be mined, just like natural methane reservoirs, but the extent and locations of such deposits are still relatively unexplored.

    Abate realized this process could be created or enhanced by pumping water, laced with copper and nickel catalyst particles to speed up the process, into the ground in places where such iron-rich rocks were already present. “We can use the Earth as a factory to produce clean flows of hydrogen,” he says.

    He recalls thinking about the problem of the emissions from hydrogen production for ammonia: “The ‘aha!’ moment for me was thinking, how about we link this process of geological hydrogen production with the process of making Haber-Bosch ammonia?”

    That would solve the biggest problem of the underground hydrogen production process, which is how to capture and store the gas once it’s produced. Hydrogen is a very tiny molecule — the smallest of them all — and hard to contain. But by implementing the entire Haber-Bosch process underground, the only material that would need to be sent to the surface would be the ammonia itself, which is easy to capture, store, and transport.

    The only extra ingredient needed to complete the process was the addition of a source of nitrogen, such as nitrate or nitrogen gas, into the water-catalyst mixture being injected into the ground. Then, as the hydrogen gets released from water molecules after interacting with the iron-rich rocks, it can immediately bond with the nitrogen atoms also carried in the water, with the deep underground environment providing the high temperatures and pressures required by the Haber-Bosch process.
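    The chemistry described above can be summarized with three textbook reactions (a sketch: the actual iron-bearing mineral phases and catalysts vary by site, and FeO here stands in for whatever ferrous phase is present):

```latex
% Conventional hydrogen production (steam methane reforming plus
% water-gas shift, overall) -- the source of the CO2 emissions:
\mathrm{CH_4 + 2\,H_2O \rightarrow CO_2 + 4\,H_2}

% Underground hydrogen generation from iron-rich rock
% (illustrative ferrous-oxide example):
\mathrm{3\,FeO + H_2O \rightarrow Fe_3O_4 + H_2}

% Ammonia synthesis (Haber-Bosch):
\mathrm{N_2 + 3\,H_2 \rightarrow 2\,NH_3}
```

    The first reaction is why conventional ammonia carries a carbon footprint even though NH3 itself contains no carbon; the team’s approach replaces it with the second, carbon-free reaction occurring in the subsurface.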
    A second well near the injection well then pumps the ammonia out and into tanks on the surface.

    “We call this geological ammonia,” Abate says, “because we are using subsurface temperature, pressure, chemistry, and geologically existing rocks to produce ammonia directly.”

    Whereas transporting hydrogen requires expensive equipment to cool and liquefy it, and virtually no pipelines exist for its transport (except near oil refinery sites), transporting ammonia is easier and cheaper. It’s about one-sixth the cost of transporting hydrogen, and there are already more than 5,000 miles of ammonia pipelines and 10,000 terminals in place in the U.S. alone. What’s more, Abate explains, ammonia, unlike hydrogen, already has a substantial commercial market in place, with production volume projected to grow by two to three times by 2050, as it is used not only for fertilizer but also as feedstock for a wide variety of chemical processes.

    For example, ammonia can be burned directly in gas turbines, engines, and industrial furnaces, providing a carbon-free alternative to fossil fuels. It is being explored for maritime shipping and aviation as an alternative fuel, and as a possible space propellant.

    Another upside to geological ammonia is that untreated wastewater, including agricultural runoff, which tends to be rich in nitrogen already, could serve as the water source and be treated in the process. “We can tackle the problem of treating wastewater, while also making something of value out of this waste,” Abate says.

    Gao adds that this process “involves no direct carbon emissions, presenting a potential pathway to reduce global CO2 emissions by up to 1 percent.” To arrive at this point, he says, the team “overcame numerous challenges and learned from many failed attempts.
    For example, we tested a wide range of conditions and catalysts before identifying the most effective one.”

    The project was seed-funded under a flagship project of MIT’s Climate Grand Challenges program, the Center for the Electrification and Decarbonization of Industry. Professor Yet-Ming Chiang, co-director of the center, says, “I don’t think there’s been any previous example of deliberately using the Earth as a chemical reactor. That’s one of the key novel points of this approach.”

    Chiang emphasizes that even though it is a geological process, it happens very fast, not on geological timescales. “The reaction is fundamentally over in a matter of hours,” he says. “The reaction is so fast that this answers one of the key questions: Do you have to wait for geological times? And the answer is absolutely no.”

    Professor Elsa Olivetti, a mission director of the newly established Climate Project at MIT, says, “The creative thinking by this team is invaluable to MIT’s ability to have impact at scale. Coupling these exciting results with, for example, advanced understanding of the geology surrounding hydrogen accumulations represents the whole-of-Institute efforts the Climate Project aims to support.”

    “This is a significant breakthrough for the future of sustainable development,” says Geoffrey Ellis, a geologist at the U.S. Geological Survey, who was not associated with this work. He adds, “While there is clearly more work that needs to be done to validate this at the pilot stage and to get this to the commercial scale, the concept that has been demonstrated is truly transformative. The approach of engineering a system to optimize the natural process of nitrate reduction by Fe2+ is ingenious and will likely lead to further innovations along these lines.”

    The initial work on the process has been done in the laboratory, so the next step will be to prove the process using a real underground site.
    “We think that kind of experiment can be done within the next one to two years,” Abate says. This could open doors to using a similar approach for other chemical production processes, he adds.

    The team has applied for a patent and aims to work toward bringing the process to market.

    “Moving forward,” Gao says, “our focus will be on optimizing the process conditions and scaling up tests, with the goal of enabling practical applications for geological ammonia in the near future.”

    The research team also included Ming Lei, Bachu Sravan Kumar, Hugh Smith, Seok Hee Han, and Lokesh Sangabattula, all at MIT. Additional funding was provided by the National Science Foundation, and the work was carried out, in part, through the use of MIT.nano facilities.

  • in

    Q&A: The climate impact of generative AI

    Vijay Gadepally, a senior staff member at MIT Lincoln Laboratory, leads a number of projects at the Lincoln Laboratory Supercomputing Center (LLSC) to make computing platforms, and the artificial intelligence systems that run on them, more efficient. Here, Gadepally discusses the increasing use of generative AI in everyday tools, its hidden environmental impact, and some of the ways that Lincoln Laboratory and the greater AI community can reduce emissions for a greener future.

    Q: What trends are you seeing in terms of how generative AI is being used in computing?

    A: Generative AI uses machine learning (ML) to create new content, like images and text, based on data that is inputted into the ML system. At the LLSC we design and build some of the largest academic computing platforms in the world, and over the past few years we’ve seen an explosion in the number of projects that need access to high-performance computing for generative AI. We’re also seeing how generative AI is changing all sorts of fields and domains — for example, ChatGPT is already influencing the classroom and the workplace faster than regulations can seem to keep up.

    We can imagine all sorts of uses for generative AI within the next decade or so, like powering highly capable virtual assistants, developing new drugs and materials, and even improving our understanding of basic science.
    We can’t predict everything that generative AI will be used for, but I can certainly say that with more and more complex algorithms, their compute, energy, and climate impact will continue to grow very quickly.

    Q: What strategies is the LLSC using to mitigate this climate impact?

    A: We’re always looking for ways to make computing more efficient, as doing so helps our data center make the most of its resources and allows our scientific colleagues to push their fields forward in as efficient a manner as possible.

    As one example, we’ve been reducing the amount of power our hardware consumes by making simple changes, similar to dimming or turning off lights when you leave a room. In one experiment, we reduced the energy consumption of a group of graphics processing units by 20 percent to 30 percent, with minimal impact on their performance, by enforcing a power cap. This technique also lowered the hardware operating temperatures, making the GPUs easier to cool and longer lasting.

    Another strategy is changing our behavior to be more climate-aware. At home, some of us might choose to use renewable energy sources or intelligent scheduling. We are using similar techniques at the LLSC — such as training AI models when temperatures are cooler, or when local grid energy demand is low.

    We also realized that a lot of the energy spent on computing is often wasted, like how a water leak increases your bill but without any benefits to your home. We developed some new techniques that allow us to monitor computing workloads as they are running and then terminate those that are unlikely to yield good results. Surprisingly, in a number of cases we found that the majority of computations could be terminated early without compromising the end result.

    Q: What’s an example of a project you’ve done that reduces the energy output of a generative AI program?

    A: We recently built a climate-aware computer vision tool.
    Computer vision is a domain that’s focused on applying AI to images; so, differentiating between cats and dogs in an image, correctly labeling objects within an image, or looking for components of interest within an image.

    In our tool, we included real-time carbon telemetry, which produces information about how much carbon is being emitted by our local grid as a model is running. Depending on this information, our system will automatically switch to a more energy-efficient version of the model, which typically has fewer parameters, in times of high carbon intensity, or a much higher-fidelity version of the model in times of low carbon intensity.

    By doing this, we saw a nearly 80 percent reduction in carbon emissions over a one- to two-day period. We recently extended this idea to other generative AI tasks such as text summarization and found the same results. Interestingly, the performance sometimes improved after using our technique!

    Q: What can we do as consumers of generative AI to help mitigate its climate impact?

    A: As consumers, we can ask our AI providers to offer greater transparency. For example, on Google Flights, I can see a variety of options that indicate a specific flight’s carbon footprint. We should be getting similar kinds of measurements from generative AI tools so that we can make a conscious decision on which product or platform to use based on our priorities.

    We can also make an effort to be more educated on generative AI emissions in general. Many of us are familiar with vehicle emissions, and it can help to talk about generative AI emissions in comparative terms.
    People may be surprised to know, for example, that one image-generation task is roughly equivalent to driving four miles in a gas car, or that it takes the same amount of energy to charge an electric car as it does to generate about 1,500 text summarizations. There are many cases where customers would be happy to make a trade-off if they knew the trade-off’s impact.

    Q: What do you see for the future?

    A: Mitigating the climate impact of generative AI is one of those problems that people all over the world are working on, and with a similar goal. We’re doing a lot of work here at Lincoln Laboratory, but it’s only scratching the surface. In the long term, data centers, AI developers, and energy grids will need to work together to provide “energy audits” to uncover other unique ways that we can improve computing efficiencies. We need more partnerships and more collaboration in order to forge ahead.

    If you’re interested in learning more, or collaborating with Lincoln Laboratory on these efforts, please contact Vijay Gadepally.
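    The carbon-aware model switching that Gadepally describes can be sketched in a few lines. This is a minimal illustration, not the LLSC implementation: the function name, the 300 gCO2/kWh threshold, and the two model labels are all assumptions, and a real system would read carbon intensity from a live grid-telemetry feed.

```python
# Sketch of carbon-aware model selection: pick a smaller, more
# energy-efficient model when grid power is carbon-intensive, and a
# higher-fidelity model when the grid is clean. All names and the
# threshold are illustrative assumptions.

def choose_model(carbon_intensity_gco2_per_kwh: float,
                 threshold: float = 300.0) -> str:
    """Return which model variant to run for the current grid mix.

    At or above the threshold (dirty power), fall back to the small,
    efficient model; below it, use the large, high-fidelity model.
    """
    if carbon_intensity_gco2_per_kwh >= threshold:
        return "small-efficient-model"
    return "large-high-fidelity-model"

# A high-carbon hour favors the efficient variant...
assert choose_model(450.0) == "small-efficient-model"
# ...while a low-carbon hour favors the accurate one.
assert choose_model(120.0) == "large-high-fidelity-model"
```

    In practice the switching policy could also weigh task urgency or accuracy requirements; the point is simply that the decision is driven by real-time carbon telemetry rather than a fixed schedule.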

    Play video

    Video: MIT Lincoln Laboratory

  • in

    Study shows how households can cut energy costs

    Many people around the globe are living in energy poverty, meaning they spend at least 8 percent of their annual household income on energy. Addressing this problem is not simple, but an experiment by MIT researchers shows that giving people better data about their energy use, plus some coaching on the subject, can lead them to substantially reduce their consumption and costs.

    The experiment, based in Amsterdam, resulted in households cutting their energy expenses in half, on aggregate — a savings big enough to move three-quarters of them out of energy poverty.

    “Our energy coaching project as a whole showed a 75 percent success rate at alleviating energy poverty,” says Joseph Llewellyn, a researcher with MIT’s Senseable City Lab and co-author of a newly published paper detailing the experiment’s results.

    “Energy poverty afflicts families all over the world. With empirical evidence on which policies work, governments could focus their efforts more effectively,” says Fábio Duarte, associate director of MIT’s Senseable City Lab, and another co-author of the paper.

    The paper, “Assessing the impact of energy coaching with smart technology interventions to alleviate energy poverty,” appears today in Nature Scientific Reports. The authors are Llewellyn, who is also a researcher at the Amsterdam Institute for Advanced Metropolitan Solutions (AMS) and the KTH Royal Institute of Technology in Stockholm; Titus Venverloo, a research fellow at the MIT Senseable City Lab and AMS; Fábio Duarte, who is also a principal researcher at MIT’s Senseable City Lab; Carlo Ratti, director of the Senseable City Lab; Cecilia Katzeff; Fredrik Johansson; and Daniel Pargman of the KTH Royal Institute of Technology.

    The researchers developed the study after engaging with city officials in Amsterdam. In the Netherlands, about 550,000 households, or 7 percent of the population, are considered to be in energy poverty; in the European Union, that figure is about 50 million.
    In the U.S., separate research has shown that about three in 10 households report trouble paying energy bills.

    To conduct the experiment, the researchers ran two versions of an energy coaching intervention. In one version, 67 households received one report on their energy usage, along with coaching about how to increase energy efficiency. In the other version, 50 households received those things as well as a smart device giving them real-time updates on their energy consumption. (All households also received some modest energy-savings improvements at the outset, such as additional insulation.)

    Across the two groups, homes typically reduced monthly consumption of electricity by 33 percent and gas by 42 percent. They lowered their bills by 53 percent, on aggregate, and the percentage of income they spent on energy dropped from 10.1 percent to 5.3 percent.

    What were these households doing differently? Some of the biggest behavioral changes included things such as only heating rooms that were in use and unplugging devices not being used. Both of those changes save energy, but their benefits were not always understood by residents before they received energy coaching.

    “The range of energy literacy was quite wide from one home to the next,” Llewellyn says. “And when I went somewhere as an energy coach, it was never to moralize about energy use. I never said, ‘Oh, you’re using way too much.’ It was always working on it with the households, depending on what people need for their homes.”

    Intriguingly, the homes receiving the small devices that displayed real-time energy data only tended to use them for three or four weeks following a coaching visit. After that, people seemed to lose interest in very frequent monitoring of their energy use.
    And yet, a few weeks of consulting the devices tended to be long enough to get people to change their habits in a lasting way.

    “Our research shows that smart devices need to be accompanied by a close understanding of what drives families to change their behaviors,” Venverloo says.

    As the researchers acknowledge, working with consumers to reduce their energy consumption is just one way to help people escape energy poverty. Other “structural” factors that can help include lower energy prices and more energy-efficient buildings.

    On the latter note, the current paper has given rise to a new experiment Llewellyn is developing with Amsterdam officials, to examine the benefits of retrofitting residential buildings to lower energy costs. In that case, local policymakers are trying to work out how to fund the retrofitting in such a way that landlords do not simply pass those costs on to tenants.

    “We don’t want a household to save money on their energy bills if it also means the rent increases, because then we’ve just displaced expenses from one item to another,” Llewellyn says.

    Households can also invest in products like better insulation themselves, for windows or heating components, although for low-income households, finding the money to pay for such things may not be trivial. That is especially the case, Llewellyn suggests, because energy costs can seem “invisible” and a lower priority than feeding and clothing a family.

    “It’s a big upfront cost for a household that does not have 100 euros to spend,” Llewellyn says. Compared to paying for other necessities, he notes, “Energy is often the thing that tends to fall last on their list. Energy is always going to be this invisible thing that hides behind the walls, and it’s not easy to change that.”
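    The study’s headline numbers are easy to sanity-check. The sketch below applies the 8 percent energy-poverty threshold and the reported drop in energy’s share of income (10.1 percent to 5.3 percent) to a hypothetical household; the 30,000-euro income is an illustrative assumption, not a figure from the paper.

```python
# Back-of-the-envelope check of the energy-poverty figures reported above.
# The income is an illustrative assumption; the percentages come from the study.

ENERGY_POVERTY_THRESHOLD = 0.08  # spending >= 8% of income on energy

income = 30_000.0        # euros/year (hypothetical household)
share_before = 0.101     # 10.1% of income spent on energy before coaching
share_after = 0.053      # 5.3% after coaching

spend_before = income * share_before  # 3030.0 euros/year
spend_after = income * share_after    # 1590.0 euros/year

def in_energy_poverty(income: float, energy_spend: float) -> bool:
    """A household is in energy poverty if energy takes >= 8% of income."""
    return energy_spend / income >= ENERGY_POVERTY_THRESHOLD

# Before coaching this household is in energy poverty; afterward it is not.
assert in_energy_poverty(income, spend_before)
assert not in_energy_poverty(income, spend_after)

# The drop in income share (1 - 5.3/10.1, about 47.5%) is roughly half,
# consistent with the aggregate bill reduction of 53% reported in the study.
reduction = 1 - share_after / share_before
assert 0.45 < reduction < 0.50
```

    Note that the share-of-income drop and the aggregate bill drop differ slightly because the former is a ratio of percentages while the latter is computed over total euros across all households.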

  • in

    Designing tiny filters to solve big problems

    For many industrial processes, the typical way to separate gases, liquids, or ions is with heat, using slight differences in boiling points to purify mixtures. These thermal processes account for roughly 10 percent of the energy use in the United States.

    MIT chemical engineer Zachary Smith wants to reduce costs and carbon footprints by replacing these energy-intensive processes with highly efficient filters that can separate gases, liquids, and ions at room temperature.

    In his lab at MIT, Smith is designing membranes with tiny pores that can filter tiny molecules based on their size. These membranes could be useful for purifying biogas, capturing carbon dioxide from power plant emissions, or generating hydrogen fuel.

    “We’re taking materials that have unique capabilities for separating molecules and ions with precision, and applying them to applications where the current processes are not efficient, and where there’s an enormous carbon footprint,” says Smith, an associate professor of chemical engineering.

    Smith and several former students have founded a company called Osmoses that is working toward developing these materials for large-scale use in gas purification. Removing the need for high temperatures in these widespread industrial processes could have a significant impact on energy consumption, potentially reducing it by as much as 90 percent.

    “I would love to see a world where we could eliminate thermal separations, and where heat is no longer a problem in creating the things that we need and producing the energy that we need,” Smith says.

    Hooked on research

    As a high school student, Smith was drawn to engineering but didn’t have many engineering role models. Both of his parents were physicians, and they always encouraged him to work hard in school.

    “I grew up without knowing many engineers, and certainly no chemical engineers. But I knew that I really liked seeing how the world worked.
    I was always fascinated by chemistry and seeing how mathematics helped to explain this area of science,” recalls Smith, who grew up near Harrisburg, Pennsylvania. “Chemical engineering seemed to have all those things built into it, but I really had no idea what it was.”

    At Penn State University, Smith worked with a professor named Henry “Hank” Foley on a research project designing carbon-based materials to create a “molecular sieve” for gas separation. Through a time-consuming and iterative layering process, he created a sieve that could purify oxygen and nitrogen from air.

    “I kept adding more and more coatings of a special material that I could subsequently carbonize, and eventually I started to get selectivity. In the end, I had made a membrane that could sieve molecules that only differed by 0.18 angstrom in size,” he says. “I got hooked on research at that point, and that’s what led me to do more things in the area of membranes.”

    After graduating from college in 2008, Smith pursued graduate studies in chemical engineering at the University of Texas at Austin. There, he continued developing membranes for gas separation, this time using a different class of materials — polymers. By controlling polymer structure, he was able to create films with pores that filter out specific molecules, such as carbon dioxide or other gases.

    “Polymers are a type of material that you can actually form into big devices that can integrate into world-class chemical plants. So, it was exciting to see that there was a scalable class of materials that could have a real impact on addressing questions related to CO2 and other energy-efficient separations,” Smith says.

    After finishing his PhD, he decided he wanted to learn more chemistry, which led him to a postdoctoral fellowship at the University of California at Berkeley.

    “I wanted to learn how to make my own molecules and materials.
    I wanted to run my own reactions and do it in a more systematic way,” he says.

    At Berkeley, he learned how to make compounds called metal-organic frameworks (MOFs) — cage-like molecules that have potential applications in gas separation and many other fields. He also realized that while he enjoyed chemistry, he was definitely a chemical engineer at heart.

    “I learned a ton when I was there, but I also learned a lot about myself,” he says. “As much as I love chemistry, work with chemists, and advise chemists in my own group, I’m definitely a chemical engineer, really focused on the process and application.”

    Solving global problems

    While interviewing for faculty jobs, Smith found himself drawn to MIT because of the mindset of the people he met.

    “I began to realize not only how talented the faculty and the students were, but the way they thought was very different than other places I had been,” he says. “It wasn’t just about doing something that would move their field a little bit forward. They were actually creating new fields. There was something inspirational about the type of people that ended up at MIT who wanted to solve global problems.”

    In his lab at MIT, Smith is now tackling some of those global problems, including water purification, critical element recovery, renewable energy, battery development, and carbon sequestration.

    In a close collaboration with Yan Xia, a professor at Stanford University, Smith recently developed gas separation membranes that incorporate a novel type of polymer known as “ladder polymers,” which are currently being scaled for deployment at his startup.
    Historically, using polymers for gas separation has been limited by a tradeoff between permeability and selectivity — that is, membranes that permit a faster flow of gases through the membrane tend to be less selective, allowing impurities to get through.

    Using ladder polymers, which consist of double strands connected by rung-like bonds, the researchers were able to create gas separation membranes that are both highly permeable and very selective. The boost in permeability — a 100- to 1,000-fold improvement over earlier materials — could enable membranes to replace some of the high-energy techniques now used to separate gases, Smith says.

    “This allows you to envision large-scale industrial problems solved with miniaturized devices,” he says. “If you can really shrink down the system, then the solutions we’re developing in the lab could easily be applied to big industries like the chemicals industry.”

    These developments and others have been part of a number of advancements made by collaborators, students, postdocs, and researchers who are part of Smith’s team.

    “I have a great research team of talented and hard-working students and postdocs, and I get to teach on topics that have been instrumental in my own professional career,” Smith says. “MIT has been a playground to explore and learn new things. I am excited for what my team will discover next, and grateful for an opportunity to help solve many important global problems.”