More stories


    Taking the “training wheels” off clean energy

    Renewable power sources have seen unprecedented levels of investment in recent years. But with political uncertainty clouding the future of subsidies for green energy, these technologies must begin to compete with fossil fuels on equal footing, said participants at the 2025 MIT Energy Conference.

    “What these technologies need less is training wheels, and more of a level playing field,” said Brian Deese, an MIT Institute Innovation Fellow, during a conference-opening keynote panel.

    The theme of the two-day conference, which is organized each year by MIT students, was “Breakthrough to deployment: Driving climate innovation to market.” Speakers largely expressed optimism about advancements in green technology, balanced by occasional notes of alarm about a rapidly changing regulatory and political environment.

    Deese defined what he called “the good, the bad, and the ugly” of the current energy landscape. The good: Clean energy investment in the United States hit an all-time high of $272 billion in 2024. The bad: Announcements of future investments have tailed off. And the ugly: Macro conditions are making it more difficult for utilities and private enterprise to build out the clean energy infrastructure needed to meet growing energy demands.

    “We need to build massive amounts of energy capacity in the United States,” Deese said. “And the three things that are the most allergic to building are high uncertainty, high interest rates, and high tariff rates. So that’s kind of ugly. But the question … is how, and in what ways, that underlying commercial momentum can drive through this period of uncertainty.”

    A shifting clean energy landscape

    During a panel on artificial intelligence and growth in electricity demand, speakers said that the technology may serve as a catalyst for green energy breakthroughs, in addition to putting strain on existing infrastructure.

    “Google is committed to building digital infrastructure responsibly, and part of that means catalyzing the development of clean energy infrastructure that is not only meeting the AI need, but also benefiting the grid as a whole,” said Lucia Tian, head of clean energy and decarbonization technologies at Google.

    Across the two days, speakers emphasized that the cost-per-unit and scalability of clean energy technologies will ultimately determine their fate. But they also acknowledged the impact of public policy, as well as the need for government investment to tackle large-scale issues like grid modernization.

    Vanessa Chan, a former U.S. Department of Energy (DoE) official and current vice dean of innovation and entrepreneurship at the University of Pennsylvania School of Engineering and Applied Sciences, warned of the “knock-on” effects of the move to slash National Institutes of Health (NIH) funding for indirect research costs, for example. “In reality, what you’re doing is undercutting every single academic institution that does research across the nation,” she said.

    During a panel titled “No clean energy transition without transmission,” Maria Robinson, former director of the DoE’s Grid Deployment Office, said that ratepayers alone will likely not be able to fund the grid upgrades needed to meet growing power demand. “The amount of investment we’re going to need over the next couple of years is going to be significant,” she said. “That’s where the federal government is going to have to play a role.”

    David Cohen-Tanugi, a clean energy venture builder at MIT, noted that extreme weather events have changed the climate change conversation in recent years. “There was a narrative 10 years ago that said … if we start talking about resilience and adaptation to climate change, we’re kind of throwing in the towel or giving up,” he said. “I’ve noticed a very big shift in the investor narrative, the startup narrative, and more generally, the public consciousness. There’s a realization that the effects of climate change are already upon us.”

    “Everything on the table”

    The conference featured panels and keynote addresses on a range of emerging clean energy technologies, including hydrogen power, geothermal energy, and nuclear fusion, as well as a session on carbon capture.

    Alex Creely, a chief engineer at Commonwealth Fusion Systems, explained that fusion (the combining of small atoms into larger atoms, which is the same process that fuels stars) is safer and potentially more economical than traditional nuclear power. Fusion facilities, he said, can be powered down instantaneously, and companies like his are developing new, less-expensive magnet technology to contain the extreme heat produced by fusion reactors.

    By the early 2030s, Creely said, his company hopes to be operating 400-megawatt power plants that use only 50 kilograms of fuel per year. “If you can get fusion working, it turns energy into a manufacturing product, not a natural resource,” he said.

    Quinn Woodard Jr., senior director of power generation and surface facilities at geothermal energy supplier Fervo Energy, said his company is making geothermal energy more economical through standardization, innovation, and economies of scale. Traditionally, he said, drilling is the largest cost in producing geothermal power. Fervo has “completely flipped the cost structure” with advances in drilling, Woodard said, and now the company is focused on bringing down its power plant costs.

    “We have to continuously be focused on cost, and achieving that is paramount for the success of the geothermal industry,” he said.

    One common theme across the conference: a number of approaches are making rapid advancements, but experts aren’t sure when — or, in some cases, if — each specific technology will reach a tipping point where it is capable of transforming energy markets.

    “I don’t want to get caught in a place where we often descend in this climate solution situation, where it’s either-or,” said Peter Ellis, global director of nature climate solutions at The Nature Conservancy. “We’re talking about the greatest challenge civilization has ever faced. We need everything on the table.”

    The road ahead

    Several speakers stressed the need for academia, industry, and government to collaborate in pursuit of climate and energy goals. Amy Luers, senior global director of sustainability for Microsoft, compared the challenge to the Apollo spaceflight program, and she said that academic institutions need to focus more on how to scale and spur investments in green energy.

    “The challenge is that academic institutions are not currently set up to be able to learn the how, in driving both bottom-up and top-down shifts over time,” Luers said. “If the world is going to succeed in our road to net zero, the mindset of academia needs to shift. And fortunately, it’s starting to.”

    During a panel called “From lab to grid: Scaling first-of-a-kind energy technologies,” Hannan Happi, CEO of renewable energy company Exowatt, stressed that electricity is ultimately a commodity. “Electrons are all the same,” he said. “The only thing [customers] care about with regards to electrons is that they are available when they need them, and that they’re very cheap.”

    Melissa Zhang, principal at Azimuth Capital Management, noted that energy infrastructure development cycles typically take at least five to 10 years — longer than a U.S. political cycle. However, she warned that green energy technologies are unlikely to receive significant support at the federal level in the near future. “If you’re in something that’s a little too dependent on subsidies … there is reason to be concerned over this administration,” she said.

    World Energy CEO Gene Gebolys, the moderator of the lab-to-grid panel, listed off a number of companies founded at MIT. “They all have one thing in common,” he said. “They all went from somebody’s idea, to a lab, to proof-of-concept, to scale. It’s not like any of this stuff ever ends. It’s an ongoing process.”


    MIT spinout Gradiant reduces companies’ water use and waste by billions of gallons each day

    When it comes to water use, most of us think of the water we drink. But industrial uses for things like manufacturing account for billions of gallons of water each day. For instance, making a single iPhone, by one estimate, requires more than 3,000 gallons.

    Gradiant is working to reduce the world’s industrial water footprint. Founded by a team from MIT, Gradiant offers water recycling, treatment, and purification solutions to some of the largest companies on Earth, including Coca-Cola, Tesla, and the Taiwan Semiconductor Manufacturing Company. By serving as an end-to-end water company, Gradiant says it helps companies reuse 2 billion gallons of water each day and saves another 2 billion gallons of fresh water from being withdrawn.

    The company’s mission is to preserve water for generations to come in the face of rising global demand.

    “We work on both ends of the water spectrum,” Gradiant co-founder and CEO Anurag Bajpayee SM ’08, PhD ’12 says. “We work with ultracontaminated water, and we can also provide ultrapure water for use in areas like chip fabrication. Our specialty is in the extreme water challenges that can’t be solved with traditional technologies.”

    For each customer, Gradiant builds tailored water treatment solutions that combine chemical treatments with membrane filtration and biological process technologies, leveraging a portfolio of patents to drastically cut water usage and waste.

    “Before Gradiant, 40 million liters of water would be used in the chip-making process. It would all be contaminated and treated, and maybe 30 percent would be reused,” explains Gradiant co-founder and COO Prakash Govindan PhD ’12. “We have the technology to recycle, in some cases, 99 percent of the water. Now, instead of consuming 40 million liters, chipmakers only need to consume 400,000 liters, which is a huge shift in the water footprint of that industry. And this is not just with semiconductors. We’ve done this in food and beverage, we’ve done this in renewable energy, we’ve done this in pharmaceutical drug production, and several other areas.”

    Learning the value of water

    Govindan grew up in a part of India that experienced a years-long drought beginning when he was 10. Without tap water, one of Govindan’s chores was to haul water up the stairs of his apartment complex each time a truck delivered it.

    “However much water my brother and I could carry was how much we had for the week,” Govindan recalls. “I learned the value of water the hard way.”

    Govindan attended the Indian Institute of Technology as an undergraduate, and when he came to MIT for his PhD, he sought out the groups working on water challenges. He began working on a water treatment method called carrier gas extraction for his PhD under Gradiant co-founder and MIT Professor John Lienhard.

    Bajpayee also worked on water treatment methods at MIT, and after brief postdoc stints there, he and Govindan licensed their work and founded Gradiant.

    Carrier gas extraction became Gradiant’s first proprietary technology when the company launched in 2013. The founders began by treating wastewater created by oil and gas wells, landing their first partner in a Texas company. But Gradiant gradually expanded to solving water challenges in power generation, mining, textiles, and refineries. Then the founders noticed opportunities in industries like electronics, semiconductors, food and beverage, and pharmaceuticals. Today, oil and gas wastewater treatment makes up a small percentage of Gradiant’s work.

    As the company expanded, it added technologies to its portfolio, patenting new water treatment methods around reverse osmosis, selective contaminant extraction, and free radical oxidation.
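The arithmetic behind Govindan’s chip-fabrication example above is easy to check. A minimal sketch (the 40-million-liter and 99 percent figures are the ones quoted in the article; the function itself is purely illustrative):

```python
# Back-of-the-envelope check of the chip-fab water figures quoted above.

def net_consumption(liters_used: float, recycle_rate: float) -> float:
    """Fresh water consumed after recycling a fraction of the total used."""
    return liters_used * (1.0 - recycle_rate)

legacy = net_consumption(40_000_000, 0.30)    # ~30 percent historically reused
gradiant = net_consumption(40_000_000, 0.99)  # 99 percent recycled

print(f"legacy process:   {legacy:,.0f} liters consumed")
print(f"with 99% recycle: {gradiant:,.0f} liters consumed")
# At 99 percent recycling, consumption falls to about 400,000 liters,
# the roughly 100x reduction relative to the full 40 million liters.
```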
    Gradiant has also created a digital system that uses AI to measure, predict, and control water treatment facilities.

    “The advantage Gradiant has over every other water company is that R&D is in our DNA,” Govindan says, noting Gradiant has a world-class research lab at its headquarters in Boston. “At MIT, we learned how to do cutting-edge technology development, and we never let go of that.”

    The founders compare their suite of technologies to LEGO bricks they can mix and match depending on a customer’s water needs. Gradiant has built more than 2,500 of these end-to-end systems for customers around the world.

    “Our customers aren’t water companies; they are industrial clients like semiconductor manufacturers, drug companies, and food and beverage companies,” Bajpayee says. “They aren’t about to start operating a water treatment plant. They look at us as their water partner who can take care of the whole water problem.”

    Continuing innovation

    The founders say Gradiant has been roughly doubling its revenue each year over the last five years, and it’s continuing to add technologies to its platform. For instance, Gradiant recently developed a critical minerals recovery solution to extract materials like lithium and nickel from customers’ wastewater, which could expand access to critical materials essential to the production of batteries and other products.

    “If we can extract lithium from brine water in an environmentally and economically feasible way, the U.S. can meet all of its lithium needs from within the U.S.,” Bajpayee says. “What’s preventing large-scale extraction of lithium from brine is technology, and we believe what we have now deployed will open the floodgates for direct lithium extraction and completely revolutionize the industry.”

    The company has also validated a method for eliminating PFAS — so-called toxic “forever chemicals” — in a pilot project with a leading U.S. semiconductor manufacturer. In the near future, it hopes to bring that solution to municipal water treatment plants to protect cities.

    At the heart of Gradiant’s innovation is the founders’ belief that industrial activity doesn’t have to deplete one of the world’s most vital resources.

    “Ever since the industrial revolution, we’ve been taking from nature,” Bajpayee says. “By treating and recycling water, by reducing water consumption and making industry highly water efficient, we have this unique opportunity to turn the clock back and give nature water back. If that’s your driver, you can’t choose not to innovate.”


    The multifaceted challenge of powering AI

    Artificial intelligence has become vital in business and financial dealings, medical care, technology development, research, and much more. Without realizing it, consumers rely on AI when they stream a video, do online banking, or perform an online search.

    Behind these capabilities are more than 10,000 data centers globally, each one a huge warehouse containing thousands of computer servers and other infrastructure for storing, managing, and processing data. There are now over 5,000 data centers in the United States, and new ones are being built every day — in the U.S. and worldwide. Often dozens are clustered together right near where people live, attracted by policies that provide tax breaks and other incentives, and by what looks like abundant electricity.

    And data centers do consume huge amounts of electricity. U.S. data centers consumed more than 4 percent of the country’s total electricity in 2023, and by 2030 that fraction could rise to 9 percent, according to the Electric Power Research Institute. A single large data center can consume as much electricity as 50,000 homes.

    The sudden need for so many data centers presents a massive challenge to the technology and energy industries, government policymakers, and everyday consumers. Research scientists and faculty members at the MIT Energy Initiative (MITEI) are exploring multiple facets of this problem — from sourcing power to grid improvement to analytical tools that increase efficiency, and more. Data centers have quickly become the energy issue of our day.

    Unexpected demand brings unexpected solutions

    Several companies that use data centers to provide cloud computing and data management services are announcing some surprising steps to deliver all that electricity. Proposals include building their own small nuclear plants near their data centers and even restarting one of the undamaged nuclear reactors at Three Mile Island, which has been shuttered since 2019. (A different reactor at that plant partially melted down in 1979, causing the nation’s worst nuclear power accident.)

    Already the need to power AI is causing delays in the planned shutdown of some coal-fired power plants and raising prices for residential consumers. Meeting the needs of data centers is not only stressing power grids, but also setting back the transition to clean energy needed to stop climate change.

    There are many aspects to the data center problem from a power perspective. Here are some that MIT researchers are focusing on, and why they’re important.

    An unprecedented surge in the demand for electricity

    “In the past, computing was not a significant user of electricity,” says William H. Green, director of MITEI and the Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering. “Electricity was used for running industrial processes and powering household devices such as air conditioners and lights, and more recently for powering heat pumps and charging electric cars. But now all of a sudden, electricity used for computing in general, and by data centers in particular, is becoming a gigantic new demand that no one anticipated.”

    Why the lack of foresight? Usually, demand for electric power increases by roughly half a percent per year, and utilities bring in new power generators and make other investments as needed to meet the expected new demand. But the data centers now coming online are creating unprecedented leaps in demand that operators didn’t see coming. In addition, the new demand is constant. It’s critical that a data center provides its services all day, every day. There can be no interruptions in processing large datasets, accessing stored data, and running the cooling equipment needed to keep all the packed-together computers churning away without overheating.

    Moreover, even if enough electricity is generated, getting it to where it’s needed may be a problem, explains Deepjyoti Deka, a MITEI research scientist. “A grid is a network-wide operation, and the grid operator may have sufficient generation at another location or even elsewhere in the country, but the wires may not have sufficient capacity to carry the electricity to where it’s wanted.” So transmission capacity must be expanded — and, says Deka, that’s a slow process.

    Then there’s the “interconnection queue.” Sometimes, adding either a new user (a “load”) or a new generator to an existing grid can cause instabilities or other problems for everyone else already on the grid. In that situation, bringing a new data center online may be delayed. Enough delays can result in new loads or generators having to stand in line and wait for their turn. Right now, much of the interconnection queue is already filled up with new solar and wind projects, and the delay is now about five years. Meeting the demand from newly installed data centers while ensuring that the quality of service elsewhere is not hampered is a problem that needs to be addressed.

    Finding clean electricity sources

    To further complicate the challenge, many companies — including so-called “hyperscalers” such as Google, Microsoft, and Amazon — have made public commitments to having net-zero carbon emissions within the next 10 years. Many have been making strides toward achieving their clean-energy goals by buying “power purchase agreements”: they sign a contract to buy electricity from, say, a solar or wind facility, sometimes providing funding for the facility to be built. But that approach to accessing clean energy has its limits when faced with the extreme electricity demand of a data center.

    Meanwhile, soaring power consumption is delaying coal plant closures in many states. There are simply not enough sources of renewable energy to serve both the hyperscalers and the existing users, including individual consumers. As a result, conventional plants fired by fossil fuels such as coal are needed more than ever.

    As the hyperscalers look for sources of clean energy for their data centers, one option could be to build their own wind and solar installations. But such facilities would generate electricity only intermittently. Given the need for uninterrupted power, the data center would have to maintain energy storage units, which are expensive. They could instead rely on natural gas or diesel generators for backup power — but those devices would need to be coupled with equipment to capture the carbon emissions, plus a nearby site for permanently disposing of the captured carbon.

    Because of such complications, several of the hyperscalers are turning to nuclear power. As Green notes, “Nuclear energy is well matched to the demand of data centers, because nuclear plants can generate lots of power reliably, without interruption.”

    In a much-publicized move in September, Microsoft signed a deal to buy power for 20 years after Constellation Energy reopens one of the undamaged reactors at its now-shuttered Three Mile Island plant. If approved by regulators, Constellation will bring that reactor online by 2028, with Microsoft buying all of the power it produces. Amazon also reached a deal to purchase power produced by another nuclear plant threatened with closure due to financial troubles. And in early December, Meta released a request for proposals to identify nuclear energy developers to help the company meet its AI needs and its sustainability goals.

    Other nuclear news focuses on small modular reactors (SMRs), factory-built, modular power plants that could be installed near data centers, potentially without the cost overruns and delays often experienced in building large plants. Google recently ordered a fleet of SMRs to generate the power needed by its data centers; the first one will be completed by 2030 and the remainder by 2035.

    Some hyperscalers are betting on new technologies. For example, Google is pursuing next-generation geothermal projects, and Microsoft has signed a contract to purchase electricity from a startup’s fusion power plant beginning in 2028 — even though the fusion technology hasn’t yet been demonstrated.

    Reducing electricity demand

    Other approaches to providing sufficient clean electricity focus on making the data center and the operations it houses more energy efficient, so as to perform the same computing tasks using less power. Using faster computer chips and optimizing algorithms to use less energy are already helping to reduce the load, and also the heat generated.

    Another idea being tried involves shifting computing tasks to times and places where carbon-free energy is available on the grid. Deka explains: “If a task doesn’t have to be completed immediately, but rather by a certain deadline, can it be delayed or moved to a data center elsewhere in the U.S. or overseas where electricity is more abundant, cheaper, and/or cleaner? This approach is known as ‘carbon-aware computing.’”

    We’re not yet sure whether every task can be moved or delayed easily, says Deka. “If you think of a generative AI-based task, can it easily be separated into small tasks that can be taken to different parts of the country, solved using clean energy, and then be brought back together? What is the cost of doing this kind of division of tasks?”

    That approach is, of course, limited by the problem of the interconnection queue. It’s difficult to access clean energy in another region or state. But efforts are under way to ease the regulatory framework to make sure that critical interconnections can be developed more quickly and easily.

    What about the neighbors?

    A major concern running through all the options for powering data centers is the impact on residential energy consumers. When a data center comes into a neighborhood, there are not only aesthetic concerns but also more practical worries. Will the local electricity service become less reliable? Where will the new transmission lines be located? And who will pay for the new generators, upgrades to existing equipment, and so on? When new manufacturing facilities or industrial plants go into a neighborhood, the downsides are generally offset by the availability of new jobs. Not so with a data center, which may require just a couple dozen employees.

    There are standard rules about how maintenance and upgrade costs are shared and allocated. But the situation is totally changed by the presence of a new data center. As a result, utilities now need to rethink their traditional rate structures so as not to place an undue burden on residents to pay for the infrastructure changes needed to host data centers.

    MIT’s contributions

    At MIT, researchers are thinking about and exploring a range of options for tackling the problem of providing clean power to data centers. For example, they are investigating architectural designs that will use natural ventilation to facilitate cooling, equipment layouts that will permit better airflow and power distribution, and highly energy-efficient air conditioning systems based on novel materials. They are creating new analytical tools for evaluating the impact of data center deployments on the U.S. power system and for finding the most efficient ways to provide the facilities with clean energy. Other work looks at how to match the output of small nuclear reactors to the needs of a data center, and how to speed up the construction of such reactors.

    MIT teams also focus on determining the best sources of backup power and long-duration storage, and on developing decision support systems for locating proposed new data centers, taking into account the availability of electric power and water, regulatory considerations, and even the potential for using the significant waste heat they produce, for example to heat nearby buildings. Technology development projects include designing faster, more efficient computer chips and more energy-efficient computing algorithms.

    In addition to providing leadership and funding for many research projects, MITEI is acting as a convener, bringing together companies and stakeholders to address this issue. At MITEI’s 2024 Annual Research Conference, a panel of representatives from two hyperscalers and two companies that design and construct data centers discussed their challenges, possible solutions, and where MIT research could be most beneficial.

    As data centers continue to be built, and computing continues to create an unprecedented increase in demand for electricity, Green says, scientists and engineers are in a race to provide the ideas, innovations, and technologies that can meet this need, and at the same time continue to advance the transition to a decarbonized energy system.
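The “carbon-aware computing” idea Deka describes above, holding a deferrable job until the hours when grid electricity is cleanest, reduces to a small scheduling problem. A minimal sketch, assuming an hourly carbon-intensity forecast is available (the forecast numbers below are invented for illustration, not from any real grid):

```python
from typing import Sequence

def best_start_hour(forecast: Sequence[float], duration_h: int) -> int:
    """Return the start hour that minimizes total grid carbon intensity
    (gCO2/kWh summed over the runtime) for a deferrable job of
    duration_h hours, given an hourly forecast."""
    starts = range(len(forecast) - duration_h + 1)
    return min(starts, key=lambda t: sum(forecast[t:t + duration_h]))

# Hypothetical 12-hour forecast; midday solar makes hours 4-6 the cleanest.
forecast = [520, 480, 450, 300, 210, 190, 205, 320, 410, 470, 500, 530]
print(best_start_hour(forecast, duration_h=3))  # prints 4
```

A real scheduler would also weigh the deadline, the cost of moving data, and the transmission limits Deka raises against the carbon savings.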


    Explained: Generative AI’s environmental impact

    In a two-part series, MIT News explores the environmental implications of generative AI. In this article, we look at why this technology is so resource-intensive. A second piece will investigate what experts are doing to reduce genAI’s carbon footprint and other impacts.The excitement surrounding potential benefits of generative AI, from improving worker productivity to advancing scientific research, is hard to ignore. While the explosive growth of this new technology has enabled rapid deployment of powerful models in many industries, the environmental consequences of this generative AI “gold rush” remain difficult to pin down, let alone mitigate.The computational power required to train generative AI models that often have billions of parameters, such as OpenAI’s GPT-4, can demand a staggering amount of electricity, which leads to increased carbon dioxide emissions and pressures on the electric grid.Furthermore, deploying these models in real-world applications, enabling millions to use generative AI in their daily lives, and then fine-tuning the models to improve their performance draws large amounts of energy long after a model has been developed.Beyond electricity demands, a great deal of water is needed to cool the hardware used for training, deploying, and fine-tuning generative AI models, which can strain municipal water supplies and disrupt local ecosystems. The increasing number of generative AI applications has also spurred demand for high-performance computing hardware, adding indirect environmental impacts from its manufacture and transport.“When we think about the environmental impact of generative AI, it is not just the electricity you consume when you plug the computer in. There are much broader consequences that go out to a system level and persist based on actions that we take,” says Elsa A. 
Olivetti, professor in the Department of Materials Science and Engineering and the lead of the Decarbonization Mission of MIT’s new Climate Project.Olivetti is senior author of a 2024 paper, “The Climate and Sustainability Implications of Generative AI,” co-authored by MIT colleagues in response to an Institute-wide call for papers that explore the transformative potential of generative AI, in both positive and negative directions for society.Demanding data centersThe electricity demands of data centers are one major factor contributing to the environmental impacts of generative AI, since data centers are used to train and run the deep learning models behind popular tools like ChatGPT and DALL-E.A data center is a temperature-controlled building that houses computing infrastructure, such as servers, data storage drives, and network equipment. For instance, Amazon has more than 100 data centers worldwide, each of which has about 50,000 servers that the company uses to support cloud computing services.While data centers have been around since the 1940s (the first was built at the University of Pennsylvania in 1945 to support the first general-purpose digital computer, the ENIAC), the rise of generative AI has dramatically increased the pace of data center construction.“What is different about generative AI is the power density it requires. Fundamentally, it is just computing, but a generative AI training cluster might consume seven or eight times more energy than a typical computing workload,” says Noman Bashir, lead author of the impact paper, who is a Computing and Climate Impact Fellow at MIT Climate and Sustainability Consortium (MCSC) and a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL).Scientists have estimated that the power requirements of data centers in North America increased from 2,688 megawatts at the end of 2022 to 5,341 megawatts at the end of 2023, partly driven by the demands of generative AI. 
Globally, the electricity consumption of data centers rose to 460 terawatts in 2022. This would have made data centers the 11th largest electricity consumer in the world, between the nations of Saudi Arabia (371 terawatts) and France (463 terawatts), according to the Organization for Economic Co-operation and Development.By 2026, the electricity consumption of data centers is expected to approach 1,050 terawatts (which would bump data centers up to fifth place on the global list, between Japan and Russia).While not all data center computation involves generative AI, the technology has been a major driver of increasing energy demands.“The demand for new data centers cannot be met in a sustainable way. The pace at which companies are building new data centers means the bulk of the electricity to power them must come from fossil fuel-based power plants,” says Bashir.The power needed to train and deploy a model like OpenAI’s GPT-3 is difficult to ascertain. In a 2021 research paper, scientists from Google and the University of California at Berkeley estimated the training process alone consumed 1,287 megawatt hours of electricity (enough to power about 120 average U.S. homes for a year), generating about 552 tons of carbon dioxide.While all machine-learning models must be trained, one issue unique to generative AI is the rapid fluctuations in energy use that occur over different phases of the training process, Bashir explains.Power grid operators must have a way to absorb those fluctuations to protect the grid, and they usually employ diesel-based generators for that task.Increasing impacts from inferenceOnce a generative AI model is trained, the energy demands don’t disappear.Each time a model is used, perhaps by an individual asking ChatGPT to summarize an email, the computing hardware that performs those operations consumes energy. 
Researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple web search.

“But an everyday user doesn’t think too much about that,” says Bashir. “The ease-of-use of generative AI interfaces and the lack of information about the environmental impacts of my actions means that, as a user, I don’t have much incentive to cut back on my use of generative AI.”

With traditional AI, the energy usage is split fairly evenly between data processing, model training, and inference, which is the process of using a trained model to make predictions on new data. However, Bashir expects the electricity demands of generative AI inference to eventually dominate, since these models are becoming ubiquitous in so many applications, and the electricity needed for inference will increase as future versions of the models become larger and more complex.

Plus, generative AI models have an especially short shelf-life, driven by rising demand for new AI applications. Companies release new models every few weeks, so the energy used to train prior versions goes to waste, Bashir adds. New models often consume more energy for training, since they usually have more parameters than their predecessors.

While the electricity demands of data centers may be getting the most attention in the research literature, the amount of water consumed by these facilities has environmental impacts as well.

Chilled water is used to cool a data center by absorbing heat from computing equipment. It has been estimated that, for each kilowatt-hour of energy a data center consumes, it would need two liters of water for cooling, says Bashir.

“Just because this is called ‘cloud computing’ doesn’t mean the hardware lives in the cloud. Data centers are present in our physical world, and because of their water usage they have direct and indirect implications for biodiversity,” he says.

The computing hardware inside data centers brings its own, less direct environmental impacts.

While it is difficult to estimate how much power is needed to manufacture a GPU, a type of powerful processor that can handle intensive generative AI workloads, it would be more than what is needed to produce a simpler CPU because the fabrication process is more complex. A GPU’s carbon footprint is compounded by the emissions related to material and product transport.

There are also environmental implications of obtaining the raw materials used to fabricate GPUs, which can involve dirty mining procedures and the use of toxic chemicals for processing.

Market research firm TechInsights estimates that the three major producers (NVIDIA, AMD, and Intel) shipped 3.85 million GPUs to data centers in 2023, up from about 2.67 million in 2022. That number is expected to have increased by an even greater percentage in 2024.

The industry is on an unsustainable path, but there are ways to encourage responsible development of generative AI that supports environmental objectives, Bashir says.

He, Olivetti, and their MIT colleagues argue that this will require a comprehensive consideration of all the environmental and societal costs of generative AI, as well as a detailed assessment of the value in its perceived benefits.

“We need a more contextual way of systematically and comprehensively understanding the implications of new developments in this space. Due to the speed at which there have been improvements, we haven’t had a chance to catch up with our abilities to measure and understand the tradeoffs,” Olivetti says.
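The two-liters-per-kilowatt-hour cooling figure mentioned above can be made concrete with a small worked example; the 10 MW facility size is an illustrative assumption, not a figure from the article:

```python
# Applies the ~2 L/kWh cooling-water estimate quoted above to a
# HYPOTHETICAL data center drawing a constant 10 MW (illustrative only).
FACILITY_MW = 10
LITERS_PER_KWH = 2

kwh_per_day = FACILITY_MW * 1_000 * 24          # 240,000 kWh per day
liters_per_day = kwh_per_day * LITERS_PER_KWH   # 480,000 L per day
print(f"{liters_per_day:,} liters of cooling water per day")
```

Under those assumptions, a single such facility would need roughly half a million liters of cooling water per day.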


    Q&A: The climate impact of generative AI

    Vijay Gadepally, a senior staff member at MIT Lincoln Laboratory, leads a number of projects at the Lincoln Laboratory Supercomputing Center (LLSC) to make computing platforms, and the artificial intelligence systems that run on them, more efficient. Here, Gadepally discusses the increasing use of generative AI in everyday tools, its hidden environmental impact, and some of the ways that Lincoln Laboratory and the greater AI community can reduce emissions for a greener future.

    Q: What trends are you seeing in terms of how generative AI is being used in computing?

    A: Generative AI uses machine learning (ML) to create new content, like images and text, based on data that is inputted into the ML system. At the LLSC we design and build some of the largest academic computing platforms in the world, and over the past few years we’ve seen an explosion in the number of projects that need access to high-performance computing for generative AI. We’re also seeing how generative AI is changing all sorts of fields and domains — for example, ChatGPT is already influencing the classroom and the workplace faster than regulations can seem to keep up.

    We can imagine all sorts of uses for generative AI within the next decade or so, like powering highly capable virtual assistants, developing new drugs and materials, and even improving our understanding of basic science. We can’t predict everything that generative AI will be used for, but I can certainly say that with more and more complex algorithms, their compute, energy, and climate impact will continue to grow very quickly.

    Q: What strategies is the LLSC using to mitigate this climate impact?

    A: We’re always looking for ways to make computing more efficient, as doing so helps our data center make the most of its resources and allows our scientific colleagues to push their fields forward in as efficient a manner as possible.

    As one example, we’ve been reducing the amount of power our hardware consumes by making simple changes, similar to dimming or turning off lights when you leave a room. In one experiment, we reduced the energy consumption of a group of graphics processing units by 20 percent to 30 percent, with minimal impact on their performance, by enforcing a power cap. This technique also lowered the hardware operating temperatures, making the GPUs easier to cool and longer lasting.

    Another strategy is changing our behavior to be more climate-aware. At home, some of us might choose to use renewable energy sources or intelligent scheduling. We are using similar techniques at the LLSC — such as training AI models when temperatures are cooler, or when local grid energy demand is low.

    We also realized that a lot of the energy spent on computing is often wasted, like how a water leak increases your bill but without any benefits to your home. We developed some new techniques that allow us to monitor computing workloads as they are running and then terminate those that are unlikely to yield good results. Surprisingly, in a number of cases we found that the majority of computations could be terminated early without compromising the end result.

    Q: What’s an example of a project you’ve done that reduces the energy output of a generative AI program?

    A: We recently built a climate-aware computer vision tool. Computer vision is a domain that’s focused on applying AI to images; so, differentiating between cats and dogs in an image, correctly labeling objects within an image, or looking for components of interest within an image.

    In our tool, we included real-time carbon telemetry, which produces information about how much carbon is being emitted by our local grid as a model is running. Depending on this information, our system will automatically switch to a more energy-efficient version of the model, which typically has fewer parameters, in times of high carbon intensity, or a much higher-fidelity version of the model in times of low carbon intensity.

    By doing this, we saw a nearly 80 percent reduction in carbon emissions over a one- to two-day period. We recently extended this idea to other generative AI tasks such as text summarization and found the same results. Interestingly, the performance sometimes improved after using our technique!

    Q: What can we do as consumers of generative AI to help mitigate its climate impact?

    A: As consumers, we can ask our AI providers to offer greater transparency. For example, on Google Flights, I can see a variety of options that indicate a specific flight’s carbon footprint. We should be getting similar kinds of measurements from generative AI tools so that we can make a conscious decision on which product or platform to use based on our priorities.

    We can also make an effort to be more educated on generative AI emissions in general. Many of us are familiar with vehicle emissions, and it can help to talk about generative AI emissions in comparative terms. People may be surprised to know, for example, that one image-generation task is roughly equivalent to driving four miles in a gas car, or that it takes the same amount of energy to charge an electric car as it does to generate about 1,500 text summarizations.

    There are many cases where customers would be happy to make a trade-off if they knew the trade-off’s impact.

    Q: What do you see for the future?

    A: Mitigating the climate impact of generative AI is one of those problems that people all over the world are working on, and with a similar goal. We’re doing a lot of work here at Lincoln Laboratory, but it’s only scratching the surface. In the long term, data centers, AI developers, and energy grids will need to work together to provide “energy audits” to uncover other unique ways that we can improve computing efficiencies. We need more partnerships and more collaboration in order to forge ahead.

    If you’re interested in learning more, or collaborating with Lincoln Laboratory on these efforts, please contact Vijay Gadepally.
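A minimal sketch of the carbon-aware model-switching idea Gadepally describes, assuming a hypothetical carbon-intensity feed; the threshold and model names below are illustrative, not the tool's actual values:

```python
# Sketch of carbon-aware model selection (illustrative, not LLSC's code).
# ASSUMPTIONS: grid intensity arrives in gCO2/kWh; 300 is an arbitrary cutoff.
def pick_model(grid_gco2_per_kwh: float, threshold: float = 300) -> str:
    """Choose a smaller model when the local grid is carbon-intensive."""
    if grid_gco2_per_kwh > threshold:
        return "small-efficient-model"   # fewer parameters, lower energy
    return "large-high-fidelity-model"   # better quality when grid is clean

print(pick_model(450))  # small-efficient-model
print(pick_model(120))  # large-high-fidelity-model
```

A production system would poll a real telemetry source on a schedule and reload the chosen model variant; this sketch only shows the decision rule.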

    Video: MIT Lincoln Laboratory


    A nonflammable battery to power a safer, decarbonized future

    Lithium-ion batteries are the workhorses of home electronics and are powering an electric revolution in transportation. But they are not suitable for every application.

    A key drawback is their flammability and toxicity, which make large-scale lithium-ion energy storage a bad fit in densely populated city centers and near metal processing or chemical manufacturing plants.

    Now Alsym Energy has developed a nonflammable, nontoxic alternative to lithium-ion batteries to help renewables like wind and solar bridge the gap in a broader range of sectors. The company’s electrodes use relatively stable, abundant materials, and its electrolyte is primarily water with some nontoxic add-ons.

    “Renewables are intermittent, so you need storage, and to really solve the decarbonization problem, we need to be able to make these batteries anywhere at low cost,” says Alsym co-founder and MIT Professor Kripa Varanasi.

    The company believes its batteries, which are currently being tested by potential customers around the world, hold enormous potential to decarbonize the high-emissions industrial manufacturing sector, and they see other applications ranging from mining to powering data centers, homes, and utilities.

    “We are enabling a decarbonization of markets that was not possible before,” Alsym co-founder and CEO Mukesh Chatter says. “No chemical or steel plant would dare put a lithium battery close to their premises because of the flammability, and industrial emissions are a much bigger problem than passenger cars. With this approach, we’re able to offer a new path.”

    Helping 1 billion people

    Chatter started a telecommunications company with serial entrepreneurs and longtime members of the MIT community Ray Stata ’57, SM ’58 and Alec Dingee ’52 in 1997. Since the company was acquired in 1999, Chatter and his wife have started other ventures and invested in some startups, but after losing his mother to cancer in 2012, Chatter decided he wanted to maximize his impact by only working on technologies that could reach 1 billion people or more.

    The problem Chatter decided to focus on was electricity access.

    “The intent was to light up the homes of at least 1 billion people around the world who either did not have electricity, or only got it part of the time, condemning them basically to a life of poverty in the 19th century,” Chatter says. “When you don’t have access to electricity, you also don’t have the internet, cell phones, education, etc.”

    To solve the problem, Chatter decided to fund research into a new kind of battery. The battery had to be cheap enough to be adopted in low-resource settings, safe enough to be deployed in crowded areas, and work well enough to support two light bulbs, a fan, a refrigerator, and an internet modem.

    At first, Chatter was surprised how few takers he had to start the research, even from researchers at the top universities in the world.

    “It’s a burning problem, but the risk of failure was so high that nobody wanted to take the chance,” Chatter recalls.

    He finally found his partners in Varanasi, Rensselaer Polytechnic Institute Professor Nikhil Koratkar, and Rensselaer researcher Rahul Mukherjee. Varanasi, who notes he’s been at MIT for 22 years, says the Institute’s culture gave him the confidence to tackle big problems.

    “My students, postdocs, and colleagues are inspirational to me,” he says. “The MIT ecosystem infuses us with this resolve to go after problems that look insurmountable.”

    Varanasi leads an interdisciplinary lab at MIT dedicated to understanding physicochemical and biological phenomena. His research has spurred the creation of materials, devices, products, and processes to tackle challenges in energy, agriculture, and other sectors, as well as startup companies to commercialize this work.

    “Working at the interfaces of matter has unlocked numerous new research pathways across various fields, and MIT has provided me the creative freedom to explore, discover, and learn, and apply that knowledge to solve critical challenges,” he says. “I was able to draw significantly from my learnings as we set out to develop the new battery technology.”

    Alsym’s founding team began by trying to design a battery from scratch based on new materials that could fit the parameters defined by Chatter. To make it nonflammable and nontoxic, the founders wanted to avoid lithium and cobalt.

    After evaluating many different chemistries, the founders settled on Alsym’s current approach, which was finalized in 2020.

    Although the full makeup of Alsym’s battery is still under wraps as the company waits to be granted patents, one of Alsym’s electrodes is made mostly of manganese oxide while the other is primarily made of a metal oxide. The electrolyte is primarily water.

    There are several advantages to Alsym’s new battery chemistry. Because the battery is inherently safer and more sustainable than lithium-ion, the company doesn’t need the same safety protections or cooling equipment, and it can pack its batteries close to each other without fear of fires or explosions. Varanasi also says the battery can be manufactured in any of today’s lithium-ion plants with minimal changes and at significantly lower operating cost.

    “We are very excited right now,” Chatter says. “We started out wanting to light up 1 billion people’s homes, and now in addition to the original goal we have a chance to impact the entire globe if we are successful at cutting back industrial emissions.”

    A new platform for energy storage

    Although the batteries don’t quite reach the energy density of lithium-ion batteries, Varanasi says Alsym is first among alternative chemistries at the system level. He says 20-foot containers of Alsym’s batteries can provide 1.7 megawatt-hours of electricity. The batteries can also fast-charge over four hours and can be configured to discharge over anywhere from two to 110 hours.

    “We’re highly configurable, and that’s important because depending on where you are, you can sometimes run on two cycles a day with solar, and in combination with wind, you could truly get 24/7 electricity,” Chatter says. “The need to do multiday or long-duration storage is a small part of the market, but we support that too.”

    Alsym has been manufacturing prototypes at a small facility in Woburn, Massachusetts, for the last two years, and early this year it expanded its capacity and began to send samples to customers for field testing.

    In addition to large utilities, the company is working with municipalities, generator manufacturers, and providers of behind-the-meter power for residential and commercial buildings. The company is also in discussions with large chemical manufacturers and metal processing plants to provide energy storage systems to reduce their carbon footprint, something they say was not feasible with lithium-ion batteries, due to their flammability, or with nonlithium batteries, due to their large space requirements.

    Another critical area is data centers. With the growth of AI, the demand for data centers — and their energy consumption — is set to surge.

    “We must power the AI and digitization revolution without compromising our planet,” says Varanasi, adding that lithium batteries are unsuitable for co-location with data centers due to flammability risks. “Alsym batteries are well-positioned to offer a safer, more sustainable alternative. Intermittency is also a key issue for electrolyzers used in green hydrogen production and other markets.”

    Varanasi sees Alsym as a platform company, and Chatter says Alsym is already working on other battery chemistries that have higher densities and maintain performance at even more extreme temperatures.

    “When you use a single material in any battery, and the whole world starts to use it, you run out of that material,” Varanasi says. “What we have is a platform that has enabled us not just to come up with one chemistry, but at least three or four chemistries targeted at different applications, so no one particular set of materials will be stressed in terms of supply.”
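The container figures quoted above imply a wide range of power ratings, which is what "highly configurable" means in practice. A quick check, taking the 1.7 MWh capacity and the stated 2- to 110-hour discharge window at face value:

```python
# Power ratings implied by a 1.7 MWh container discharged over 2 to 110 hours.
CONTAINER_KWH = 1.7 * 1_000    # 1.7 MWh expressed in kWh

fast_discharge_kw = CONTAINER_KWH / 2      # 2-hour discharge  -> 850 kW
slow_discharge_kw = CONTAINER_KWH / 110    # 110-hour discharge -> ~15.5 kW
print(f"{fast_discharge_kw:.0f} kW down to {slow_discharge_kw:.1f} kW")
```

So the same container can be run as a high-power, short-duration asset or a low-power, multiday one.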


    Two MIT teams selected for NSF sustainable materials grants

    Two teams led by MIT researchers were selected in December 2023 by the U.S. National Science Foundation (NSF) Convergence Accelerator, a part of NSF’s Directorate for Technology, Innovation and Partnerships (TIP), to receive awards of $5 million each over three years, to pursue research aimed at helping to bring cutting-edge new sustainable materials and processes from the lab into practical, full-scale industrial production. The selection was made after 16 teams from around the country were chosen last year for one-year grants to develop detailed plans for further research aimed at solving problems of sustainability and scalability for advanced electronic products.

    Of the two MIT-led teams chosen for this current round of funding, one team, Topological Electric, is led by Mingda Li, an associate professor in the Department of Nuclear Science and Engineering. This team will be finding pathways to scale up sustainable topological materials, which have the potential to revolutionize next-generation microelectronics by showing superior electronic performance, such as dissipationless states or high-frequency response. The other team, led by Anuradha Agarwal, a principal research scientist at MIT’s Materials Research Laboratory, will be focusing on developing new materials, devices, and manufacturing processes for microchips that minimize energy consumption using electronic-photonic integration, and that detect and avoid the toxic or scarce materials used in today’s production methods.

    Scaling the use of topological materials

    Li explains that some materials based on quantum effects have achieved successful transitions from lab curiosities to mass production, such as blue-light LEDs and giant magnetoresistance (GMR) devices used for magnetic data storage. But he says there are a variety of equally promising materials that have yet to make it into real-world applications.

    “What we really wanted to achieve is to bring newer-generation quantum materials into technology and mass production, for the benefit of broader society,” he says. In particular, he says, “topological materials are really promising to do many different things.”

    Topological materials are ones whose electronic properties are fundamentally protected against disturbance. For example, Li points to the fact that just in the last two years, it has been shown that some topological materials are even better electrical conductors than copper, which is typically used for the wires interconnecting electronic components. But unlike the blue-light LEDs or the GMR devices, which have been widely produced and deployed, when it comes to topological materials, “there’s no company, no startup, there’s really no business out there,” adds Tomas Palacios, the Clarence J. Lebel Professor in Electrical Engineering at MIT and co-principal investigator on Li’s team. Part of the reason is that many versions of such materials are studied “with a focus on fundamental exotic physical properties with little or no consideration on the sustainability aspects,” says Liang Fu, an MIT professor of physics and also a co-PI. Their team will be looking for alternative formulations that are more amenable to mass production.

    One possible application of these topological materials is for detecting terahertz radiation, explains Keith Nelson, an MIT professor of chemistry and co-PI. Electronics operating at these extremely high frequencies could carry far more information than conventional radio or microwaves, but at present there are no mature electronic devices available that are scalable in this frequency range. “There’s a whole range of possibilities for topological materials” that could work at these frequencies, he says. In addition, he says, “we hope to demonstrate an entire prototype system like this in a single, very compact solid-state platform.”

    Li says that among the many possible applications of topological devices for microelectronics devices of various kinds, “we don’t know which, exactly, will end up as a product, or will reach real industrial scaleup. That’s why this opportunity from NSF is like a bridge, which is precious, to allow us to dig deeper to unleash the true potential.”

    In addition to Li, Palacios, Fu, and Nelson, the Topological Electric team includes Qiong Ma, assistant professor of physics at Boston College; Farnaz Niroui, assistant professor of electrical engineering and computer science at MIT; Susanne Stemmer, professor of materials at the University of California at Santa Barbara; Judy Cha, professor of materials science and engineering at Cornell University; industrial partners including IBM, Analog Devices, and Raytheon; and professional consultants. “We are taking this opportunity seriously,” Li says. “We really want to see if the topological materials are as good as we show in the lab when being scaled up, and how far we can push to broadly industrialize them.”

    Toward sustainable microchip production and use

    The microchips behind everything from smartphones to medical imaging are associated with a significant percentage of greenhouse gas emissions today, and every year the world produces more than 50 million metric tons of electronic waste, the equivalent of about 5,000 Eiffel Towers. Further, the data centers necessary for complex computations and huge amounts of data transfer — think AI and on-demand video — are growing and will require 10 percent of the world’s electricity by 2030.

    “The current microchip manufacturing supply chain, which includes production, distribution, and use, is neither scalable nor sustainable, and cannot continue. We must innovate our way out of this crisis,” says Agarwal.

    The name of Agarwal’s team, FUTUR-IC, is a reference to the future of integrated circuits, or chips, through a global alliance for sustainable microchip manufacturing. Says Agarwal, “We bring together stakeholders from industry, academia, and government to co-optimize across three dimensions: technology, ecology, and workforce. These were identified as key interrelated areas by some 140 stakeholders. With FUTUR-IC we aim to cut waste and CO2-equivalent emissions associated with electronics by 50 percent every 10 years.”
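The FUTUR-IC target compounds over time: halving emissions every decade leaves (1/2)^n of today's level after n decades. A quick illustration of the trajectory that goal implies:

```python
# Compounding the FUTUR-IC goal of a 50 percent cut every 10 years.
for decades in range(1, 4):
    remaining = 0.5 ** decades
    print(f"after {decades * 10} years: {remaining:.1%} of today's emissions")
```

After three decades of meeting the target, emissions would stand at 12.5 percent of today's level.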

    The market for microelectronics in the next decade is predicted to be on the order of a trillion dollars, but most of the manufacturing for the industry occurs only in limited geographical pockets around the world. FUTUR-IC aims to diversify and strengthen the supply chain for manufacturing and packaging of electronics. The alliance has 26 collaborators and is growing. Current external collaborators include the International Electronics Manufacturing Initiative (iNEMI), Tyndall National Institute, SEMI, Hewlett Packard Enterprise, Intel, and the Rochester Institute of Technology.

    Agarwal leads FUTUR-IC in close collaboration with others, including, from MIT, Lionel Kimerling, the Thomas Lord Professor of Materials Science and Engineering; Elsa Olivetti, the Jerry McAfee Professor in Engineering; Randolph Kirchain, principal research scientist in the Materials Research Laboratory; and Greg Norris, director of MIT’s Sustainability and Health Initiative for NetPositive Enterprise (SHINE). All are affiliated with the Materials Research Laboratory. They are joined by Samuel Serna, an MIT visiting professor and assistant professor of physics at Bridgewater State University. Other key personnel include Sajan Saini, education director for the Initiative for Knowledge and Innovation in Manufacturing in MIT’s Department of Materials Science and Engineering; Peter O’Brien, a professor from Tyndall National Institute; and Shekhar Chandrashekhar, CEO of iNEMI.

    “We expect the integration of electronics and photonics to revolutionize microchip manufacturing, enhancing efficiency, reducing energy consumption, and paving the way for unprecedented advancements in computing speed and data-processing capabilities,” says Serna, who is the co-lead on the project’s technology “vector.”

    Common metrics for these efforts are needed, says Norris, co-lead for the ecology vector, adding, “The microchip industry must have transparent and open Life Cycle Assessment (LCA) models and data, which are being developed by FUTUR-IC.” This is especially important given that microelectronics production transcends industries. “Given the scale and scope of microelectronics, it is critical for the industry to lead in the transition to sustainable manufacture and use,” says Kirchain, another co-lead and the co-director of the Concrete Sustainability Hub at MIT. To bring about this cross-fertilization, co-lead Olivetti, also co-director of the MIT Climate and Sustainability Consortium (MCSC), will collaborate with FUTUR-IC to enhance the benefits from microchip recycling, leveraging the learning across industries.

    Saini, the co-lead for the workforce vector, stresses the need for agility. “With a workforce that adapts to a practice of continuous upskilling, we can help increase the robustness of the chip-manufacturing supply chain, and validate a new design for a sustainability curriculum,” he says.

    “We have become accustomed to the benefits forged by the exponential growth of microelectronic technology performance and market size,” says Kimerling, who is also director of MIT’s Materials Research Laboratory and co-director of the MIT Microphotonics Center. “The ecological impact of this growth in terms of materials use, energy consumption and end-of-life disposal has begun to push back against this progress. We believe that concurrently engineered solutions for these three dimensions will build a common learning curve to power the next 40 years of progress in the semiconductor industry.”

    The MIT teams are two of six that received awards addressing sustainable materials for global challenges through phase two of the NSF Convergence Accelerator program. Launched in 2019, the program targets solutions to especially compelling challenges at an accelerated pace by incorporating a multidisciplinary research approach.


    Propelling atomically layered magnets toward green computers

    Globally, computation is booming at an unprecedented rate, fueled by the boons of artificial intelligence. With this, the staggering energy demand of the world’s computing infrastructure has become a major concern, and the development of computing devices that are far more energy-efficient is a leading challenge for the scientific community. 

    Use of magnetic materials to build computing devices like memories and processors has emerged as a promising avenue for creating “beyond-CMOS” computers, which would use far less energy compared to traditional computers. Magnetization switching in magnets can be used in computation the same way that a transistor switches between open and closed to represent the 0s and 1s of binary code.

    While much of the research along this direction has focused on using bulk magnetic materials, a new class of magnetic materials — called two-dimensional van der Waals magnets — provides superior properties that can improve the scalability and energy efficiency of magnetic devices to make them commercially viable. 

    Although the benefits of shifting to 2D magnetic materials are evident, their practical integration into computers has been hindered by some fundamental challenges. Until recently, 2D magnetic materials could operate only at very low temperatures, much like superconductors. So bringing their operating temperatures above room temperature has remained a primary goal. Additionally, for use in computers, it is important that they can be controlled electrically, without the need for magnetic fields. Bridging this fundamental gap, where 2D magnetic materials can be electrically switched above room temperature without any magnetic fields, could potentially catapult the translation of 2D magnets into the next generation of “green” computers.

    A team of MIT researchers has now achieved this critical milestone by designing a “van der Waals atomically layered heterostructure” device where a 2D van der Waals magnet, iron gallium telluride, is interfaced with another 2D material, tungsten ditelluride. In an open-access paper published March 15 in Science Advances, the team shows that the magnet can be toggled between the 0 and 1 states simply by applying pulses of electrical current across their two-layer device. 

    Video: “The Future of Spintronics: Manipulating Spins in Atomic Layers without External Magnetic Fields” (Deblina Sarkar)

    “Our device enables robust magnetization switching without the need for an external magnetic field, opening up unprecedented opportunities for ultra-low power and environmentally sustainable computing technology for big data and AI,” says lead author Deblina Sarkar, the AT&T Career Development Assistant Professor at the MIT Media Lab and Center for Neurobiological Engineering, and head of the Nano-Cybernetic Biotrek research group. “Moreover, the atomically layered structure of our device provides unique capabilities including improved interface and possibilities of gate voltage tunability, as well as flexible and transparent spintronic technologies.”

    Sarkar is joined on the paper by first author Shivam Kajale, a graduate student in Sarkar’s research group at the Media Lab; Thanh Nguyen, a graduate student in the Department of Nuclear Science and Engineering (NSE); Nguyen Tuan Hung, an MIT visiting scholar in NSE and an assistant professor at Tohoku University in Japan; and Mingda Li, associate professor of NSE.

    Breaking the mirror symmetries 

    When electric current flows through heavy metals like platinum or tantalum, the electrons get segregated in the materials based on their spin component, a phenomenon called the spin Hall effect, says Kajale. The way this segregation happens depends on the material, and particularly its symmetries.

    “The conversion of electric current to spin currents in heavy metals lies at the heart of controlling magnets electrically,” Kajale notes. “The microscopic structure of conventionally used materials, like platinum, has a kind of mirror symmetry, which restricts the spin currents only to in-plane spin polarization.”

    Kajale explains that two mirror symmetries must be broken to produce an “out-of-plane” spin component that can be transferred to a magnetic layer to induce field-free switching. “Electrical current can ‘break’ the mirror symmetry along one plane in platinum, but its crystal structure prevents the mirror symmetry from being broken in a second plane.”

    In their earlier experiments, the researchers used a small magnetic field to break the second mirror plane. To eliminate the need for that magnetic nudge, Kajale, Sarkar, and colleagues looked instead for a material whose structure could break the second mirror plane without outside help. That search led them to another 2D material, tungsten ditelluride. The tungsten ditelluride the researchers used has an orthorhombic crystal structure, which intrinsically breaks one mirror plane. By applying current along its low-symmetry axis (parallel to the broken mirror plane), they obtain a spin current with an out-of-plane spin component that can directly induce switching in the ultra-thin magnet interfaced with the tungsten ditelluride. 

    “Because it’s also a 2D van der Waals material, it can also ensure that when we stack the two materials together, we get pristine interfaces and a good flow of electron spins between the materials,” says Kajale. 

    Becoming more energy-efficient 

    Computer memory and processors built from magnetic materials use less energy than traditional silicon-based devices. And the van der Waals magnets can offer higher energy efficiency and better scalability compared to bulk magnetic material, the researchers note. 

    The electrical current density needed to switch the magnet determines how much energy is dissipated during switching, so a lower density means a more energy-efficient device. “The new design has one of the lowest current densities in van der Waals magnetic materials,” Kajale says. “It requires an order of magnitude lower switching current than bulk materials, which translates to something like two orders of magnitude improvement in energy efficiency.”
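The jump from one order of magnitude in current to two orders in energy follows from Joule heating, which scales with the square of the current density. A back-of-the-envelope check, using arbitrary illustrative numbers rather than measured values:

```python
# Joule dissipation per switching event scales as E = rho * J^2 * V * t,
# so cutting the switching current density J by 10x cuts the dissipated
# energy by roughly 100x. All quantities here are in arbitrary units.

def switching_energy(j, resistivity=1.0, volume=1.0, pulse_width=1.0):
    """Energy dissipated by a switching pulse: E = rho * J^2 * V * t."""
    return resistivity * j**2 * volume * pulse_width

j_bulk = 10.0   # representative bulk-material current density
j_vdw = 1.0     # an order of magnitude lower, as in the vdW device

ratio = switching_energy(j_bulk) / switching_energy(j_vdw)
print(ratio)    # 100.0 -> two orders of magnitude in energy
```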

    The research team is now looking at similar low-symmetry van der Waals materials to see if they can reduce current density even further. They are also hoping to collaborate with other researchers to find ways to manufacture the 2D magnetic switch devices at commercial scale. 

    This work was carried out, in part, using the facilities at MIT.nano. It was funded by the Media Lab, the U.S. National Science Foundation, and the U.S. Department of Energy.