More stories

  • Taking the “training wheels” off clean energy

    Renewable power sources have seen unprecedented levels of investment in recent years. But with political uncertainty clouding the future of subsidies for green energy, these technologies must begin to compete with fossil fuels on equal footing, said participants at the 2025 MIT Energy Conference.

    “What these technologies need less is training wheels, and more of a level playing field,” said Brian Deese, an MIT Institute Innovation Fellow, during a conference-opening keynote panel.

    The theme of the two-day conference, which is organized each year by MIT students, was “Breakthrough to deployment: Driving climate innovation to market.” Speakers largely expressed optimism about advancements in green technology, balanced by occasional notes of alarm about a rapidly changing regulatory and political environment.

    Deese defined what he called “the good, the bad, and the ugly” of the current energy landscape. The good: Clean energy investment in the United States hit an all-time high of $272 billion in 2024. The bad: Announcements of future investments have tailed off. And the ugly: Macro conditions are making it more difficult for utilities and private enterprise to build out the clean energy infrastructure needed to meet growing energy demands.

    “We need to build massive amounts of energy capacity in the United States,” Deese said. “And the three things that are the most allergic to building are high uncertainty, high interest rates, and high tariff rates. So that’s kind of ugly. But the question … is how, and in what ways, that underlying commercial momentum can drive through this period of uncertainty.”

    A shifting clean energy landscape

    During a panel on artificial intelligence and growth in electricity demand, speakers said that the technology may serve as a catalyst for green energy breakthroughs, in addition to putting strain on existing infrastructure. “Google is committed to building digital infrastructure responsibly, and part of that means catalyzing the development of clean energy infrastructure that is not only meeting the AI need, but also benefiting the grid as a whole,” said Lucia Tian, head of clean energy and decarbonization technologies at Google.

    Across the two days, speakers emphasized that the cost-per-unit and scalability of clean energy technologies will ultimately determine their fate. But they also acknowledged the impact of public policy, as well as the need for government investment to tackle large-scale issues like grid modernization.

    Vanessa Chan, a former U.S. Department of Energy (DoE) official and current vice dean of innovation and entrepreneurship at the University of Pennsylvania School of Engineering and Applied Sciences, warned of the “knock-on” effects of the move to slash National Institutes of Health (NIH) funding for indirect research costs, for example. “In reality, what you’re doing is undercutting every single academic institution that does research across the nation,” she said.

    During a panel titled “No clean energy transition without transmission,” Maria Robinson, former director of the DoE’s Grid Deployment Office, said that ratepayers alone will likely not be able to fund the grid upgrades needed to meet growing power demand. “The amount of investment we’re going to need over the next couple of years is going to be significant,” she said. “That’s where the federal government is going to have to play a role.”

    David Cohen-Tanugi, a clean energy venture builder at MIT, noted that extreme weather events have changed the climate change conversation in recent years. “There was a narrative 10 years ago that said … if we start talking about resilience and adaptation to climate change, we’re kind of throwing in the towel or giving up,” he said. “I’ve noticed a very big shift in the investor narrative, the startup narrative, and more generally, the public consciousness. There’s a realization that the effects of climate change are already upon us.”

    “Everything on the table”

    The conference featured panels and keynote addresses on a range of emerging clean energy technologies, including hydrogen power, geothermal energy, and nuclear fusion, as well as a session on carbon capture.

    Alex Creely, a chief engineer at Commonwealth Fusion Systems, explained that fusion (the combining of small atoms into larger atoms, which is the same process that fuels stars) is safer and potentially more economical than traditional nuclear power. Fusion facilities, he said, can be powered down instantaneously, and companies like his are developing new, less-expensive magnet technology to contain the extreme heat produced by fusion reactors.

    By the early 2030s, Creely said, his company hopes to be operating 400-megawatt power plants that use only 50 kilograms of fuel per year. “If you can get fusion working, it turns energy into a manufacturing product, not a natural resource,” he said.

    Quinn Woodard Jr., senior director of power generation and surface facilities at geothermal energy supplier Fervo Energy, said his company is making geothermal energy more economical through standardization, innovation, and economies of scale. Traditionally, he said, drilling is the largest cost in producing geothermal power. Fervo has “completely flipped the cost structure” with advances in drilling, Woodard said, and now the company is focused on bringing down its power plant costs. “We have to continuously be focused on cost, and achieving that is paramount for the success of the geothermal industry,” he said.

    One common theme across the conference: a number of approaches are making rapid advancements, but experts aren’t sure when — or, in some cases, if — each specific technology will reach a tipping point where it is capable of transforming energy markets.

    “I don’t want to get caught in a place where we often descend in this climate solution situation, where it’s either-or,” said Peter Ellis, global director of nature climate solutions at The Nature Conservancy. “We’re talking about the greatest challenge civilization has ever faced. We need everything on the table.”

    The road ahead

    Several speakers stressed the need for academia, industry, and government to collaborate in pursuit of climate and energy goals. Amy Luers, senior global director of sustainability for Microsoft, compared the challenge to the Apollo spaceflight program, and she said that academic institutions need to focus more on how to scale and spur investments in green energy. “The challenge is that academic institutions are not currently set up to be able to learn the how, in driving both bottom-up and top-down shifts over time,” Luers said. “If the world is going to succeed in our road to net zero, the mindset of academia needs to shift. And fortunately, it’s starting to.”

    During a panel called “From lab to grid: Scaling first-of-a-kind energy technologies,” Hannan Happi, CEO of renewable energy company Exowatt, stressed that electricity is ultimately a commodity. “Electrons are all the same,” he said. “The only thing [customers] care about with regards to electrons is that they are available when they need them, and that they’re very cheap.”

    Melissa Zhang, principal at Azimuth Capital Management, noted that energy infrastructure development cycles typically take at least five to 10 years — longer than a U.S. political cycle. However, she warned that green energy technologies are unlikely to receive significant support at the federal level in the near future. “If you’re in something that’s a little too dependent on subsidies … there is reason to be concerned over this administration,” she said.

    World Energy CEO Gene Gebolys, the moderator of the lab-to-grid panel, listed off a number of companies founded at MIT. “They all have one thing in common,” he said. “They all went from somebody’s idea, to a lab, to proof-of-concept, to scale. It’s not like any of this stuff ever ends. It’s an ongoing process.”
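    Deese’s point about interest rates can be made concrete with a standard levelized-cost calculation: because solar and wind plants are almost entirely capital cost, the financing rate dominates what their energy ends up costing. The sketch below is illustrative only; the capital cost, lifetime, and capacity factor are assumed round numbers, not figures from the conference.

```python
# Illustrative sketch (not from the conference): how the financing rate alone
# moves the levelized cost of a capital-intensive plant such as utility solar.
# All inputs are assumed round numbers chosen for illustration.

def levelized_cost_cents_per_kwh(capex_per_kw, rate, years, capacity_factor):
    """Levelized cost of energy from capital recovery alone, ignoring O&M and fuel."""
    # Capital recovery factor: annual payment per dollar financed at `rate` over `years`.
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    annual_kwh_per_kw = capacity_factor * 8760  # hours in a year
    return 100 * capex_per_kw * crf / annual_kwh_per_kw  # cents per kWh

CAPEX = 1100   # assumed installed cost, $/kW
LIFETIME = 30  # assumed plant lifetime, years
CF = 0.25      # assumed capacity factor

for r in (0.03, 0.08):
    lcoe = levelized_cost_cents_per_kwh(CAPEX, r, LIFETIME, CF)
    print(f"financing rate {r:.0%}: ~{lcoe:.1f} cents/kWh from capital alone")

# Moving from 3 percent to 8 percent financing raises the capital component of
# the cost by roughly 75 percent, which is why high interest rates weigh so
# heavily on new clean energy construction.
```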

  • Surprise discovery could lead to improved catalysts for industrial reactions

    The process of catalysis — in which a material speeds up a chemical reaction — is crucial to the production of many of the chemicals used in our everyday lives. But even though these catalytic processes are widespread, researchers often lack a clear understanding of exactly how they work.

    A new analysis by researchers at MIT has shown that an important industrial synthesis process, the production of vinyl acetate, requires a catalyst to take two different forms, which cycle back and forth from one to the other as the chemical process unfolds.

    Previously, it had been thought that only one of the two forms was needed. The new findings are published today in the journal Science, in a paper by MIT graduate students Deiaa Harraz and Kunal Lodaya, Bryan Tang PhD ’23, and MIT professor of chemistry and chemical engineering Yogesh Surendranath.

    There are two broad classes of catalysts: homogeneous catalysts, which consist of dissolved molecules, and heterogeneous catalysts, which are solid materials whose surface provides the site for the chemical reaction. “For the longest time,” Surendranath says, “there’s been a general view that you either have catalysis happening on these surfaces, or you have them happening on these soluble molecules.” But the new research shows that in the case of vinyl acetate — an important material that goes into many polymer products such as the rubber in the soles of your shoes — there is an interplay between both classes of catalysis.

    “What we discovered,” Surendranath explains, “is that you actually have these solid metal materials converting into molecules, and then converting back into materials, in a cyclic dance.”

    He adds: “This work calls into question this paradigm where there’s either one flavor of catalysis or another. Really, there could be an interplay between both of them in certain cases, and that could be really advantageous for having a process that’s selective and efficient.”

    The synthesis of vinyl acetate has been a large-scale industrial reaction since the 1960s, and it has been well-researched and refined over the years to improve efficiency. This has happened largely through a trial-and-error approach, without a precise understanding of the underlying mechanisms, the researchers say.

    While chemists are often more familiar with homogeneous catalysis mechanisms, and chemical engineers are often more familiar with surface catalysis mechanisms, fewer researchers study both. This is perhaps part of the reason that the full complexity of this reaction was not previously captured. But Harraz says he and his colleagues are working at the interface between disciplines. “We’ve been able to appreciate both sides of this reaction and find that both types of catalysis are critical,” he says.

    The reaction that produces vinyl acetate requires something to activate the oxygen molecules that are one of the constituents of the reaction, and something else to activate the other ingredients, acetic acid and ethylene. The researchers found that the form of the catalyst that worked best for one part of the process was not the best for the other. It turns out that the molecular form of the catalyst does the key chemistry with the ethylene and the acetic acid, while it’s the surface that ends up doing the activation of the oxygen.

    They found that the underlying process involved in interconverting the two forms of the catalyst is actually corrosion, similar to the process of rusting. “It turns out that in rusting, you actually go through a soluble molecular species somewhere in the sequence,” Surendranath says.

    The team borrowed techniques traditionally used in corrosion research to study the process. They used electrochemical tools to study the reaction, even though the overall reaction does not require a supply of electricity. By making potential measurements, the researchers determined that the corrosion of the palladium catalyst material to soluble palladium ions is driven by an electrochemical reaction with the oxygen, converting it to water. Corrosion is “one of the oldest topics in electrochemistry,” says Lodaya, “but applying the science of corrosion to understand catalysis is much newer, and was essential to our findings.”

    By correlating measurements of catalyst corrosion with other measurements of the chemical reaction taking place, the researchers proposed that it was the corrosion rate that was limiting the overall reaction. “That’s the choke point that’s controlling the rate of the overall process,” Surendranath says.

    The interplay between the two types of catalysis works efficiently and selectively “because it actually uses the synergy of a material surface doing what it’s good at and a molecule doing what it’s good at,” Surendranath says. The finding suggests that, when designing new catalysts, rather than focusing on either solid materials or soluble molecules alone, researchers should think about how the interplay of both may open up new approaches.

    “Now, with an improved understanding of what makes this catalyst so effective, you can try to design specific materials or specific interfaces that promote the desired chemistry,” Harraz says. Since this process has been worked on for so long, these findings may not necessarily lead to improvements in this specific process of making vinyl acetate, but it does provide a better understanding of why the materials work as they do, and could lead to improvements in other catalytic processes.

    Understanding that “catalysts can transit between molecule and material and back, and the role that electrochemistry plays in those transformations, is a concept that we are really excited to expand on,” Lodaya says.

    Harraz adds: “With this new understanding that both types of catalysis could play a role, what other catalytic processes are out there that actually involve both? Maybe those have a lot of room for improvement that could benefit from this understanding.”

    This work is “illuminating, something that will be worth teaching at the undergraduate level,” says Christophe Coperet, a professor of inorganic chemistry at ETH Zurich, who was not associated with the research. “The work highlights new ways of thinking. … [It] is notable in the sense that it not only reconciles homogeneous and heterogeneous catalysis, but it describes these complex processes as half reactions, where electron transfers can cycle between distinct entities.”

    The research was supported, in part, by the National Science Foundation as a Phase I Center for Chemical Innovation; the Center for Interfacial Ionics; and the Gordon and Betty Moore Foundation.
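    The idea that the corrosion step sets the pace of the whole cycle can be illustrated with a toy two-step model. This is a generic kinetic sketch under simple pseudo-first-order assumptions, not the authors’ kinetic model or data: if the catalyst must pass through a corrosion step and a molecular-chemistry step in sequence, the slower of the two dominates the turnover frequency.

```python
# Toy kinetic sketch (not the paper's model): a catalyst cycling between a
# surface form and a dissolved molecular form via two sequential steps.
#   k_corr: rate constant for corrosion (surface -> dissolved molecule)
#   k_chem: rate constant for the molecular chemistry and redeposition
# For two sequential pseudo-first-order steps in a cycle, the turnover
# frequency is 1 / (1/k_corr + 1/k_chem), so the slower step dominates.

def turnover_frequency(k_corr, k_chem):
    """Turnover frequency of a two-step catalytic cycle (per site, per second)."""
    return 1.0 / (1.0 / k_corr + 1.0 / k_chem)

for k_corr, k_chem in [(0.1, 10.0), (1.0, 10.0), (10.0, 10.0)]:
    tof = turnover_frequency(k_corr, k_chem)
    print(f"k_corr={k_corr:5.1f}, k_chem={k_chem:5.1f} -> TOF ~ {tof:.2f} per site per s")

# When k_corr << k_chem, TOF is approximately k_corr: speeding up the molecular
# chemistry barely helps, which is the signature of a corrosion-limited cycle.
```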

  • Collaboration between MIT and GE Vernova aims to develop and scale sustainable energy systems

    MIT and GE Vernova today announced the creation of the MIT-GE Vernova Energy and Climate Alliance to help develop and scale sustainable energy systems across the globe.

    The alliance launches a five-year collaboration between MIT and GE Vernova, a global energy company that spun off from General Electric’s energy business in 2024. The endeavor will encompass research, education, and career opportunities for students, faculty, and staff across MIT’s five schools and the MIT Schwarzman College of Computing. It will focus on three main themes: decarbonization, electrification, and renewables acceleration.

    “This alliance will provide MIT students and researchers with a tremendous opportunity to work on energy solutions that could have real-world impact,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer and dean of the School of Engineering. “GE Vernova brings domain knowledge and expertise deploying these at scale. When our researchers develop new innovative technologies, GE Vernova is strongly positioned to bring them to global markets.”

    Through the alliance, GE Vernova is sponsoring research projects at MIT and providing philanthropic support for MIT research fellowships. The company will also engage with MIT’s community through participation in corporate membership programs and professional education.

    “It’s a privilege to combine forces with MIT’s world-class faculty and students as we work together to realize an optimistic, innovation-driven approach to solving the world’s most pressing challenges,” says Scott Strazik, GE Vernova CEO. “Through this alliance, we are proud to be able to help drive new technologies while at the same time inspire future leaders to play a meaningful role in deploying technology to improve the planet at companies like GE Vernova.”

    “This alliance embodies the spirit of the MIT Climate Project — combining cutting-edge research, a shared drive to tackle today’s toughest energy challenges, and a deep sense of optimism about what we can achieve together,” says Sally Kornbluth, president of MIT. “With the combined strengths of MIT and GE Vernova, we have a unique opportunity to make transformative progress in the flagship areas of electrification, decarbonization, and renewables acceleration.”

    The alliance, comprising a $50 million commitment, will operate within MIT’s Office of Innovation and Strategy. It will fund approximately 12 annual research projects relating to the three themes, as well as three master’s student projects in MIT’s Technology and Policy Program. The research projects will address challenges like developing and storing clean energy, as well as the creation of robust system architectures that help sustainable energy sources like solar, wind, advanced nuclear reactors, green hydrogen, and more compete with carbon-emitting sources.

    The projects will be selected by a joint steering committee composed of representatives from MIT and GE Vernova, following an annual Institute-wide call for proposals.

    The collaboration will also create approximately eight endowed GE Vernova research fellowships for MIT students, to be selected by faculty and beginning in the fall. There will also be 10 student internships that will span GE Vernova’s global operations, and GE Vernova will also sponsor programming through MIT’s New Engineering Education Transformation (NEET), which equips students with career-oriented experiential opportunities. Additionally, the alliance will create professional education programming for GE Vernova employees.

    “The internships and fellowships will be designed to bring students into our ecosystem,” says GE Vernova Chief Corporate Affairs Officer Roger Martella. “Students will walk our factory floor, come to our labs, be a part of our management teams, and see how we operate as business leaders. They’ll get a sense for how what they’re learning in the classroom is being applied in the real world.”

    Philanthropic support from GE Vernova will also support projects in MIT’s Human Insight Collaborative (MITHIC), which launched last fall to elevate human-centered research and teaching. The projects will allow faculty to explore how areas like energy and cybersecurity influence human behavior and experiences.

    In connection with the alliance, GE Vernova is expected to join several MIT consortia and membership programs, helping foster collaborations and dialogue between industry experts and researchers and educators across campus.

    With operations across more than 100 countries, GE Vernova designs, manufactures, and services technologies to generate, transfer, and store electricity with a mission to decarbonize the world. The company is headquartered in Kendall Square, right down the road from MIT, which its leaders say is not a coincidence.

    “We’re really good at taking proven technologies and commercializing them and scaling them up through our labs,” Martella says. “MIT excels at coming up with those ideas and being a sort of time machine that thinks outside the box to create the future. That’s why this is such a great fit: We both have a commitment to research, innovation, and technology.”

    The alliance is the latest in MIT’s rapidly growing portfolio of research and innovation initiatives around sustainable energy systems, which also includes the Climate Project at MIT. Separate from, but complementary to, the MIT-GE Vernova Alliance, the Climate Project is a campus-wide effort to develop technological, behavioral, and policy solutions to some of the toughest problems impeding an effective global climate response.

  • Making solar projects cheaper and faster with portable factories

    As the price of solar panels has plummeted in recent decades, installation costs have taken up a greater share of the technology’s overall price tag. The long installation process for solar farms is also emerging as a key bottleneck in the deployment of solar energy.

    Now the startup Charge Robotics is developing solar installation factories to speed up the process of building large-scale solar farms. The company’s factories are shipped to the site of utility solar projects, where equipment including tracks, mounting brackets, and panels is fed into the system and automatically assembled. A robotic vehicle autonomously puts the finished product — which amounts to a completed section of solar farm — in its final place.

    “We think of this as the Henry Ford moment for solar,” says CEO Banks Hunter ’15, who founded Charge Robotics with fellow MIT alumnus Max Justicz ’17. “We’re going from a very bespoke, hands-on, manual installation process to something much more streamlined and set up for mass manufacturing. There are all kinds of benefits that come along with that, including consistency, quality, speed, cost, and safety.”

    Last year, solar energy accounted for 81 percent of new electric capacity in the U.S., and Hunter and Justicz see their factories as necessary for continued acceleration in the industry.

    The founders say they were met with skepticism when they first unveiled their plans. But in the beginning of last year, they deployed a prototype system that successfully built a solar farm with SOLV Energy, one of the largest solar installers in the U.S. Now, Charge has raised $22 million for its first commercial deployments later this year.

    From surgical robots to solar robots

    While majoring in mechanical engineering at MIT, Hunter found plenty of excuses to build things. One such excuse was Course 2.009 (Product Engineering Processes), where he and his classmates built a smart watch for communication in remote areas.

    After graduation, Hunter worked for the MIT alumni-founded startups Shaper Tools and Vicarious Surgical. Vicarious Surgical is a medical robotics company that has raised more than $450 million to date. Hunter was the second employee and worked there for five years.

    “A lot of really hands-on, project-based classes at MIT translated directly into my first roles coming out of school and set me up to be very independent and run large engineering projects,” Hunter says. “Course 2.009, in particular, was a big launch point for me. The founders of Vicarious Surgical got in touch with me through the 2.009 network.”

    As early as 2017, Hunter and Justicz, who majored in mechanical engineering and computer science, had discussed starting a company together. But they had to decide where to apply their broad engineering and product skillsets.

    “Both of us care a lot about climate change. We see climate change as the biggest problem impacting the greatest number of people on the planet,” Hunter says. “Our mentality was if we can build anything, we might as well build something that really matters.”

    In the process of cold calling hundreds of people in the energy industry, the founders decided solar was the future of energy production because its price was decreasing so quickly. “It’s becoming cheaper faster than any other form of energy production in human history,” Hunter says.

    When the founders began visiting construction sites for the large, utility-scale solar farms that make up the bulk of energy generation, it wasn’t hard to find the bottlenecks. The first site they traveled to was in the Mojave Desert in California. Hunter describes it as a massive dust bowl where thousands of workers spent months repeating tasks like moving material and assembling the same parts, over and over again.

    “The site had something like 2 million panels on it, and every single one was assembled and fastened the same way by hand,” Hunter says. “Max and I thought it was insane. There’s no way that can scale to transform the energy grid in a short window of time.”

    Hunter says he heard from each of the largest solar companies in the U.S. that their biggest limitation for scaling was labor shortages. The problem was slowing growth and killing projects.

    Hunter and Justicz founded Charge Robotics in 2021 to break through that bottleneck. Their first step was to order utility solar parts and assemble them by hand in their backyards.

    “From there, we came up with this portable assembly line that we could ship out to construction sites and then feed in the entire solar system, including the steel tracks, mounting brackets, fasteners, and the solar panels,” Hunter explains. “The assembly line robotically assembles all those pieces to produce completed solar bays, which are chunks of a solar farm.”

    Charge Robotics’ machine transports an autonomously assembled portion of a solar farm to its final place in the field.

    Credit: Courtesy of Charge Robotics


    Each bay represents a 40-foot piece of the solar farm and weighs about 800 pounds. A robotic vehicle brings it to its final location in the field. Hunter says Charge’s system automates all mechanical installation except for the process of pile driving the first metal stakes into the ground.

    Charge’s assembly lines also have machine-vision systems that scan each part to ensure quality, and the systems work with the most common solar parts and panel sizes.

    From pilot to product

    When the founders started pitching their plans to investors and construction companies, people didn’t believe it was possible.

    “The initial feedback was basically, ‘This will never work,’” Hunter says. “But as soon as we took our first system out into the field and people saw it operating, they got much more excited and started believing it was real.”

    Since that first deployment, Charge’s team has been making its system faster and easier to operate. The company plans to set up its factories at project sites and run them in partnership with solar construction companies. The factories could even run alongside human workers.

    “With our system, people are operating robotic equipment remotely rather than putting in the screws themselves,” Hunter explains. “We can essentially deliver the assembled solar to customers. Their only responsibility is to deliver the materials and parts on big pallets that we feed into our system.”

    Hunter says multiple factories could be deployed at the same site and could also operate 24/7 to dramatically speed up projects.

    “We are hitting the limits of solar growth because these companies don’t have enough people,” Hunter says. “We can build much bigger sites much faster with the same number of people by just shipping out more of our factories. It’s a fundamentally new way of scaling solar energy.”

  • Developing materials for stellar performance in fusion power plants

    When Zoe Fisher was in fourth grade, her art teacher asked her to draw her vision of a dream job on paper. At the time, those goals changed like the flavor of the week in an ice cream shop — “zookeeper” featured prominently for a while — but Zoe immediately knew what she wanted to put down: a mad scientist.

    When Fisher stumbled upon the drawing in her parents’ Chicago home recently, it felt serendipitous because, by all measures, she has realized that childhood dream. The second-year doctoral student at MIT’s Department of Nuclear Science and Engineering (NSE) is studying materials for fusion power plants at the Plasma Science and Fusion Center (PSFC) under the advisement of Michael Short, associate professor at NSE. Dennis Whyte, Hitachi America Professor of Engineering at NSE, serves as co-advisor.

    On track to an MIT education

    Growing up in Chicago, Fisher had heard her parents remarking on her reasoning abilities. When she was barely a preschooler she argued that she couldn’t have been found in a purple speckled egg, as her parents claimed they had done.

    Fisher didn’t put together just how much she had gravitated toward science until a high school physics teacher encouraged her to apply to MIT. Passionate about both the arts and sciences, she initially worried that pursuing science would be very rigid, without room for creativity. But she knows now that exploring solutions to problems requires plenty of creative thinking.

    It was a visit to MIT through the Weekend Immersion in Science and Engineering (WISE) that truly opened her eyes to the potential of an MIT education. “It just seemed like the undergraduate experience here is where you can be very unapologetically yourself. There’s no fronting something you don’t want to be like. There’s so much authenticity compared to most other colleges I looked at,” Fisher says. Once admitted, Campus Preview Weekend confirmed that she belonged. “We got to be silly and weird — a version of the Mafia game was a hit — and I was like, ‘These are my people,’” Fisher laughs.

    Pursuing fusion at NSE

    Before she officially started as a first-year in 2018, Fisher enrolled in the Freshman Pre-Orientation Program (FPOP), which starts a week before orientation. Each FPOP zooms into one field. “I’d applied to the nuclear one simply because it sounded cool and I didn’t know anything about it,” Fisher says. She was intrigued right away. “They really got me with that ‘star in a bottle’ line,” she laughs. (The quest for commercial fusion is to create the energy equivalent of a star in a bottle.) Excited by a talk by Zachary Hartwig, Robert N. Noyce Career Development Professor at NSE, Fisher asked if she could work on fusion as an undergraduate as part of an Undergraduate Research Opportunities Program (UROP) project. She started with modeling solders for power plants and was hooked. When Fisher requested more experimental work, Hartwig put her in touch with Research Scientist David Fischer at the PSFC. Fisher moved on to explore superconductors, work that eventually morphed into research for her master’s thesis.

    For her doctoral research, Fisher is extending her master’s work to explore defects in ceramics, specifically in alumina (aluminum oxide). Sapphire coatings are the single-crystal equivalent of alumina, an insulator being explored for use in fusion power plants. “I eventually want to figure out what types of charge defects form in ceramics during radiation damage so we can ultimately engineer radiation-resistant sapphire,” Fisher says.

    When you introduce a material in a fusion power plant, stray high-energy neutrons born from the plasma can collide with it and fundamentally reorder the lattice, which is likely to change a range of thermal, electrical, and structural properties. “Think of a scaffolding outside a building, with each one of those joints as a different atom that holds your material in place. If you go in and you pull a joint out, there’s a chance that you pulled out a joint that wasn’t structurally sound, in which case everything would be fine. But there’s also a chance that you pull a joint out and everything alters. And [such unpredictability] is a problem,” Fisher says. “We need to be able to account for exactly how these neutrons are going to alter the lattice property,” she says, and it’s one of the topics her research explores.

    The studies, in turn, can function as a jumping-off point for irradiating superconductors. The goals are two-fold: “I want to figure out how I can make an industry-usable ceramic you can use to insulate the inside of a fusion power plant, and then also figure out if I can take this information that I’m getting with ceramics and make it superconductor-relevant,” Fisher says. “Superconductors are the electromagnets we will use to contain the plasma inside fusion power plants. However, they prove pretty difficult to study. Since they are also ceramic, you can draw a lot of parallels between alumina and yttrium barium copper oxide (YBCO), the specific superconductor we use,” she adds. Fisher is also excited about the many experiments she performs using a particle accelerator, one of which involves measuring exactly how surface thermal properties change during radiation.

    Sailing new paths

    It’s not just her research that Fisher loves. As an undergrad, and during her master’s, she was on the varsity sailing team. “I worked my way into sailing with literal Olympians, I did not see that coming,” she says. Fisher participates in Chicago’s Race to Mackinac and the Melges 15 Series every chance she gets. Of all the types of boats she has sailed, she prefers dinghy sailing the most. “It’s more physical, you have to throw yourself around a lot and there’s this immediate cause and effect, which I like,” Fisher says. She also teaches sailing lessons in the summer at MIT’s Sailing Pavilion — you can find her on a small motorboat, issuing orders through a speaker.

    Teaching has figured prominently throughout Fisher’s time at MIT. Through MISTI, Fisher has taught high school classes in Germany and a radiation and materials class in Armenia in her senior year. She was delighted by the food and culture in Armenia and by how excited people were to learn new ideas. Her love of teaching continues, as she has reached out to high schools in the Boston area. “I like talking to groups and getting them excited about fusion, or even maybe just the concept of attending graduate school,” Fisher says, adding that teaching the ropes of an experiment one-on-one is “one of the most rewarding things.”

    She also learned the value of resilience and quick thinking on various other MISTI trips. Despite her love of travel, Fisher has had a few harrowing experiences with tough situations and plans falling through at the last minute. That’s when she tells herself, “Well, the only thing that you’re gonna do is you’re gonna keep doing what you wanted to do.”

    That eyes-on-the-prize focus has stood Fisher in good stead, and continues to serve her well in her research today.

  • Will neutrons compromise the operation of superconducting magnets in a fusion plant?

    High-temperature superconducting magnets made from REBCO, an acronym for rare earth barium copper oxide, make it possible to create an intense magnetic field that can confine the extremely hot plasma needed for fusion reactions, which combine two hydrogen atoms to form an atom of helium, releasing a neutron in the process.

    But some early tests suggested that neutron irradiation inside a fusion power plant might instantaneously suppress the superconducting magnets’ ability to carry current without resistance (called critical current), potentially causing a reduction in the fusion power output.

    Now, a series of experiments has clearly demonstrated that this instantaneous effect of neutron bombardment, known as the “beam on effect,” should not be an issue during reactor operation, thus clearing the path for projects such as the ARC fusion system being developed by MIT spinoff company Commonwealth Fusion Systems.

    The findings were reported in the journal Superconductor Science and Technology, in a paper by MIT graduate student Alexis Devitre and professors Michael Short, Dennis Whyte, and Zachary Hartwig, along with six others.

    “Nobody really knew if it would be a concern,” Short explains. He recalls looking at these early findings: “Our group thought, man, somebody should really look into this. But now, luckily, the result of the paper is: It’s conclusively not a concern.”

    The possible issue first arose during some initial tests of the REBCO tapes planned for use in the ARC system. “I can remember the night when we first tried the experiment,” Devitre recalls. “We were all down in the accelerator lab, in the basement. It was a big shocker because suddenly the measurement we were looking at, the critical current, just went down by 30 percent” when it was measured under radiation conditions (approximating those of the fusion system), as opposed to when it was only measured after irradiation.

    Before that, researchers had irradiated the REBCO tapes and then tested them afterward, Short says. “We had the idea to measure while irradiating, the way it would be when the reactor’s really on,” he says. “And then we observed this giant difference, and we thought, oh, this is a big deal. It’s a margin you’d want to know about if you’re designing a reactor.”

    After a series of carefully calibrated tests, it turned out the drop in critical current was not caused by the irradiation at all, but was just an effect of temperature changes brought on by the proton beam used for the irradiation experiments. This is something that would not be a factor in an actual fusion plant, Short says.

    “We repeated experiments ‘oh so many times’ and collected about a thousand data points,” Devitre says. They then went through a detailed statistical analysis to show that the effects were exactly the same, under conditions where the material was just heated as when it was both heated and irradiated.

    This excluded the possibility that the instantaneous suppression of the critical current had anything to do with the “beam on effect,” at least within the sensitivity of their tests. “Our experiments are quite sensitive,” Short says. “We can never say there’s no effect, but we can say that there’s no important effect.”

    Carrying out these tests required building a special facility for the purpose. Only a few such facilities exist in the world. “They’re all custom builds, and without this, we wouldn’t have been able to find out the answer,” he says.

    The finding that this specific issue is not a concern for the design of fusion plants “illustrates the power of negative results. If you can conclusively prove that something doesn’t happen, you can stop scientists from wasting their time hunting for something that doesn’t exist.” And in this case, Short says, “You can tell the fusion companies: ‘You might have thought this effect would be real, but we’ve proven that it’s not, and you can ignore it in your designs.’ So that’s one more risk retired.”

    That could be a relief to not only Commonwealth Fusion Systems but also several other companies that are also pursuing fusion plant designs, Devitre says. “There’s a bunch. And it’s not just fusion companies,” he adds. There remains the important issue of longer-term degradation of the REBCO that would occur over years or decades, which the group is presently investigating. Others are pursuing the use of these magnets for satellite thrusters and particle accelerators to study subatomic physics, where the effect could also have been a concern. For all these uses, “this is now one less thing to be concerned about,” Devitre says.

    The research team also included David Fischer, Kevin Woller, Maxwell Rae, Lauryn Kortman, and Zoe Fisher at MIT, and N. Riva at Proxima Fusion in Germany. This research was supported by Eni S.p.A. through the MIT Energy Initiative.
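    The kind of comparison the team describes, showing that heated-only samples behave the same as heated-and-irradiated ones, can be illustrated with a generic two-sample test. The sketch below uses invented numbers and a standard Welch’s t-test; it is not the group’s data or analysis pipeline.

```python
# Generic illustration (invented numbers, not the team's data): comparing the
# fractional drop in critical current under "heated only" vs. "heated and
# irradiated" conditions with a two-sample Welch's t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical fractional drops in critical current (0.30 means a 30 percent drop)
heated_only = rng.normal(loc=0.30, scale=0.02, size=500)
heated_and_irradiated = rng.normal(loc=0.30, scale=0.02, size=500)

t_stat, p_value = stats.ttest_ind(heated_only, heated_and_irradiated, equal_var=False)
print(f"mean drop (heated only):         {heated_only.mean():.3f}")
print(f"mean drop (heated + irradiated): {heated_and_irradiated.mean():.3f}")
print(f"Welch's t-test p-value:          {p_value:.3f}")

# A large p-value is consistent with "no additional beam-on effect" within the
# sensitivity of the measurement; it does not prove the effect is exactly zero,
# which mirrors Short's caveat in the article.
```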

  • Rooftop panels, EV chargers, and smart thermostats could chip in to boost power grid resilience

    There’s a lot of untapped potential in our homes and vehicles that could be harnessed to reinforce local power grids and make them more resilient to unforeseen outages, a new study shows.

    In response to a cyber attack or natural disaster, a backup network of decentralized devices — such as residential solar panels, batteries, electric vehicles, heat pumps, and water heaters — could restore electricity or relieve stress on the grid, MIT engineers say.

    Such devices are “grid-edge” resources found close to the consumer rather than near central power plants, substations, or transmission lines. Grid-edge devices can independently generate, store, or tune their consumption of power. In their study, the research team shows how such devices could one day be called upon to either pump power into the grid, or rebalance it by dialing down or delaying their power use.

    In a paper appearing this week in the Proceedings of the National Academy of Sciences, the engineers present a blueprint for how grid-edge devices could reinforce the power grid through a “local electricity market.” Owners of grid-edge devices could subscribe to a regional market and essentially loan out their device to be part of a microgrid or a local network of on-call energy resources.

    In the event that the main power grid is compromised, an algorithm developed by the researchers would kick in for each local electricity market, to quickly determine which devices in the network are trustworthy. The algorithm would then identify the combination of trustworthy devices that would most effectively mitigate the power failure, by either pumping power into the grid or reducing the power they draw from it, by an amount that the algorithm would calculate and communicate to the relevant subscribers. The subscribers could then be compensated through the market, depending on their participation.

    The team illustrated this new framework through a number of grid attack scenarios, in which they considered failures at different levels of a power grid, from various sources such as a cyber attack or a natural disaster. Applying their algorithm, they showed that various networks of grid-edge devices were able to mitigate the attacks.

    The results demonstrate that grid-edge devices such as rooftop solar panels, EV chargers, batteries, and smart thermostats (for HVAC devices or heat pumps) could be tapped to stabilize the power grid in the event of an attack.

    “All these small devices can do their little bit in terms of adjusting their consumption,” says study co-author Anu Annaswamy, a research scientist in MIT’s Department of Mechanical Engineering. “If we can harness our smart dishwashers, rooftop panels, and EVs, and put our combined shoulders to the wheel, we can really have a resilient grid.”

    The study’s MIT co-authors include lead author Vineet Nair and John Williams, along with collaborators from multiple institutions including the Indian Institute of Technology, the National Renewable Energy Laboratory, and elsewhere.

    Power boost

    The team’s study is an extension of their broader work in adaptive control theory and designing systems to automatically adapt to changing conditions. Annaswamy, who leads the Active-Adaptive Control Laboratory at MIT, explores ways to boost the reliability of renewable energy sources such as solar power.

    “These renewables come with a strong temporal signature, in that we know for sure the sun will set every day, so the solar power will go away,” Annaswamy says. “How do you make up for the shortfall?”

    The researchers found the answer could lie in the many grid-edge devices that consumers are increasingly installing in their own homes.

    “There are lots of distributed energy resources that are coming up now, closer to the customer rather than near large power plants, and it’s mainly because of individual efforts to decarbonize,” Nair says. “So you have all this capability at the grid edge. Surely we should be able to put them to good use.”

    While considering ways to deal with drops in energy from the normal operation of renewable sources, the team also began to look into other causes of power dips, such as from cyber attacks. They wondered, in these malicious instances, whether and how the same grid-edge devices could step in to stabilize the grid following an unforeseen, targeted attack.

    Attack mode

    In their new work, Annaswamy, Nair, and their colleagues developed a framework for incorporating grid-edge devices, and in particular, internet-of-things (IoT) devices, in a way that would support the larger grid in the event of an attack or disruption. IoT devices are physical objects that contain sensors and software that connect to the internet.

    For their new framework, named EUREICA (Efficient, Ultra-REsilient, IoT-Coordinated Assets), the researchers start with the assumption that one day, most grid-edge devices will also be IoT devices, enabling rooftop panels, EV chargers, and smart thermostats to wirelessly connect to a larger network of similarly independent and distributed devices. The team envisions that for a given region, such as a community of 1,000 homes, there exists a certain number of IoT devices that could potentially be enlisted in the region’s local network, or microgrid. Such a network would be managed by an operator, who would be able to communicate with operators of other nearby microgrids.

    If the main power grid is compromised or attacked, operators would run the researchers’ decision-making algorithm to determine trustworthy devices within the network that can pitch in to help mitigate the attack.

    The team tested the algorithm on a number of scenarios, such as a cyber attack in which all smart thermostats made by a certain manufacturer are hacked to raise their setpoints simultaneously to a degree that dramatically alters a region’s energy load and destabilizes the grid. The researchers also considered attacks and weather events that would shut off the transmission of energy at various levels and nodes throughout a power grid.

    “In our attacks we consider between 5 and 40 percent of the power being lost. We assume some nodes are attacked, and some are still available and have some IoT resources, whether a battery with energy available or an EV or HVAC device that’s controllable,” Nair explains. “So, our algorithm decides which of those houses can step in to either provide extra power generation to inject into the grid or reduce their demand to meet the shortfall.”

    In every scenario that they tested, the team found that the algorithm was able to successfully restabilize the grid and mitigate the attack or power failure. They acknowledge that to put in place such a network of grid-edge devices will require buy-in from customers, policymakers, and local officials, as well as innovations such as advanced power inverters that enable EVs to inject power back into the grid.

    “This is just the first of many steps that have to happen in quick succession for this idea of local electricity markets to be implemented and expanded upon,” Annaswamy says. “But we believe it’s a good start.”

    This work was supported, in part, by the U.S. Department of Energy and the MIT Energy Initiative.
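    The decision the operator faces, choosing which trustworthy devices should inject power or shed load to cover a shortfall, can be sketched as a simple greedy allocation. The code below is a hypothetical illustration of that idea only, not the EUREICA algorithm; the device names, trust scores, and capacities are invented.

```python
# Hypothetical sketch of the allocation idea behind a local electricity market:
# cover a power shortfall using only trustworthy grid-edge devices, taking the
# largest contributions first. This is NOT the EUREICA algorithm; all devices,
# trust scores, and capacities below are invented for illustration.
from dataclasses import dataclass

@dataclass
class GridEdgeDevice:
    name: str
    trust: float        # 0.0-1.0, from the operator's trust assessment
    capacity_kw: float  # power it can inject (battery, EV) or shed (HVAC, heater)

def allocate(devices, shortfall_kw, trust_threshold=0.8):
    """Greedily ask trusted devices to cover the shortfall; return (plan, unmet)."""
    trusted = sorted(
        (d for d in devices if d.trust >= trust_threshold),
        key=lambda d: d.capacity_kw,
        reverse=True,
    )
    plan, remaining = [], shortfall_kw
    for d in trusted:
        if remaining <= 0:
            break
        contribution = min(d.capacity_kw, remaining)
        plan.append((d.name, contribution))
        remaining -= contribution
    return plan, max(remaining, 0.0)

devices = [
    GridEdgeDevice("home battery A", 0.95, 5.0),
    GridEdgeDevice("EV charger B", 0.90, 7.0),
    GridEdgeDevice("heat pump C", 0.85, 2.0),
    GridEdgeDevice("hacked thermostat D", 0.20, 1.5),  # excluded as untrustworthy
]
plan, unmet = allocate(devices, shortfall_kw=12.0)
print(plan, "unmet:", unmet)
# -> [('EV charger B', 7.0), ('home battery A', 5.0)] unmet: 0.0
```

    A real market-clearing step would also weigh cost, location on the feeder, and compensation to subscribers; the greedy pass above only conveys the core filter-then-allocate structure described in the article.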

  • Chip-based system for terahertz waves could enable more efficient, sensitive electronics

    The use of terahertz waves, which have shorter wavelengths and higher frequencies than radio waves, could enable faster data transmission, more precise medical imaging, and higher-resolution radar.

    But effectively generating terahertz waves using a semiconductor chip, which is essential for incorporation into electronic devices, is notoriously difficult.

    Many current techniques can’t generate waves with enough radiating power for useful applications unless they utilize bulky and expensive silicon lenses. Higher radiating power allows terahertz signals to travel farther. Such lenses, which are often larger than the chip itself, make it hard to integrate the terahertz source into an electronic device.

    To overcome these limitations, MIT researchers developed a terahertz amplifier-multiplier system that achieves higher radiating power than existing devices without the need for silicon lenses.

    By affixing a thin, patterned sheet of material to the back of the chip and utilizing higher-power Intel transistors, the researchers produced a more efficient, yet scalable, chip-based terahertz wave generator.

    This compact chip could be used to make terahertz arrays for applications like improved security scanners for detecting hidden objects or environmental monitors for pinpointing airborne pollutants.

    “To take full advantage of a terahertz wave source, we need it to be scalable. A terahertz array might have hundreds of chips, and there is no place to put silicon lenses because the chips are combined with such high density. We need a different package, and here we’ve demonstrated a promising approach that can be used for scalable, low-cost terahertz arrays,” says Jinchen Wang, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and lead author of a paper on the terahertz radiator.

    He is joined on the paper by EECS graduate students Daniel Sheen and Xibi Chen; Steven F. Nagel, managing director of the T.J. Rodgers RLE Laboratory; and senior author Ruonan Han, an associate professor in EECS, who leads the Terahertz Integrated Electronics Group. The research will be presented at the IEEE International Solid-State Circuits Conference.

    Making waves

    Terahertz waves sit on the electromagnetic spectrum between radio waves and infrared light. Their higher frequencies enable them to carry more information per second than radio waves, while they can safely penetrate a wider range of materials than infrared light.

    One way to generate terahertz waves is with a CMOS chip-based amplifier-multiplier chain that increases the frequency of radio waves until they reach the terahertz range. To achieve the best performance, waves go through the silicon chip and are eventually emitted out the back into the open air.

    But a property known as the dielectric constant gets in the way of a smooth transmission.

    The dielectric constant influences how electromagnetic waves interact with a material. It affects the amount of radiation that is absorbed, reflected, or transmitted. Because the dielectric constant of silicon is much higher than that of air, most terahertz waves are reflected at the silicon-air boundary rather than being cleanly transmitted out the back.

    Since most signal strength is lost at this boundary, current approaches often use silicon lenses to boost the power of the remaining signal. The MIT researchers approached this problem differently.

    They drew on an electromagnetic concept known as matching. With matching, they seek to equal out the dielectric constants of silicon and air, which will minimize the amount of signal that is reflected at the boundary.

    They accomplish this by sticking a thin sheet of material with a dielectric constant between those of silicon and air to the back of the chip. With this matching sheet in place, most waves will be transmitted out the back rather than being reflected.

    A scalable approach

    They chose a low-cost, commercially available substrate material with a dielectric constant very close to what they needed for matching. To improve performance, they used a laser cutter to punch tiny holes into the sheet until its dielectric constant was exactly right.

    “Since the dielectric constant of air is 1, if you just cut some subwavelength holes in the sheet, it is equivalent to injecting some air, which lowers the overall dielectric constant of the matching sheet,” Wang explains.

    In addition, they designed their chip with special transistors developed by Intel that have a higher maximum frequency and breakdown voltage than traditional CMOS transistors.

    “These two things taken together, the more powerful transistors and the dielectric sheet, plus a few other small innovations, enabled us to outperform several other devices,” he says.

    Their chip generated terahertz signals with a peak radiation power of 11.1 decibel-milliwatts, the best among state-of-the-art techniques. Moreover, since the low-cost chip can be fabricated at scale, it could be integrated into real-world electronic devices more readily.

    One of the biggest challenges of developing a scalable chip was determining how to manage the power and temperature when generating terahertz waves.

    “Because the frequency and the power are so high, many of the standard ways to design a CMOS chip are not applicable here,” Wang says.

    The researchers also needed to devise a technique for installing the matching sheet that could be scaled up in a manufacturing facility.

    Moving forward, they want to demonstrate this scalability by fabricating a phased array of CMOS terahertz sources, enabling them to steer and focus a powerful terahertz beam with a low-cost, compact device.

    This research is supported, in part, by NASA’s Jet Propulsion Laboratory and Strategic University Research Partnerships Program, as well as the MIT Center for Integrated Circuits and Systems. The chip was fabricated through the Intel University Shuttle Program.
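    The matching idea can be made concrete with textbook relations: at normal incidence, the power reflected at a boundary depends on the mismatch in refractive index (the square root of the dielectric constant), and an ideal quarter-wave matching layer has an index equal to the geometric mean of the two media. The sketch below is a generic illustration using those standard formulas plus a crude volume-fraction mixing rule for the perforated sheet; the substrate permittivity and hole fractions are assumed numbers, not the team’s design data.

```python
# Generic textbook illustration (not the team's design data): why a bare
# silicon-air boundary reflects so much terahertz power, and how drilling
# subwavelength holes can pull a sheet's permittivity toward the ideal value.
import math

EPS_SI, EPS_AIR = 11.7, 1.0  # relative permittivities of silicon and air

def reflectance(eps1, eps2):
    """Power reflectance at normal incidence between two lossless dielectrics."""
    n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
    return ((n1 - n2) / (n1 + n2)) ** 2

print(f"bare silicon-air boundary reflects ~{reflectance(EPS_SI, EPS_AIR):.0%} of the power")

# Ideal quarter-wave matching layer: refractive index equals the geometric mean
# of the two media, so its permittivity is sqrt(eps_si * eps_air).
eps_ideal = math.sqrt(EPS_SI * EPS_AIR)
print(f"ideal matching-layer permittivity ~ {eps_ideal:.2f}")

# Crude linear mixing rule: holes with air fraction f lower the sheet's
# effective permittivity toward the target value.
def effective_eps(eps_sheet, air_fraction):
    return air_fraction * EPS_AIR + (1.0 - air_fraction) * eps_sheet

eps_sheet = 4.4  # assumed permittivity of an off-the-shelf substrate
for f in (0.0, 0.2, 0.3):
    print(f"air fraction {f:.0%}: effective permittivity ~ {effective_eps(eps_sheet, f):.2f}")
```

    Even this simple estimate shows roughly 30 percent of the power lost at an unmatched silicon-air interface, which is the loss the matching sheet is meant to recover.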