More stories

  • Making solar projects cheaper and faster with portable factories

    As the price of solar panels has plummeted in recent decades, installation costs have taken up a greater share of the technology’s overall price tag. The long installation process for solar farms is also emerging as a key bottleneck in the deployment of solar energy.

    Now the startup Charge Robotics is developing solar installation factories to speed up the process of building large-scale solar farms. The company’s factories are shipped to the site of utility solar projects, where equipment including tracks, mounting brackets, and panels is fed into the system and automatically assembled. A robotic vehicle autonomously puts the finished product — which amounts to a completed section of solar farm — in its final place.

    “We think of this as the Henry Ford moment for solar,” says CEO Banks Hunter ’15, who founded Charge Robotics with fellow MIT alumnus Max Justicz ’17. “We’re going from a very bespoke, hands-on, manual installation process to something much more streamlined and set up for mass manufacturing. There are all kinds of benefits that come along with that, including consistency, quality, speed, cost, and safety.”

    Last year, solar energy accounted for 81 percent of new electric capacity in the U.S., and Hunter and Justicz see their factories as necessary for continued acceleration in the industry.

    The founders say they were met with skepticism when they first unveiled their plans. But in the beginning of last year, they deployed a prototype system that successfully built a solar farm with SOLV Energy, one of the largest solar installers in the U.S. Now, Charge has raised $22 million for its first commercial deployments later this year.

    From surgical robots to solar robots

    While majoring in mechanical engineering at MIT, Hunter found plenty of excuses to build things. One such excuse was Course 2.009 (Product Engineering Processes), where he and his classmates built a smart watch for communication in remote areas.

    After graduation, Hunter worked for the MIT alumni-founded startups Shaper Tools and Vicarious Surgical. Vicarious Surgical is a medical robotics company that has raised more than $450 million to date. Hunter was the second employee and worked there for five years.

    “A lot of really hands-on, project-based classes at MIT translated directly into my first roles coming out of school and set me up to be very independent and run large engineering projects,” Hunter says. “Course 2.009, in particular, was a big launch point for me. The founders of Vicarious Surgical got in touch with me through the 2.009 network.”

    As early as 2017, Hunter and Justicz, who majored in mechanical engineering and computer science, had discussed starting a company together. But they had to decide where to apply their broad engineering and product skill sets.

    “Both of us care a lot about climate change. We see climate change as the biggest problem impacting the greatest number of people on the planet,” Hunter says. “Our mentality was if we can build anything, we might as well build something that really matters.”

    In the process of cold-calling hundreds of people in the energy industry, the founders decided solar was the future of energy production because its price was decreasing so quickly. “It’s becoming cheaper faster than any other form of energy production in human history,” Hunter says.

    When the founders began visiting construction sites for the large, utility-scale solar farms that make up the bulk of energy generation, it wasn’t hard to find the bottlenecks. The first site they traveled to was in the Mojave Desert in California. Hunter describes it as a massive dust bowl where thousands of workers spent months repeating tasks like moving material and assembling the same parts, over and over again.

    “The site had something like 2 million panels on it, and every single one was assembled and fastened the same way by hand,” Hunter says. “Max and I thought it was insane. There’s no way that can scale to transform the energy grid in a short window of time.”

    Hunter says he heard from each of the largest solar companies in the U.S. that their biggest limitation for scaling was labor shortages. The problem was slowing growth and killing projects.

    Hunter and Justicz founded Charge Robotics in 2021 to break through that bottleneck. Their first step was to order utility solar parts and assemble them by hand in their backyards.

    “From there, we came up with this portable assembly line that we could ship out to construction sites and then feed in the entire solar system, including the steel tracks, mounting brackets, fasteners, and the solar panels,” Hunter explains. “The assembly line robotically assembles all those pieces to produce completed solar bays, which are chunks of a solar farm.”

    Charge Robotics’ machine transports an autonomously assembled section of solar farm to its final place in the field.

    Credit: Courtesy of Charge Robotics

    Each bay represents a 40-foot piece of the solar farm and weighs about 800 pounds. A robotic vehicle brings it to its final location in the field. Hunter says Charge’s system automates all mechanical installation except for the process of pile-driving the first metal stakes into the ground.

    Charge’s assembly lines also have machine-vision systems that scan each part to ensure quality, and the systems work with the most common solar parts and panel sizes.

    From pilot to product

    When the founders started pitching their plans to investors and construction companies, people didn’t believe it was possible.

    “The initial feedback was basically, ‘This will never work,’” Hunter says. “But as soon as we took our first system out into the field and people saw it operating, they got much more excited and started believing it was real.”

    Since that first deployment, Charge’s team has been making its system faster and easier to operate. The company plans to set up its factories at project sites and run them in partnership with solar construction companies. The factories could even run alongside human workers.

    “With our system, people are operating robotic equipment remotely rather than putting in the screws themselves,” Hunter explains. “We can essentially deliver the assembled solar to customers. Their only responsibility is to deliver the materials and parts on big pallets that we feed into our system.”

    Hunter says multiple factories could be deployed at the same site and could also operate 24/7 to dramatically speed up projects.

    “We are hitting the limits of solar growth because these companies don’t have enough people,” Hunter says. “We can build much bigger sites much faster with the same number of people by just shipping out more of our factories. It’s a fundamentally new way of scaling solar energy.”

  • Developing materials for stellar performance in fusion power plants

    When Zoe Fisher was in fourth grade, her art teacher asked her to draw her vision of a dream job on paper. At the time, those goals changed like the flavor of the week in an ice cream shop — “zookeeper” featured prominently for a while — but Zoe immediately knew what she wanted to put down: a mad scientist.

    When Fisher stumbled upon the drawing in her parents’ Chicago home recently, it felt serendipitous because, by all measures, she has realized that childhood dream. The second-year doctoral student in MIT’s Department of Nuclear Science and Engineering (NSE) is studying materials for fusion power plants at the Plasma Science and Fusion Center (PSFC) under the advisement of Michael Short, associate professor at NSE. Dennis Whyte, Hitachi America Professor of Engineering at NSE, serves as co-advisor.

    On track to an MIT education

    Growing up in Chicago, Fisher had heard her parents remarking on her reasoning abilities. When she was barely a preschooler, she argued that she couldn’t have been found in a purple speckled egg, as her parents jokingly claimed.

    Fisher didn’t put together just how much she had gravitated toward science until a high school physics teacher encouraged her to apply to MIT. Passionate about both the arts and sciences, she initially worried that pursuing science would be very rigid, without room for creativity. But she knows now that exploring solutions to problems requires plenty of creative thinking.

    It was a visit to MIT through the Weekend Immersion in Science and Engineering (WISE) that truly opened her eyes to the potential of an MIT education. “It just seemed like the undergraduate experience here is where you can be very unapologetically yourself. There’s no fronting something you don’t want to be like. There’s so much authenticity compared to most other colleges I looked at,” Fisher says. Once admitted, Campus Preview Weekend confirmed that she belonged. “We got to be silly and weird — a version of the Mafia game was a hit — and I was like, ‘These are my people,’” Fisher laughs.

    Pursuing fusion at NSE

    Before she officially started as a first-year in 2018, Fisher enrolled in the Freshman Pre-Orientation Program (FPOP), which begins a week before orientation. Each FPOP zooms into one field. “I’d applied to the nuclear one simply because it sounded cool and I didn’t know anything about it,” Fisher says. She was intrigued right away. “They really got me with that ‘star in a bottle’ line,” she laughs. (The quest for commercial fusion is to create the energy equivalent of a star in a bottle.)

    Excited by a talk by Zachary Hartwig, Robert N. Noyce Career Development Professor at NSE, Fisher asked if she could work on fusion as an undergraduate through an Undergraduate Research Opportunities Program (UROP) project. She started with modeling solders for power plants and was hooked. When Fisher requested more experimental work, Hartwig put her in touch with Research Scientist David Fischer at the PSFC. Fisher moved on to explore superconductors, work that eventually morphed into research for her master’s thesis.

    For her doctoral research, Fisher is extending her master’s work to explore defects in ceramics, specifically in alumina (aluminum oxide). Sapphire, the single-crystal form of alumina, is an insulator being explored for coatings in fusion power plants. “I eventually want to figure out what types of charge defects form in ceramics during radiation damage so we can ultimately engineer radiation-resistant sapphire,” Fisher says.

    When you introduce a material in a fusion power plant, stray high-energy neutrons born from the plasma can collide with it and fundamentally reorder the lattice, which is likely to change a range of thermal, electrical, and structural properties. “Think of a scaffolding outside a building, with each one of those joints as a different atom that holds your material in place. If you go in and you pull a joint out, there’s a chance that you pulled out a joint that wasn’t structurally sound, in which case everything would be fine. But there’s also a chance that you pull a joint out and everything alters. And [such unpredictability] is a problem,” Fisher says. “We need to be able to account for exactly how these neutrons are going to alter the lattice property,” she adds, and it’s one of the topics her research explores.

    The studies, in turn, can function as a jumping-off point for irradiating superconductors. The goals are twofold: “I want to figure out how I can make an industry-usable ceramic you can use to insulate the inside of a fusion power plant, and then also figure out if I can take this information that I’m getting with ceramics and make it superconductor-relevant,” Fisher says. “Superconductors are the electromagnets we will use to contain the plasma inside fusion power plants. However, they prove pretty difficult to study. Since they are also ceramic, you can draw a lot of parallels between alumina and yttrium barium copper oxide (YBCO), the specific superconductor we use,” she adds. Fisher is also excited about the many experiments she performs using a particle accelerator, one of which involves measuring exactly how surface thermal properties change during radiation.

    Sailing new paths

    It’s not just her research that Fisher loves. As an undergrad, and during her master’s, she was on the varsity sailing team. “I worked my way into sailing with literal Olympians; I did not see that coming,” she says. Fisher participates in Chicago’s Race to Mackinac and the Melges 15 Series every chance she gets. Of all the types of boats she has sailed, she prefers dinghy sailing the most. “It’s more physical, you have to throw yourself around a lot and there’s this immediate cause and effect, which I like,” Fisher says. She also teaches sailing lessons in the summer at MIT’s Sailing Pavilion — you can find her on a small motorboat, issuing orders through a speaker.

    Teaching has figured prominently throughout Fisher’s time at MIT. Through MISTI, Fisher taught high school classes in Germany and, in her senior year, a radiation and materials class in Armenia. She was delighted by the food and culture in Armenia and by how excited people were to learn new ideas. Her love of teaching continues, as she has reached out to high schools in the Boston area. “I like talking to groups and getting them excited about fusion, or even maybe just the concept of attending graduate school,” Fisher says, adding that teaching the ropes of an experiment one-on-one is “one of the most rewarding things.”

    She also learned the value of resilience and quick thinking on various other MISTI trips. Despite her love of travel, Fisher has had a few harrowing experiences with tough situations and plans falling through at the last minute. That’s when she tells herself, “Well, the only thing that you’re gonna do is you’re gonna keep doing what you wanted to do.” That eyes-on-the-prize focus has stood Fisher in good stead, and continues to serve her well in her research today.

  • Will neutrons compromise the operation of superconducting magnets in a fusion plant?

    High-temperature superconducting magnets made from REBCO, an acronym for rare earth barium copper oxide, make it possible to create an intense magnetic field that can confine the extremely hot plasma needed for fusion reactions, which combine two hydrogen atoms to form an atom of helium, releasing a neutron in the process.

    But some early tests suggested that neutron irradiation inside a fusion power plant might instantaneously suppress the superconducting magnets’ ability to carry current without resistance (a capacity known as the critical current), potentially causing a reduction in the fusion power output.

    Now, a series of experiments has clearly demonstrated that this instantaneous effect of neutron bombardment, known as the “beam-on effect,” should not be an issue during reactor operation, thus clearing the path for projects such as the ARC fusion system being developed by MIT spinoff company Commonwealth Fusion Systems.

    The findings were reported in the journal Superconductor Science and Technology, in a paper by MIT graduate student Alexis Devitre and professors Michael Short, Dennis Whyte, and Zachary Hartwig, along with six others.

    “Nobody really knew if it would be a concern,” Short explains. He recalls looking at these early findings: “Our group thought, man, somebody should really look into this. But now, luckily, the result of the paper is: It’s conclusively not a concern.”

    The possible issue first arose during some initial tests of the REBCO tapes planned for use in the ARC system. “I can remember the night when we first tried the experiment,” Devitre recalls. “We were all down in the accelerator lab, in the basement. It was a big shocker because suddenly the measurement we were looking at, the critical current, just went down by 30 percent” when it was measured under radiation conditions (approximating those of the fusion system), as opposed to when it was only measured after irradiation.

    Before that, researchers had irradiated the REBCO tapes and then tested them afterward, Short says. “We had the idea to measure while irradiating, the way it would be when the reactor’s really on,” he says. “And then we observed this giant difference, and we thought, oh, this is a big deal. It’s a margin you’d want to know about if you’re designing a reactor.”

    After a series of carefully calibrated tests, it turned out the drop in critical current was not caused by the irradiation at all, but was just an effect of temperature changes brought on by the proton beam used for the irradiation experiments. This is something that would not be a factor in an actual fusion plant, Short says.

    “We repeated experiments ‘oh so many times’ and collected about a thousand data points,” Devitre says. They then went through a detailed statistical analysis to show that the effects were exactly the same under conditions where the material was just heated as when it was both heated and irradiated.

    This excluded the possibility that the instantaneous suppression of the critical current had anything to do with the “beam-on effect,” at least within the sensitivity of their tests. “Our experiments are quite sensitive,” Short says. “We can never say there’s no effect, but we can say that there’s no important effect.”

    To carry out these tests required building a special facility for the purpose. Only a few such facilities exist in the world. “They’re all custom builds, and without this, we wouldn’t have been able to find out the answer,” he says.

    The finding that this specific issue is not a concern for the design of fusion plants “illustrates the power of negative results. If you can conclusively prove that something doesn’t happen, you can stop scientists from wasting their time hunting for something that doesn’t exist.” And in this case, Short says, “You can tell the fusion companies: ‘You might have thought this effect would be real, but we’ve proven that it’s not, and you can ignore it in your designs.’ So that’s one more risk retired.”

    That could be a relief not only to Commonwealth Fusion Systems but also to several other companies pursuing fusion plant designs, Devitre says. “There’s a bunch. And it’s not just fusion companies,” he adds. There remains the important issue of longer-term degradation of the REBCO that would occur over years or decades, which the group is presently investigating. Others are pursuing the use of these magnets for satellite thrusters and particle accelerators to study subatomic physics, where the effect could also have been a concern. For all these uses, “this is now one less thing to be concerned about,” Devitre says.

    The research team also included David Fischer, Kevin Woller, Maxwell Rae, Lauryn Kortman, and Zoe Fisher at MIT, and N. Riva at Proxima Fusion in Germany. This research was supported by Eni S.p.A. through the MIT Energy Initiative.
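
    To make the logic of that equivalence argument concrete, here is a minimal Python sketch of a two-one-sided-tests (TOST) check on synthetic data: it asks whether the difference between heated-only and heated-plus-beam critical-current measurements stays inside a chosen margin. The sample sizes, means, and margin below are invented placeholders, not the paper’s data or its actual statistical procedure.

    ```python
    # TOST equivalence sketch on synthetic critical-current data (illustrative only)
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    ic_heated = rng.normal(70.0, 1.5, 500)    # critical current (A), heat only
    ic_beam_on = rng.normal(70.05, 1.5, 500)  # heat plus proton beam

    margin = 1.0  # amperes: the largest difference we would call unimportant

    diff = ic_beam_on.mean() - ic_heated.mean()
    se = np.sqrt(ic_heated.var(ddof=1) / ic_heated.size
                 + ic_beam_on.var(ddof=1) / ic_beam_on.size)
    df = ic_heated.size + ic_beam_on.size - 2

    # Two one-sided tests against "diff <= -margin" and "diff >= +margin"
    p_lower = 1.0 - stats.t.cdf((diff + margin) / se, df)
    p_upper = stats.t.cdf((diff - margin) / se, df)
    p_equiv = max(p_lower, p_upper)

    print(f"mean difference: {diff:.3f} A, TOST p-value: {p_equiv:.2g}")
    # A small p_equiv supports "any effect is smaller than the margin" -- the
    # statistical shape of "we can never say there's no effect, but we can say
    # there's no important effect."
    ```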

  • Rooftop panels, EV chargers, and smart thermostats could chip in to boost power grid resilience

    There’s a lot of untapped potential in our homes and vehicles that could be harnessed to reinforce local power grids and make them more resilient to unforeseen outages, a new study shows.

    In response to a cyberattack or natural disaster, a backup network of decentralized devices — such as residential solar panels, batteries, electric vehicles, heat pumps, and water heaters — could restore electricity or relieve stress on the grid, MIT engineers say.

    Such devices are “grid-edge” resources found close to the consumer rather than near central power plants, substations, or transmission lines. Grid-edge devices can independently generate, store, or tune their consumption of power. In their study, the research team shows how such devices could one day be called upon to either pump power into the grid, or rebalance it by dialing down or delaying their power use.

    In a paper appearing this week in the Proceedings of the National Academy of Sciences, the engineers present a blueprint for how grid-edge devices could reinforce the power grid through a “local electricity market.” Owners of grid-edge devices could subscribe to a regional market and essentially loan out their device to be part of a microgrid or a local network of on-call energy resources.

    In the event that the main power grid is compromised, an algorithm developed by the researchers would kick in for each local electricity market, to quickly determine which devices in the network are trustworthy. The algorithm would then identify the combination of trustworthy devices that would most effectively mitigate the power failure, by either pumping power into the grid or reducing the power they draw from it, by an amount that the algorithm would calculate and communicate to the relevant subscribers. The subscribers could then be compensated through the market, depending on their participation.

    The team illustrated this new framework through a number of grid-attack scenarios, in which they considered failures at different levels of a power grid, from sources such as a cyberattack or a natural disaster. Applying their algorithm, they showed that various networks of grid-edge devices were able to mitigate the attacks.

    The results demonstrate that grid-edge devices such as rooftop solar panels, EV chargers, batteries, and smart thermostats (for HVAC devices or heat pumps) could be tapped to stabilize the power grid in the event of an attack.

    “All these small devices can do their little bit in terms of adjusting their consumption,” says study co-author Anu Annaswamy, a research scientist in MIT’s Department of Mechanical Engineering. “If we can harness our smart dishwashers, rooftop panels, and EVs, and put our combined shoulders to the wheel, we can really have a resilient grid.”

    The study’s MIT co-authors include lead author Vineet Nair and John Williams, along with collaborators from multiple institutions including the Indian Institute of Technology, the National Renewable Energy Laboratory, and elsewhere.

    Power boost

    The team’s study is an extension of their broader work in adaptive control theory and designing systems to automatically adapt to changing conditions. Annaswamy, who leads the Active-Adaptive Control Laboratory at MIT, explores ways to boost the reliability of renewable energy sources such as solar power.

    “These renewables come with a strong temporal signature, in that we know for sure the sun will set every day, so the solar power will go away,” Annaswamy says. “How do you make up for the shortfall?”

    The researchers found the answer could lie in the many grid-edge devices that consumers are increasingly installing in their own homes.

    “There are lots of distributed energy resources that are coming up now, closer to the customer rather than near large power plants, and it’s mainly because of individual efforts to decarbonize,” Nair says. “So you have all this capability at the grid edge. Surely we should be able to put them to good use.”

    While considering ways to deal with drops in energy from the normal operation of renewable sources, the team also began to look into other causes of power dips, such as from cyberattacks. They wondered, in these malicious instances, whether and how the same grid-edge devices could step in to stabilize the grid following an unforeseen, targeted attack.

    Attack mode

    In their new work, Annaswamy, Nair, and their colleagues developed a framework for incorporating grid-edge devices, and in particular, internet-of-things (IoT) devices, in a way that would support the larger grid in the event of an attack or disruption. IoT devices are physical objects that contain sensors and software and connect to the internet.

    For their new framework, named EUREICA (Efficient, Ultra-REsilient, IoT-Coordinated Assets), the researchers start with the assumption that one day, most grid-edge devices will also be IoT devices, enabling rooftop panels, EV chargers, and smart thermostats to wirelessly connect to a larger network of similarly independent and distributed devices. The team envisions that for a given region, such as a community of 1,000 homes, there exists a certain number of IoT devices that could potentially be enlisted in the region’s local network, or microgrid. Such a network would be managed by an operator, who would be able to communicate with operators of other nearby microgrids.

    If the main power grid is compromised or attacked, operators would run the researchers’ decision-making algorithm to determine trustworthy devices within the network that can pitch in to help mitigate the attack.

    The team tested the algorithm on a number of scenarios, such as a cyberattack in which all smart thermostats made by a certain manufacturer are hacked to raise their setpoints simultaneously to a degree that dramatically alters a region’s energy load and destabilizes the grid. The researchers also considered attacks and weather events that would shut off the transmission of energy at various levels and nodes throughout a power grid.

    “In our attacks we consider between 5 and 40 percent of the power being lost. We assume some nodes are attacked, and some are still available and have some IoT resources, whether a battery with energy available or an EV or HVAC device that’s controllable,” Nair explains. “So, our algorithm decides which of those houses can step in to either provide extra power generation to inject into the grid or reduce their demand to meet the shortfall.”

    In every scenario that they tested, the team found that the algorithm was able to successfully restabilize the grid and mitigate the attack or power failure. They acknowledge that to put in place such a network of grid-edge devices will require buy-in from customers, policymakers, and local officials, as well as innovations such as advanced power inverters that enable EVs to inject power back into the grid.

    “This is just the first of many steps that have to happen in quick succession for this idea of local electricity markets to be implemented and expanded upon,” Annaswamy says. “But we believe it’s a good start.”

    This work was supported, in part, by the U.S. Department of Energy and the MIT Energy Initiative.
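
    As a concrete illustration of the selection step Nair describes, here is a minimal Python sketch: given a shortfall in kilowatts and a set of devices that passed a trust check, it greedily assigns each device a contribution until the shortfall is covered. The device names and numbers are hypothetical, and the greedy rule is a stand-in for illustration only; the EUREICA framework’s actual trust determination and dispatch optimization are more involved.

    ```python
    # Greedy stand-in for the dispatch step of a grid-edge mitigation scheme
    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str
        available_kw: float  # power it can inject, or load it can shed
        trusted: bool        # outcome of the framework's trust check

    def mitigate(shortfall_kw: float, devices: list[Device]) -> dict[str, float]:
        """Assign each trusted device a contribution, largest capacity first."""
        dispatch: dict[str, float] = {}
        remaining = shortfall_kw
        for dev in sorted((d for d in devices if d.trusted),
                          key=lambda d: d.available_kw, reverse=True):
            if remaining <= 0:
                break
            contribution = min(dev.available_kw, remaining)
            dispatch[dev.name] = contribution
            remaining -= contribution
        if remaining > 0:
            raise RuntimeError(f"{remaining:.1f} kW of shortfall left uncovered")
        return dispatch

    # Hypothetical microgrid: an EV battery, a rooftop-solar home, a heat pump
    plan = mitigate(12.0, [Device("ev_battery", 7.0, True),
                           Device("rooftop_pv", 4.0, True),
                           Device("heat_pump_shed", 3.0, True),
                           Device("unverified_thermostat", 5.0, False)])
    print(plan)  # {'ev_battery': 7.0, 'rooftop_pv': 4.0, 'heat_pump_shed': 1.0}
    ```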

  • Chip-based system for terahertz waves could enable more efficient, sensitive electronics

    The use of terahertz waves, which have shorter wavelengths and higher frequencies than radio waves, could enable faster data transmission, more precise medical imaging, and higher-resolution radar.

    But effectively generating terahertz waves using a semiconductor chip, which is essential for incorporation into electronic devices, is notoriously difficult. Many current techniques can’t generate waves with enough radiating power for useful applications (higher radiating power allows terahertz signals to travel farther) unless they utilize bulky and expensive silicon lenses. Such lenses, which are often larger than the chip itself, make it hard to integrate the terahertz source into an electronic device.

    To overcome these limitations, MIT researchers developed a terahertz amplifier-multiplier system that achieves higher radiating power than existing devices without the need for silicon lenses. By affixing a thin, patterned sheet of material to the back of the chip and utilizing higher-power Intel transistors, the researchers produced a more efficient, yet scalable, chip-based terahertz wave generator.

    This compact chip could be used to make terahertz arrays for applications like improved security scanners for detecting hidden objects or environmental monitors for pinpointing airborne pollutants.

    “To take full advantage of a terahertz wave source, we need it to be scalable. A terahertz array might have hundreds of chips, and there is no place to put silicon lenses because the chips are combined with such high density. We need a different package, and here we’ve demonstrated a promising approach that can be used for scalable, low-cost terahertz arrays,” says Jinchen Wang, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and lead author of a paper on the terahertz radiator.

    He is joined on the paper by EECS graduate students Daniel Sheen and Xibi Chen; Steven F. Nagel, managing director of the T.J. Rodgers RLE Laboratory; and senior author Ruonan Han, an associate professor in EECS who leads the Terahertz Integrated Electronics Group. The research will be presented at the IEEE International Solid-State Circuits Conference.

    Making waves

    Terahertz waves sit on the electromagnetic spectrum between radio waves and infrared light. Their higher frequencies enable them to carry more information per second than radio waves, while they can safely penetrate a wider range of materials than infrared light.

    One way to generate terahertz waves is with a CMOS chip-based amplifier-multiplier chain that increases the frequency of radio waves until they reach the terahertz range. To achieve the best performance, waves go through the silicon chip and are eventually emitted out the back into the open air. But a property known as the dielectric constant gets in the way of a smooth transmission.

    The dielectric constant influences how electromagnetic waves interact with a material, affecting the amount of radiation that is absorbed, reflected, or transmitted. Because the dielectric constant of silicon is much higher than that of air, most terahertz waves are reflected at the silicon-air boundary rather than being cleanly transmitted out the back. Since most signal strength is lost at this boundary, current approaches often use silicon lenses to boost the power of the remaining signal.

    The MIT researchers approached this problem differently, drawing on an electromagnetic technique known as matching. With matching, they seek to bridge the dielectric constants of silicon and air, which minimizes the amount of signal reflected at the boundary. They accomplish this by affixing to the back of the chip a thin sheet of material with a dielectric constant between those of silicon and air. With this matching sheet in place, most waves are transmitted out the back rather than being reflected.

    A scalable approach

    They chose a low-cost, commercially available substrate material with a dielectric constant very close to what they needed for matching. To improve performance, they used a laser cutter to punch tiny holes into the sheet until its dielectric constant was exactly right.

    “Since the dielectric constant of air is 1, if you just cut some subwavelength holes in the sheet, it is equivalent to injecting some air, which lowers the overall dielectric constant of the matching sheet,” Wang explains.

    In addition, they designed their chip with special transistors developed by Intel that have a higher maximum frequency and breakdown voltage than traditional CMOS transistors. “These two things taken together, the more powerful transistors and the dielectric sheet, plus a few other small innovations, enabled us to outperform several other devices,” he says.

    Their chip generated terahertz signals with a peak radiation power of 11.1 decibel-milliwatts, the best among state-of-the-art techniques. Moreover, since the low-cost chip can be fabricated at scale, it could be integrated into real-world electronic devices more readily.

    One of the biggest challenges of developing a scalable chip was determining how to manage the power and temperature when generating terahertz waves. “Because the frequency and the power are so high, many of the standard ways to design a CMOS chip are not applicable here,” Wang says. The researchers also needed to devise a technique for installing the matching sheet that could be scaled up in a manufacturing facility.

    Moving forward, they want to demonstrate this scalability by fabricating a phased array of CMOS terahertz sources, enabling them to steer and focus a powerful terahertz beam with a low-cost, compact device.

    This research is supported, in part, by NASA’s Jet Propulsion Laboratory and Strategic University Research Partnerships Program, as well as the MIT Center for Integrated Circuits and Systems. The chip was fabricated through the Intel University Shuttle Program.
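
    The matching idea can be sketched with two textbook relations: the ideal quarter-wave matching layer has a dielectric constant equal to the geometric mean of the two media it joins, and perforating a sheet with subwavelength air holes lowers its effective dielectric constant roughly in proportion to the air fraction. The Python sketch below works through the arithmetic; the silicon value is the standard one, but the eps = 4.0 substrate and the simple linear mixing rule are illustrative assumptions, not the paper’s design.

    ```python
    # Back-of-the-envelope dielectric matching estimate (illustrative values)
    import math

    eps_si, eps_air = 11.7, 1.0              # relative dielectric constants
    eps_ideal = math.sqrt(eps_si * eps_air)  # quarter-wave rule: geometric mean
    print(f"ideal matching-layer dielectric constant: {eps_ideal:.2f}")  # ~3.42

    def eps_effective(eps_sheet: float, air_fraction: float) -> float:
        """Crude volume-weighted estimate for a sheet perforated with air holes."""
        return air_fraction * eps_air + (1.0 - air_fraction) * eps_sheet

    # How much air would a hypothetical eps = 4.0 substrate need to hit the target?
    eps_sheet = 4.0
    f_air = (eps_sheet - eps_ideal) / (eps_sheet - eps_air)
    print(f"air fraction: {f_air:.2f} -> eps_eff = "
          f"{eps_effective(eps_sheet, f_air):.2f}")
    ```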

  • Reducing carbon emissions from residential heating: A pathway forward

    In the race to reduce climate-warming carbon emissions, the buildings sector is falling behind. While carbon dioxide (CO2) emissions in the U.S. electric power sector dropped by 34 percent between 2005 and 2021, emissions in the building sector declined by only 18 percent in that same time period. Moreover, in extremely cold locations, burning natural gas to heat houses can make up a substantial share of the emissions portfolio. Therefore, steps to electrify buildings in general, and residential heating in particular, are essential for decarbonizing the U.S. energy system.

    But that change will increase demand for electricity and decrease demand for natural gas. What will be the net impact of those two changes on carbon emissions and on the cost of decarbonizing? And how will the electric power and natural gas sectors handle the new challenges involved in their long-term planning for future operations and infrastructure investments?

    A new study by MIT researchers with support from the MIT Energy Initiative (MITEI) Future Energy Systems Center unravels the impacts of various levels of electrification of residential space heating on the joint power and natural gas systems. A specially devised modeling framework enabled them to estimate not only the added costs and emissions for the power sector to meet the new demand, but also any changes in costs and emissions that result for the natural gas sector.

    The analyses brought some surprising outcomes. For example, they show that — under certain conditions — switching 80 percent of homes to heating by electricity could cut carbon emissions and at the same time significantly reduce costs over the combined natural gas and electric power sectors relative to the case in which there is only modest switching. That outcome depends on two changes: Consumers must install high-efficiency heat pumps plus take steps to prevent heat losses from their homes, and planners in the power and natural gas sectors must work together as they make long-term infrastructure and operations decisions. Based on their findings, the researchers stress the need for strong state, regional, and national policies that encourage and support the steps that homeowners and industry planners can take to help decarbonize today’s building sector.

    A two-part modeling approach

    To analyze the impacts of electrification of residential heating on costs and emissions in the combined power and gas sectors, a team of MIT experts in building technology, power systems modeling, optimization techniques, and more developed a two-part modeling framework. Team members included Rahman Khorramfar, a senior postdoc in MITEI and the Laboratory for Information and Decision Systems (LIDS); Morgan Santoni-Colvin SM ’23, a former MITEI graduate research assistant, now an associate at Energy and Environmental Economics, Inc.; Saurabh Amin, a professor in the Department of Civil and Environmental Engineering and principal investigator in LIDS; Audun Botterud, a principal research scientist in LIDS; Leslie Norford, a professor in the Department of Architecture; and Dharik Mallapragada, a former MITEI principal research scientist, now an assistant professor at New York University, who led the project. They describe their new methods and findings in a paper published in the journal Cell Reports Sustainability on Feb. 6.

    The first model in the framework quantifies how various levels of electrification will change end-use demand for electricity and for natural gas, along with the impacts of possible energy-saving measures that homeowners can take. “To perform that analysis, we built a ‘bottom-up’ model — meaning that it looks at electricity and gas consumption of individual buildings and then aggregates their consumption to get an overall demand for power and for gas,” explains Khorramfar. By assuming a wide range of building “archetypes” — that is, groupings of buildings with similar physical characteristics and properties — coupled with trends in population growth, the team could explore how demand for electricity and for natural gas would change under each of five assumed electrification pathways: “business as usual” with modest electrification, medium electrification (about 60 percent of homes are electrified), high electrification (about 80 percent of homes make the change), and medium and high electrification with “envelope improvements,” such as sealing up heat leaks and adding insulation.

    The second part of the framework consists of a model that takes the demand results from the first model as inputs and “co-optimizes” the overall electricity and natural gas system to minimize annual investment and operating costs while adhering to any constraints, such as limits on emissions or on resource availability. The modeling framework thus enables the researchers to explore the impact of each electrification pathway on the infrastructure and operating costs of the two interacting sectors.

    The New England case study: A challenge for electrification

    As a case study, the researchers chose New England, a region where the weather is sometimes extremely cold and where burning natural gas to heat houses contributes significantly to overall emissions. “Critics will say that electrification is never going to happen [in New England]. It’s just too expensive,” comments Santoni-Colvin. But he notes that most studies focus on the electricity sector in isolation. The new framework considers the joint operation of the two sectors and then quantifies their respective costs and emissions. “We know that electrification will require large investments in the electricity infrastructure,” says Santoni-Colvin. “But what hasn’t been well quantified in the literature is the savings that we generate on the natural gas side by doing that — so, the system-level savings.”

    Using their framework, the MIT team performed model runs aimed at an 80 percent reduction in building-sector emissions relative to 1990 levels — a target consistent with regional policy goals for 2050. The researchers defined parameters including details about building archetypes, the regional electric power system, existing and potential renewable generating systems, battery storage, availability of natural gas, and other key factors describing New England.

    They then performed analyses assuming various scenarios with different mixes of home improvements. While most studies assume typical weather, they instead developed 20 projections of annual weather data based on historical weather patterns and adjusted for the effects of climate change through 2050. They then analyzed their five levels of electrification.

    Relative to business-as-usual projections, results from the framework showed that high electrification of residential heating could more than double the demand for electricity during peak periods and increase overall electricity demand by close to 60 percent. Assuming that building-envelope improvements are deployed in parallel with electrification reduces the magnitude and weather sensitivity of peak loads and creates overall efficiency gains that reduce the combined demand for electricity plus natural gas for home heating by up to 30 percent relative to the present day. Notably, a combination of high electrification and envelope improvements resulted in the lowest average cost for the overall electric power-natural gas system in 2050.

    Lessons learned

    Replacing existing natural gas-burning furnaces and boilers with heat pumps reduces overall energy consumption. Santoni-Colvin calls it “something of an intuitive result” that could be expected because heat pumps are “just that much more efficient than old, fossil fuel-burning systems. But even so, we were surprised by the gains.”

    Other unexpected results include the importance of homeowners making more traditional energy efficiency improvements, such as adding insulation and sealing air leaks — steps supported by recent rebate policies. Those changes are critical to reducing costs that would otherwise be incurred for upgrading the electricity grid to accommodate the increased demand. “You can’t just go wild dropping heat pumps into everybody’s houses if you’re not also considering other ways to reduce peak loads. So it really requires an ‘all of the above’ approach to get to the most cost-effective outcome,” says Santoni-Colvin.

    Testing a range of weather outcomes also provided important insights. Demand for heating fuel is very weather-dependent, yet most studies are based on a limited set of weather data — often a “typical year.” The researchers found that electrification can lead to extended peak electric load events that can last for a few days during cold winters. Accordingly, the researchers conclude that there will be a continuing need for a “firm, dispatchable” source of electricity; that is, a power-generating system that can be relied on to produce power any time it’s needed — unlike solar and wind systems. As examples, they modeled some possible technologies, including power plants fired by a low-carbon fuel or by natural gas equipped with carbon capture equipment. But they point out that there’s no way of knowing what types of firm generators will be available in 2050. It could be a system that’s not yet mature, or perhaps doesn’t even exist today.

    In presenting their findings, the researchers note several caveats. For one thing, their analyses don’t include the estimated cost to homeowners of installing heat pumps. While that cost is widely discussed and debated, the issue is outside the scope of their current project.

    In addition, the study doesn’t specify what happens to existing natural gas pipelines. “Some homes are going to electrify and get off the gas system and not have to pay for it, leaving other homes with increasing rates because the gas system cost now has to be divided among fewer customers,” says Khorramfar. “That will inevitably raise equity questions that need to be addressed by policymakers.”

    Finally, the researchers note that policies are needed to drive residential electrification. Current financial support for installation of heat pumps and steps to make homes more thermally efficient are a good start. But such incentives must be coupled with a new approach to planning energy infrastructure investments. Traditionally, electric power planning and natural gas planning are performed separately. However, to decarbonize residential heating, the two sectors should coordinate when planning future operations and infrastructure needs. Results from the MIT analysis indicate that such cooperation could significantly reduce both emissions and costs for residential heating — a change that would yield a much-needed step toward decarbonizing the buildings sector as a whole.
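
    To illustrate the shape of the “bottom-up” demand step, here is a minimal Python sketch: per-archetype heating and baseline loads are scaled by building counts and an electrification share, then aggregated into the system-level electricity and gas demand that the co-optimization model would take as input. The archetypes and numbers are invented placeholders, not the study’s data.

    ```python
    # Toy bottom-up aggregation of archetype demand under an electrification share
    from dataclasses import dataclass

    @dataclass
    class Archetype:
        count: int            # buildings of this type in the region
        gas_heat_mwh: float   # annual heating demand per building, gas furnace
        hp_elec_mwh: float    # same demand served by a heat pump (more efficient)
        base_elec_mwh: float  # non-heating electricity per building

    def aggregate(archetypes: list[Archetype], electrified_share: float):
        """Return (total electricity MWh, total gas MWh) for one pathway."""
        elec = gas = 0.0
        for a in archetypes:
            n_elec = a.count * electrified_share
            n_gas = a.count - n_elec
            elec += a.count * a.base_elec_mwh + n_elec * a.hp_elec_mwh
            gas += n_gas * a.gas_heat_mwh
        return elec, gas

    stock = [Archetype(40_000, 18.0, 6.0, 7.5),  # e.g., older detached homes
             Archetype(25_000, 10.0, 3.5, 5.0)]  # e.g., newer/attached homes

    for share, label in [(0.1, "business as usual"), (0.8, "high electrification")]:
        e, g = aggregate(stock, share)
        print(f"{label}: electricity {e:,.0f} MWh, gas {g:,.0f} MWh")
    ```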

  • Unlocking the secrets of fusion’s core with AI-enhanced simulations

    Creating and sustaining fusion reactions — essentially recreating star-like conditions on Earth — is extremely difficult, and Nathan Howard PhD ’12, a principal research scientist at the MIT Plasma Science and Fusion Center (PSFC), thinks it’s one of the most fascinating scientific challenges of our time. “Both the science and the overall promise of fusion as a clean energy source are really interesting. That motivated me to come to grad school [at MIT] and work at the PSFC,” he says.

    Howard is a member of the Magnetic Fusion Experiments Integrated Modeling (MFE-IM) group at the PSFC. Along with MFE-IM group leader Pablo Rodriguez-Fernandez, Howard and the team use simulations and machine learning to predict how plasma will behave in a fusion device. The group’s research aims to forecast a given technology or configuration’s performance before it’s piloted in an actual fusion environment, allowing for smarter design choices. To ensure their accuracy, these models are continuously validated using data from previous experiments, keeping the simulations grounded in reality.

    In a recent open-access paper titled “Prediction of Performance and Turbulence in ITER Burning Plasmas via Nonlinear Gyrokinetic Profile Prediction,” published in the January issue of Nuclear Fusion, Howard explains how he used high-resolution simulations of the swirling structures present in plasma, called turbulence, to confirm that the world’s largest experimental fusion device, currently under construction in Southern France, will perform as expected when switched on. He also demonstrates how a different operating setup could produce nearly the same amount of energy output but with less energy input, a discovery that could positively affect the efficiency of fusion devices in general.

    The biggest and best of what’s never been built

    Forty years ago, the United States and six other member nations came together to build ITER (Latin for “the way”), a fusion device that, once operational, would yield 500 megawatts of fusion power, with a plasma able to generate 10 times more energy than it absorbs from external heating. The plasma setup designed to achieve these goals — the most ambitious of any fusion experiment — is called the ITER baseline scenario, and as fusion science and plasma physics have progressed, ways to achieve this plasma have been refined using increasingly powerful simulations like the modeling framework Howard used.

    In his work to verify the baseline scenario, Howard used CGYRO, a computer code developed by Howard’s collaborators at General Atomics. CGYRO applies a complex plasma physics model to a set of defined fusion operating conditions. Although it is time-intensive, CGYRO generates very detailed simulations of how plasma behaves at different locations within a fusion device.

    The comprehensive CGYRO simulations were then run through the PORTALS framework, a collection of tools originally developed at MIT by Rodriguez-Fernandez. “PORTALS takes the high-fidelity [CGYRO] runs and uses machine learning to build a quick model called a ‘surrogate’ that can mimic the results of the more complex runs, but much faster,” Rodriguez-Fernandez explains. “Only high-fidelity modeling tools like PORTALS give us a glimpse into the plasma core before it even forms. This predict-first approach allows us to create more efficient plasmas in a device like ITER.”

    After the first pass, the surrogates’ accuracy was checked against the high-fidelity runs, and if a surrogate wasn’t producing results in line with CGYRO’s, PORTALS was run again to refine the surrogate until it better mimicked CGYRO’s results. “The nice thing is, once you have built a well-trained [surrogate] model, you can use it to predict conditions that are different, with a very much reduced need for the full complex runs.” Once they were fully trained, the surrogates were used to explore how different combinations of inputs might affect ITER’s predicted performance and how it achieved the baseline scenario. Notably, the surrogate runs took a fraction of the time, and they could be used in conjunction with CGYRO to give it a boost and produce detailed results more quickly.

    “Just dropped in to see what condition my condition was in”

    Howard’s work with CGYRO, PORTALS, and surrogates examined a specific combination of operating conditions that had been predicted to achieve the baseline scenario. Those conditions included the magnetic field used, the methods used to control plasma shape, the external heating applied, and many other variables. Using 14 iterations of CGYRO, Howard was able to confirm that the current baseline scenario configuration could achieve 10 times more power output than input into the plasma. Howard says of the results, “The modeling we performed is maybe the highest fidelity possible at this time, and almost certainly the highest fidelity published.”

    The 14 iterations of CGYRO used to confirm the plasma performance included running PORTALS to build surrogate models for the input parameters and then tying the surrogates to CGYRO to work more efficiently. It took only three additional iterations of CGYRO to explore an alternate scenario that predicted ITER could produce almost the same amount of energy with about half the input power. The surrogate-enhanced CGYRO model revealed that the temperature of the plasma core — and thus the fusion reactions — wasn’t overly affected by less power input; less power input equals more efficient operation. Howard’s results are also a reminder that there may be other ways to improve ITER’s performance; they just haven’t been discovered yet.

    Howard reflects, “The fact that we can use the results of this modeling to influence the planning of experiments like ITER is exciting. For years, I’ve been saying that this was the goal of our research, and now that we actually do it — it’s an amazing arc, and really fulfilling.”
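
    The train-validate-refine loop described above can be sketched in a few lines. In the toy Python example below, a cheap 1-D function stands in for an expensive CGYRO-like run, and a polynomial fit stands in for the machine-learned surrogate: fit the surrogate to a few expensive runs, check where it disagrees with the expensive model, and add an expensive run at the worst point until the surrogate mimics it. This is only the broad pattern, not PORTALS itself.

    ```python
    # Toy surrogate train-validate-refine loop (pattern sketch, not PORTALS)
    import numpy as np

    def high_fidelity(x):
        """Stand-in for an expensive CGYRO-like simulation."""
        return np.sin(3 * x) + 0.5 * x

    rng = np.random.default_rng(1)
    X = rng.uniform(0.0, 2.0, 6)      # a handful of initial expensive runs
    Y = high_fidelity(X)

    x_check = np.linspace(0.0, 2.0, 50)
    for _ in range(20):
        # "Surrogate": a cheap model fit to the expensive runs collected so far
        surrogate = np.poly1d(np.polyfit(X, Y, deg=min(len(X) - 1, 7)))

        # Check the surrogate against the expensive model (affordable here only
        # because high_fidelity is a toy; in practice this is a few spot checks)
        err = np.abs(surrogate(x_check) - high_fidelity(x_check))
        if err.max() < 0.05:          # surrogate now mimics the expensive runs
            break
        # Refine: add one expensive run where the surrogate disagrees most
        x_new = x_check[err.argmax()]
        X = np.append(X, x_new)
        Y = np.append(Y, high_fidelity(x_new))

    print(f"high-fidelity runs used: {len(X)}, "
          f"worst remaining error: {err.max():.3f}")
    ```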

  • Creating smart buildings with privacy-first sensors

    Gaining a better understanding of how people move through the spaces where they live and work could make those spaces safer and more sustainable. But no one wants cameras watching them 24/7.

    Two former Media Lab researchers think they have a solution. Their company, Butlr, offers places like skilled nursing facilities, offices, and senior living communities a way to understand how people are using buildings without compromising privacy. Butlr uses low-resolution thermal sensors and an analytics platform to help detect falls in elderly populations, save energy, and optimize spaces for work.

    “We have this vision of using the right technology to understand people’s movements and behaviors in space,” says Jiani Zeng SM ’20, who co-founded Butlr with former Media Lab research affiliate Honghao Deng. “So many resources today go toward cameras and AI that take away people’s privacy. We believe we can make our environments safer, healthier, and more sustainable without violating privacy.”

    To date, the company has sold more than 20,000 of its privacy-preserving sensors to senior living and skilled nursing facilities as well as businesses with large building footprints, including Verizon, Netflix, and Microsoft. In the future, Butlr hopes to enable more dynamic spaces that can understand and respond to the ways people use them.

    “Space should be like a digital user interface: It should be multi-use and responsive to your needs,” Deng says. “If the office has a big room with people working individually, it should automatically separate into smaller rooms, or lights and temperature should be adjusted to save energy.”

    Building intelligence, with privacy

    As an undergraduate at Tianjin University in China, Deng joined the Media Lab’s City Science Group as a visiting student in 2016. He went on to complete his master’s at Harvard University, but he returned to the Media Lab as a research affiliate and led projects around what he calls responsive architecture: spaces that can understand their users’ needs through non-camera sensors.

    “My vision of the future of building environments emerged from the Media Lab,” Deng says. “The real world is the largest user interface around us — it’s not the screens. We all live in a three-dimensional world and yet, unlike the digital world, this user interface doesn’t yet understand our needs, let alone the critical situations when someone falls in a room. That could be life-saving.”

    Zeng came to MIT as a master’s student in the Integrated Design and Management program, which was run jointly out of the MIT Sloan School of Management and the School of Engineering. She also worked as a research assistant at the Media Lab and the Computer Science and Artificial Intelligence Laboratory (CSAIL).

    The pair met during a hackathon at the Media Lab and continued collaborating on various projects. During that time, they worked with MIT’s Venture Mentoring Service (VMS) and the MIT I-Corps Program. When they graduated in 2019, they decided to start a company based on the idea of creating smart buildings with privacy-preserving sensors. Crucial early funding came from the Media Lab-affiliated E14 Fund.

    “I tell every single MIT founder they should have the E14 Fund in their cap table,” Deng says. “They understand what it takes to go from an MIT student to a founder, and to transition from the ‘scientist brain’ to the ‘inventor brain.’ We wouldn’t be where we are today without MIT.”

    Ray Stata ’57, SM ’58, the founder of Analog Devices, is also an investor in Butlr and serves as its board director. “We would love to give back to the MIT community once we become successful entrepreneurs like Ray, whose advice and mentoring has been invaluable,” Deng says.

    After launching, the founders had to find the right early customers for their real-time sensors, which can discern rough body shapes but no personally identifiable information. They interviewed hundreds of people before starting with owners of office spaces.

    “People have zero baseline data on what’s happening in their workplace,” Deng says. “That’s especially true since the Covid-19 pandemic made people hybrid, which has opened huge opportunities to cut the energy use of large office spaces. Sometimes, the only people in these buildings are the receptionist and the cleaner.”

    Butlr’s multiyear, battery-powered sensors can track daily occupancy in each room and give other insights into space utilization that can be used to reduce energy use. For companies with a lot of office space, the opportunities are immense. One Butlr customer has 40 building leases, and Deng says optimizing the HVAC controls based on usage could amount to millions of dollars saved. “We can be like the Google Analytics for these spaces without any concerns in terms of privacy,” Deng says.

    The founders also knew the problem went well beyond office spaces. “In skilled nursing facilities, instead of office spaces it’s individual rooms, all with people who may need the nurse’s help,” Deng says. “But the nurses have no visibility into what’s happening unless they physically enter the room.”

    Acute care environments and senior living facilities are another key market for Butlr. The company’s platform can detect falls and instances when someone isn’t getting out of bed, and it integrates with nurse calling systems to alert staff when something is wrong.

    The “nerve cells” of the building

    Butlr is continuing to develop analytics that give important insights into spaces. For instance, today the platform can use information about movement in elderly populations to help detect problems like urinary tract infections. Butlr also recently started a collaboration with Harvard Medical School’s Beth Israel Deaconess Medical Center and the University of Massachusetts at Amherst’s Artificial Intelligence and Technology Center for Connected Care in Aging and Alzheimer’s Disease. Through the project, Butlr will try to detect changes in movement that could indicate declining cognitive or physical abilities. Those insights could be used to provide aging patients with more supervision.

    “In the near term we are preventing falls, but the vision is when you look up in any buildings or homes, you’ll see Butlr,” Deng says. “This could allow older adults to age in place with dignity and privacy.”

    More broadly, Butlr’s founders see their work as an important way to shape the future of AI technology, which is expected to be a growing part of everyone’s lives. “We’re the nerve cells in the building, not the eyes,” Deng says. “That’s the future of AI we believe in: AI that can transform regular rooms into spaces that understand people and can use that understanding to do everything from making efficiency improvements to saving lives in senior care communities. That’s the right way to use this powerful technology.”
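
    To give a feel for why low-resolution thermal sensing can flag events without capturing identifiable imagery, here is a toy Python sketch: an 8-by-8 temperature grid (roughly the resolution class of such sensors) is thresholded to find a warm blob, and a blob that is much wider than it is tall is flagged as a possible person on the floor. Everything here, thresholds included, is an illustrative assumption; Butlr’s actual analytics are not public.

    ```python
    # Toy fall heuristic on a low-resolution thermal frame (illustrative only)
    import numpy as np

    def warm_blob_extent(frame_c: np.ndarray, ambient_c: float = 22.0):
        """Return (height, width) in pixels of the above-ambient region."""
        mask = frame_c > ambient_c + 4.0        # roughly skin-warm pixels
        if not mask.any():
            return 0, 0
        rows, cols = np.nonzero(mask)
        return rows.max() - rows.min() + 1, cols.max() - cols.min() + 1

    def looks_like_fall(frame_c: np.ndarray) -> bool:
        h, w = warm_blob_extent(frame_c)
        return w >= 2 * h > 0                   # lying: much wider than tall

    standing = np.full((8, 8), 22.0); standing[1:7, 3] = 30.0  # vertical blob
    fallen = np.full((8, 8), 22.0);   fallen[6, 1:7] = 30.0    # horizontal blob
    print(looks_like_fall(standing), looks_like_fall(fallen))  # False True
    ```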