More stories

  • MIT engineers create an energy-storing supercapacitor from ancient materials

    Two of humanity’s most ubiquitous historical materials, cement and carbon black (which resembles very fine charcoal), may form the basis for a novel, low-cost energy storage system, according to a new study. The technology could facilitate the use of renewable energy sources such as solar, wind, and tidal power by allowing energy networks to remain stable despite fluctuations in renewable energy supply.

    The two materials, the researchers found, can be combined with water to make a supercapacitor — an alternative to batteries — that could store electrical energy. As an example, the MIT researchers who developed the system say that their supercapacitor could eventually be incorporated into the concrete foundation of a house, where it could store a full day’s worth of energy while adding little (or nothing) to the cost of the foundation and still providing the needed structural strength. The researchers also envision a concrete roadway that could provide contactless recharging for electric cars as they travel over that road.

    The simple but innovative technology is described this week in the journal PNAS, in a paper by MIT professors Franz-Josef Ulm, Admir Masic, and Yang Shao-Horn, and four others at MIT and at the Wyss Institute for Biologically Inspired Engineering.

    Capacitors are in principle very simple devices, consisting of two electrically conductive plates immersed in an electrolyte and separated by a membrane. When a voltage is applied across the capacitor, positively charged ions from the electrolyte accumulate on the negatively charged plate, while the positively charged plate accumulates negatively charged ions. Since the membrane in between the plates blocks charged ions from migrating across, this separation of charges creates an electric field between the plates, and the capacitor becomes charged. The two plates can maintain this pair of charges for a long time and then deliver them very quickly when needed. Supercapacitors are simply capacitors that can store exceptionally large charges.
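
    As a rough illustration of the physics just described: the energy stored in any capacitor grows with its capacitance and with the square of the voltage across it (E = ½CV²). The short sketch below uses made-up numbers, not figures from the study, to show how a large capacitance at a modest voltage still adds up to useful energy.

    ```python
    # Minimal sketch of capacitor energy storage (illustrative values only,
    # not parameters reported in the PNAS study).
    def capacitor_energy_joules(capacitance_farads: float, voltage_volts: float) -> float:
        """Energy stored in an ideal capacitor: E = 1/2 * C * V^2."""
        return 0.5 * capacitance_farads * voltage_volts**2

    if __name__ == "__main__":
        # Hypothetical example: a 3,000 F supercapacitor cell charged to 1 V.
        energy_j = capacitor_energy_joules(3000, 1.0)
        print(f"Stored energy: {energy_j:.0f} J ({energy_j / 3600:.2f} Wh)")
    ```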

    The amount of energy a capacitor can store depends on the total surface area of its conductive plates. The key to the new supercapacitors developed by this team comes from a method of producing a cement-based material with an extremely high internal surface area due to a dense, interconnected network of conductive material within its bulk volume. The researchers achieved this by introducing carbon black — which is highly conductive — into a concrete mixture along with cement powder and water, and letting it cure. The water naturally forms a branching network of openings within the structure as it reacts with cement, and the carbon migrates into these spaces to make wire-like structures within the hardened cement. The network is fractal-like, with larger branches sprouting smaller branches, and those sprouting even smaller branchlets, and so on, ending up with an extremely large surface area within the confines of a relatively small volume. The material is then soaked in a standard electrolyte, such as potassium chloride, a kind of salt, which provides the charged particles that accumulate on the carbon structures. Two electrodes made of this material, separated by a thin space or an insulating layer, form a very powerful supercapacitor, the researchers found.

    The two plates of the capacitor function just like the two poles of a rechargeable battery of equivalent voltage: When connected to a source of electricity, as with a battery, energy gets stored in the plates, and then when connected to a load, the electrical current flows back out to provide power.

    “The material is fascinating,” Masic says, “because you have the most-used manmade material in the world, cement, that is combined with carbon black, that is a well-known historical material — the Dead Sea Scrolls were written with it. You have these at least two-millennia-old materials that when you combine them in a specific manner you come up with a conductive nanocomposite, and that’s when things get really interesting.”

    As the mixture sets and cures, he says, “The water is systematically consumed through cement hydration reactions, and this hydration fundamentally affects nanoparticles of carbon because they are hydrophobic (water repelling).” As the mixture evolves, “the carbon black is self-assembling into a connected conductive wire,” he says. The process is easily reproducible, with materials that are inexpensive and readily available anywhere in the world. And the amount of carbon needed is very small — as little as 3 percent by volume of the mix — to achieve a percolated carbon network, Masic says.

    Supercapacitors made of this material have great potential to aid in the world’s transition to renewable energy, Ulm says. The principal sources of emissions-free energy, wind, solar, and tidal power, all produce their output at variable times that often do not correspond to the peaks in electricity usage, so ways of storing that power are essential. “There is a huge need for big energy storage,” he says, and existing batteries are too expensive and mostly rely on materials such as lithium, whose supply is limited, so cheaper alternatives are badly needed. “That’s where our technology is extremely promising, because cement is ubiquitous,” Ulm says.

    The team calculated that a block of nanocarbon-black-doped concrete that is 45 cubic meters in size — equivalent to a cube about 3.5 meters across — would have enough capacity to store about 10 kilowatt-hours of energy, which is considered the average daily electricity usage for a household. Since the concrete would retain its strength, a house with a foundation made of this material could store a day’s worth of energy produced by solar panels or wind turbines and allow it to be used whenever it’s needed. And supercapacitors can be charged and discharged much more rapidly than batteries.
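
    A quick back-of-the-envelope check of those figures, using only the numbers quoted in this article, is sketched below; it also makes clear that the volumetric energy density is modest, which is why the researchers target bulk structural elements rather than compact battery packs.

    ```python
    # Back-of-the-envelope check of the figures quoted above.
    block_volume_m3 = 45.0        # doped-concrete block size from the article
    stored_energy_kwh = 10.0      # roughly one household-day of electricity

    cube_side_m = block_volume_m3 ** (1 / 3)
    energy_density_kwh_per_m3 = stored_energy_kwh / block_volume_m3

    print(f"Cube side: {cube_side_m:.2f} m")                           # ~3.6 m, matching "about 3.5 meters"
    print(f"Energy density: {energy_density_kwh_per_m3:.2f} kWh/m^3")  # ~0.22 kWh per cubic meter
    ```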

    After a series of tests used to determine the most effective ratios of cement, carbon black, and water, the team demonstrated the process by making small supercapacitors about the size of button-cell batteries, roughly 1 centimeter across and 1 millimeter thick, each of which could be charged to 1 volt. They then connected three of these in series to demonstrate their ability to light up a 3-volt light-emitting diode (LED). Having proved the principle, they now plan to build a series of larger versions, starting with ones about the size of a typical 12-volt car battery, then working up to a 45-cubic-meter version to demonstrate its ability to store a full house’s worth of power.

    There is a tradeoff between the storage capacity of the material and its structural strength, they found. Adding more carbon black lets the resulting supercapacitor store more energy, but it makes the concrete slightly weaker; this could be acceptable for applications where the concrete is not playing a structural role or where the full strength potential of concrete is not required. For applications such as a foundation, or structural elements of the base of a wind turbine, the “sweet spot” is around 10 percent carbon black in the mix, they found.

    Another potential application for carbon-cement supercapacitors is for building concrete roadways that could store energy produced by solar panels alongside the road and then deliver that energy to electric vehicles traveling along the road using the same kind of technology used for wirelessly rechargeable phones. A related type of car-recharging system is already being developed by companies in Germany and the Netherlands, but using standard batteries for storage.

    Initial uses of the technology might be for isolated homes or buildings or shelters far from grid power, which could be powered by solar panels attached to the cement supercapacitors, the researchers say.

    Ulm says that the system is very scalable, as the energy-storage capacity is a direct function of the volume of the electrodes. “You can go from 1-millimeter-thick electrodes to 1-meter-thick electrodes, and by doing so basically you can scale the energy storage capacity from lighting an LED for a few seconds, to powering a whole house,” he says.

    Depending on the properties desired for a given application, the system could be tuned by adjusting the mixture. For a vehicle-charging road, very fast charging and discharging rates would be needed, while for powering a home “you have the whole day to charge it up,” so slower-charging material could be used, Ulm says.

    “So, it’s really a multifunctional material,” he adds. Besides its ability to store energy in the form of supercapacitors, the same kind of concrete mixture can be used as a heating system, by simply applying electricity to the carbon-laced concrete.

    Ulm sees this as “a new way of looking toward the future of concrete as part of the energy transition.”

    The research team also included postdocs Nicolas Chanut and Damian Stefaniuk at MIT’s Department of Civil and Environmental Engineering, James Weaver at the Wyss Institute, and Yunguang Zhu in MIT’s Department of Mechanical Engineering. The work was supported by the MIT Concrete Sustainability Hub, with sponsorship by the Concrete Advancement Foundation.

  • 3 Questions: What’s it like winning the MIT $100K Entrepreneurship Competition?

    Solar power plays a major role in nearly every roadmap for global decarbonization. But solar panels are large, heavy, and expensive, which limits their deployment. What if solar panels looked more like a yoga mat?

    Such a technology could be transported in a roll, carried to the top of a building, and rolled out across the roof in a matter of minutes, slashing installation costs and dramatically expanding the places where rooftop solar makes sense.

    That was the vision laid out by the MIT spinout Active Surfaces as part of the winning pitch at this year’s MIT $100K Entrepreneurship Competition, which took place May 15. The company is leveraging materials science and manufacturing innovations from labs across MIT to make ultra-thin, lightweight, and durable solar a reality.

    The $100K is one of MIT’s most visible entrepreneurship competitions, and past winners say the prize money is only part of the benefit that winning brings to a burgeoning new company. MIT News sat down with Active Surfaces founders Shiv Bhakta, a graduate student in MIT’s Leaders for Global Operations dual-degree program within the MIT Sloan School of Management and Department of Civil and Environmental Engineering, and Richard Swartwout SM ’18 PhD ’21, an electrical engineering and computer science graduate and former Research Laboratory of Electronics postdoc and MIT.nano innovation fellow, to learn what the last couple of months have been like since they won.

    Q: What is Active Surfaces’ solution, and what is its potential?

    Bhakta: We’re commercializing an ultrathin film, flexible solar technology. Solar is one of the most broadly distributed resources in the world, but access is limited today. It’s heavy — it weighs 50 to 60 pounds a panel — it requires large teams to move around, and the form factor can only be deployed in specific environments.

    Our approach is to develop a solar technology for the built environment. In a nutshell, we can create flexible solar panels that are as thin as paper, just as efficient as traditional panels, and at unprecedented cost floors, all while being applied to any surface. Same area, same power. That’s our motto.

    When I came to MIT, my north star was to dive deeper in my climate journey and help make the world a better, greener place. Now, as we build Active Surfaces, I’m excited to see that dream taking shape. The prospect of transforming any surface into an energy source, thereby expanding solar accessibility globally, holds the promise of significantly reducing CO2 emissions at a gigaton scale. That’s what gets me out of bed in the morning.

    Swartwout: Solar and a lot of other renewables tend to be pretty land-inefficient. Solar 1.0 is using low hanging fruit: cheap land next to easy interconnects and new buildings designed to handle the weight of current panels. But as we ramp up solar, those things will run out. We need to utilize spaces and assets better. That’s what I think solar 2.0 will be: urban PV deployments, solar that’s closer to demand, and integrated into the built environment. These next-generation use cases aren’t just a racking system in the middle of nowhere.

    We’re going after commercial roofs, which would cover most [building] energy demand. Something like 80-90 percent of building electricity demands in the space can be met by rooftop solar.

    The goal is to do the manufacturing in-house. We use roll-to-roll manufacturing, so we can buy tons of equipment off the shelf, but most roll-to-roll manufacturing is made for things like labeling and tape, and not a semiconductor, so our plan is to be the core of semiconductor roll-to-roll manufacturing. There’s never been roll-to-roll semiconductor manufacturing before.

    Q: What have the last few months been like since you won the $100K competition?

    Bhakta: After winning the $100K, we’ve gotten a lot of inbound contact from MIT alumni. I think that’s my favorite part about the MIT community — people stay connected. They’ve been congratulating us, asking to chat, looking to partner, deploy, and invest.

    We’ve also gotten contacted by previous $100K competition winners and other startups that have spun out of MIT that are a year or two or three ahead of us in terms of development. There are a lot of startup scaling challenges that other startup founders are best equipped to answer, and it’s been huge to get guidance from them.

    We’ve also gotten into top accelerators like Cleantech Open, Venture For Climatetech, and ACCEL at Greentown Labs. We also onboarded two rockstar MIT Sloan interns for the summer. Now we’re getting to the product-development phase, building relationships with potential pilot partners, and scaling up the area of our technology.      

    Swartwout: Winning the $100K competition was a great point of validation for the company, because the judges themselves are well known in the venture capital community as well as people who have been in the startup ecosystem for a long time, so that has really propelled us forward. Ideally, we’ll be getting more MIT alumni to join us to fulfill this mission.

    Q: What are your plans for the next year or so?

    Swartwout: We’re planning on leveraging open-access facilities like those at MIT.nano and the University of Massachusetts Amherst. We’re pretty focused now on scaling size. Out of the lab, [the technology] is a 4-inch by 4-inch solar module, and the goal is to get up to something that’s relevant for the industry to offset electricity for building owners and generate electricity for the grid at a reasonable cost.

    Bhakta: In the next year, through those open-access facilities, the goal is to go from 100-millimeter width to 300-millimeter width and a very long length using a roll-to-roll manufacturing process. That means getting through the engineering challenges of scaling technology and fine tuning the performance.

    When we’re ready to deliver a pilotable product, it’s my job to have customers lined up ready to demonstrate this works on their buildings, sign longer term contracts to get early revenue, and have the support we need to demonstrate this at scale. That’s the goal.

  • Addressing food insecurity in arid regions with an open-source evaporative cooling chamber design

    Anyone who has ever perspired on a hot summer day understands the principle — and critical value — of evaporative cooling. Our bodies produce droplets of sweat when we overheat, and with a dry breeze or a nearby fan those droplets evaporate, absorbing heat in the process and creating a welcome cooling sensation.

    That same scientific principle, known as evaporative cooling, can be a game-changer for preserving fruits and vegetables grown on smallholder farms, where the wilting dry heat can quickly degrade freshly harvested produce. If those just-picked red peppers and leafy greens are not consumed in short order, or quickly transferred to cold — or at least cool — storage, much of the harvest can go to waste.

    Now, MIT Professor Leon Glicksman of the Building Technology Program within the Department of Architecture, and Research Engineer Eric Verploegen of MIT D-Lab have released their open-source design for a forced-air evaporative cooling chamber that can be built in a used shipping container and powered by either grid electricity or built-in solar panels. With a capacity of 168 produce crates, the chamber offers great promise for smallholder farmers in hot, dry climates who need an affordable method for quickly bringing down the temperature of freshly harvested fruit and vegetables to ensure they stay fresh.

    “Delicate fruits and vegetables are most vulnerable to spoilage if they are picked during the day,” says Verploegen, a longtime proponent of using evaporative cooling to reduce post-harvest waste. “And if refrigerated cold rooms aren’t feasible or affordable,” he continues, “evaporative cooling can make a big difference for farmers and the communities they feed.”

    Verploegen has made evaporative cooling the focus of his work since 2016, beginning with small-scale evaporative cooling “Zeer” pots, typically with a capacity between 10 and 100 liters and well suited to household use, as well as larger double-brick-walled chambers known as zero-energy cooling chambers, or ZECCs, which can store between six and 16 vegetable crates at a time. These designs rely on passive airflow. The newly released forced-air evaporative cooling chamber is differentiated from these two more modest designs by its active airflow system, as well as by its significantly larger capacity.

    In 2019, Verploegen turned his attention to the idea of building a larger evaporative cooling room and joined forces with Glicksman to explore using forced, instead of passive, airflow to cool fruit and vegetables. After studying existing cold storage options and conducting user research with farmers in Kenya, they came up with the idea to use active evaporative cooling with a used shipping container as the structure of the chamber. As the Covid-19 pandemic was ramping up in 2020, they procured a used 10-foot shipping container, installed it in the courtyard area outside D-Lab near Village Street, and went to work on a prototype of the forced-air evaporative cooling chamber.

    Here’s how it works: Industrial fans draw hot, dry air through a porous wet pad and into the chamber. The resulting cool, humid air is forced through the crates of fruits and vegetables stored inside. The air is then directed through the raised floor and into a channel between the insulation and the exterior container wall, where it flows to exhaust holes near the top of the side walls.
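
    The cooling such a pad can deliver is commonly described by a saturation effectiveness: the supply air approaches the wet-bulb temperature of the incoming air, and the drier that air is, the larger the temperature drop. The sketch below illustrates that standard relationship; the effectiveness and weather values are chosen purely for illustration and are not taken from the D-Lab design.

    ```python
    # Minimal sketch of direct evaporative cooling (illustrative numbers only).
    # Supply temperature approaches the wet-bulb temperature of the incoming air:
    #   T_supply = T_dry - effectiveness * (T_dry - T_wet)
    def supply_temperature_c(t_dry_c: float, t_wet_c: float, effectiveness: float = 0.8) -> float:
        """Estimate air temperature leaving a wetted pad of given saturation effectiveness."""
        return t_dry_c - effectiveness * (t_dry_c - t_wet_c)

    if __name__ == "__main__":
        # Hypothetical hot, dry afternoon: 42 C dry-bulb, 24 C wet-bulb.
        print(f"Supply air: {supply_temperature_c(42.0, 24.0):.1f} C")  # ~27.6 C at 80% effectiveness
    ```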

    Leon Glicksman, a professor of building technology and mechanical engineering, drew on his previous research in natural ventilation and airflow in buildings to come up with the vertical forced-air design pattern for the chamber. “The key to the design is the close control of the airflow strength, and its direction,” he says. “The strength of the airflow passing directly through the crates of fruits and vegetables, and the airflow pathway itself, are what makes this system work so well. The design promotes rapid cooling of a harvest taken directly from the field.”

    In addition to the novel and effective airflow system, the forced-air evaporative cooling chamber represents so much of what D-Lab is known for in its work in low-resourced and off-grid communities: developing low-cost and low-carbon-footprint technologies with partners. Evaporative cooling is no different. Whether connected to the electrical grid or run from solar panels, the forced-air chamber consumes one-quarter the power of refrigerated cold rooms. And, as the chamber is designed to be built in a used shipping container — ubiquitous the world over — the project is a great example of up-cycling.

    Piloting the design

    As with earlier investigations, Verploegen, Glicksman, and their colleagues have worked closely with farmers and community members. For the forced-air system, the team engaged with community partners who face firsthand the need for better cooling and storage conditions for their produce, and who live in the climate conditions where evaporative cooling works best. Two partners, one in Kenya and one in India, each built a pilot chamber, testing and informing the process alongside the work being done at MIT.

    In Kenya, where smallholder farms produce 63 percent of total food consumed and over 50 percent of smallholder produce is lost post-harvest, they worked with Solar Freeze, a cold storage company located in Kibwezi, Kenya. Solar Freeze, whose founder Dysmus Kisilu was a 2019 MIT D-Lab Scale-Ups Fellow, built an off-grid forced-air evaporative cooling chamber at a produce market between Nairobi and Mombasa at a cost of $15,000, powered by solar photovoltaic panels. “The chamber is offering a safety net against huge post-harvest losses previously experienced by local smallholder farmers,” comments Peter Mumo, an entrepreneur and local politician who oversaw the construction of the Solar Freeze chamber in Makueni County, Kenya.

    As much as 30 percent of fruits and vegetables produced in India are wasted each year due to insufficient cold storage capacity, lack of cold storage close to farms, poor transportation infrastructure, and other gaps in the cold chain. Although the climate varies across the subcontinent, the hot desert climate there, such as in Bhuj where the Hunnarshala Foundation is headquartered, is perfect for evaporative cooling. Hunnarshala signed on to build an on-grid system for $8,100, which they located at an organic farm near Bhuj. “We have really encouraging results,” says Mahavir Acharya, executive director of Hunnarshala Foundation. “In peak summer, when the temperature is 42 [Celsius] we are able to get to 26 degrees [Celsius] inside and 95 percent humidity, which is really good conditions for vegetables to remain fresh for three, four, five, six days. In winter we tested [and saw temperatures reduced from] 35 degrees to 24 degrees [Celsius], and for seven days the quality was quite good.”

    Getting the word out

    With the concept validated and pilots well established, the next step is spreading the word.

    “We’re continuing to test and optimize the system, both in Kenya and India, as well as our test chambers here at MIT,” says Verploegen. “We will continue piloting with users and deploying with farmers and vendors, gathering data on the thermal performance, the shelf life of fruits and vegetables in the chamber, and how using the technology impacts the users. And, we’re also looking to engage with cold storage providers who might want to build this or others in the horticulture value chain such as farmer cooperatives, individual farmers, and local governments.”

    To reach the widest number of potential users, Verploegen and the team chose not to pursue a patent and instead set up a website to disseminate the open-source design, with detailed guidance on how to build a forced-air evaporative cooling chamber. In addition to extensive written documentation, well illustrated with detailed CAD drawings, the team has created instructional videos.

    As co-principal investigator in the early stages of the project, MIT professor of mechanical engineering Dan Frey contributed to the market research phase of the project and the initial conception of the chamber design. “These forced-air evaporative cooling chambers have great potential, and the open-source approach is an excellent choice for this project,” says Frey. “The design’s release is a significant milestone on the path to positive impacts.”

    The forced-air evaporative cooling chamber research and design have been supported by the Abdul Latif Jameel Water and Food Systems Lab through an India Grant, Seed Grant, and a Solutions Grant.

  • Cutting urban carbon emissions by retrofitting buildings

    To support the worldwide struggle to reduce carbon emissions, many cities have made public pledges to cut their carbon emissions in half by 2030, and some have promised to be carbon neutral by 2050. Buildings can be responsible for more than half of a municipality’s carbon emissions. Because new buildings are typically designed in ways that minimize energy use and carbon emissions, attention is now turning to cleaning up existing buildings.

    A decade ago, leaders in some cities took the first step in that process: They quantified their problem. Based on data from their utilities on natural gas and electricity consumption and standard pollutant-emission rates, they calculated how much carbon came from their buildings. They then adopted policies to encourage retrofits, such as adding insulation, switching to double-glazed windows, or installing rooftop solar panels. But will those steps be enough to meet their pledges?

    “In nearly all cases, cities have no clear plan for how they’re going to reach their goal,” says Christoph Reinhart, a professor in the Department of Architecture and director of the Building Technology Program. “That’s where our work comes in. We aim to help them perform analyses so they can say, ‘If we, as a community, do A, B, and C to buildings of a certain type within our jurisdiction, then we are going to get there.’”

    To support those analyses, Reinhart and a team in the MIT Sustainable Design Lab (SDL) — PhD candidate Zachary M. Berzolla SM ’21; former doctoral student Yu Qian Ang PhD ’22, now a research collaborator at the SDL; and former postdoc Samuel Letellier-Duchesne, now a senior building performance analyst at the international building engineering and consulting firm Introba — launched a publicly accessible website providing a series of simulation tools and a process for using them to determine the impacts of planned steps on a specific building stock. Says Reinhart: “The takeaway can be a clear technology pathway — a combination of building upgrades, renewable energy deployments, and other measures that will enable a community to reach its carbon-reduction goals for their built environment.”

    Analyses performed in collaboration with policymakers from selected cities around the world yielded insights demonstrating that reaching current goals will require more effort than city representatives and — in a few cases — even the research team had anticipated.

    Exploring carbon-reduction pathways

    The researchers’ approach builds on a physics-based “building energy model,” or BEM, akin to those that architects use to design high-performance green buildings. In 2013, Reinhart and his team developed a method of extending that concept to analyze a cluster of buildings. Based on publicly available geographic information system (GIS) data, including each building’s type, footprint, and year of construction, the method defines the neighborhood — including trees, parks, and so on — and then, using meteorological data, simulates how the buildings will interact, the airflows among them, and their energy use. The result is an “urban building energy model,” or UBEM, for a neighborhood or a whole city.

    The website developed by the MIT team enables neighborhoods and cities to develop their own UBEM and to use it to calculate their current building energy use and resulting carbon emissions, and then how those outcomes would change assuming different retrofit programs or other measures being implemented or considered. “The website — UBEM.io — provides step-by-step instructions and all the simulation tools that a team will need to perform an analysis,” says Reinhart.

    The website starts by describing three roles required to perform an analysis: a local sustainability champion who is familiar with the municipality’s carbon-reduction efforts; a GIS manager who has access to the municipality’s urban datasets and maintains a digital model of the built environment; and an energy modeler — typically a hired consultant — who has a background in green building consulting and individual building energy modeling.

    The team begins by defining “shallow” and “deep” building retrofit scenarios. To explain, Reinhart offers some examples: “‘Shallow’ refers to things that just happen, like when you replace your old, failing appliances with new, energy-efficient ones, or you install LED light bulbs and weatherstripping everywhere,” he says. “‘Deep’ adds to that list things you might do only every 20 years, such as ripping out walls and putting in insulation or replacing your gas furnace with an electric heat pump.”

    Once those scenarios are defined, the GIS manager uploads to UBEM.io a dataset of information about the city’s buildings, including their locations and attributes such as geometry, height, age, and use (e.g., commercial, retail, residential). The energy modeler then builds a UBEM to calculate the energy use and carbon emissions of the existing building stock. Once that baseline is established, the energy modeler can calculate how specific retrofit measures will change the outcomes.
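
    In code, the workflow described above amounts to attaching retrofit assumptions to a table of building records and re-running the energy calculation. The sketch below is a highly simplified stand-in for that step (UBEM.io itself wraps full physics-based simulation engines); its field names and energy-use intensities are invented for illustration.

    ```python
    # Highly simplified stand-in for the UBEM workflow described above.
    # Field names and energy-use intensities are illustrative, not UBEM.io values.
    from dataclasses import dataclass

    @dataclass
    class Building:
        use: str               # e.g., "residential", "commercial"
        floor_area_m2: float
        year_built: int
        eui_kwh_per_m2: float  # annual energy-use intensity (baseline)

    # Assumed fractional energy savings per retrofit scenario (illustrative only).
    SCENARIO_SAVINGS = {"baseline": 0.0, "shallow": 0.15, "deep": 0.40}

    def stock_energy_mwh(buildings: list[Building], scenario: str) -> float:
        """Total annual energy use of the building stock under a retrofit scenario."""
        factor = 1.0 - SCENARIO_SAVINGS[scenario]
        return sum(b.floor_area_m2 * b.eui_kwh_per_m2 * factor for b in buildings) / 1000.0

    stock = [Building("residential", 250, 1935, 180), Building("commercial", 4000, 1978, 220)]
    for name in SCENARIO_SAVINGS:
        print(f"{name:>8}: {stock_energy_mwh(stock, name):,.0f} MWh/yr")
    ```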

    Workshop to test-drive the method

    Two years ago, the MIT team set up a three-day workshop to test the website with sample users. Participants included policymakers from eight cities and municipalities around the world: namely, Braga (Portugal), Cairo (Egypt), Dublin (Ireland), Florianopolis (Brazil), Kiel (Germany), Middlebury (Vermont, United States), Montreal (Canada), and Singapore. Taken together, the cities represent a wide range of climates, socioeconomic demographics, cultures, governing structures, and sizes.

    Working with the MIT team, the participants presented their goals, defined shallow- and deep-retrofit scenarios for their city, and selected a limited but representative area for analysis — an approach that would speed up analyses of different options while also generating results valid for the city as a whole.

    They then performed analyses to quantify the impacts of their retrofit scenarios. Finally, they learned how best to present their findings — a critical part of the exercise. “When you do this analysis and bring it back to the people, you can say, ‘This is our homework over the next 30 years. If we do this, we’re going to get there,’” says Reinhart. “That makes you part of the community, so it’s a joint goal.”

    Sample results

    After the close of the workshop, Reinhart and his team confirmed their findings for each city and then added one more factor to the analyses: the state of the city’s electric grid. Several cities in the study had pledged to make their grid carbon-neutral by 2050. Including the grid in the analysis was therefore critical: If a building becomes all-electric and purchases its electricity from a carbon-free grid, then that building will be carbon neutral — even with no on-site energy-saving retrofits.

    The final analysis for each city therefore calculated the total kilograms of carbon dioxide equivalent emitted per square meter of floor space assuming the following scenarios: the baseline; shallow retrofit only; shallow retrofit plus a clean electricity grid; deep retrofit only; deep retrofit plus rooftop photovoltaic solar panels; and deep retrofit plus a clean electricity grid. (Note that “clean electricity grid” is based on the area’s most ambitious decarbonization target for their power grid.)
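
    Because a clean grid matters as much as the retrofit itself, the per-square-meter emissions for each scenario reduce to multiplying fuel and electricity use by the corresponding emission factors. The sketch below illustrates that bookkeeping with invented numbers; it is not the team’s model or any city’s actual data.

    ```python
    # Illustrative bookkeeping for the scenarios listed above (invented numbers,
    # not results from the workshop analyses).
    def emissions_kgco2e_per_m2(gas_kwh_m2: float, elec_kwh_m2: float,
                                gas_factor: float = 0.18, grid_factor: float = 0.35) -> float:
        """kg CO2-equivalent per m2 of floor area, given end-use energy and emission factors."""
        return gas_kwh_m2 * gas_factor + elec_kwh_m2 * grid_factor

    scenarios = {
        # (natural gas kWh/m2, electricity kWh/m2, grid factor kgCO2e/kWh)
        "baseline":             (120, 60, 0.35),
        "shallow retrofit":     (100, 55, 0.35),
        "shallow + clean grid": (100, 55, 0.00),
        "deep retrofit":        (40, 50, 0.35),
        "deep + clean grid":    (40, 50, 0.00),
    }

    for name, (gas, elec, grid) in scenarios.items():
        print(f"{name:>20}: {emissions_kgco2e_per_m2(gas, elec, grid_factor=grid):.1f} kg CO2e/m2")
    ```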

    The following paragraphs provide highlights of the analyses for three of the eight cities. Included are the city’s setting, emission-reduction goals, current and proposed measures, and calculations of how implementation of those measures would affect their energy use and carbon emissions.

    Singapore

    Singapore is generally hot and humid, and its building energy use is largely in the form of electricity for cooling. The city is dominated by high-rise buildings, so there’s not much space for rooftop solar installations to generate the needed electricity. Therefore, plans for decarbonizing the current building stock must involve retrofits. The shallow-retrofit scenario focuses on installing energy-efficient lighting and appliances. To those steps, the deep-retrofit scenario adds adopting a district cooling system. Singapore’s stated goals are to cut baseline carbon emissions by about a third by 2030 and in half by 2050.

    The analysis shows that, with just the shallow retrofits, Singapore won’t achieve its 2030 goal. But with the deep retrofits, it should come close. Notably, decarbonizing the electric grid would enable Singapore to meet and substantially exceed its 2050 target assuming either retrofit scenario.

    Dublin

    Dublin has a mild climate with relatively comfortable summers but cold, humid winters. As a result, the city’s energy use is dominated by fossil fuels, in particular, natural gas for space heating and domestic hot water. The city presented just one target — a 40 percent reduction by 2030.

    Dublin has many neighborhoods made up of Georgian row houses, and, at the time of the workshop, the city already had a program in place encouraging groups of owners to insulate their walls. The shallow-retrofit scenario therefore focuses on weatherization upgrades (adding weatherstripping to windows and doors, insulating crawlspaces, and so on). To that list, the deep-retrofit scenario adds insulating walls and installing upgraded windows. The participants didn’t include electric heat pumps, as the city was then assessing the feasibility of expanding the existing district heating system.

    Results of the analyses show that implementing the shallow-retrofit scenario won’t enable Dublin to meet its 2030 target. But the deep-retrofit scenario will. However, like Singapore, Dublin could make major gains by decarbonizing its electric grid. The analysis shows that a decarbonized grid — with or without the addition of rooftop solar panels where possible — could more than halve the carbon emissions that remain in the deep-retrofit scenario. Indeed, a decarbonized grid plus electrification of the heating system by incorporating heat pumps could enable Dublin to meet a future net-zero target.

    Middlebury

    Middlebury, Vermont, has warm, wet summers and frigid winters. Like Dublin, its energy demand is dominated by natural gas for heating. But unlike Dublin, it already has a largely decarbonized electric grid with a high penetration of renewables.

    For the analysis, the Middlebury team chose to focus on an aging residential neighborhood similar to many that surround the city core. The shallow-retrofit scenario calls for installing heat pumps for space heating, and the deep-retrofit scenario adds improvements in building envelopes (the façade, roof, and windows). The town’s targets are a 40 percent reduction from the baseline by 2030 and net-zero carbon by 2050.

    Results of the analyses showed that implementing the shallow-retrofit scenario won’t achieve the 2030 target. The deep-retrofit scenario would get the city to the 2030 target but not to the 2050 target. Indeed, even with the deep retrofits, fossil fuel use remains high. The explanation? While both retrofit scenarios call for installing heat pumps for space heating, the city would continue to use natural gas to heat its hot water.

    Lessons learned

    For several policymakers, seeing the results of their analyses was a wake-up call. They learned that the strategies they had planned might not be sufficient to meet their stated goals — an outcome that could prove publicly embarrassing for them in the future.

    Like the policymakers, the researchers learned from the experience. Reinhart notes three main takeaways.

    First, he and his team were surprised to find how much of a building’s energy use and carbon emissions can be traced to domestic hot water. With Middlebury, for example, even switching from natural gas to heat pumps for space heating didn’t yield the expected effect: On the bar graphs generated by their analyses, the gray bars indicating carbon from fossil fuel use remained. As Reinhart recalls, “I kept saying, ‘What’s all this gray?’” While the policymakers talked about using heat pumps, they were still going to use natural gas to heat their hot water. “It’s just stunning that hot water is such a big-ticket item. It’s huge,” says Reinhart.

    Second, the results demonstrate the importance of including the state of the local electric grid in this type of analysis. “Looking at the results, it’s clear that if we want to have a successful energy transition, the building sector and the electric grid sector both have to do their homework,” notes Reinhart. Moreover, in many cases, reaching carbon neutrality by 2050 would require not only a carbon-free grid but also all-electric buildings.

    Third, Reinhart was struck by how different the bar graphs presenting results for the eight cities look. “This really celebrates the uniqueness of different parts of the world,” he says. “The physics used in the analysis is the same everywhere, but differences in the climate, the building stock, construction practices, electric grids, and other factors make the consequences of making the same change vary widely.”

    In addition, says Reinhart, “there are sometimes deeply ingrained conflicts of interest and cultural norms, which is why you cannot just say everybody should do this and do this.” For instance, in one case, the city owned both the utility and the natural gas it burned. As a result, the policymakers didn’t consider putting in heat pumps because “the natural gas was a significant source of municipal income, and they didn’t want to give that up,” explains Reinhart.

    Finally, the analyses quantified two other important measures: energy use and “peak load,” which is the maximum electricity demanded from the grid over a specific time period. Reinhart says that energy use “is probably mostly a plausibility check. Does this make sense?” And peak load is important because the utilities need to keep a stable grid.

    Middlebury’s analysis provides an interesting look at how certain measures could influence peak electricity demand. There, the introduction of electric heat pumps for space heating more than doubles the peak demand from buildings, suggesting that substantial additional capacity would have to be added to the grid in that region. But when heat pumps are combined with other retrofitting measures, the peak demand drops to levels lower than the starting baseline.

    The aftermath: An update

    Reinhart stresses that the specific results from the workshop provide just a snapshot in time; that is, where the cities were at the time of the workshop. “This is not the fate of the city,” he says. “If we were to do the same exercise today, we’d no doubt see a change in thinking, and the outcomes would be different.”

    For example, heat pumps are now familiar technology and have demonstrated their ability to handle even bitterly cold climates. And in some regions, they’ve become economically attractive, as the war in Ukraine has made natural gas both scarce and expensive. Also, there’s now awareness of the need to deal with hot water production.

    Reinhart notes that performing the analyses at the workshop did have the intended impact: It brought about change. Two years after the project had ended, most of the cities reported that they had implemented new policy measures or had expanded their analysis across their entire building stock. “That’s exactly what we want,” comments Reinhart. “This is not an academic exercise. It’s meant to change what people focus on and what they do.”

    Designing policies with socioeconomics in mind

    Reinhart notes a key limitation of the UBEM.io approach: It looks only at technical feasibility. But will the building owners be willing and able to make the energy-saving retrofits? Data show that — even with today’s incentive programs and subsidies — current adoption rates are only about 1 percent. “That’s way too low to enable a city to achieve its emission-reduction goals in 30 years,” says Reinhart. “We need to take into account the socioeconomic realities of the residents to design policies that are both effective and equitable.”

    To that end, the MIT team extended their UBEM.io approach to create a socio-techno-economic analysis framework that can predict the rate of retrofit adoption throughout a city. Based on census data, the framework creates a UBEM that includes demographics for the specific types of buildings in a city. Accounting for the cost of making a specific retrofit plus financial benefits from policy incentives and future energy savings, the model determines the economic viability of the retrofit package for representative households.
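
    The economic-viability test in such a framework boils down to whether incentives plus discounted future energy savings outweigh the upfront retrofit cost for a given household. The sketch below shows a bare-bones version of that comparison; the costs, savings, and discount rate are placeholders rather than values from the MIT framework.

    ```python
    # Bare-bones retrofit-viability check (placeholder numbers, not the MIT framework's inputs).
    def retrofit_is_viable(upfront_cost: float, incentive: float,
                           annual_savings: float, years: int = 20,
                           discount_rate: float = 0.05) -> bool:
        """True if incentives plus discounted energy savings cover the upfront cost."""
        npv_savings = sum(annual_savings / (1 + discount_rate) ** t for t in range(1, years + 1))
        return incentive + npv_savings >= upfront_cost

    # Hypothetical deep retrofit: $30,000 cost, $8,000 incentive, $1,500/yr energy savings.
    print(retrofit_is_viable(30_000, 8_000, 1_500))  # False: savings fall short at these numbers
    ```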

    Sample analyses for two Boston neighborhoods suggest that high-income households are largely ineligible for need-based incentives or the incentives are insufficient to prompt action. Lower-income households are eligible and could benefit financially over time, but they don’t act, perhaps due to limited access to information, a lack of time or capital, or a variety of other reasons.

    Reinhart notes that their work thus far “is mainly looking at technical feasibility. Next steps are to better understand occupants’ willingness to pay, and then to determine what set of federal and local incentive programs will trigger households across the demographic spectrum to retrofit their apartments and houses, helping the worldwide effort to reduce carbon emissions.”

    This work was supported by Shell through the MIT Energy Initiative. Zachary Berzolla was supported by the U.S. National Science Foundation Graduate Research Fellowship. Samuel Letellier-Duchesne was supported by the postdoctoral fellowship of the Natural Sciences and Engineering Research Council of Canada.

    This article appears in the Spring 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative. More

  • MIT welcomes Brian Deese as its next Institute Innovation Fellow

    MIT has appointed former White House National Economic Council (NEC) director Brian Deese as an MIT Innovation Fellow, focusing on the impact of economic policies that strengthen the United States’ industrial capacity and on accelerating climate investment and innovation. Deese will begin his appointment this summer. 

    “From climate change to U.S. industrial strategy, the people of MIT strive to make serious positive change at scale — and in Brian Deese, we have found a brilliant ally, guide, and inspiration,” says MIT President Sally Kornbluth. “He pairs an easy command of technological questions with a rare grasp of contemporary policy and the politics it takes for such policies to succeed. We are extremely fortunate to have Brian with us for this pivotal year.”

    Deese is an accomplished public policy innovator. As President Joe Biden’s top economic advisor, he was instrumental in shaping several pieces of legislation — the bipartisan Infrastructure Investment and Jobs Act, the CHIPS and Science Act, and the Inflation Reduction Act  — that together are expected to yield more than $3 trillion over the next decade in public and private investments in physical infrastructure, semiconductors, and clean energy, as well as a major expansion of scientific research. 

    “I was attracted to MIT by its combination of extraordinary capabilities in engineering, science, and economics, and the desire and enthusiasm to translate those capabilities into real-world outcomes,” says Deese. 

    Climate and economic policy expertise

    Deese’s public service career has spanned multiple periods of global economic crisis. He has helped shape policies ranging from clean energy infrastructure investments to addressing supply chain disruptions triggered by the pandemic and the war in Ukraine. 

    As NEC director in the Biden White House, Deese oversaw the development of domestic and international economic policy. Previously, he served as the global head of sustainable investing at BlackRock, Inc., one of the world’s leading asset management firms; before that, he held several key posts in the Obama White House, serving as the president’s top advisor on climate policy; deputy director of the Office of Management and Budget; and deputy director of the NEC. Early in the Obama Administration, Deese played a key role in developing and implementing the rescue of the U.S. auto industry during the Great Recession. Deese earned a bachelor of arts degree from Middlebury College and his JD from Yale Law School.

    Despite recent legislative progress, the world still faces daunting climate and energy challenges, including the need to reduce greenhouse gas emissions, increase energy capacity, and fill infrastructure gaps, Deese notes.

    “Our biggest challenge is our biggest opportunity,” he says. “We need to build at a speed not seen in generations.”  

    Deese is also thinking about how to effectively design and implement industrial strategy approaches that build on recent efforts to restore the U.S. semiconductor industry. What’s needed, he says, is an approach that can foster innovation and build manufacturing capacity — especially in economically disadvantaged areas of the country — while learning lessons from previous successes and failures in this field. 

    “This is a timely and important appointment because Brian has enormous experience at the top levels of government in shaping public policies for climate, technology, manufacturing, and energy, and the consequences for  shared prosperity nationally and globally — all subjects of intense interest to the MIT community,” says MIT Associate Provost Richard Lester. “I fully expect that faculty and student engagement with Brian while he is with us will help advance MIT research, innovation, and impact in these critical areas.”

    Innovation fellowship

    Previous MIT Innovation Fellows, typically in residence for a year or more, have included luminaries from industry and government, including most recently Virginia M. “Ginny” Rometty, former chair, president, and CEO of IBM; Eric Schmidt, former executive chair of Google’s parent company, Alphabet; the late Ash Carter, former U.S. secretary of defense; and former Massachusetts Governor Deval Patrick.

    During his time at MIT, Deese will work on a project detailing and mapping private investment in clean energy and other climate-related activities. He will also interact with students, staff, and faculty from across the Institute. 

    “I hope my role at MIT can largely be about forging partnerships within the Institute and outside of the Institute to significantly reduce the time between innovation and outcomes into the world,” says Deese.

  • Chemists discover why photosynthetic light-harvesting is so efficient

    When photosynthetic cells absorb light from the sun, the energy of the captured photons leaps between a series of light-harvesting proteins until it reaches the photosynthetic reaction center. There, cells convert the energy into electrons, which eventually power the production of sugar molecules.

    This transfer of energy through the light-harvesting complex occurs with extremely high efficiency: Nearly every photon of light absorbed generates an electron, a phenomenon known as near-unity quantum efficiency.

    A new study from MIT chemists offers a potential explanation for how proteins of the light-harvesting complex, also called the antenna, achieve that high efficiency. For the first time, the researchers were able to measure the energy transfer between light-harvesting proteins, allowing them to discover that the disorganized arrangement of these proteins boosts the efficiency of the energy transduction.

    “In order for that antenna to work, you need long-distance energy transduction. Our key finding is that the disordered organization of the light-harvesting proteins enhances the efficiency of that long-distance energy transduction,” says Gabriela Schlau-Cohen, an associate professor of chemistry at MIT and the senior author of the new study.

    MIT postdocs Dihao Wang and Dvir Harris and former MIT graduate student Olivia Fiebig PhD ’22 are the lead authors of the paper, which appears this week in the Proceedings of the National Academy of Sciences. Jianshu Cao, an MIT professor of chemistry, is also an author of the paper.

    Energy capture

    For this study, the MIT team focused on purple bacteria, which are often found in oxygen-poor aquatic environments and are commonly used as a model for studies of photosynthetic light-harvesting.

    Within these cells, captured photons travel through light-harvesting complexes consisting of proteins and light-absorbing pigments such as chlorophyll. Using ultrafast spectroscopy, a technique that uses extremely short laser pulses to study events that happen on timescales of femtoseconds to nanoseconds, scientists have been able to study how energy moves within a single one of these proteins. However, studying how energy travels between these proteins has proven much more challenging because it requires positioning multiple proteins in a controlled way.

    To create an experimental setup where they could measure how energy travels between two proteins, the MIT team designed synthetic nanoscale membranes with a composition similar to those of naturally occurring cell membranes. By controlling the size of these membranes, known as nanodiscs, they were able to control the distance between two proteins embedded within the discs.

    For this study, the researchers embedded two versions of the primary light-harvesting protein found in purple bacteria, known as LH2 and LH3, into their nanodiscs. LH2 is the protein that is present during normal light conditions, and LH3 is a variant that is usually expressed only during low light conditions.

    Using the cryo-electron microscope at the MIT.nano facility, the researchers could image their membrane-embedded proteins and show that they were positioned at distances similar to those seen in the native membrane. They were also able to measure the distances between the light-harvesting proteins, which were on the scale of 2.5 to 3 nanometers.

    Disordered is better

    Because LH2 and LH3 absorb slightly different wavelengths of light, it is possible to use ultrafast spectroscopy to observe the energy transfer between them. For proteins spaced closely together, the researchers found that it takes about 6 picoseconds for the absorbed energy to travel between them. For proteins farther apart, the transfer takes up to 15 picoseconds.

    Faster travel translates to more efficient energy transfer, because the longer the journey takes, the more energy is lost during the transfer.

    “When a photon gets absorbed, you only have so long before that energy gets lost through unwanted processes such as nonradiative decay, so the faster it can get converted, the more efficient it will be,” Schlau-Cohen says.
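
    One way to make this intuition concrete is as a competition between rates: the transfer must outpace loss channels such as nonradiative decay, so its efficiency is roughly the transfer rate divided by the sum of the transfer and loss rates. The sketch below applies that textbook relation to the transfer times quoted above, together with an assumed loss time of about 1 nanosecond that is illustrative rather than a value reported in the paper.

    ```python
    # Kinetic-competition estimate of transfer efficiency (textbook relation, not the
    # paper's analysis). Loss time of ~1 ns is an assumed, illustrative value.
    def transfer_efficiency(transfer_time_ps: float, loss_time_ps: float = 1000.0) -> float:
        """Efficiency = k_transfer / (k_transfer + k_loss), with rates = 1/time."""
        k_transfer = 1.0 / transfer_time_ps
        k_loss = 1.0 / loss_time_ps
        return k_transfer / (k_transfer + k_loss)

    for t_ps in (6.0, 15.0):  # transfer times quoted above
        print(f"{t_ps:>4.0f} ps hop: {transfer_efficiency(t_ps):.1%} efficient per step")
    ```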

    The researchers also found that proteins arranged in a lattice structure showed less efficient energy transfer than proteins that were arranged in randomly organized structures, as they usually are in living cells.

    “Ordered organization is actually less efficient than the disordered organization of biology, which we think is really interesting because biology tends to be disordered. This finding tells us that that may not just be an inevitable downside of biology, but organisms may have evolved to take advantage of it,” Schlau-Cohen says.

    Now that they have established the ability to measure inter-protein energy transfer, the researchers plan to explore energy transfer between other proteins, such as the transfer from proteins of the antenna to proteins of the reaction center. They also plan to study energy transfer between antenna proteins found in organisms other than purple bacteria, such as green plants.

    The research was funded primarily by the U.S. Department of Energy.

  • Panel addresses technologies needed for a net-zero future

    Five speakers at a recent public panel discussion hosted by the MIT Energy Initiative (MITEI) and introduced by Deputy Director for Science and Technology Robert Stoner tackled one of the thorniest, yet most critical, questions facing the world today: How can we achieve the ambitious goals set by governments around the globe, including the United States, to reach net zero emissions of greenhouse gases by mid-century?

    While the challenges are great, the panelists agreed, there is reason for optimism that these technological challenges can be solved. More uncertain, some suggested, are the social, economic, and political hurdles to bringing about the needed innovations.

    The speakers addressed areas where new or improved technologies or systems are needed if these ambitious goals are to be achieved. Anne White, associate provost and associate vice president for research administration and a professor of nuclear science and engineering at MIT, moderated the panel discussion. She said that achieving the ambitious net-zero goal “has to be accomplished by filling some gaps, and going after some opportunities.” In addressing some of these needs, she said, the five topics chosen for the panel discussion were “places where MIT has significant expertise, and progress is already ongoing.”

    First of these was the heating and cooling of buildings. Christoph Reinhart, a professor of architecture and director of the Building Technology Program, said that currently about 1 percent of existing buildings are being retrofitted each year for energy efficiency and conversion from fossil-fuel heating systems to efficient electric ones — but that is not nearly enough to meet the 2050 net-zero target. “It’s an enormous task,” he said. Meeting the goals, he said, would require increasing the retrofitting rate to 5 percent per year and requiring all new construction to be carbon neutral as well.
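
    The gap between those two rates compounds over the years to 2050, which is why the 1 percent pace falls so far short. The sketch below works through that arithmetic under the simplifying assumption that each year’s retrofits come from the not-yet-retrofitted stock; the rates come from the paragraph above, while the compounding assumption is only for illustration.

    ```python
    # Roughly how much of today's building stock gets retrofitted by 2050 at a given
    # annual rate, assuming the rate applies to the remaining un-retrofitted stock
    # (a simplifying assumption for illustration only).
    def share_retrofitted_by(target_year: int, annual_rate: float, start_year: int = 2023) -> float:
        years = target_year - start_year
        return 1.0 - (1.0 - annual_rate) ** years

    for rate in (0.01, 0.05):
        print(f"{rate:.0%}/yr -> {share_retrofitted_by(2050, rate):.0%} of the stock by 2050")
    # ~24% at 1%/yr versus ~75% at 5%/yr
    ```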

    Reinhart then showed a series of examples of how such conversions could take place using existing solar and heat pump technology, and depending on the configuration, how they could provide a payback to the homeowner within 10 years or less. However, without strong policy incentives the initial cost outlay for such a system, on the order of $50,000, is likely to put conversions out of reach of many people. Still, a recent survey found that 30 percent of homeowners polled said they would accept installation at current costs. While there is government money available for incentives for others, “we have to be very clever on how we spend all this money and make sure that everybody is basically benefiting,” he said.

    William Green, a professor of chemical engineering, spoke about the daunting challenge of bringing aviation to net zero. “More and more people like to travel,” he said, but that travel comes with carbon emissions that affect the climate, as well as air pollution that affects human health. The economic costs associated with these emissions, he said, are estimated at $860 per ton of jet fuel used — which is very close to the cost of the fuel itself. So the price paid by the airlines, and ultimately by the passengers, “is only about half of the true cost to society, and the other half is being borne by all of us, by the fact that it’s affecting the climate and it’s causing medical problems for people.”

    Eliminating those emissions is a major challenge, he said. Virtually all jet fuel today is fossil fuel, but airlines are starting to incorporate some biomass-based fuel, derived mostly from food waste. But even these fuels are not carbon-neutral, he said. “They actually have pretty significant carbon intensity.”

    But there are possible alternatives, he said, mostly based on using hydrogen produced by clean electricity and making fuels out of that hydrogen by reacting it, for example, with carbon dioxide. This could indeed produce a carbon-neutral fuel that existing aircraft could use, but the process is costly, requiring a great deal of hydrogen and ways of concentrating carbon dioxide. Other viable options also exist, but all would add significant expense, at least with present technology. “It’s going to cost a lot more for the passengers on the plane,” Green said. “But the society will benefit from that.”

    Increased electrification of heating and transportation to avoid the use of fossil fuels will place major demands on existing electric grids, which must constantly and delicately balance production with demand. Anuradha Annaswamy, a senior research scientist in MIT’s mechanical engineering department, said “the electric grid is an engineering marvel.” In the United States it consists of 300,000 miles of transmission lines capable of carrying 470,000 megawatts of power.

    But with a projected doubling of energy from renewable sources entering the grid by 2030, and with a push to electrify everything possible — from transportation to buildings to industry — the load is not only increasing, but the patterns of both energy use and production are changing. Annaswamy said that “with all these new assets and decision-makers entering the picture, the question is how you can use a more sophisticated information layer that coordinates how all these assets are either consuming or producing or storing energy, and have that information layer coexist with the physical layer to make and deliver electricity in all these ways. It’s really not a simple problem.”

    But there are ways of addressing these complexities. “Certainly, emerging technologies in power electronics and control and communication can be leveraged,” she said. But she added that “This is not just a technology problem, really, it is something that requires technologists, economists, and policymakers to all come together.”

    As for industrial processes, Bilge Yildiz, a professor of nuclear science and engineering and materials science and engineering, said that “the synthesis of industrial chemicals and materials constitutes about 33 percent of global CO2 emissions at present, and so our goal is to decarbonize this difficult sector.” About half of all these industrial emissions come from the production of just four materials: steel, cement, ammonia, and ethylene, so there is a major focus of research on ways to reduce their emissions.

    Most of the processes used to make these materials have changed little in more than a century, she said, and they are mostly heat-based processes that involve burning large amounts of fossil fuel. But the heat can instead be provided by renewable electricity, which can also be used in some cases to drive electrochemical reactions as a substitute for the thermal ones. Already, there are processes for making cement and steel that produce only about half the present CO2 emissions.

    The production of ammonia, which is widely used in fertilizer and as a feedstock for other bulk chemicals, accounts for more greenhouse gas emissions than that of any other chemical, and the present thermochemical process could be replaced by an electrochemical one, she said. Similarly, ethylene, a feedstock for plastics and other materials, is the second-highest emitter among chemicals, with three tons of carbon dioxide released for every ton of ethylene produced. Again, an electrochemical alternative exists, but it needs to be improved to become cost-competitive.

    As the world moves toward electrification of industrial processes to eliminate fossil fuels, the need for emissions-free sources of electricity will continue to grow. One potentially major addition to the range of carbon-free generation sources is fusion, a field in which MIT is a leader in developing a particularly promising technology that takes advantage of the unique properties of high-temperature superconducting (HTS) materials.

    Dennis Whyte, the director of MIT’s Plasma Science and Fusion Center, pointed out that despite global efforts to reduce CO2 emissions, “we use exactly the same percentage of carbon-based products to generate energy as 10 years ago, or 20 years ago.” To make a real difference in global emissions, “we need to make really massive amounts of carbon-free energy.”

    Fusion, the process that powers the sun, is a particularly promising pathway, because the fuel, derived from water, is virtually inexhaustible. By using recently developed HTS materials to generate the powerful magnetic fields needed to produce a sustained fusion reaction, the MIT-led project, which led to a spinoff company called Commonwealth Fusion Systems, was able to radically reduce the required size of a fusion reactor, Whyte explained. Using this approach, the company, in collaboration with MIT, expects to have a fusion system that produces net energy by the middle of this decade, and to be ready to build a commercial plant to produce power for the grid early in the next. Meanwhile, at least 25 other private companies are also attempting to commercialize fusion technology. “I think we can take some credit for helping to spawn what is essentially now a new industry in the United States,” Whyte said.

    Fusion offers the potential, along with existing solar and wind technologies, to provide the emissions-free power the world needs, Whyte said, but that is only half the problem; the other half is getting that power to where it’s needed, when it’s needed. “How do we adapt these new energy sources to be as compatible as possible with everything that we have already in terms of energy delivery?”

    Part of the answer, he suggested, lies in more collaborative work on issues that cut across disciplines, and in more of the kinds of cross-cutting conversations and interactions that took place in this panel discussion.

  • in

    A new mathematical “blueprint” is accelerating fusion device development

    Developing commercial fusion energy requires scientists to understand sustained processes that have never before existed on Earth. But with so many unknowns, how do we make sure we’re designing a device that can successfully harness fusion power?

    We can fill gaps in our understanding with computational tools such as algorithms and simulations that knit together experimental data and theory, allowing us to optimize fusion device designs before they are built and saving significant time and resources.

    Currently, classical supercomputers are used to simulate plasma physics and fusion energy scenarios, but addressing the many design and operating challenges that remain will require more powerful computers, a prospect of great interest to plasma researchers and physicists.

    The prospect of exponential speedups on certain classes of problems has offered plasma and fusion scientists the tantalizing possibility of vastly accelerated fusion device development. Quantum computers could reconcile a fusion device’s many design parameters — for example, vessel shape, magnet spacing, and component placement — at a greater level of detail, while also completing the tasks faster. However, upgrading to a quantum computer is no simple task.

    In a paper, “Dyson maps and unitary evolution for Maxwell equations in tensor dielectric media,” recently published in Physical Review A, Abhay K. Ram, a research scientist at the MIT Plasma Science and Fusion Center (PSFC), and his co-authors Efstratios Koukoutsis, Kyriakos Hizanidis, and George Vahala present a framework that would facilitate the use of quantum computers to study electromagnetic waves in plasma and their role in manipulating it within magnetic confinement fusion devices.

    Quantum computers excel at simulating quantum physics phenomena, but many topics in plasma physics are predicated on the classical physics model. A plasma (which is the “dielectric media” referenced in the paper’s title) consists of many particles — electrons and ions — whose collective behaviors are effectively described using classical statistical physics. In contrast, quantum effects that influence atomic and subatomic scales are averaged out in classical plasma physics.

    Furthermore, the descriptive framework of quantum mechanics is not naturally suited to plasma. In a fusion device, plasmas are heated and manipulated using electromagnetic waves, which are among the most important and ubiquitous phenomena in the universe. The behaviors of electromagnetic waves, including how waves are formed and interact with their surroundings, are described by Maxwell’s equations — a foundational component of classical plasma physics, and of physics in general. The standard form of Maxwell’s equations is not expressed in “quantum terms,” however, so implementing the equations on a quantum computer is like fitting a square peg into a round hole: it doesn’t work.
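
    For reference, the standard classical form in question, written for a source-free dielectric medium, is

$$
\nabla \times \mathbf{E} = -\,\partial_t \mathbf{B}, \qquad
\nabla \times \mathbf{H} = \partial_t \mathbf{D}, \qquad
\nabla \cdot \mathbf{D} = 0, \qquad
\nabla \cdot \mathbf{B} = 0,
$$

with the constitutive relations $\mathbf{D} = \varepsilon \mathbf{E}$ and $\mathbf{B} = \mu \mathbf{H}$; in the tensor dielectric media of the paper’s title, $\varepsilon$ is a matrix rather than a single number.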

    Consequently, for plasma physicists to take advantage of quantum computing’s power for solving problems, classical physics must be translated into the language of quantum mechanics. The researchers tackled this challenge, and in their paper they show that a Dyson map can bridge the divide between classical physics and quantum mechanics. Maps are mathematical functions that take an input from one kind of space and transform it into an output that is meaningful in a different kind of space. In the case of Maxwell’s equations, a Dyson map allows classical electromagnetic waves to be studied in the space utilized by quantum computers. In essence, it reconfigures the square peg so it fits into the round hole without compromising any physics.
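
    A simplified sketch of the idea, for the special case of a uniform, scalar dielectric (the paper’s Dyson map handles the more general tensor case): grouping the scaled fields into a single state vector,

$$
\psi = \begin{pmatrix} \sqrt{\varepsilon}\,\mathbf{E} \\ \sqrt{\mu}\,\mathbf{H} \end{pmatrix},
$$

turns Maxwell’s curl equations into a Schrödinger-like equation $i\,\partial_t \psi = \hat{H}\psi$, where $\hat{H}$ is a Hermitian operator built from the curl operation. Because $\hat{H}$ is Hermitian, the time evolution $\psi(t) = e^{-i\hat{H}t}\,\psi(0)$ is unitary, which is exactly the kind of operation a quantum computer implements natively.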

    The work also provides a blueprint for a quantum circuit in which the equations are encoded in quantum bits (“qubits”) rather than classical bits, so they can be run on quantum computers. Just as important, these blueprints can be coded and tested on classical computers.
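
    As a toy illustration of what such classical testing can look like in practice (this is not the circuit from the paper; it assumes the Qiskit and SciPy libraries and uses a stand-in two-level Hamiltonian), one can encode a small unitary evolution as a gate and simulate it with a statevector:

```python
# Toy sketch (not the paper's circuit): encode a small unitary time evolution
# as a quantum gate and simulate it classically with a statevector.
import numpy as np
from scipy.linalg import expm
from qiskit import QuantumCircuit
from qiskit.quantum_info import Operator, Statevector

H = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # Hermitian stand-in "Hamiltonian"
t = 0.3                         # evolution time
U = expm(-1j * H * t)           # unitary evolution operator exp(-iHt)

qc = QuantumCircuit(1)
qc.unitary(Operator(U), [0])    # one-qubit gate implementing U

final_state = Statevector.from_label("0").evolve(qc)
print(final_state)              # amplitudes after the evolution
```

    The same pattern scales conceptually: once the physics is expressed as a unitary evolution, the corresponding circuit can be checked on a classical simulator before any quantum hardware is involved.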

    “For years we have been studying wave phenomena in plasma physics and fusion energy science using classical techniques. Quantum computing and quantum information science is challenging us to step out of our comfort zone, thereby ensuring that I have not ‘become comfortably numb,’” says Ram, quoting a Pink Floyd song.

    The paper’s Dyson map and circuits have put quantum computing power within reach, fast-tracking an improved understanding of plasmas and electromagnetic waves, and putting us that much closer to the ideal fusion device design.