More stories


    Evaluating battery revenues for offshore wind farms using advanced modeling

    Lithium-ion battery technologies currently dominate the advanced energy storage market — a sector of growing importance as attention turns to variable renewable energy generation and grid reliability in the effort to decarbonize the global energy system. But according to MIT researchers, prevailing battery models can overestimate a battery’s revenue in an energy storage system by 35 percent.
    “Current modeling is not very representative of how these batteries actually operate,” says MIT Energy Initiative (MITEI) research scientist Apurba Sakti. “These models often do not account for degradation, or the lifetime of the batteries, which directly impacts the costs and the added value of the energy storage system.”
    To address this gap, Sakti worked with colleagues in the MIT Laboratory for Information and Decision Systems (LIDS) to investigate six mathematical representations, with increasing degrees of detail in how they capture battery degradation, to evaluate the energy- and capacity-market revenues generated by pairing a battery energy storage system (BESS) — in this case, a lithium-ion battery — with an offshore wind farm.
    Their findings were recently published in the journal Applied Energy in a paper by Sakti, the principal investigator; Mehdi Jafari, the lead author and a postdoc in MIT LIDS; and Audun Botterud, a principal research scientist in MIT LIDS with a co-appointment at Argonne National Laboratory.
    The researchers first analyzed the predominant modeling method, which assumes fixed values for elements of the battery’s performance, such as round-trip efficiency and rated power capability, and neglects the degradation caused by the battery’s capacity fade (the decrease in the charge that the battery can hold) over time and with cycling. The researchers then developed and evaluated five enhanced models that better reflect how a battery would actually operate, accounting for this capacity degradation as well as power limits due to its state of charge and efficiencies that vary with discharge power. Their investigation reveals that the potential value of a battery is directly tied to the way it cycles and discharges power.
    After comparing the five advanced models, the researchers determined that the “SUM” approach was the best choice to evaluate their case study: an offshore wind farm in New York. An important feature of this particular model is that it accounts for degradation as a sum of the capacity fades in battery cells caused by cycling (resulting from charging and discharging the battery) and calendar aging (which happens as a function of time, regardless of use). With this approach, a given battery only cycles if the revenues cover the costs of capacity fade.
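    The SUM model’s core bookkeeping, as described above, treats total capacity fade as the sum of a cycling term and a calendar term, and allows a cycle only when its revenue covers the cost of the fade it causes. A minimal sketch of that logic follows; this is not the paper’s actual formulation, and every parameter value and function name is illustrative only.

```python
# Hypothetical sketch of the "SUM" degradation bookkeeping: total capacity
# fade = cycling fade + calendar fade, and the battery only cycles when the
# revenue from a cycle covers the cost of the fade it causes.
# All numbers are invented for illustration, not taken from the paper.

CELL_COST_PER_KWH = 200.0          # replacement cost of storage capacity ($/kWh)
CYCLE_FADE_FRAC = 2e-4             # fraction of capacity lost per full cycle
CALENDAR_FADE_FRAC_PER_DAY = 5e-5  # fraction lost per day regardless of use

def degradation_cost_per_cycle(capacity_kwh: float) -> float:
    """Dollar cost of the capacity fade caused by one full cycle."""
    return CYCLE_FADE_FRAC * capacity_kwh * CELL_COST_PER_KWH

def simulate_day(capacity_kwh: float, cycle_revenue: float) -> tuple[float, bool]:
    """Advance one day: calendar fade always applies; cycling fade applies
    only if the day's cycle revenue exceeds its degradation cost."""
    cycles = cycle_revenue > degradation_cost_per_cycle(capacity_kwh)
    fade = CALENDAR_FADE_FRAC_PER_DAY * capacity_kwh
    if cycles:
        fade += CYCLE_FADE_FRAC * capacity_kwh
    return capacity_kwh - fade, cycles

cap = 1000.0  # a 1 MWh battery
cap, cycled = simulate_day(cap, cycle_revenue=60.0)  # revenue exceeds fade cost
```

    Under these invented numbers a full cycle of the 1 MWh battery costs $40 in capacity fade, so a $60 cycle revenue is worth taking; a $30 revenue would leave the battery idle apart from calendar aging.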
    Using the SUM model with price and wind data for New York during 2010-13, the researchers evaluated four battery storage and offshore wind system designs — an offshore wind farm with no BESS, a BESS located onshore, a BESS located offshore, and a hybrid system with BESSs both onshore and offshore — to assess the impact of the battery system’s location on its overall profitability. After incorporating other decision factors, such as wind curtailment, cable sizing, and dispatch of the BESS, they found that locating the battery system onshore while operating within its full state-of-charge window yields the highest revenue potential and can compensate for some of the degradation-related costs.
    “Energy storage is frequently identified as a key enabler for a large-scale expansion of renewable energy in the power grid. However, batteries are still a new asset type in the electricity system, and there are many questions about how to best use them,” says Botterud. “Our research on improved battery representation in power system optimization models enables more realistic assessments of efficient pathways toward a decarbonized energy system.”
    To this end, their analysis showed that battery revenues can be significantly overestimated when using the less-advanced model currently employed by researchers to evaluate the added value of a battery in a given energy storage system. Using the advanced model that accounted for changes in battery efficiency, the researchers demonstrated that battery revenues in the energy and capacity markets for the test case were not great enough to recoup the investment costs of the battery. The added value of a megawatt-hour (MWh) of energy storage varied from $2 to $4.50 per MWh of wind energy, leading to a break-even cost for the battery system ranging from $50 to $115 per kilowatt-hour.
    “With our advanced battery modeling approaches, the energy storage asset value can be estimated more accurately, which will help future investment and operation decisions,” says Jafari. “Moreover, accounting for the dynamic performance and degradation behavior of battery energy storage can change our assessment of its economic value and provide the opportunity for other emerging technologies, such as flow batteries or hydrogen-based storage, through more accurate comparisons.”
    Sakti adds that “With concerns around a lithium-ion battery technology lock-in [which, in essence, means that this dominant technology will drive out its competitors, as evidenced by multiple bankruptcies in the battery industry], our analyses can help investors and policy-makers understand trade-offs better, as well as inform research-level decisions. Improved accounting of life-cycle costs and benefits across multiple applications beyond the primary use of these batteries — for instance, using a battery for secondary grid-level services once it has reached its end of life in a car — can also benefit from our work.”
    In future research, the researchers plan to study a wider range of battery chemistries and their potential value compared to lithium-ion batteries. They will also build on their current work by considering other spatio-temporal variations that might affect the value of energy storage, such as geographical locations, battery applications, and alternate revenue streams.
    “Overall, we are interested in developing improved analytics for low-carbon energy systems,” says Botterud. “This includes computationally efficient algorithms that can address the variability and uncertainty in renewable resources, and also model formulations that can answer key questions about electricity market design and energy-environmental policy that affect the ongoing energy transition across the globe.”
    This research was supported by Equinor ASA through MITEI’s Low-Carbon Energy Center for Energy Storage.


    Lessons from the Clean Air Car Race 50 years later

    The year 1970 was a milestone for efforts to combat air pollution. On April 22, the first Earth Day was celebrated. The 1970 Clean Air Act was the first policy to establish federal regulations on car and industry emissions. In July, President Richard M. Nixon announced his plan to establish the U.S. Environmental Protection Agency (EPA) by the end of the year. In the midst of this progress, a team of MIT students and faculty, with assistance from Caltech, organized the Clean Air Car Race — a competition to see which of the many entrants could cover the 3,600 miles from MIT to Caltech in a fast, rally-style race while meeting stringent new emissions standards.
    “It was an untidy operation that took a heck of a lot of managing,” recalls John Heywood, professor of mechanical engineering. “It amazed me just how talented and motivated the young people who organized the race were.” Then a junior faculty member, Heywood helped the students who organized the race, and served as a chaperone for the event.
    Concerns about air pollution had been mounting for years. Thick clouds of smog hovered over major cities — something students at Caltech in Pasadena, a Los Angeles suburb, felt acutely. In 1968, MIT and Caltech challenged each other to an electric car race with MIT’s car heading west toward Pasadena and Caltech’s car having Cambridge, Massachusetts, as its destination. In Tucson, Arizona, MIT’s car broke down and Caltech was declared the victor.
    Shortly after returning from defeat, students began talk of a rematch. That’s when Robert McGregor ’69 SM ’70 entered the picture. As the only graduate student in the room, McGregor became the de facto leader and was named the chair of the organizing committee for what became the Clean Air Car Race.
    Unlike the race two years before, the organizing committee decided to open the competition up to any college that wanted to participate. As chair of the committee, McGregor took the lead on reaching out to contacts in government and industry. Quickly, the race gained the attention of the National Air Pollution Control Administration, the EPA’s predecessor.
    “The federal government was very interested in supporting these upstart students who wanted to show the auto industry that we could actually build a vehicle with the emission controls that could achieve the future standards that had been proposed by the federal government in the Clean Air Act,” says McGregor.
    Emission standards and regulations were still novel to many reluctant automotive companies. “The auto industry was in its early days of learning what it was like to be regulated, and they were under tremendous pressure. Meanwhile, the regulators were still learning how to regulate, and set realistic standards,” says Heywood.
    Several car companies also supported the Clean Air Car Race. General Motors provided vehicles to teams who wanted to either modify them as a participating vehicle in the race or use them to transport teams. Ford Motor Company offered the use of their mobile emissions laboratory to test the cars’ emissions in Cambridge and again in Pasadena.
    As the pieces began to fall into place, the organizing team established the rules of the race. Participating cars had to have four wheels, be able to carry two people, and meet the proposed 1975-76 federal emissions standards for the amount of hydrocarbons, carbon monoxide, and nitrogen oxides leaving the car’s exhaust pipe.
    Roughly 50 entrants from various universities and a few high schools entered the race with a range of vehicles. These were mainly modified internal combustion engine cars, with some electric cars powered by massive batteries, hybrid vehicles, and one car powered by a gas turbine.
    On Aug. 24, 1970, the cars set off due west from Massachusetts Avenue. The race was scheduled to last six days, with stops in Ontario, Michigan, Illinois, Oklahoma, Texas, and Arizona before crossing the finish line on Aug. 30 on Caltech’s campus in Pasadena.
    Each leg of the journey was meant to last eight to 10 hours, but the participating battery-powered electric vehicles ended up at a disadvantage. The route was outfitted with charging stations every 60 miles, and each charge took about an hour, meaning the electric cars took double the time to finish each leg, clocking in 16- to 20-hour days.
    As chaperone, Heywood would wake up before the sun each day and knock on dorm room doors to wake up the tired and still-asleep college student teams. During the day, he would crunch numbers in the back of a car to figure out each car’s fuel economy.
    “I would sit in the back of a bumpy station wagon, using my slide rule to calculate the fuel economy for each entrant,” says Heywood. “That’s what engineers do — you do what is needed regardless of the environment.”
    Almost all the entrants crossed the finish line in Pasadena. When MIT’s gas turbine car, which was driven by one of McGregor’s fraternity brothers, barreled into Caltech’s campus, it melted the finish line banner with a blast of hot exhaust gases from its chimney exhaust pipe. The judging panel declared Wayne State University’s entrant, a gasoline-engine car with a tightly controlled fuel-injection system, the winner.
    In the end, the race was less about who placed first or last and more about demonstrating that the hurdles to having cleaner emissions in cars were not as insurmountable as some in the auto industry then thought.
    “In those early days of emission regulations, having a bunch of college kids do something seemingly of their own initiative and trying new creative things really helped show Detroit the way, in a sense,” adds Heywood.
    The Clean Air Car Race had an immediate impact on policies and regulations. Participants in the race were called to testify both before state legislatures and in Washington, D.C.
    “There was this groundswell of young people participating in the debate that would confirm that where the federal government was headed with these proposed standards was not unrealistic,” says McGregor. “We were a contributing factor to the EPA being able to stick with those standards that they had proposed and getting the auto industry to comply.”
    Fifty years later, there are still lessons to be learned from the Clean Air Car Race. McGregor extols the virtue of “competitive engineering” as a way to galvanize young students into action. Heywood, meanwhile, sees parallels with current-day issues surrounding greenhouse gas emissions. He suggests that perhaps a friendly competition among talented engineering students could move the needle in the right direction, as it did 50 years ago.


    MIT News – Energy

    Recent headlines from the feed:

    • The factory of the future, batteries not included
    • For student researchers, no pause for the pandemic
    • Mobility Systems Center awards four projects for low-carbon transportation research
    • MIT researchers and Wyoming representatives explore energy and climate solutions
    • Assessing the value of battery energy storage in future power grids
    • 3 Questions: Asegun Henry on five “grand thermal challenges” to stem the tide of global warming
    • MIT Energy Conference goes virtual
    • Novel gas-capture approach advances nuclear fuel management
    • Letter from President Reif: Tackling the grand challenges of climate change
    • Covid-19 shutdown led to increased solar power output
    • Decarbonize and diversify
    • A new approach to carbon capture
    • Innovations in environmental training for the mining industry
    • Researchers find benefits of solar photovoltaics outweigh costs
    • Startup with MIT roots develops lightweight solar panels
    • A layered approach to safety
    • Controlling plasma and plasma turbulence
    • Lighting the way to better battery technology
    • Making nuclear energy cost-competitive
    • Solar energy farms could offer second life for electric vehicle batteries
    • Transportation policymaking in Chinese cities
    • The quest for practical fusion energy sources
    • A scientist turns to entrepreneurship
    • Q&A: Energy studies at MIT and the next generation of energy leaders
    • Technique could enable cheaper fertilizer production
    • Understanding how fluids heat or cool surfaces
    • Associate Professor Amy Moran-Thomas receives the 2020 Levitan Prize in the Humanities
    • Shedding light on complex power systems
    • Energy economics class inspires students to pursue clean energy careers
    • Evaluating the global energy system
    • Reducing delays in wireless networks
    • Newly discovered enzyme “square dance” helps generate DNA building blocks
    • Fusion researchers endorse push for pilot power plant in US
    • 3 Questions: Emre Gençer on the evolving role of hydrogen in the energy system
    • New approach to sustainable building takes shape in Boston
    • Making a remarkable material even better
    • A material’s insulating properties can be tuned at will
    • MIT continues to advance toward greenhouse gas reduction goals
    • Maintaining the equipment that powers our world
    • Researchers develop a roadmap for growth of new solar cells
    • Decarbonizing the making of consumer products
    • New electrode design may lead to more powerful batteries
    • Powering the planet
    • For cheaper solar cells, thinner really is better
    • Understanding combustion
    • Reducing risk, empowering resilience to disruptive global change
    • Students propose plans for a carbon-neutral campus
    • Zeroing in on decarbonization
    • Pathways to a low-carbon future
    • Preventing energy loss in windows

    MIT news feed about: Energy | Thu, 20 Aug 2020

    The factory of the future, batteries not included

    Everactive provides an industrial “internet of things” platform built on its battery-free sensors.

    Zach Winn | MIT News Office | Aug. 20, 2020

    Many analysts have predicted an explosion in the number of industrial “internet of things” (IoT) devices that will come online over the next decade. Sensors play a big role in those forecasts. Unfortunately, sensors come with their own drawbacks, many of which are due to the limited energy supply and finite lifetime of their batteries.

    Now the startup Everactive has developed industrial sensors that run around the clock, require minimal maintenance, and can last over 20 years. The company created the sensors not by redesigning its batteries, but by eliminating them altogether.

    The key is Everactive’s ultra-low-power integrated circuits, which harvest energy from sources like indoor light and vibrations to generate data. The sensors continuously send that data to Everactive’s cloud-based dashboard, which gives users real-time insights, analysis, and alerts to help them leverage the full power of industrial IoT devices.

    “It’s all enabled by the ultra-low-power chips that support continuous monitoring,” says Everactive Co-Chief Technology Officer David Wentzloff SM ’02, PhD ’07.
    “Because our source of power is unlimited, we’re not making tradeoffs like keeping radios off or doing something else [limiting] to save battery life.”

    Everactive builds finished products on top of its chips that customers can quickly deploy in large numbers. Its first product monitors steam traps, which release condensate out of steam systems. Such systems are used in a variety of industries, and Everactive’s customers include companies in sectors like oil and gas, paper, and food production. Everactive has also developed a sensor to monitor rotating machinery, like motors and pumps, that runs on the second generation of its battery-free chips.

    By avoiding the costs and restrictions associated with other sensors, the company believes it’s well-positioned to play a role in the IoT-powered transition to the factory of the future. “This is technology that’s totally maintenance free, with no batteries, powered by harvested energy, and always connected to the cloud. There’s so many things you can do with that, it’s hard to wrap your head around,” Wentzloff says.

    Breaking free from batteries

    Wentzloff and his Everactive co-founder and co-CTO Benton Calhoun SM ’02, PhD ’06 have been working on low-power circuit design for more than a decade, beginning with their time at MIT. They both did their PhD work in the lab of Anantha Chandrakasan, who is currently the Vannevar Bush Professor of Electrical Engineering and Computer Science and the dean of MIT’s School of Engineering. Calhoun’s research focused on low-power digital circuits and memory, while Wentzloff’s focused on low-power radios.

    After earning their PhDs, both men became assistant professors at the schools they attended as undergraduates — Wentzloff at the University of Michigan and Calhoun at the University of Virginia — where they still teach today.
    Even after settling in different parts of the country, they continued collaborating, applying for joint grants and building circuit-based systems that combined their areas of research. The collaboration was not an isolated incident: The founders have maintained relationships with many of their contacts from MIT.

    “To this day I stay in touch with my colleagues and professors,” Wentzloff says. “It’s a great group to be associated with, especially when you talk about the integrated circuit space. It’s a great community, and I really value and appreciate that experience and those connections that have come out of it. That’s far and away the longest impression MIT has left on my career, those people I continue to stay in touch with. We’re all helping each other out.”

    Wentzloff and Calhoun’s academic labs eventually created a battery-free physiological monitor that could track a user’s movement, temperature, heart rate, and other signals and send that data to a phone, all while running on energy harvested from body heat. “That’s when we decided we should look at commercializing this technology,” Wentzloff says. In 2014, they partnered with semiconductor industry veteran Brendan Richardson to launch the company, originally called PsiKick.

    In the beginning, a period Wentzloff describes as “three guys and a dog in a garage,” the founders sought to reimagine circuit designs that included features of full computing systems like sensor interfaces, processing power, memory, and radio signals. They also needed to incorporate energy harvesting mechanisms and power management capabilities. “We wiped the slate clean and had a fresh start,” Wentzloff recalls.

    The founders initially attempted to sell their chips to companies to build solutions on top of, but they quickly realized the industry wasn’t familiar enough with battery-free chips.
    “There’s an education level to it, because there’s a generation of engineers used to thinking of systems design with battery-operated chips,” Wentzloff says. The learning curve led the founders to start building their own solutions for customers. Today Everactive offers its sensors as part of a wider service that incorporates wireless networks and data analytics.

    The company’s sensors can be powered by small vibrations, lights inside a factory as dim as 100 lux, and heat differentials below 10 degrees Fahrenheit. The devices can sense temperature, acceleration, vibration, pressure, and more. The company says its sensors cost significantly less to operate than traditional sensors and avoid the maintenance headache that comes with deploying thousands of battery-powered devices.

    For instance, Everactive considered the cost of deploying 10,000 traditional sensors. Assuming a three-year battery life, the customer would need to replace an average of 3,333 batteries each year, which comes out to more than nine a day.

    The next technological revolution

    By saving on maintenance and replacement costs, Everactive customers are able to deploy more sensors. That, combined with the near-continuous operation of those sensors, brings a new level of visibility to operations.

    “[Removing restrictions on sensor installations] starts to give you a sixth sense, if you will, about how your overall operations are running,” Calhoun says. “That’s exciting. Customers would like to wave a magic wand and know exactly what’s going on wherever they’re interested. The ability to deploy tens of thousands of sensors gets you close to that magic wand.”

    With thousands of Everactive’s steam trap sensors already deployed, Wentzloff believes its sensors for motors and other rotating machinery will make an even bigger impact on the IoT market.
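    The 10,000-sensor maintenance comparison Everactive cites above reduces to back-of-the-envelope arithmetic; the figures come from the article, while the code itself is just an illustrative sketch.

```python
# Back-of-the-envelope check of the battery-replacement burden described
# above: 10,000 battery-powered sensors, each with a three-year battery life.

num_sensors = 10_000
battery_life_years = 3

replacements_per_year = num_sensors / battery_life_years  # about 3,333
replacements_per_day = replacements_per_year / 365        # more than 9

print(f"{replacements_per_year:.0f} batteries per year, "
      f"{replacements_per_day:.1f} per day")
```

    The result matches the article’s figures: roughly 3,333 battery swaps per year, or just over nine every day.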
    Beyond Everactive’s second generation of products, the founders say their sensors are a few years away from being translucent, flexible, and the size of a postage stamp. At that point customers will simply need to stick the sensors onto machines to start generating data. Such ease of installation and use would have implications far beyond the factory floor.

    “You hear about smart transportation, smart agriculture, etc.,” Calhoun says. “IoT has this promise to make all of our environments smart, meaning there’s an awareness of what’s going on and use of that information to have these environments behave in ways that anticipate our needs and are as efficient as possible. We believe battery-less sensing is required and inevitable to bring about that vision, and we’re excited to be a part of that next computing revolution.”

    The startup Everactive uses ultra-low-power chips to run its industrial “internet of things” platform on battery-less sensors. Image courtesy of Everactive.

    For student researchers, no pause for the pandemic

    Undergraduates Aljazzy Alahmadi, Andrea Garcia, and Quynh Nguyen are sustaining the nuclear science and engineering research mission from around the world.

    Leda Zimmerman | Department of Nuclear Science and Engineering | Aug. 18, 2020

    In mid-March, when the Covid-19 pandemic darkened MIT classrooms and labs, lights switched on for undergraduate research taking place remotely. Zooming in from time zones often distant from Cambridge, Massachusetts, many students were able to continue undergraduate research opportunities (UROPs) made possible by nuclear science and engineering faculty.

    Advancing projects begun during the January Independent Activities Period or the start of spring semester, students overcame significant obstacles to make their research experiences meaningful while working from home — whether that home was in a manicured U.S. suburban subdivision, a palm-lined street in the Middle East, or, in the case of Quynh T. Nguyen, surrounded by local rice fields in Vietnam.
    “It was tough returning to Dong Hoi City, because I thought that meant I was done with my UROP for the semester,” says the rising junior majoring in physics. Working with Assistant Professor Mingda Li, Nguyen had been investigating the thermal transport properties of materials, growing crystals in the lab. One goal of such work is optimizing heat transfer in materials to improve efficiency in energy production. “I was so grateful when Professor Li found ways for me to stay on the project from home,” he says.

    While finishing his spring classes online — a major undertaking given the 11-hour time difference and difficulties accessing MIT servers — Nguyen pivoted with enthusiasm from lab work to developing machine learning applications for the same project. “I’ve been excited about machine learning since taking a class, and so actually this UROP has allowed me to leverage my knowledge in an extremely new and interesting way for me,” says Nguyen.

    Aljazzy Alahmadi, a rising sophomore, managed to get back to Saudi Arabia the day before such international flights were halted. “I was in a UROP meeting when MIT emailed the news, and I didn’t think about anything except getting home as fast as possible,” she recalls. But soon after she settled into life in Dammam, a city of more than a million on the Persian Gulf, she was relieved to learn that she could continue her project with graduate student Saleem Aldajani, within the lab of Associate Professor Michael P. Short.

    “My work involves finding trends in the degradation of a stainless steel alloy often used in light water nuclear reactors when it’s under reactor-like thermal conditions,” she says. This kind of information might contribute to extended lifetimes for light water reactors. But after she trained in steel cutting and specialized spectroscopy techniques in the lab, her remote location necessitated a turn to data analysis instead. “I was kind of happy about this switch,” Alahmadi says.
    “When I began the project, I didn’t really grasp what it was all about — I was learning how to cut steel samples — so when I started focusing on datasets I could intellectually explore in a way I couldn’t before.”

    After she returned to her home in Katy, Texas, a small city in Houston’s shadow, Andrea Garcia, a rising sophomore, says she felt “kind of devastated.” Drawn to disciplines that would enable her to address environmental problems and climate change, Garcia had just decided to concentrate in materials science and engineering. “I had a lot of things planned for the rest of the semester,” she says, including a UROP in the Short lab.

    After hearing him lecture about the promise of fusion energy in the fall, Garcia had determined to learn more about nuclear energy more broadly. She leapt into Short’s project, spending weeks learning how to use lasers safely. “Then we got kicked out due to Covid,” says Garcia. “I thought there’d be no way for undergraduate researchers to keep doing the research, but Professor Short made it happen, offering to run experiments and send us the data.”

    Flying (mostly) solo

    Although routinely in touch with faculty and lab supervisors via email and Zoom meetings, the students were on their own for the most part during spring semester and beyond. While they found the physical isolation from a team challenging at times, the undergraduates also relished their independence. “I was analyzing data on irradiated samples of titanium aluminum metals, focused on thermal diffusivity, and was left to my own devices,” says Garcia.
    “Every week, we had to present our findings, and I came to feel a sense of ownership, that I was having an impact and that my work was achieving something.”

    Investigating electrical and thermal conductivity of crystals that feature some unique quantum properties proved fascinating to Nguyen, not least because it catalyzed him to “learn many new things related to machine learning on Coursera,” as well as to investigate domains of physics previously unfamiliar to him. He especially enjoyed prowling through vast online databases: “I find it amazing that scientists have built these repositories and made them available for everyone to access.”

    Alahmadi felt energized by the quest to find something of value in her datasets. “With this project, I felt I couldn’t leave until I reached a point of a deliverable,” she says. “I wanted to get a result, publish a paper, go to a conference — get the full experience of this.”

    Sticking with it

    Although their fall plans might be uncertain, these students remain anchored by their continuing research. Garcia, who found that she enjoyed using Python to create graphs mapping the properties of her material samples, says the experience reminded her “that computer science is a useful skill.” As a result, she hopes to bear down on her materials science major while taking more computer science courses.

    “My wildest dream, which keeps me going, is to incorporate power systems in Saudi that don’t use carbon,” Alahmadi says. She hopes to stick with her UROP, wherever she is living. “It’s taught me to open my eyes to all things so I can learn new skills, from acquiring new capabilities to make projects go faster, to collaborating well with other lab members.”

    Nguyen, who is targeting a career in applied physics, says his experience with the UROP “is invaluable for my future.” He has co-authored a scientific publication, and feels deep ties to his Cambridge-based research group.
    He has come to view this difficult period not as an obstacle, but as an opportunity. “It’s an unprecedented experience, working and communicating remotely,” he says. “We are all experiencing a painful pandemic, but as Professor Li notes, we are living in a historic time that will one day be memorialized in movies and books, so it’s not all bad.”

    MIT undergraduates (left to right) Aljazzy Alahmadi, Andrea Garcia, and Quynh Nguyen were able to continue research opportunities made possible by nuclear science and engineering faculty. Photos courtesy of the students.

    Mobility Systems Center awards four projects for low-carbon transportation research

    Topics include Covid-19 and urban mobility, strategies for electric vehicle charging networks, and infrastructure and economics for hydrogen-fueled transportation.

    Turner Jackson | MIT Energy Initiative | Aug. 18, 2020

    The Mobility Systems Center (MSC), one of the MIT Energy Initiative’s (MITEI’s) Low-Carbon Energy Centers, will fund four new research projects that will allow for deeper insights into achieving a decarbonized transportation sector.

    “Based on input from our Mobility Systems Center members, we have selected an excellent and diverse set of projects to initiate this summer,” says Randall Field, the center’s executive director. “The awarded projects will address a variety of pressing topics including the impacts of Covid-19 on urban mobility, strategies for electric vehicle charging networks, and infrastructure and economics for hydrogen-fueled transportation.”

    The projects are spearheaded by faculty and researchers from across the Institute, with experts in several fields including economics, urban planning, and energy systems.

    In addition to pursuing new avenues of research, the Mobility Systems Center also welcomes Jinhua Zhao as co-director. Zhao serves alongside Professor William H. Green, the Hoyt C. Hottel Professor in Chemical Engineering. Zhao is an associate professor in the Department of Urban Studies and Planning and the director of the JTL Urban Mobility Lab.
He succeeds Sanjay Sarma, the vice president for open learning and the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor of Mechanical Engineering.

“Jinhua already has a strong relationship with mobility research at MITEI, having been a major contributor to MITEI’s Mobility of the Future study and serving as a principal investigator for MSC projects. He will provide excellent leadership to the center,” says MITEI Director Robert C. Armstrong, the Chevron Professor of Chemical Engineering. “We also thank Sanjay for his valuable leadership during the MSC’s inaugural year, and look forward to collaborating with him in his role as vice president for open learning — an area that is vitally important in MIT’s response to research and education in the Covid-19 era.”

The impacts of Covid-19 on urban mobility

The Covid-19 pandemic has transformed all aspects of life in a remarkably short amount of time, including how, when, and why people travel. In addition to becoming the center’s new co-director, Zhao will lead one of the MSC’s new projects to identify how Covid-19 has affected the use of, preferences toward, and energy consumption of different modes of urban transportation, including driving, walking, cycling, and, most dramatically, ridesharing services and public transit.

Zhao describes four primary objectives for the project. The first is to quantify large-scale behavioral and preference changes in response to the pandemic, tracking how these change from the beginning of the outbreak through the medium-term recovery period. Next, the project will break down these changes by sociodemographic group, with a particular emphasis on low-income and marginalized communities. The project will then use these insights to posit how changes to infrastructure, equipment, and policies could help shape travel recovery to be more sustainable and equitable.
Finally, Zhao and his research team will translate these behavioral changes into estimates of energy consumption and carbon dioxide emissions.

“We make two distinctions: first, between impacts on the amount of travel (e.g., number of trips) and impacts on the type of travel (e.g., mixture of different travel modes); and second, between temporary shocks and longer-term structural changes,” says Zhao. “Even when the coronavirus is no longer a threat to public health, we expect to see lasting effects on activity, destination, and mode preferences. These changes, in turn, affect energy consumption and emissions from the transportation sector.”

The economics of electric vehicle charging

In the transition toward a low-carbon transportation system, refueling infrastructure is crucial for the viability of any alternative-fuel vehicle. Jing Li, an assistant professor in the MIT Sloan School of Management, aims to develop a model of consumer vehicle and travel choices based on data on travel patterns, electric vehicle (EV) charging demand, and EV adoption.

Li’s research team will take a two-pronged approach. First, they will quantify the value that each charging location provides to the rest of the refueling network, which may be greater than that location’s individual profitability due to network spillovers. Second, they will simulate the profits of EV charging networks and the adoption rates of EVs under different pricing and location strategies.

“We hypothesize that some charging locations may not be privately profitable, but would be socially valuable. If so, then a charging network may increase profits by subsidizing entry at ‘missing’ locations that are underprovided by the market,” she says. If proven correct, this research could be valuable in making EVs accessible to broader portions of the population.
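The "network spillover" hypothesis can be made concrete with a toy calculation. The sketch below is illustrative only and is not Li's model: the stations (S1, S2, S3), routes, trip counts, fares, and costs are all invented. The idea is that a route is drivable only if every charging stop on it exists, so one station's presence can unlock revenue at other stations.

```python
# Toy illustration (not the MIT Sloan model): network spillovers in an
# EV charging network. A route is usable only if every charging stop on
# it is open, so a station can enable revenue at *other* stations.
# All names and numbers below are hypothetical.

routes = {
    # route: (stations required, trips per day, fare per trip per station)
    "A-B": (["S1"], 100, 5.0),
    "A-C": (["S1", "S2"], 40, 5.0),
    "A-D": (["S1", "S2", "S3"], 10, 5.0),
}
station_cost = {"S1": 300.0, "S2": 150.0, "S3": 60.0}  # daily fixed cost

def network_profit(open_stations):
    """Total daily profit of the network for a given set of open stations."""
    revenue = sum(
        trips * fare * len(stops)
        for stops, trips, fare in routes.values()
        if all(s in open_stations for s in stops)  # route needs every stop
    )
    return revenue - sum(station_cost[s] for s in open_stations)

all_stations = {"S1", "S2", "S3"}
for s in sorted(all_stations):
    # Standalone profitability: the station's own revenue minus its own cost.
    own_rev = sum(
        trips * fare
        for stops, trips, fare in routes.values()
        if s in stops
    )
    standalone = own_rev - station_cost[s]
    # Network value: how much total profit drops if this station closes.
    network_value = network_profit(all_stations) - network_profit(all_stations - {s})
    print(f"{s}: standalone profit {standalone:+.0f}, network value {network_value:+.0f}")
```

With these invented numbers, S3 loses money on its own but raises total network profit, which is exactly the case Li describes where subsidizing entry at a "missing" location could pay off for the network as a whole.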
Cost reduction and emissions savings strategies for hydrogen mobility systems

Hydrogen-based transportation and other energy services have long been discussed, but what role will they play in a clean energy transition? Jessika Trancik, an associate professor of energy studies in the Institute for Data, Systems, and Society, will examine and identify cost-reducing and emissions-saving mechanisms for hydrogen-fueled mobility services. She plans to analyze production and distribution scenarios, evolving technology costs, and the lifecycle greenhouse gas emissions of hydrogen-based mobility systems, considering both travel activity patterns and fluctuations in the primary energy supply for hydrogen production.

“Modeling the mechanisms through which the design of hydrogen-based mobility systems can achieve lower costs and emissions can help inform the development of future infrastructure,” says Trancik. “Models and theory to inform this development can have a significant impact on whether or not hydrogen-based systems succeed in contributing measurably to the decarbonization of the transportation sector.”

The goals for the project are threefold: quantifying the emissions and costs of hydrogen production and storage pathways, with a focus on the potential use of excess renewable energy; modeling the costs and requirements of the distribution and refueling infrastructure for different forms of transportation, from personal vehicles to long-haul trucking, based on existing and projected demand; and modeling the costs and emissions associated with the use of hydrogen-fueled mobility services.

Analysis of forms of hydrogen for use in transportation

MITEI research scientist Emre Gençer will lead a team including Yang Shao-Horn, the W.M. Keck Professor of Energy in the Department of Materials Science and Engineering, and Dharik Mallapragada, a MITEI research scientist, to assess the alternative forms of hydrogen that could serve the transportation sector.
This project will develop an end-to-end techno-economic and greenhouse gas emissions analysis of hydrogen-based energy supply chains for road transportation. The analysis will focus on two classes of supply chains: pure hydrogen (transported as a compressed gas or cryogenic liquid) and cyclic supply chains (based on liquid organic hydrogen carriers for powering on-road transportation). The low energy density of gaseous hydrogen is currently a barrier to the large-scale deployment of hydrogen-based transportation; liquid carriers are a potential solution, providing an energy-dense means of storing and delivering hydrogen fuel.

The scope of the analysis will include the generation, storage, distribution, and use of hydrogen, as well as the carrier molecules that are used in the supply chain. Additionally, the researchers will estimate the economic and environmental performance of various technology options across the entire supply chain.

“Hydrogen has long been discussed as a fuel of the future,” says Shao-Horn. “As the energy transition progresses, opportunities for carbon-free fuels will only grow throughout the energy sector. Thorough analyses of hydrogen-based technologies are vital for providing information necessary to a greener transportation and energy system.”

Broadening MITEI’s mobility research portfolio

The mobility sector needs a multipronged approach to mitigate its increasing environmental impact. The four new projects will complement the MSC’s current portfolio of research projects, which includes an evaluation of operational designs for highly responsive urban last-mile delivery services; a techno-economic assessment of options surrounding long-haul road freight; an investigation of tradeoffs between data privacy and performance in shared mobility services; and an examination of mobility-as-a-service and its implications for private car ownership in U.S. cities.
“The pressures to adapt our transportation systems have never been greater, with the Covid-19 crisis and increasing environmental concerns. While new technologies, business models, and governmental policies present opportunities to advance, research is needed to understand how they interact with one another and help to shape our mobility patterns,” says Field. “We are very excited to have such a strong breadth of projects to contribute multidisciplinary insights into the evolution of a cleaner, more sustainable mobility future.”

The MIT Energy Initiative’s Mobility Systems Center has selected four new low-carbon transportation research projects to add to its growing portfolio. Photo: Benjamin Cruz

Members of Wyoming’s government and public university met with MIT researchers to discuss climate-friendly economic growth. Tue, 11 Aug 2020 00:00:00 -0400 Environmental Solutions Initiative

The following is a joint release from the MIT Environmental Solutions Initiative and the office of Wyoming Governor Mark Gordon.

The State of Wyoming supplies 40 percent of the coal used to power the country’s electric grids. The production of coal and other energy resources contributes over half of the state’s revenue, funding the government and many of the social services — including K-12 education — that residents rely on. With the consumption of coal in long-term decline, decreased revenues from oil and natural gas, and growing concerns about carbon dioxide (CO2) emissions, the state is actively looking at how to adapt to a changing marketplace.

Recently, representatives from the Wyoming Governor’s Office, the University of Wyoming School of Energy Resources, and the Wyoming Energy Authority met with faculty and researchers from MIT in a virtual, two-day discussion of avenues for the state to strengthen its energy economy while lowering CO2 emissions.
“This moment in time presents us with an opportunity to seize: creating a strong economic future for the people of Wyoming while protecting something we all care about — the climate,” says Wyoming Governor Mark Gordon. “Wyoming has tremendous natural resources that create thousands of high-paying jobs. This conversation with MIT allows us to consider how we use our strengths and adapt to the changes that are happening nationally and globally.”

The two dozen participants from Wyoming and MIT discussed pathways for long-term economic growth in Wyoming, given the global need to reduce carbon dioxide emissions. The wide-ranging and detailed conversation covered topics such as the future of carbon capture technology, hydrogen, and renewable energy; using coal for materials and advanced manufacturing; climate policy; and how communities can adapt and thrive in a changing energy marketplace.

The discussion paired MIT’s global leadership in technology development, economic modeling, and low-carbon energy research with Wyoming’s unique competitive advantages: its geology, which provides vast underground storage potential for CO2; its existing energy and pipeline infrastructure; and the tight bonds between business, government, and academia.

“Wyoming’s small population and statewide support of energy technology development is an advantage,” says Holly Krutka, executive director of the University of Wyoming’s School of Energy Resources. “Government, academia, and industry work very closely together here to scale up technologies that will benefit the state and beyond. We know each other, so we can get things done and get them done quickly.”

“There’s strong potential for MIT to work with the State of Wyoming on technologies that could not only benefit the state, but also the country and the rest of the world as we combat the urgent crisis of climate change,” says Bob Armstrong, director of the MIT Energy Initiative, who attended the forum.
“It’s a very exciting conversation.”

The event was convened by the MIT Environmental Solutions Initiative as part of its Here & Real project, which works with regions in the United States to help further initiatives that are both climate-friendly and economically just. “At MIT, we are focusing our attention on technologies that combat the challenge of climate change — but also, with an eye toward not leaving people behind,” says Maria Zuber, MIT’s vice president for research and the E. A. Griswold Professor of Geophysics.

“It is inspiring to see Wyoming’s state leadership seriously committed to finding solutions for adapting the energy industry, given what we know about the risks of climate change,” says Laur Hesse Fisher, director of the Here & Real project. “Their determination to build an economically and environmentally sound future for the people of Wyoming has been evident in our discussions, and I am excited to see this conversation continue and deepen.”

The Wyoming State Capitol in Cheyenne

Storage value increases as variable renewable energy supplies an increasing share of electricity, but storage cost declines are needed to realize full potential. Wed, 12 Aug 2020 00:00:00 -0400 Kathryn Luu | MIT Energy Initiative

In the transition to a decarbonized electric power system, variable renewable energy (VRE) resources such as wind and solar photovoltaics play a vital role due to their availability, scalability, and affordability. However, the degree to which VRE resources can be successfully deployed to decarbonize the electric power system hinges on the future availability and cost of energy storage technologies. In a paper recently published in Applied Energy, researchers from MIT and Princeton University examine battery storage to determine the key drivers that impact its economic value, how that value might change with increasing deployment over time, and the implications for the long-term cost-effectiveness of storage.
“Battery storage helps make better use of electricity system assets, including wind and solar farms, natural gas power plants, and transmission lines, and that can defer or eliminate unnecessary investment in these capital-intensive assets,” says Dharik Mallapragada, the paper’s lead author. “Our paper demonstrates that this ‘capacity deferral,’ or substitution of batteries for generation or transmission capacity, is the primary source of storage value.”

Other sources of storage value include providing operating reserves to electricity system operators, avoiding the fuel cost and wear and tear incurred by cycling gas-fired power plants on and off, and shifting energy from low-price periods to high-value periods — but the paper showed that these sources are secondary in importance to the value from avoiding capacity investments.

For their study, the researchers — Mallapragada, a research scientist at the MIT Energy Initiative; Nestor Sepulveda SM ’16, PhD ’20, a postdoc at MIT who was a MITEI researcher and nuclear science and engineering student at the time of the study; and fellow former MITEI researcher Jesse Jenkins SM ’14, PhD ’18, an assistant professor of mechanical and aerospace engineering and the Andlinger Center for Energy and the Environment at Princeton University — use a capacity expansion model called GenX to find the least expensive ways of integrating battery storage into a hypothetical low-carbon power system. They studied the role of storage in two variants of the power system, populated with load and VRE availability profiles consistent with the U.S. Northeast (North) and Texas (South) regions.

The paper found that in both regions, the value of battery energy storage generally declines with increasing storage penetration. “As more and more storage is deployed, the value of additional storage steadily falls,” explains Jenkins.
“That creates a race between the declining cost of batteries and their declining value, and our paper demonstrates that the cost of batteries must continue to fall if storage is to play a major role in electricity systems.”

The study’s key findings include:

- The economic value of storage rises as VRE generation provides an increasing share of the electricity supply.
- The economic value of storage declines as storage penetration increases, due to competition between storage resources for the same set of grid services.
- As storage penetration increases, most of its economic value is tied to its ability to displace the need for investing in both renewable and natural gas-based generation and transmission capacity.
- Without further cost reductions, a relatively small amount (4 percent of peak demand) of short-duration storage (energy capacity of two to four hours of operation at peak power) is cost-effective in grids where 50-60 percent of the electricity supply comes from VRE generation. “The picture is more favorable to storage adoption if future cost projections ($150 per kilowatt-hour for four-hour storage) are realized,” notes Mallapragada.

Relevance to policymakers

The results of the study highlight the importance of reforming electricity market structures or contracting practices to enable storage developers to monetize the value from substituting generation and transmission capacity — a central component of their economic viability. “In practice, there are few direct markets to monetize the capacity substitution value that is provided by storage,” says Mallapragada. “Depending on their administrative design and market rules, capacity markets may or may not adequately compensate storage for providing energy during peak load periods.”

In addition, Mallapragada notes that developers and integrated utilities in regulated markets can implicitly capture capacity substitution value through the integrated development of wind, solar, and energy storage projects.
Recent project announcements support the observation that this may be a preferred method for capturing storage value.

Implications for the low-carbon energy transition

The economic value of energy storage is closely tied to other major trends impacting today’s power system, most notably the increasing penetration of wind and solar generation. However, in some cases, the continued decline of wind and solar costs could negatively impact storage value, which could create pressure to reduce storage costs in order to remain cost-effective.

“It is a common perception that battery storage and wind and solar power are complementary,” says Sepulveda. “Our results show that is true, and that all else equal, more solar and wind means greater storage value. That said, as wind and solar get cheaper over time, that can reduce the value storage derives from lowering renewable energy curtailment and avoiding wind and solar capacity investments. Given the long-term cost declines projected for wind and solar, I think this is an important consideration for storage technology developers.”

The relationship between wind and solar cost and storage value is even more complex, the study found. “Since storage derives much of its value from capacity deferral, going into this research, my expectation was that the cheaper wind and solar gets, the lower the value of energy storage will become, but our paper shows that is not always the case,” explains Mallapragada. “There are some scenarios where other factors that contribute to storage value, such as increases in transmission capacity deferral, outweigh the reduction in wind and solar deferral value, resulting in higher overall storage value.”

Battery storage is increasingly competing with natural gas-fired power plants to provide reliable capacity for peak demand periods, but the researchers also find that adding 1 megawatt (MW) of storage power capacity displaces less than 1 MW of natural gas generation.
The reason: To shut down 1 MW of gas capacity, storage must not only provide 1 MW of power output, but also be capable of sustaining production for as many hours in a row as the gas capacity operates. That means many hours of energy storage capacity (megawatt-hours) are needed as well. The study also finds that this capacity substitution ratio declines as storage tries to displace more gas capacity.

“The first gas plant knocked offline by storage may only run for a couple of hours, one or two times per year,” explains Jenkins. “But the 10th or 20th gas plant might run 12 or 16 hours at a stretch, and that requires deploying a large energy storage capacity for batteries to reliably replace gas capacity.”

Given the importance of energy storage duration to gas capacity substitution, the study finds that longer storage durations (the number of hours storage can operate at peak capacity) of eight hours generally displace more marginal gas capacity than storage with two hours of duration. However, the additional system value from longer durations does not outweigh the additional cost of the storage capacity, the study finds.

“From the perspective of power system decarbonization, this suggests the need to develop cheaper energy storage technologies that can be cost-effectively deployed for much longer durations, in order to displace dispatchable fossil fuel generation,” says Mallapragada.

To address this need, the team is preparing to publish a follow-up paper that provides the most extensive evaluation to date of the potential role and value of long-duration energy storage technologies. “We are developing novel insights that can guide the development of a variety of different long-duration energy storage technologies and help academics, private-sector companies and investors, and public policy stakeholders understand the role of these technologies in a low-carbon future,” says Sepulveda.
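The duration logic described here can be sketched numerically. The toy model below is not the study's GenX simulation; the plant sizes and run lengths are invented for illustration. It encodes one assumption: a gas plant can be retired only if batteries can match its power output for its longest stretch of consecutive operating hours.

```python
# Toy sketch of the gas "capacity substitution ratio" idea (hypothetical
# numbers only; this is not the GenX model used in the study).

# Invented fleet, ordered from rarely run peakers to long-running units:
# (capacity in MW, longest consecutive run in hours)
GAS_PLANTS = [(100, 2), (100, 4), (100, 8), (100, 12), (100, 16)]

def gas_displaced(storage_power_mw, storage_duration_h):
    """MW of gas capacity a battery fleet can displace, retiring the
    shortest-running plants first."""
    power_left = storage_power_mw
    energy_left = storage_power_mw * storage_duration_h  # MWh available
    displaced = 0
    for cap_mw, run_h in GAS_PLANTS:
        # Retiring this plant requires both matching power and enough
        # energy to cover its longest run.
        if cap_mw <= power_left and cap_mw * run_h <= energy_left:
            power_left -= cap_mw
            energy_left -= cap_mw * run_h
            displaced += cap_mw
    return displaced

# The substitution ratio (MW of gas displaced per MW of storage) falls
# as storage grows, because the remaining plants run for longer stretches.
for mw in (100, 300, 600):
    ratio = gas_displaced(mw, 4) / mw
    print(f"{mw} MW of 4-hour storage displaces {gas_displaced(mw, 4)} MW of gas "
          f"(ratio {ratio:.2f})")
```

With these invented numbers, the ratio drops from 1.0 at 100 MW of storage to 0.5 at 600 MW, mirroring Jenkins's point that the later plants "run 12 or 16 hours at a stretch" and so demand far more energy capacity per megawatt displaced.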
This research was supported by General Electric through the MIT Energy Initiative’s Electric Power Systems Low-Carbon Energy Center.

MIT and Princeton University researchers find that the economic value of storage increases as variable renewable energy generation (from sources such as wind and solar) supplies an increasing share of electricity supply, but storage cost declines are needed to realize full potential.

“Our mission here is to save humanity from extinction due to climate change,” says MIT professor. Mon, 10 Aug 2020 11:00:00 -0400 Jennifer Chu | MIT News Office

More than 90 percent of the world’s energy use today involves heat, whether for producing electricity, heating and cooling buildings and vehicles, manufacturing steel and cement, or other industrial activities. Collectively, these processes emit a staggering amount of greenhouse gases into the environment each year. Reinventing the way we transport, store, convert, and use thermal energy would go a long way toward avoiding a global rise in temperature of more than 2 degrees Celsius — a critical increase that is predicted to tip the planet into a cascade of catastrophic climate scenarios.

But, as three thermal energy experts write in a letter published today in Nature Energy, “Even though this critical need exists, there is a significant disconnect between current research in thermal sciences and what is needed for deep decarbonization.” In an effort to motivate the scientific community to work on climate-critical thermal issues, the authors have laid out five thermal energy “grand challenges,” or broad areas where significant innovations need to be made in order to stem the rise of global warming.

MIT News spoke with Asegun Henry, the lead author and the Robert N. Noyce Career Development Associate Professor in the Department of Mechanical Engineering, about this grand vision.
Q: Before we get into the specifics of the five challenges you lay out, can you say a little about how this paper came about, and why you see it as a call to action?

A: This paper was born out of this really interesting meeting, where my two co-authors and I were asked to meet with Bill Gates and teach him about thermal energy. We did a several-hour session with him in October of 2018, and when we were leaving, at the airport, we all agreed that the message we shared with Bill needs to be spread much more broadly. This particular paper is about thermal science and engineering specifically, but it’s an interdisciplinary field with lots of intersections.

The way we frame it, this paper is about five grand challenges that, if solved, would literally alter the course of humanity. It’s a big claim — but we back it up. And we really need this to be declared as a mission, similar to the declaration that we were going to put a man on the moon, where you saw this concerted effort among the scientific community to achieve that mission. Our mission here is to save humanity from extinction due to climate change. The mission is clear. And this is a subset of five problems that will get us the majority of the way there, if we can solve them. Time is running out, and we need all hands on deck.

Q: What are the five thermal energy challenges you outline in your paper?

A: The first challenge is developing thermal storage systems for the power grid, electric vehicles, and buildings. Take the power grid: There is an international race going on to develop a grid storage system to store excess electricity from renewables so you can use it at a later time. This would allow renewable energy to penetrate the grid. If we can get to a place of fully decarbonizing the grid, that alone reduces carbon dioxide emissions from electricity production by 25 percent. And the beauty of that is, once you decarbonize the grid you open up decarbonizing the transportation sector with electric vehicles.
Then you’re talking about a 40 percent reduction of global carbon emissions.

The second challenge is decarbonizing industrial processes, which contribute 15 percent of global carbon dioxide emissions. The big actors here are cement, steel, aluminum, and hydrogen. Some of these industrial processes intrinsically involve the emission of carbon dioxide, because the reaction itself has to release carbon dioxide for it to work, in the current form. The question is, is there another way? Either we think of another way to make cement, or come up with something different. It’s an extremely difficult challenge, but there are good ideas out there, and we need way more people thinking about this.

The third challenge is solving the cooling problem. Air conditioners and refrigerators have chemicals in them that are very harmful to the environment, 2,000 times more harmful than carbon dioxide on a molar basis. If the seal breaks and that refrigerant gets out, that little bit of leakage will cause global warming to shift significantly. When you account for India and other developing nations that are now getting access to electricity infrastructures to run AC systems, the leakage of these refrigerants will become responsible for 15 to 20 percent of global warming by 2050.

The fourth challenge is long-distance transmission of heat. We transmit electricity because it can be transmitted with low loss, and it’s cheap. The question is, can we transmit heat like we transmit electricity? There is an overabundance of waste heat available at power plants, and the problem is, where the power plants are and where people live are two different places, and we don’t have a connector to deliver heat from these power plants, which is literally wasted. You could satisfy the entire residential heating load of the world with a fraction of that waste heat. What we don’t have is the wire to connect them. And the question is, can someone create one?
The last challenge is variable conductance building envelopes. There are some demonstrations that show it is physically possible to create a thermal material, or a device, that will change its conductance, so that when it’s hot, it can block heat from getting through a wall, but when you want it to, you could change its conductance to let the heat in or out. We’re far away from having a functioning system, but the foundation is there.

Q: You say that these five challenges represent a new mission for the scientific community, similar to the mission to land a human on the moon, which came with a clear deadline. What sort of timetable are we talking about here, in terms of needing to solve these five thermal problems to mitigate climate change?

A: In short, we have about 20 to 30 years of business as usual before we end up on an inescapable path to an average global temperature rise of over 2 degrees Celsius. This may seem like a long time, but it’s not when you consider that it took natural gas 70 years to become 20 percent of our energy mix. So imagine that now we have to not just switch fuels, but do a complete overhaul of the entire energy infrastructure in less than one-third the time. We need dramatic change, not yesterday, but years ago. So every day I fear we will do too little too late, and we as a species may not survive Mother Earth’s clapback.

MIT’s Asegun Henry on tackling five “grand thermal challenges” to stem the global warming tide: “Our mission here is to save humanity from extinction due to climate change.” Portrait photo courtesy of MIT MechE.

Annual student-run energy conference pivots to successful online event with short notice in response to the coronavirus.
Fri, 07 Aug 2020 16:55:00 -0400 Turner Jackson | MIT Energy Initiative

For the past 14 years, the MIT Energy Conference — a two-day event organized by energy students — has united students, faculty, researchers, and industry representatives from around the world to discuss cutting-edge developments in energy. Under the supervision of Thomas “Trey” Wilder, an MBA candidate at the MIT Sloan School of Management, and a large team of student event organizers, the final pieces for the 2020 conference were falling into place by early March — and then the Covid-19 pandemic hit the United States. As the Institute canceled in-person events to reduce the spread of the virus, much of the planning that had gone into hosting the conference in its initial format was upended. The Energy Conference team had less than a month to transition the entire event — scheduled for early April — online.

During the conference’s opening remarks, Wilder recounted the month leading up to the event. “Coincidentally, the same day that we received the official notice that all campus events were canceled, we had a general body Energy Club meeting,” says Wilder. “All the leaders looked at each other in disbelief — seeing a lot of the work that we had put in for almost a year now seemingly go down the drain. We decided that night to retain whatever value we could find from this event.”

The team immediately started contacting vendors and canceling orders, issuing refunds to guests, and informing panelists and speakers about the conference’s new format. “One of the biggest issues was getting buy-in from the speakers. Everyone was new to this virtual world back at the end of March. Our speakers didn’t know what this was going to look like, and many backed out,” says Wilder. The team worked hard to find new speakers, with one even being brought on 12 hours before the start of the event.
Another challenge posed by taking the conference virtual was learning the ins and outs of running a Zoom webinar in a remarkably short time frame. “With the webinar, there are so many functions that the host controls that really affect the outcome of the event. Similarly, the speakers didn’t quite know how to operate it, either.”

In spite of the multitude of challenges posed by switching to an online format on a tight deadline, this year’s coordinating team managed to pull off an incredibly informative and timely conference that reached a much larger audience than in years past. This was the first year the conference was offered for free online, which allowed over 3,500 people globally to tune in — a marked increase from the 500 attendees planned for the original, in-person event.

Over the course of two days, panelists and speakers discussed a wide range of energy topics, including electric vehicles, energy policy, and the future of utilities. The three keynote speakers were Daniel M. Kammen, a professor of energy and the chair of the Goldman School of Public Policy at the University of California at Berkeley; Rachel Kyte, the dean of the Tufts Fletcher School of Law and Diplomacy; and John Deutch, Institute Professor of Chemistry at MIT.

Many speakers modified their presentations to address Covid-19 and how it relates to energy and the environment. For example, Kammen adjusted his address to cover what those who are working to address the climate emergency can learn from the Covid-19 pandemic. He emphasized the importance of individual actions for both the climate crisis and Covid-19; how global supply chains are vulnerable in a crowded, denuded planet; and how there is no substitute for thorough research and education when tackling these issues.

Wilder credits the team of dedicated, hardworking energy students as the most important contributors to the conference’s success.
A couple of notable examples include Joe Connelly, an MBA candidate, and Leah Ellis, a materials science and engineering postdoc, who together managed the Zoom operations during the conference. They ensured that the panels and presentations flowed seamlessly. Anna Sheppard, another MBA candidate, live-tweeted throughout the conference, managed the YouTube stream, and responded to emails during the event, with assistance from Michael Cheng, a graduate student in the Technology and Policy Program. Wilder says MBA candidate Pervez Agwan “was the Swiss Army knife of the group”; he worked on everything from marketing to tickets to operations — and, because he had a final exam on the first day of the conference, Agwan even pulled an all-nighter to ensure that the event and team were in good shape. “What I loved most about this team was that they were extremely humble and happy to do the dirty work,” Wilder says. “Everyone was content to put their head down and grind to make this event great. They did not desire praise or accolades, and are therefore worthy of both.” The 2020 MIT Energy Conference organizers. Thomas “Trey” Wilder (bottom row, fourth from left), an MBA candidate at the MIT Sloan School of Management, spearheaded the organization of this year’s conference, which had less than a month to transition to a virtual event. Image: Trey Wilder Multidisciplinary team uses metal organic frameworks to extract radioactive krypton from fuel-reprocessing gases. Fri, 24 Jul 2020 15:25:01 -0400 Peter Dunn | Department of Nuclear Science and Engineering Nuclear energy provides about 20 percent of the U.S. electricity supply, and over half of its carbon-free generating capacity. Operations of commercial nuclear reactors produce small quantities of spent fuel, which in some countries is reprocessed to extract materials that can be recycled as fuel in other reactors. 
Key to the improvement of the economics of this fuel cycle is the capture of gaseous radioactive products of fission such as krypton-85 (85Kr). Therefore, developing efficient technology to capture and secure 85Kr from the mix of effluent gases would represent a significant improvement in the management of used nuclear fuels. One promising avenue is the adsorption of gases into an advanced class of soft crystalline materials, metal organic frameworks (MOFs), which have extremely high porosity and enormous internal surface area and can incorporate a vast array of organic and inorganic components. Recently published research by a multidisciplinary group that includes members of MIT’s Department of Nuclear Science and Engineering (NSE) represents one of the first steps toward practical application of MOFs for nuclear fuel management, with novel findings on efficacy and radiation resistance, and an initial concept for implementation. One fundamental challenge is that the mix of gases produced during fuel reprocessing is rich in oxygen and nitrogen, and existing methods tend to collect them as well as the part-per-million quantities of krypton that represent the highest risk. This reduces the purity of the collected 85Kr and increases the waste volume. Moreover, existing krypton extraction methods rely on costly and complex cryogenic processes. The group’s study, published in the journal Nature Communications, evaluated a series of ultra-microporous MOFs with different metal centers including zinc, cobalt, nickel, and iron, and found that a copper-containing crystal, SIFSIX-Cu, showed good promise. 
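The separation challenge here can be made concrete with a back-of-envelope calculation. The sketch below estimates single-stage enrichment of a trace component by a selective sorbent; the feed concentration and selectivity values are hypothetical illustrations, not figures from the study.

```python
# Back-of-envelope single-stage enrichment estimate for trace krypton.
# All numeric values below are hypothetical illustrations, not study data.

def adsorbed_fraction(y_kr: float, selectivity: float) -> float:
    """Ideal adsorbed-phase mole fraction of Kr for a sorbent with a given
    Kr-over-carrier selectivity S, assuming x_Kr/x_air = S * y_Kr/y_air."""
    ratio = selectivity * y_kr / (1.0 - y_kr)
    return ratio / (1.0 + ratio)

feed_ppm = 40e-6           # illustrative part-per-million Kr in the effluent
for s in (10, 100, 1000):  # hypothetical Kr-over-air selectivities
    x = adsorbed_fraction(feed_ppm, s)
    print(f"S = {s:>4}: adsorbed phase ~ {x * 1e6:.0f} ppm Kr "
          f"({x / feed_ppm:.0f}x enrichment)")
```

Even a modestly selective sorbent enriches a part-per-million component dramatically, which is why selectivity against the abundant nitrogen and oxygen matters more than raw capacity in this setting.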
To harness its favorable combination of radiation stability and selective adsorption, while also minimizing the volume of waste, the team proposed a two-step treatment process, in which an initial bed of the material is used to adsorb xenon and carbon dioxide from the effluent gas mixture, after which the gas is transferred to a second bed that selectively adsorbs krypton but not nitrogen or oxygen. “If one day we want to treat the spent fuels, which in the U.S. are currently stored in pools and dry casks at the nuclear power plant sites, we need to handle the volatile radionuclides,” explains Ju Li, MIT’s Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering. “Physisorption of krypton and xenon is a good approach, and we were very happy to collaborate with this large team on the MOF approach.” MOFs have been seen as a possible solution for applications in many fields, but this research marks the first systematic study of their applicability in the nuclear sector, and the effectiveness of different metal centers on MOF radiation stability, notes Sameh K. Elsaidi, a research scientist at the U.S. Department of Energy’s National Energy Technology Laboratory and the paper’s lead author. “There are already over 60,000 different MOFs, and more are being developed every day, so there are a lot to choose from,” says Elsaidi. “The selection of one for 85Kr separation during reprocessing is based on several essential criteria. During our long search for porous materials that can meet these criteria, we found that a class of microporous MOFs called SIFSIX-3-M can efficiently reduce the volume of nuclear waste by separating 85Kr in more pure form from the other nonradioactive gases. However, in order to be useful for practical separation of 85Kr, these materials must be resistant to radiation under reprocessing conditions. “This is a first look at candidates that can meet the criteria. 
I feel very lucky to be working with Ju and [MIT NSE postdoc Ahmed Sami Helal] as we start to evaluate whether these materials can be used in the real world. This project was a very good example of how collaborative work can lead to better fundamental understanding, and there’s a lot down the road that we can do together,” adds Elsaidi. Helal notes, “Studying the effect of high-energy ionizing radiation, including β-rays and γ-rays, on the stability of MOFs is a very important factor in determining whether the MOFs can be used for capture of fission gases from used fuel. This work is the first to investigate the radiolytic stability of MOFs at radiation doses relevant to practical Xe/Kr separation at fuel reprocessing plants.” Developing a practical adsorption process is a complex task, requiring capabilities from multiple disciplines including chemical engineering, materials science, and nuclear engineering. The research leveraged several specialized Institute resources, including the MIT gamma irradiation facility (managed by the MIT Radiation Protection Program) and the High Voltage Research Laboratory, which was used for beta irradiation measurements with assistance from Mitchell Galanek of the MIT Office of Environment, Health and Safety. Those efforts, in conjunction with X-ray diffraction studies and electronic structure modeling, “were fascinating and helped us learn a lot about MOFs and build our understanding of non-neutronic radiation resistance of this new class of materials,” says Li. “That could be useful in other applications in the future,” including detectors. In addition to MIT and the National Energy Technology Laboratory, collaborators on the project included the Pacific Northwest National Laboratory (Praveen Thallapally), the University of Pittsburgh (Mona Mohamed), and the University of South Florida (Brian Space and Tony Pham). Programmatic funding was provided by the U.S. 
Department of Energy’s Office of Nuclear Energy, with additional support from the National Science Foundation. Computational resources were made available via an XSEDE Grant and by the University of South Florida. Separation of 85Kr from spent nuclear fuel by a highly selective metal organic framework. Image: Mike Gipple/NETL Thu, 23 Jul 2020 15:18:29 -0400 MIT News Office The following letter was sent to the MIT community today by President L. Rafael Reif. To the members of the MIT community, I am delighted to share an important step in MIT’s ongoing efforts to take action against climate change. Thanks to the thoughtful leadership of Vice President for Research Maria Zuber, Associate Provost Richard Lester and a committee of 26 faculty leaders representing all five schools and the college, today we are committing to an ambitious new research effort called Climate Grand Challenges. MIT’s Plan for Action on Climate Change stressed the need for breakthrough innovations and underscored MIT’s responsibility to lead. Since then, the escalating climate crisis and lagging global response have only intensified the need for action. With this letter, we invite all principal investigators (PIs) from across MIT to help us define a new agenda of transformative research. The threat of climate change demands a host of interlocking solutions; to shape a research program worthy of MIT, we seek bold faculty proposals that address the most difficult problems in the field, problems whose solutions would make the most decisive difference. The focus will be on those hard questions where progress depends on advancing and applying frontier knowledge in the physical, life and social sciences, or advancing and applying cutting-edge technologies, or both; solutions may require the wisdom of many disciplines. Equally important will be to advance the humanistic and scientific understanding of how best to inspire 9 billion humans to adopt the technologies and behaviors the crisis demands. 
We encourage interested PIs to submit a letter of interest. A group of MIT faculty and outside experts will choose the most compelling – the five or six ideas that offer the most effective levers for rapid, large-scale change. MIT will then focus intensely on securing the funds for the work to succeed. To meet this great rolling emergency for the species, we are seeking and expecting big ideas for sharpening our understanding, combatting climate change itself and adapting constructively to its impacts. You can learn much more about the overall concept as well as specific deadlines and requirements here. This invitation is geared specifically for MIT PIs – but the climate problem deserves wholehearted attention from every one of us. Whatever your role, I encourage you to find ways to be part of the broad range of climate events, courses and research and other work already under way at MIT.  For decades, MIT students, staff, postdocs, faculty and alumni have poured their energy, insight and ingenuity into countless aspects of the climate problem; in this new work, your efforts are our inspiration and our springboard.  We will share next steps in the Climate Grand Challenges process later in the fall semester. Sincerely, L. Rafael Reif As the air cleared after lockdowns, solar installations in Delhi produced 8 percent more power, study shows. Wed, 22 Jul 2020 00:00:00 -0400 David L. Chandler | MIT News Office As the Covid-19 shutdowns and stay-at-home orders brought much of the world’s travel and commerce to a standstill, people around the world started noticing clearer skies as a result of lower levels of air pollution. Now, researchers have been able to demonstrate that those clearer skies had a measurable impact on the output from solar photovoltaic panels, leading to a more than 8 percent increase in the power output from installations in Delhi. 
While such an improved output was not unexpected, the researchers say this is the first study to demonstrate and quantify the impact of the reduced air pollution on solar output. The effect should apply to solar installations worldwide, but would normally be very difficult to measure against a background of natural variations in solar panel output caused by everything from clouds to dust on the panels. The extraordinary conditions triggered by the pandemic, with its sudden cessation of normal activities, combined with high-quality air-pollution data from one of the world’s smoggiest cities, afforded the opportunity to harness data from an unprecedented, unplanned natural experiment. The findings are reported today in the journal Joule, in a paper by MIT professor of mechanical engineering Tonio Buonassisi, research scientist Ian Marius Peters, and three others in Singapore and Germany. The study was an extension of previous research the team has been conducting in Delhi for several years. The impetus for the work came after an unusual weather pattern in 2013 swept a concentrated plume of smoke from forest fires in Indonesia across a vast swath of Indonesia, Malaysia, and Singapore, where Peters, who had just arrived in the region, found “it was so bad that you couldn’t see the buildings on the other side of the street.” Since he was already doing research on solar photovoltaics, Peters decided to investigate what effects the air pollution was having on solar panel output. The team had good long-term data on both solar panel output and solar insolation, gathered at the same time by monitoring stations set up adjacent to the solar installations. They saw that during the 18-day-long haze event, the performance of some types of solar panels decreased, while others stayed the same or increased slightly. That distinction proved useful in teasing apart the effects of pollution from other variables that could be at play, such as weather conditions. 
Peters later learned that a high-quality, years-long record of actual measurements of fine particulate air pollution (particles less than 2.5 micrometers in size) had been collected every hour, year after year, at the U.S. Embassy in Delhi. That provided the necessary baseline for determining the actual effects of pollution on solar panel output; the researchers compared the air pollution data from the embassy with meteorological data on cloudiness and the solar irradiation data from the sensors. They identified a roughly 10 percent overall reduction in output from the solar installations in Delhi because of pollution – enough to make a significant dent in the facilities’ financial projections. To see how the Covid-19 shutdowns had affected the situation, they were able to use the mathematical tools they had developed, along with the embassy’s ongoing data collection, to see the impact of reductions in travel and factory operations. They compared the data from before and after India went into mandatory lockdown on March 24, and also compared this with data from the previous three years. Pollution levels were down by about 50 percent after the shutdown, they found. As a result, the total output from the solar panels was increased by 8.3 percent in late March, and by 5.9 percent in April, they calculated. “These deviations are much larger than the typical variations we have” within a year or from year to year, Peters says — three to four times greater. “So we can’t explain this with just fluctuations.” The amount of difference, he says, is roughly the difference between the expected performance of a solar panel in Houston versus one in Toronto. 
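As a rough consistency check on these figures, one can assume panel output loss scales linearly with fine-particulate concentration and plug in the numbers quoted here (a roughly 10 percent baseline loss to pollution, and pollution roughly halved after lockdown). This is a sketch of that reasoning, not the researchers' actual model.

```python
# Rough consistency check, assuming panel output loss scales linearly with
# PM2.5 concentration: output = clean_output * (1 - loss_fraction).
# The input figures are the ones quoted in the article.

baseline_loss = 0.10    # ~10 percent of output lost to pollution pre-lockdown
pollution_drop = 0.50   # PM2.5 roughly halved after the March 24 lockdown

# Output relative to a hypothetical pollution-free panel, before and after:
before = 1.0 - baseline_loss
after = 1.0 - baseline_loss * (1.0 - pollution_drop)

gain = after / before - 1.0
print(f"Predicted output gain: {gain * 100:.1f}%")  # ~5.6%
```

This toy model lands close to the 5.9 percent gain measured in April; the larger late-March figure suggests the local pollution drop, or the pollution-output relationship, was not perfectly captured by a simple linear assumption.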
An 8 percent increase in output might not sound like much, Buonassisi says, but “the margins of profit are very small for these businesses.” If a solar company was expecting to get a 2 percent profit margin out of their expected 100 percent panel output, and suddenly they are getting 108 percent output, that means their margin has increased fivefold, from 2 percent to 10 percent, he points out. The findings provide real data on what can happen in the future as emissions are reduced globally, he says. “This is the first real quantitative evaluation where you almost have a switch that you can turn on and off for air pollution, and you can see the effect,” he says. “You have an opportunity to baseline these models with and without air pollution.” By doing so, he says, “it gives a glimpse into a world with significantly less air pollution.” It also demonstrates that the very act of increasing the usage of solar electricity, and thus displacing fossil-fuel generation that produces air pollution, makes those panels more efficient all the time. Putting solar panels on one’s house, he says, “is helping not only yourself, not only putting money in your pocket, but it’s also helping everybody else out there who already has solar panels installed, as well as everyone else who will install them over the next 20 years.” In a way, a rising tide of solar panels raises all solar panels. Though the focus was on Delhi, because the effects there are so strong and easy to detect, this effect “is true anywhere where you have some kind of air pollution. If you reduce it, it will have beneficial consequences for solar panels,” Peters says. Even so, not every claim of such effects is necessarily real, he says, and the details do matter. For example, clearer skies were also noted across much of Europe as a result of the shutdowns, and some news reports described exceptional output levels from solar farms in Germany and in the U.K. 
But the researchers say that just turned out to be a coincidence. “The air pollution levels in Germany and Great Britain are generally so low that most PV installations are not significantly affected by them,” Peters says. After checking the data, what contributed most to those high levels of solar output this spring, he says, turned out to be just “extremely nice weather,” which produced record numbers of sunlight hours. The research team included C. Brabec and J. Hauch at the Helmholtz-Institute Erlangen-Nuremberg for Renewable Energies, in Germany, where Peters also now works, and A. Nobre at Cleantech Solar in Singapore. The work was supported by the Bavarian State Government. Shutdowns in response to the Covid-19 pandemic have resulted in lowered air pollution levels around the world. Researchers at MIT, and in Germany and Singapore have found that this resulted in a significant increase in the output from solar photovoltaic installations in Delhi, normally one of the world’s smoggiest cities. Image: Jose-Luis Olivares, MIT How energy-intensive economies can survive and thrive as the globe ramps up climate action. Wed, 15 Jul 2020 13:40:01 -0400 Mark Dwortzan | MIT Joint Program on the Science and Policy of Global Change Today, Russia’s economy depends heavily upon its abundant fossil fuel resources. Russia is one of the world’s largest exporters of fossil fuels, and a number of its key exporting industries — including metals, chemicals, and fertilizers — draw on fossil resources. The nation also consumes fossil fuels at a relatively high rate; it’s the world’s fourth-largest emitter of carbon dioxide. As the world shifts away from fossil fuel production and consumption and toward low-carbon development aligned with the near- and long-term goals of the Paris Agreement, how might countries like Russia reshape their energy-intensive economies to avoid financial peril and capitalize on this clean energy transition? 
In a new study in the journal Climate Policy, researchers at the MIT Joint Program on the Science and Policy of Global Change and Russia’s National Research University Higher School of Economics assess the impacts on the Russian economy of the efforts of the main importers of Russian fossil fuels to comply with the Paris Agreement. The researchers project that expected climate-related actions by importers of Russia’s fossil fuels will lower demand for these resources considerably, thereby reducing the country’s GDP growth rate by nearly 0.5 percent between 2035 and 2050. The study also finds that the Paris Agreement will heighten Russia’s risks of facing market barriers for its exports of energy-intensive goods, and of lagging behind in developing increasingly popular low-carbon energy technologies. Using the Joint Program’s Economic Projection and Policy Analysis model, a multi-region, multi-sector model of the world economy, the researchers evaluated the impact on Russian energy exports and GDP of scenarios representing global climate policy ambition ranging from non-implementation of national Paris pledges to collective action aligned with keeping global warming well below 2 degrees Celsius. The bottom line: Global climate policies will make it impossible for Russia to sustain its current path of fossil fuel export-based development. To maintain and enhance its economic well-being, the study’s co-authors recommend that Russia both decarbonize and diversify its economy in alignment with climate goals. In short, by taxing fossil fuels (e.g., through a production tax or carbon tax), the country could redistribute that revenue to the development of human capital to boost other economic sectors (primarily manufacturing, services, agriculture, and food production), thereby making up for energy-sector losses due to global climate policies. 
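To get a feel for what the projected growth-rate reduction compounds to, here is a minimal sketch. It reads the study's "nearly 0.5 percent" as a percentage-point cut to annual GDP growth over 2035-2050; the baseline growth rate is a hypothetical placeholder, not a figure from the study.

```python
# Compounding a ~0.5-percentage-point hit to annual GDP growth over
# 2035-2050. Only the 0.5-point gap comes from the study's projection;
# the baseline growth rate is a hypothetical placeholder.

years = 2050 - 2035
baseline_growth = 0.02  # hypothetical 2 percent annual growth
reduced_growth = baseline_growth - 0.005

baseline_gdp = (1 + baseline_growth) ** years
reduced_gdp = (1 + reduced_growth) ** years

shortfall = 1 - reduced_gdp / baseline_gdp
print(f"GDP in 2050 is ~{shortfall * 100:.1f}% below the baseline path")
```

Under these assumptions the gap compounds to roughly 7 percent of GDP by 2050, which helps explain why the study frames diversification as essential rather than optional.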
The study projects that the resulting GDP increase could be on the order of 1-4 percent higher than it would be without diversification. “Many energy-exporting countries have tried to diversify their economies, but with limited success,” says Sergey Paltsev, deputy director of the MIT Joint Program, senior research scientist at the MIT Energy Initiative (MITEI) and director of the MIT Joint Program/MITEI Energy-at-Scale Center. “Our study quantifies the dynamics of efforts to achieve economic diversification in which reallocation of funds leads to higher labor productivity and economic growth — all while enabling more aggressive emissions reduction targets.”   The study was supported by the Basic Research Program of the National Research University Higher School of Economics and the MIT Skoltech Seed Fund Program. Human capital development in Russia through increased per-student expenditure could lead to long-term benefits in manufacturing, services, agriculture, food production, and other sectors. Seen here: Russian students from Tyumen State University. Photo courtesy of the United Nations Development Program. Researchers design an effective treatment for both exhaust and ambient air. Thu, 09 Jul 2020 15:25:01 -0400 Nancy W. Stauffer | MIT Energy Initiative An essential component of any climate change mitigation plan is cutting carbon dioxide (CO2) emissions from human activities. Some power plants now have CO2 capture equipment that grabs CO2 out of their exhaust. But those systems are each the size of a chemical plant, cost hundreds of millions of dollars, require a lot of energy to run, and work only on exhaust streams that contain high concentrations of CO2. In short, they’re not a solution for airplanes, home heating systems, or automobiles. To make matters worse, capturing CO2 emissions from all anthropogenic sources may not solve the climate problem. 
“Even if all those emitters stopped tomorrow morning, we would still have to do something about the amount of CO2 in the air if we’re going to restore preindustrial atmospheric levels at a rate relevant to humanity,” says Sahag Voskian SM ’15, PhD ’19, co-founder and chief technology officer at Verdox, Inc. And developing a technology that can capture the CO2 in the air is a particularly hard problem, in part because the CO2 occurs in such low concentrations. The CO2 capture challenge A key problem with CO2 capture is finding a “sorbent” that will pick up CO2 in a stream of gas and then release it so the sorbent is clean and ready for reuse and the released CO2 stream can be utilized or sent to a sequestration site for long-term storage. Research has mainly focused on sorbent materials present as small particles whose surfaces contain “active sites” that capture CO2 — a process called adsorption. When the system temperature is lowered (or pressure increased), CO2 adheres to the particle surfaces. When the temperature is raised (or pressure reduced), the CO2 is released. But achieving those temperature or pressure “swings” takes considerable energy, in part because it requires treating the whole mixture, not just the CO2-bearing sorbent. In 2015, Voskian, then a PhD candidate in chemical engineering, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering and co-director of the MIT Energy Initiative’s Low-Carbon Energy Center for Carbon Capture, Utilization, and Storage, began to take a closer look at the temperature- and pressure-swing approach. “We wondered if we could get by with using only a renewable resource — like renewably sourced electricity — rather than heat or pressure,” says Hatton. Using electricity to elicit the chemical reactions needed for CO2 capture and conversion had been studied for several decades, but Hatton and Voskian had a new idea about how to engineer a more efficient adsorption device. 
Their work focuses on a special class of molecules called quinones. When quinone molecules are forced to take on extra electrons — which means they’re negatively charged — they have a high chemical affinity for CO2 molecules and snag any that pass. When the extra electrons are removed from the quinone molecules, the quinone’s chemical affinity for CO2 instantly disappears, and the molecules release the captured CO2.  Others have investigated the use of quinones and an electrolyte in a variety of electrochemical devices. In most cases, the devices involve two electrodes — a negative one where the dissolved quinone is activated for CO2 capture, and a positive one where it’s deactivated for CO2 release. But moving the solution from one electrode to the other requires complex flow and pumping systems that are large and take up considerable space, limiting where the devices can be used.  As an alternative, Hatton and Voskian decided to use the quinone as a solid electrode and — by applying what Hatton calls “a small change in voltage” — vary the electrical charge of the electrode itself to activate and deactivate the quinone. In such a setup, there would be no need to pump fluids around or to raise and lower the temperature or pressure, and the CO2 would end up as an easy-to-separate attachment on the solid quinone electrode. They deemed their concept “electro-swing adsorption.” The electro-swing cell To put their concept into practice, the researchers designed the electrochemical cell shown in the two diagrams in Figure 1 in the slideshow above. To maximize exposure, they put two quinone electrodes on the outside of the cell, thereby doubling its geometric capacity for CO2 capture. To switch the quinone on and off, they needed a component that would supply electrons and then take them back. 
For that job, they used a single ferrocene electrode, sandwiched between the two quinone electrodes but isolated from them by electrolyte membrane separators to prevent short circuits. They connected both quinone electrodes to the ferrocene electrode using the circuit of wires at the top, with a power source along the way. The power source creates a voltage that causes electrons to flow from the ferrocene to the quinone through the wires. The quinone is now negatively charged. When CO2-containing air or exhaust is blown past these electrodes, the quinone will capture the CO2 molecules until all the active sites on its surface are filled up. During the discharge cycle, the direction of the voltage on the cell is reversed, and electrons flow from the quinone back to the ferrocene. The quinone is no longer negatively charged, so it has no chemical affinity for CO2. The CO2 molecules are released and swept out of the system by a stream of purge gas for subsequent use or disposal. The quinone is now regenerated and ready to capture more CO2. Two additional components are key to successful operation. First is an electrolyte, in this case a liquid salt, that moistens the cell with positive and negative ions (electrically charged particles). Since electrons only flow through the external wires, those charged ions must travel within the cell from one electrode to the other to close the circuit for continued operation. The second special ingredient is carbon nanotubes. In the electrodes, the quinone and ferrocene are both present as coatings on the surfaces of carbon nanotubes. Nanotubes are both strong and highly conductive, so they provide good support and serve as an efficient conduit for electrons traveling into and out of the quinone and ferrocene. To fabricate a cell, researchers first synthesize a quinone- or ferrocene-based polymer, specifically, polyanthraquinone or polyvinylferrocene. 
They then make an “ink” by combining the polymer with carbon nanotubes in a solvent. The polymer immediately wraps around the nanotubes, connecting with them on a fundamental level. To make the electrode, they use a non-woven carbon fiber mat as a substrate. They dip the mat into the ink, allow it to dry slowly, and then dip it again, repeating the procedure until they’ve built up a uniform coating of the composite on the substrate. The result of the process is a porous mesh that provides a large surface area of active sites and easy pathways for CO2 molecules to move in and out. Once the researchers have prepared the quinone and ferrocene electrodes, they assemble the electrochemical cell by laminating the pieces together in the correct order — the quinone electrode, the electrolyte separator, the ferrocene electrode, another separator, and the second quinone electrode. Finally, they moisten the assembled cell with their liquid salt electrolyte. Experimental results To test the behavior of their system, the researchers placed a single electrochemical cell inside a custom-made, sealed box and wired it for electricity input. They then cycled the voltage and measured the key responses and capabilities of the device. The simultaneous trends in charge density put into the cell and CO2 adsorption per mole of quinone showed that when the quinone electrode is negatively charged, the amount of CO2 adsorbed goes up. And when that charge is reversed, CO2 adsorption declines. For experiments under more realistic conditions, the researchers also fabricated full capture units — open-ended modules in which a few cells were lined up, one beside the other, with gaps between them where CO2-containing gases could travel, passing the quinone surfaces of adjacent cells. In both experimental systems, the researchers ran tests using inlet streams with CO2 concentrations ranging from 10 percent down to 0.6 percent. 
The former is typical of power plant exhaust, the latter closer to concentrations in ambient indoor air. Regardless of the concentration, the efficiency of capture was essentially constant at about 90 percent. (An efficiency of 100 percent would mean that one molecule of CO2 had been captured for every electron transferred — an outcome that Hatton calls “highly unlikely” because other parasitic processes could be going on simultaneously.) The system used about 1 gigajoule of energy per ton of CO2 captured. Other methods consume between 1 and 10 gigajoules per ton, depending on the CO2 concentration of the incoming gases. Finally, the system was exceptionally durable. Over more than 7,000 charge-discharge cycles, its CO2 capture capacity dropped by only 30 percent — a loss of capacity that can readily be overcome with further refinements in the electrode preparation, say the researchers.  The remarkable performance of their system stems from what Voskian calls the “binary nature of the affinity of quinone to CO2.” The quinone has either a high affinity or no affinity at all. “The result of that binary affinity is that our system should be equally effective at treating fossil fuel combustion flue gases and confined or ambient air,” he says.  Practical applications The experimental results confirm that the electro-swing device should be applicable in many situations. The device is compact and flexible; it operates at room temperature and normal air pressure; and it requires no large-scale, expensive ancillary equipment — only the direct current power source. Its simple design should enable “plug-and-play” installation in many processes, say the researchers. It could, for example, be retrofitted in sealed buildings to remove CO2. In most sealed buildings, ventilation systems bring in fresh outdoor air to dilute the CO2 concentration indoors. “But making frequent air exchanges with the outside requires a lot of energy to condition the incoming air,” says Hatton. 
“Removing the CO2 indoors would reduce the number of exchanges needed.” The result could be large energy savings. Similarly, the system could be used in confined spaces where air exchange is impossible — for example, in submarines, spacecraft, and aircraft — to ensure that occupants aren’t breathing too much CO2.

The electro-swing system could also be teamed up with renewable sources, such as solar and wind farms, and even rooftop solar panels. Such sources sometimes generate more electricity than is needed on the power grid. Instead of shutting them off, the excess electricity could be used to run a CO2 capture plant.

The researchers have also developed a concept for using their system at power plants and other facilities that generate a continuous flow of exhaust containing CO2. At such sites, pairs of units would work in parallel. “One is emptying the pure CO2 that it captured, while the other is capturing more CO2,” explains Voskian. “And then you swap them.” A system of valves would switch the airflow to the freshly emptied unit, while a purge gas would flow through the full unit, carrying the CO2 out into a separate chamber. The captured CO2 could be chemically processed into fuels or simply compressed and sent underground for long-term disposal. If the purge gas were also CO2, the result would be a steady stream of pure CO2 that soft-drink makers could use for carbonating drinks and farmers could use for feeding plants in greenhouses. Indeed, rather than burning fossil fuels to get CO2, such users could employ an electro-swing unit to generate their own CO2 while simultaneously removing CO2 from the air.

Costs and scale-up

The researchers haven’t yet published a full technoeconomic analysis, but they project capital plus operating costs at $50 to $100 per ton of CO2 captured. That range is in line with costs using other, less-flexible carbon capture systems.
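As a rough sanity check, the reported energy use and cycling figures can be combined in a few lines. This is a back-of-the-envelope sketch: the wholesale electricity price below is an assumed value for illustration, not a number from the researchers.

```python
# Figures reported in the article, plus one assumed input.
GJ_PER_TON = 1.0       # reported energy use: ~1 GJ per ton of CO2 captured
KWH_PER_GJ = 277.8     # unit conversion: 1 GJ = 277.8 kWh
price_per_kwh = 0.05   # ASSUMED wholesale electricity price, $/kWh

# Electricity cost per ton of captured CO2 at the assumed price.
energy_cost = GJ_PER_TON * KWH_PER_GJ * price_per_kwh
print(f"Electricity cost: ~${energy_cost:.0f} per ton of CO2")

# Durability: losing 30% of capacity over 7,000 cycles implies an
# average per-cycle capacity retention of 0.70 ** (1/7000).
retention_per_cycle = 0.70 ** (1 / 7000)
print(f"Average capacity retained per cycle: {retention_per_cycle:.6f}")
```

At the assumed price, electricity would account for roughly $14 of the projected $50 to $100 per ton, and the average capacity loss works out to well under a hundredth of a percent per cycle.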
Methods for fabricating the electro-swing cells are also manufacturing-friendly: The electrodes can be made using standard chemical processing methods and assembled using a roll-to-roll process similar to a printing press. And the system can be scaled up as needed. According to Voskian, it should scale linearly: “If you need 10 times more capture capacity, you just manufacture 10 times more electrodes.” Together, he and Hatton, along with Brian M. Baynes PhD ’04, have formed a company called Verdox, and they’re planning to demonstrate that ease of scale-up by developing a pilot plant within the next few years.

This research was supported by an MIT Energy Initiative (MITEI) Seed Fund grant and by Eni S.p.A. through MITEI. Sahag Voskian was an Eni-MIT Energy Fellow in 2016-17 and 2017-18. This article appears in the Spring 2020 issue of Energy Futures, the magazine of the MIT Energy Initiative.

Sahag Voskian SM ’15, PhD ’19 (left) and Professor T. Alan Hatton have developed an electrochemical cell that can capture and release carbon dioxide with just a small change in voltage. Photo: Stuart Darsch

MIT Environmental Solutions Initiative and multinational mining company Vale bring sustainability education to young engineering professionals in Brazil. Tue, 07 Jul 2020 14:15:00 -0400 Aaron Krol | Environmental Solutions Initiative

For the mining industry, efforts to achieve sustainability are moving from local to global. In the past, mining companies focused sustainability initiatives more on their social license to operate — treating workers fairly and operating safe and healthy facilities. However, concerns over climate change have put mining operations and supply chains in the global spotlight, leading to various carbon-neutral promises by mining companies in recent months. Heading in this direction is Vale, a global mining company and the world’s largest iron ore and nickel producer.
It is a publicly traded company headquartered in Brazil with operations in 30 countries. In the wake of two major tailings dam failures, as well as continued pressure to reduce carbon emissions, Vale has committed to spend $2 billion to cut both its direct and indirect carbon emissions 33 percent by 2030. To meet these ambitions, a broad cultural change is required — and MIT is one of the partners invited by Vale to help with the challenge. Stephen Potter, global strategy director for Vale, knows that local understanding of sustainability is fundamental to reaching its goals. “We need to attract the best and brightest young people to work in the Brazilian mining sector, and young people want to work for companies with a strong sustainability program,” Potter says. To that end, Vale created the Mining Innovation in a New Environment (MINE) program in 2019, in collaboration with the MIT Environmental Solutions Initiative (ESI); the Imperial College London Consultants; The Bakery, a start-up accelerator; and SENAI CIMATEC, a Brazilian technical institute. The program provides classes and sustainability training to young professionals with degrees relevant to mining engineering. Students in the MINE program get hands-on experience working with a real challenge the company is facing, while also expanding their personal leadership and technical skills. “Instilling young people with an entrepreneurial and innovative mindset is a core tenet of this program, whether they ultimately work at Vale or elsewhere,” says Potter. ESI’s role in the MINE program is to provide expert perspectives on sustainability that students wouldn’t receive in ordinary engineering training courses. “MIT offers a unique blend of scientific and engineering expertise, as well as entrepreneurial spirit, that can inspire young professionals in the Brazilian mining sector to work toward sustainable practices,” says ESI Director John Fernández. 
Drawing on its deep, multidisciplinary portfolio of research on the extraction and processing of metals and minerals, MIT can support the deployment of innovative technologies and environmentally and socially conscious business strategies throughout a global supply chain.

Since December 2019, the inaugural class of 30 MINE students has had a whirlwind of experiences. To kick off the program, MIT offered six weeks of online training, building up to an immersive training session in January 2020. Hosted by SENAI CIMATEC at their academic campus in Salvador, Brazil, the event featured in-person sessions with five MIT instructors: professors Jessika Trancik, Roberto Rigobon, Andrew Whittle, and Rafi Segal, and Principal Research Scientist Randolph Kirchain. The two-week event was coordinated by Suzanne Greene, who leads the MINE program for ESI as part of her role with the MIT Sustainable Supply Chains program. “What I loved about this program,” Greene says, “was the breadth of topics MIT’s lecturers were able to offer students. Students could take a deep dive on clean energy technology one day and tailings dams the next.” The courses were designed to give the students a common grounding in sustainability concepts and management tools to prepare them for the next phase of the program, a hands-on research project within Vale. Immersion projects in this next phase align with Vale’s core sustainability strategies around worker and infrastructure safety and the low-carbon energy transition. “This project is a great opportunity for Vale to reconfigure their supply chain and also improve the social and environmental performance,” says Marina Mattos, a postdoc working with ESI in the Metals, Minerals, and the Environment program.
“As a Brazilian, I’m thrilled to be part of the MIT team helping to develop next-generation engineers with the values, attitudes, and skills necessary to understand and address challenges of the mining industry.” “We expect this program will lead to interest from other extractive companies, not only for education, but for research as well,” adds Greene. “This is just the beginning.”

MINE Program students and other program participants at a hackathon in Salvador, Brazil, are pictured here before the Covid-19 pandemic interrupted such gatherings.

Over a seven-year period, decline in PV costs outpaced decline in value; by 2017, market, health, and climate benefits outweighed the cost of PV systems. Tue, 23 Jun 2020 14:15:01 -0400 Nancy Stauffer | MIT Energy Initiative

Over the past decade, the cost of solar photovoltaic (PV) arrays has fallen rapidly. But at the same time, the value of PV power has declined in areas that have installed significant PV generating capacity. Operators of utility-scale PV systems have seen electricity prices drop as more PV generators come online. Over the same time period, many coal-fired power plants were required to install emissions-control systems, resulting in declines in air pollution nationally and regionally. The result has been improved public health — but also a decrease in the potential health benefits from offsetting coal generation with PV generation. Given those competing trends, do the benefits of PV generation outweigh the costs? Answering that question requires balancing the up-front capital costs against the lifetime benefits of a PV system. Determining the former is fairly straightforward. But assessing the latter is challenging because the benefits differ across time and place. “The differences aren’t just due to variation in the amount of sunlight a given location receives throughout the year,” says Patrick R. Brown PhD ’16, a postdoc at the MIT Energy Initiative.
“They’re also due to variability in electricity prices and pollutant emissions.”

The drop in the price paid for utility-scale PV power stems in part from how electricity is bought and sold on wholesale electricity markets. On the “day-ahead” market, generators and customers submit bids specifying how much they’ll sell or buy at various price levels at a given hour on the following day. The lowest-cost generators are chosen first. Since the variable operating cost of PV systems is near zero, they’re almost always chosen, taking the place of the most expensive generator then in the lineup. The price paid to every selected generator is set by the highest-cost operator on the system, so as more PV power comes on, more high-cost generators come off, and the price drops for everyone. As a result, in the middle of the day, when solar is generating the most, prices paid to electricity generators are at their lowest.

Brown notes that some generators may even bid negative prices. “They’re effectively paying consumers to take their power to ensure that they are dispatched,” he explains. For example, inflexible coal and nuclear plants may bid negative prices to avoid frequent shutdown and startup events that would result in extra fuel and maintenance costs. Renewable generators may also bid negative prices to obtain larger subsidies that are awarded based on production.

Health benefits also differ over time and place. The health effects of deploying PV power are greater in a heavily populated area that relies on coal power than in a less-populated region that has access to plenty of clean hydropower or wind. And the local health benefits of PV power can be higher when there’s congestion on transmission lines that leaves a region stuck with whatever high-polluting sources are available nearby. The social costs of air pollution are largely “externalized,” that is, they are mostly unaccounted for in electricity markets.
But they can be quantified using statistical methods, so health benefits resulting from reduced emissions can be incorporated when assessing the cost-competitiveness of PV generation. The contribution of fossil-fueled generators to climate change is another externality not accounted for by most electricity markets. Some U.S. markets, particularly in California and the Northeast, have implemented cap-and-trade programs, but the carbon dioxide (CO2) prices in those markets are much lower than estimates of the social cost of CO2, and other markets don’t price carbon at all. A full accounting of the benefits of PV power thus requires determining the CO2 emissions displaced by PV generation and then multiplying that value by a uniform carbon price representing the damage that those emissions would have caused.

Calculating PV costs and benefits

To examine the changing value of solar power, Brown and his colleague Francis M. O’Sullivan, the senior vice president of strategy at Ørsted Onshore North America and a senior lecturer at the MIT Sloan School of Management, developed a methodology to assess the costs and benefits of PV power across the U.S. power grid annually from 2010 to 2017. The researchers focused on six “independent system operators” (ISOs) in California, Texas, the Midwest, the Mid-Atlantic, New York, and New England. Each ISO sets electricity prices at hundreds of “pricing nodes” along the transmission network in their region. The researchers performed analyses at more than 10,000 of those pricing nodes. For each node, they simulated the operation of a utility-scale PV array that tilts to follow the sun throughout the day. They calculated how much electricity it would generate and the benefits that each kilowatt would provide, factoring in energy and “capacity” revenues as well as avoided health and climate change costs associated with the displacement of fossil fuel emissions.
(Capacity revenues are paid to generators for being available to deliver electricity at times of peak demand.) They focused on emissions of CO2, which contributes to climate change, and of nitrogen oxides (NOx), sulfur dioxide (SO2), and particulate matter called PM2.5 — fine particles that can cause serious health problems and can be emitted or formed in the atmosphere from NOx and SO2. The results of the analysis showed that the wholesale energy value of PV generation varied significantly from place to place, even within the region of a given ISO. For example, in New York City and Long Island, where population density is high and adding transmission lines is difficult, the market value of solar was at times 50 percent higher than across the state as a whole.  The public health benefits associated with SO2, NOx, and PM2.5 emissions reductions declined over the study period but were still substantial in 2017. Monetizing the health benefits of PV generation in 2017 would add almost 75 percent to energy revenues in the Midwest and New York and fully 100 percent in the Mid-Atlantic, thanks to the large amount of coal generation in the Midwest and Mid-Atlantic and the high population density on the Eastern Seaboard.  Based on the calculated energy and capacity revenues and health and climate benefits for 2017, the researchers asked: Given that combination of private and public benefits, what upfront PV system cost would be needed to make the PV installation “break even” over its lifetime, assuming that grid conditions in that year persist for the life of the installation? In other words, says Brown, “At what capital cost would an investment in a PV system be paid back in benefits over the lifetime of the array?”  Assuming 2017 values for energy and capacity market revenues alone, an unsubsidized PV investment at 2017 costs doesn’t break even. Add in the health benefit, and PV breaks even at 30 percent of the pricing nodes modeled. 
Assuming a carbon price of $50 per ton, the investment breaks even at about 70 percent of the nodes, and with a carbon price of $100 per ton (which is still less than the price estimated to be needed to limit global temperature rise to under 2 degrees Celsius), PV breaks even at all of the modeled nodes.  That wasn’t the case just two years earlier: At 2015 PV costs, PV would only have broken even in 2017 at about 65 percent of the nodes counting market revenues, health benefits, and a $100 per ton carbon price. “Since 2010, solar has gone from one of the most expensive sources of electricity to one of the cheapest, and it now breaks even across the majority of the U.S. when considering the full slate of values that it provides,” says Brown.  Based on their findings, the researchers conclude that the decline in PV costs over the studied period outpaced the decline in value, such that in 2017 the market, health, and climate benefits outweighed the cost of PV systems at the majority of locations modeled. “So the amount of solar that’s competitive is still increasing year by year,” says Brown.  The findings underscore the importance of considering health and climate benefits as well as market revenues. “If you’re going to add another megawatt of PV power, it’s best to put it where it’ll make the most difference, not only in terms of revenues but also health and CO2,” says Brown.  Unfortunately, today’s policies don’t reward that behavior. Some states do provide renewable energy subsidies for solar investments, but they reward generation equally everywhere. Yet in states such as New York, the public health benefits would have been far higher at some nodes than at others. State-level or regional reward mechanisms could be tailored to reflect such variation in node-to-node benefits of PV generation, providing incentives for installing PV systems where they’ll be most valuable. 
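The break-even question the researchers pose reduces to a present-value calculation: find the upfront capital cost at which the discounted stream of lifetime benefits exactly repays the investment. The sketch below shows the arithmetic only; the per-kilowatt benefit figures, lifetime, and discount rate are illustrative assumptions, not values from the paper.

```python
def breakeven_capital_cost(annual_benefit, lifetime_years, discount_rate):
    """Upfront cost per kW at which a PV system exactly pays back:
    the present value of its annual benefits over its lifetime."""
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, lifetime_years + 1))

# Hypothetical annual benefits per kW (NOT the paper's numbers):
#   $60 energy + capacity revenue, +$45 health, +$35 climate at a carbon price.
market_only = breakeven_capital_cost(60, 25, 0.07)
with_health = breakeven_capital_cost(60 + 45, 25, 0.07)
with_carbon = breakeven_capital_cost(60 + 45 + 35, 25, 0.07)
print(f"${market_only:.0f}/kW, ${with_health:.0f}/kW, ${with_carbon:.0f}/kW")
```

Each added benefit stream raises the capital cost at which the system breaks even, which is why counting health and climate benefits moves more nodes into break-even territory.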
Providing time-varying price signals (including the cost of emissions) not only to utility-scale generators, but also to residential and commercial electricity generators and customers, would similarly guide PV investment to areas where it provides the most benefit.

Time-shifting PV output to maximize revenues

The analysis provides some guidance that might help would-be PV installers maximize their revenues. For example, it identifies certain “hot spots” where PV generation is especially valuable. At some high-electricity-demand nodes along the East Coast, for instance, persistent grid congestion has meant that the projected revenue of a PV generator has been high for more than a decade. The analysis also shows that the sunniest site may not always be the most profitable choice. A PV system in Texas would generate about 20 percent more power than one in the Northeast, yet energy revenues were greater at nodes in the Northeast than in Texas in some of the years analyzed.

To help potential PV owners maximize their future revenues, Brown and O’Sullivan performed a follow-on study focusing on ways to shift the output of PV arrays to align with times of higher prices on the wholesale market. For this analysis, they considered the value of solar on the day-ahead market and also on the “real-time market,” which dispatches generators to correct for discrepancies between supply and demand. They explored three options for shaping the output of PV generators, with a focus on the California real-time market in 2017, when high PV penetration led to a large reduction in midday prices compared to morning and evening prices.

Curtailing output when prices are negative: During negative-price hours, a PV operator can simply turn off generation. In California in 2017, curtailment would have increased revenues by 9 percent on the real-time market compared to “must-run” operation.

Changing the orientation of “fixed-tilt” (stationary) solar panels: The general rule of thumb in the Northern Hemisphere is to orient solar panels toward the south, maximizing production over the year. But peak production then occurs at about noon, when electricity prices in markets with high solar penetration are at their lowest. Pointing panels toward the west moves generation further into the afternoon. On the California real-time market in 2017, optimizing the orientation would have increased revenues by 13 percent, or 20 percent in conjunction with curtailment.

Using 1-axis tracking: For larger utility-scale installations, solar panels are frequently installed on automatic solar trackers, rotating throughout the day from east in the morning to west in the evening. Using such 1-axis tracking on the California system in 2017 would have increased revenues by 32 percent over a fixed-tilt installation, and using tracking plus curtailment would have increased revenues by 42 percent.

The researchers were surprised to see how much the optimal orientation changed in California over the period of their study. “In 2010, the best orientation for a fixed array was about 10 degrees west of south,” says Brown. “In 2017, it’s about 55 degrees west of south.” That adjustment is due to changes in market prices that accompany significant growth in PV generation — changes that will occur in other regions as they start to ramp up their solar generation.

The researchers stress that conditions are constantly changing on power grids and electricity markets. With that in mind, they made their database and computer code openly available so that others can readily use them to calculate updated estimates of the net benefits of PV power and other distributed energy resources. They also emphasize the importance of getting time-varying prices to all market participants and of adapting installation and dispatch strategies to changing power system conditions.
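The curtailment option is easy to illustrate with a toy hourly price series. The prices and output profile below are invented for illustration (they are not California market data); the point is simply that switching off during negative-price hours removes losses and raises total revenue.

```python
def revenue(prices, output, curtail=False):
    """Market revenue of a PV plant over a set of hours. With `curtail`,
    generation is switched off whenever the price is negative."""
    return sum(0 if (curtail and p < 0) else p * q
               for p, q in zip(prices, output))

# Toy day ($/MWh, MWh per hour): prices dip negative at midday when
# solar output peaks, as in a market with high PV penetration.
prices = [35, 30, 10, -5, -10, -5, 15, 40]
output = [0.2, 0.6, 0.9, 1.0, 1.0, 0.9, 0.6, 0.2]

must_run = revenue(prices, output)
curtailed = revenue(prices, output, curtail=True)
print(must_run, curtailed)  # curtailment removes the negative-price losses
```

The same framework extends to the other two options: reorienting panels or adding tracking reshapes the `output` profile toward the high-price morning and evening hours instead of zeroing out the bad ones.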
A law set to take effect in California in 2020 will require all new homes to have solar panels. Installing the usual south-facing panels with uncurtailable output could further saturate the electricity market at times when other PV installations are already generating. “If new rooftop arrays instead use west-facing panels that can be switched off during negative price times, it’s better for the whole system,” says Brown. “Rather than just adding more solar at times when the price is already low and the electricity mix is already clean, the new PV installations would displace expensive and dirty gas generators in the evening. Enabling that outcome is a win all around.”

Patrick Brown and this research were supported by a U.S. Department of Energy Office of Energy Efficiency and Renewable Energy (EERE) Postdoctoral Research Award through the EERE Solar Energy Technologies Office. The computer code and data repositories are available here and here. This article appears in the Spring 2020 issue of Energy Futures, the magazine of the MIT Energy Initiative.

Utility-scale photovoltaic arrays are an economic investment across most of the United States when health and climate benefits are taken into account, concludes an analysis by MITEI postdoc Patrick Brown and Senior Lecturer Francis O’Sullivan. Their results show the importance of providing accurate price signals to generators and consumers and of adopting policies that reward installation of solar arrays where they will bring the most benefit. Photo courtesy of SunEnergy1.

“The inventions and technical advancements of Swift Solar have the opportunity to revolutionize the format of solar photovoltaic technology.” Mon, 15 Jun 2020 14:10:01 -0400 Kathryn M. O’Neill | MIT Energy Initiative

Joel Jean PhD ’17 spent two years working on The Future of Solar Energy, a report published by the MIT Energy Initiative (MITEI) in 2015.
Today, he is striving to create that future as CEO of Swift Solar, a startup that is developing lightweight solar panels based on perovskite semiconductors. It hasn’t been a straight path, but Jean says his motivation — one he shares with his five co-founders — is the drive to address climate change. “The whole world is finally starting to see the threat of climate change and that there are many benefits to clean energy. That’s why we see such huge potential for new energy technologies,” he says. Max Hoerantner, co-founder and Swift Solar’s vice president of engineering, agrees. “It’s highly motivating to have the opportunity to put a dent into the climate change crisis with the technology that we’ve developed during our PhDs and postdocs.”

The company’s international team of founders — from the Netherlands, Austria, Australia, the United Kingdom, and the United States — has developed a product with the potential to greatly increase the use of solar power: a very lightweight, super-efficient, inexpensive, and scalable solar cell. Jean and Hoerantner also have experience building a solar research team, gained working at GridEdge Solar, an interdisciplinary MIT research program that works toward scalable solar and is funded by the Tata Trusts and run out of MITEI’s Tata Center for Technology and Design. “The inventions and technical advancements of Swift Solar have the opportunity to revolutionize the format of solar photovoltaic technology,” says Vladimir Bulović, the Fariborz Maseeh (1990) Professor of Emerging Technology in MIT’s Department of Electrical Engineering and Computer Science, director of MIT.nano, and a science advisor for Swift Solar.

Tandem photovoltaics

The product begins with perovskites — a class of materials that are cheap, abundant, and great at absorbing and emitting light, making them good semiconductors for solar energy conversion.
Using perovskites for solar generation took off about 10 years ago because the materials can be much more efficient at converting sunlight to electricity than the crystalline silicon typically used in solar panels today. They are also lightweight and flexible, whereas crystalline silicon is so brittle it needs to be protected by rigid glass, making most solar panels today about as large and heavy as a patio door. Many researchers and entrepreneurs have rushed to capitalize on those advantages, but Swift Solar has two core technologies that its founders see as their competitive edge. First, they are using two layers of perovskites in tandem to boost efficiency. “We’re putting two perovskite solar cells stacked on top of each other, each absorbing different parts of the spectrum,” Hoerantner says. Second, Swift Solar employs a proprietary scalable deposition process to create its perovskite films, which drives down manufacturing costs. “We’re the only company focusing on high-efficiency all-perovskite tandems. They’re hard to make, but we believe that’s where the market is ultimately going to go,” Jean says. “Our technologies enable much cheaper and more ubiquitous solar power through cheaper production, reduced installation costs, and more power per unit area,” says Sam Stranks, co-founder and lead scientific advisor for Swift Solar as well as an assistant professor in the Department of Chemical Engineering and Biotechnology at the University of Cambridge in the United Kingdom. “Other commercial solar photovoltaic technologies can do one or the other [providing either high power or light weight and flexibility], but not both.” Bulović says technology isn’t the only reason he expects the company to make a positive impact on the energy sector. “The success of a startup is initiated by the quality of the first technical ideas, but is sustained by the quality of the team that builds and grows the technology,” he says. 
“Swift Solar’s team is extraordinary.” Indeed, Swift Solar’s six co-founders together have six PhDs, four Forbes 30 Under 30 fellowships, and more than 80,000 citations. Four of them — Tomas Leijtens, Giles Eperon, Hoerantner, and Stranks — earned their doctorates at Oxford University in the United Kingdom, working with one of the pioneers of perovskite photovoltaics, Professor Henry Snaith. Stranks then came to MIT to work with Bulović, who is also widely recognized as a leader in next-generation photovoltaics and an experienced entrepreneur. (Bulović is a co-inventor of some of the patents the business is licensing from MIT.) Stranks met Jean at MIT, where Hoerantner later completed a postdoc working at GridEdge Solar. And the sixth co-founder, Kevin Bush, completed his PhD at Stanford University, where Leijtens did a postdoc with Professor Michael McGehee, another leading perovskite researcher and advisor to Swift.

What ultimately drew them all together was the desire to address climate change. “We were all independently thinking about how we could have an impact on climate change using solar technology, and a startup seemed like the only real direction that could have an impact at the scale the climate demands,” Jean says. The team first met in a Google Hangouts session spanning three time zones in early 2016. Swift Solar was officially launched in November 2017.

MITEI study

Interestingly, Jean says it was his work on The Future of Solar Energy — rather than his work in the lab — that most contributed to his role in the founding of Swift Solar. The study team of more than 30 experts, including Jean and Bulović, investigated the potential for expanding solar generating capacity to the multi-terawatt scale by mid-century. They determined that the main goal of U.S. solar policy should be to build the foundation for a massive scale-up of solar generation over the next few decades.
“I worked on quantum dot and organic solar cells for most of my PhD, but I also spent a lot of time looking at energy policy and economics, talking to entrepreneurs, and thinking about what it would take to succeed in tomorrow’s solar market. That made me less wedded to a single technology,” Jean says. Jean’s work on the study led to a much-cited publication, “Pathways for Solar Photovoltaics” in Energy & Environmental Science (2015), and to his founding leadership role with GridEdge Solar. “Technical advancements and insights gained in this program helped launch Swift Solar as a hub for novel lightweight solar technology,” Bulović says.

Swift Solar has also benefited from MIT’s entrepreneurial ecosystem, Jean says, noting that he took 15.366 (MIT Energy Ventures), a class on founding startups, and got assistance from the Venture Mentoring Service. “There were a lot of experiences like that that have really informed where we’re going as a company,” he says. Stranks adds, “MIT provided a thriving environment for exploring commercialization ideas in parallel to our tech development. Very few places could combine both so dynamically.”

Swift Solar raised its first seed round of funding in 2018 and moved to the Bay Area of California last summer after incubating for a year at the U.S. Department of Energy’s National Renewable Energy Laboratory in Golden, Colorado. The team is now working to develop its manufacturing processes so that it can scale its technology up from the lab to the marketplace. The founders say their first goal is to develop specialized high-performance products for applications that require high efficiency and light weight, such as unmanned aerial vehicles and other mobile applications. “Wherever there is a need for solar energy and lightweight panels that can be deployed in a flexible way, our products will find a good use,” Hoerantner says.
Scaling up will take time, but team members say the high stakes associated with climate change make all the effort worthwhile. “My vision is that we will be able to grow quickly and efficiently to realize our first products within the next two years, and to supply panels for rooftop and utility-scale solar applications in the longer term, helping the world rapidly transform to an electrified, low-carbon future,” Stranks says.

This article appears in the Spring 2020 issue of Energy Futures, the magazine of the MIT Energy Initiative.

Joel Jean PhD ’17, co-founder of Swift Solar, stands in front of the company’s sign at its permanent location in San Carlos, California. Photo courtesy of Joel Jean.

Using 3D fabrication, researchers develop novel nuclear materials that optimize both accident tolerance and performance. Thu, 11 Jun 2020 15:50:01 -0400 Leda Zimmerman | Department of Nuclear Science and Engineering

In 2011 the nuclear energy industry faced one of its greatest challenges. The disabling of three Fukushima Daiichi nuclear reactors in the wake of an earthquake-triggered tsunami sparked a global race for solutions to improve nuclear safety — a race focused on accident-tolerant fuel (ATF) to avert future reactor breakdowns. Researchers in the United States rose to the challenge, among them Koroush Shirvan, assistant professor in the MIT Department of Nuclear Science and Engineering. After a series of studies, conducted in collaboration with the nuclear industry, on ATF concepts that might be deployed in the near term, Shirvan has seized on an innovative nuclear fuel concept that addresses key safety issues while offering potential improvements in reactor performance. “By applying an integrated, system-level multidisciplinary approach, we accelerated R&D and now have a new design and demonstration results after just a few years,” says Shirvan.
He describes this nuclear technology in a paper published in the April 15 issue of Applied Energy — a special edition of the journal devoted to a conference highlighting significant research findings. His coauthors are nuclear science and engineering postdoc and fellow Wei Li, and Joseph Pegna and Shay Harrison, from the high-performance fibers and powders manufacturing firm Free Form Fibers. The team’s research was funded by a series of Department of Energy Small Business Innovation Research grants. Improving containment From the start, Shirvan’s team sought both fuel and cladding alternatives. “In Fukushima, there were hydrogen explosions because of interactions between the conventional zirconium-based fuel cladding (the outer layer of fuel rods) and high-temperature steam produced when the safety system failed and coolant water heated up,” he says. Hydrogen leaked out of the core and detonated in the reactor building itself. “Our goal was to come up with fuels that can last longer during potential heat-up events and reactor cladding materials that won’t generate much combustible hydrogen as quickly as zirconium.” The breakthroughs described in the paper flowed from collaboration with Free Form Fibers, a company co-founded by former Stanford University materials scientist Pegna, a pioneer in additive manufacturing. The firm approached Shirvan several years ago with an idea for using their patented 3D laser printing technique to create a new fuel. “They proposed using chemical vapor deposition (CVD) to create fuel fibers in which protective material coats each fiber, layer by layer,” says Shirvan. “This enables the fuel to serve as its own containment.” The concept was to pack these CVD-fabricated, cylindrically shaped fuel particles into a bundle that fits into a typical fuel rod, and replace the conventional zirconium fuel rod cladding with silicon carbide composite, to slow down hydrogen generation.
This would hypothetically enable “retrofitting” current nuclear plants with new and safer materials. “The U.S. and other nations with nuclear power want to avoid shutting down current reactors, and are looking for replacement fuels and materials that can be certified in a reasonable length of time,” says Shirvan. Worldwide, there are about 450 commercial reactors (nearly 100 in the United States), and the great majority of these are cooled by water. From concept to demonstration While current uranium dioxide fuels are typically cylindrical pellets approximately 1 centimeter in diameter and height, Free Form Fibers devised a ceramic fuel, fabricated through their patented CVD technology, that resembles thin cylindrical fibers. But Free Form Fibers’ basic notion required some significant tweaking, says Shirvan. And he had ideas about specific geometries, dimensions, and fuel material “that would give the concept the highest potential to be feasible.” One of his critical contributions involves a fabrication scheme that seems drawn straight from a medical spa. “Imagine a bald spot on your head, where you want to grow hair,” Shirvan says. “You focus the laser depositing materials on that bald spot, and the cylindrical hair starts growing straight up in that region, and you continue making hairs until you have a bundle covering the entire region.” The base of each “hair” is made out of uranium nitride fuel, which is coated with a soft buffer layer made out of porous carbon, followed by denser carbon, followed by silicon carbide — a material with a very high melting point. Spaces are filled by more silicon carbide, with bundles stacked on top of each other vertically, then placed inside a cladding also made out of silicon carbide or other ATF material. Free Form Fibers personnel were able to demonstrate this 3D laser fabrication technique using uranium at the Materials and Chemistry Laboratory, Inc.
(MCLinc) at Oak Ridge, Tennessee — a rare opportunity in the field to springboard their concept toward manufacturing reality. These uranium and carbon fuel cylinders wrapped in silicon carbide, stuffed into a silicon carbide-clad fuel rod, can theoretically survive temperatures up to 1,800 degrees Celsius, which might enable nuclear reactors to run at higher power levels. There are other robust fuels that reduce the chance of releasing fission products at high temperatures using similar layered fabrication schemes, such as TRISO (TRi-structural ISOtropic) particles. But these fuels are mainly intended for advanced reactors, since they are limited by fuel packing guidelines in conventional water-cooled reactors. And, notes Shirvan, “The 3D technique allows us to pack more fibers into our bundles, meaning higher fuel per volume.” Greater fuel density means more efficient use of fuel. “The group’s research definitely advances the search for accident-tolerant fuel, by adding an additional barrier to fission products while also maintaining high fuel density,” says Nicholas Brown, associate professor in the Department of Nuclear Engineering at the University of Tennessee at Knoxville, who was not involved in this study. “Their core innovation is coming up with a manufacturable fuel concept that adds a layer of defense against the release of radioactive fission products in an accident.” Game-changing possibilities While this silicon carbide-coated fuel and cladding could lead to improvement in both safety and performance for current-generation, water-cooled commercial reactors, Shirvan does not see adoption of their nuclear innovations in this domain in the immediate future. “Nuclear plants are currently under economic hardship, and it’s unlikely that they will invest in these new technologies unless it will result in direct savings,” he says. But this concept remains extremely viable for other applications.
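The packing advantage of cylindrical fibers over spherical particles can be illustrated with idealized geometry. This is a minimal sketch under textbook packing assumptions, not the paper's analysis; real bundles also lose volume to buffer and coating layers, which is why the quoted mass-per-volume gains depend on the full design:

```python
import math

# Ideal packing fractions from geometry (illustrative, not reactor data):
# parallel cylinders in a hexagonal lattice vs. spheres.
hex_cylinders = math.pi / (2 * math.sqrt(3))   # densest cylinder packing
fcc_spheres = math.pi / (3 * math.sqrt(2))     # densest sphere packing (FCC)
random_spheres = 0.64                          # typical random close packing

print(f"hexagonal cylinders: {hex_cylinders:.3f}")   # ~0.907
print(f"FCC spheres:         {fcc_spheres:.3f}")     # ~0.740
print(f"random spheres:      {random_spheres:.2f}")
```

Even before coating thickness is considered, aligned cylinders can ideally fill about 91 percent of a volume, versus roughly 64 to 74 percent for spherical particles such as TRISO — one ingredient in the higher fuel-per-volume figures described here.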
“Based on preliminary results, our fuel is potentially game changing for microreactors, nuclear space propulsion, and other advanced reactors that need robust, accident-tolerant fuels,” says Shirvan. With commercial vendors in these areas already knocking at the door, the research group is gearing up for radiation studies of the fuel in MIT’s Nuclear Reactor Laboratory. “The ATFs that people are developing in the fission community are based on concepts that are at least 30 years old, but our new fuel design is getting to the irradiation testing point incredibly quickly,” says Shirvan. “On a very small budget and very short time scale, we’re able to do this because our team has integrated materials science and nuclear power engineering, which really accelerates the process of achieving technological readiness.” A new accident-tolerant fuel design, capable of providing an additional layer of containment while packing three to four times the fuel mass per volume compared to competing fuel forms with similar safety characteristics. Image: Koroush Shirvan After his PhD thesis invalidates an old assumption, Norman Cao wonders what’s next. Thu, 04 Jun 2020 15:30:01 -0400 Paul Rivenberg | Department of Nuclear Science and Engineering “What are some challenges in controlling plasma and what are your solutions? What is the most effective type of fusion device? What are some difficulties in sustaining fusion conditions? What are some obstacles to receiving fusion funding?” For the past four years, graduate student Norman Cao ’15 PhD ’20 has been the Plasma Science and Fusion Center’s (PSFC’s) go-to “answer man,” replying to questions like these emailed by students and members of the general public interested in getting a deeper understanding of fusion and its potential as a future energy source. Lately, Cao has had questions of his own to answer, and scientific “lore” to debunk, in a PhD thesis that seeks to correct a popularly held and intuitive belief about plasma turbulence.
And as he prepares to start a postdoctoral instructorship at New York University’s Courant Institute of Mathematical Sciences, he wonders, in a world affected by Covid-19, exactly where he will be living and how he will perform his duties as teacher and researcher. Cao arrived at MIT as an undergraduate nine years ago. Originally majoring in aerospace engineering, he changed course as he neared the end of his senior year. Having interned at several companies, he realized his passion was for research, rather than for engineering or applied work. “These aerospace industries had less of a focus on fundamental science,” he explains. “It didn’t interest me to figure out how to get a couple percent better performance on an airplane wing. But I’d always had a passing interest in fusion from reading science fiction, where it is always the energy source used in future worlds,” he continues. “How do you power these spaceships? It’s got to be fusion, of course.” A senior-year course in plasma science taught by Department of Nuclear Science and Engineering (NSE) Professor Anne White helped stir that interest into something more profound. When he was accepted into the NSE graduate program at MIT, he was ready to focus on fusion at the PSFC. There, with the guidance of senior research scientist John Rice, Cao was able to experiment on the Alcator C-Mod tokamak, the center’s signature fusion device until it was shut down in September 2016. The data gathered on this device are the basis for his thesis on plasma turbulence. Tokamaks like Alcator C-Mod use magnetic coils wrapped around a toroidal vacuum chamber to confine hot plasma, with the goal of making it hot and dense enough for fusion to occur. Turbulence in the plasma works against this, transporting intense heat from the center of the tokamak to the cooler edge, frustrating attempts to maintain fusion reactions.
Cao wanted to assess whether or not certain assumptions about the behavior of plasma turbulence actually held up to experiment. The misconception is an easy one to embrace. Since the plasma can be shown to be unstable, it seems logical to link calculations of stability to the observed plasma transport. “In this case,” says Cao, “a common intuition arises that changes in the observed turbulent transport can be traced back to changes in the characteristics of the linear instabilities present in the plasma.” To challenge this belief, Cao performed so-called “rotation reversal hysteresis” experiments. Hysteresis is the dependence of a system’s observable state on its history, such as a magnet “remembering” the previous direction of the applied magnetic field during the process of magnetization, even when the applied magnetic field is removed or reversed. For his experiments, this hysteresis implied that there was a range of densities where the turbulent flows could make the plasma rotate, like a tire spinning around an axle, in either the same direction as the plasma current or the opposite direction. Taking advantage of the coexistence of different states of rotation at the same mean plasma density and temperatures, Cao was able to show a change in turbulence that occurred without a concurrent change in linear instability characteristics. “Part of my research was testing this gut feeling people have and showing that ‘No, it’s only a gut feeling.’ It does not end up matching the reality of what the experiment shows. The second part was trying to fill in the blanks. Now that we know that this assumption is conclusively invalid, we’d like to replace it with something better. What sort of intuition should we replace that previous intuition with?” Cao defended his thesis via Zoom, the first at the PSFC to do so. 
He credits the NSE department for walking him through the process, making it “surprisingly smooth.” Although he’s adjusted to Zoom, he found working remotely on his thesis presentation a challenge. “It feels lonely working on your own and not seeing others. It’s one of those things you don’t realize you miss until you don’t have it. Not being able to pop in and say hi to my advisor. I think not having that same level of community support is something that made it difficult to personally go through the process.” He looks forward to teaching at Courant, though uncertain how virtual his first semester will be. His experience with PSFC outreach, not only answering questions online but in person, giving tours and performing hands-on demonstrations, has fostered his versatility as a teacher. “Long term, for the health of academia, for the health of science, for the health of human progress in general, it’s important to get people involved in the frontiers of science. I personally have a lot of people to thank who supported me through their efforts — like high school teachers, my parents, others I’ve met who have really gone above and beyond in mentoring and tutoring and sparking interest in doing these things. I think it’s important to pay this effort forward and get other people excited, to help them realize it’s OK to be excited about math and science; it’s OK to be passionate about doing something that is difficult, but at the same time fun and rewarding.”  No question. Cao’s research is supported by the U.S. Department of Energy Office of Fusion Energy Sciences. “Long term, for the health of academia, for the health of science, for the health of human progress in general, it’s important to get people involved in the frontiers of science,” says Norman Cao ’15, PhD ’20. Photo: Paul Rivenberg Doctoral candidate Supratim Das wants the world to know how to make longer-lasting batteries that charge mobile phones and electric cars. 
Mon, 01 Jun 2020 15:00:01 -0400 Zain Humayun | School of Engineering Supratim Das’s quest for the perfect battery began in the dark. Growing up in Kolkata, India, Das saw that a ready supply of electric power was a luxury his family didn’t have. “I wanted to do something about it,” Das says. Now a fourth-year PhD candidate in MIT chemical engineering who’s months away from defending his thesis, he’s been investigating what causes the batteries that power the world’s mobile phones and electric cars to deteriorate over time. Lithium-ion batteries, so-named for the movement of lithium ions that make them work, power most rechargeable devices today. The element lithium has properties that allow lithium-ion batteries to be both portable and powerful; the 2019 Nobel Prize in Chemistry was awarded to scientists who helped develop them in the late 1970s. But despite their widespread use, lithium-ion batteries, essentially a black box during operation, harbor mysteries that prevent scientists from unlocking their full potential. Das is determined to demystify them, by first understanding their flaws.  In principle, rechargeable batteries shouldn’t expire. In practice, however, they can only be recharged a finite number of times before they lose their ability to hold a charge. An ordinary battery eventually stops working when the terminals of the battery — called electrodes — are permanently altered by the ions passing from one terminal of the battery to the other. In a rechargeable battery, the electrodes recover when an external charger sends those ions back where they came from.  Lithium-ion batteries work the same way. Typically, one electrode is made of graphite, and the other of lithium compounds with transition metals such as iron, cobalt, or nickel. At the lithium electrode, lithium atoms part ways with their electrons, swim through the battery fluid (electrolyte), and wait at the other electrode. Meanwhile, the electrons take the long way around.
They flow out the battery, through a device that needs the power, and into the second electrode, where they rejoin the lithium ions. When a mobile phone is plugged in to be charged, the ions and electrons retrace their steps, and the battery can be used again. When a battery is charged, however, not all the lithium ions make it back. Every charging cycle leaves ions straggling at the graphite electrode, and the battery loses capacity over time. Das found this perplexing, because it meant that draining a phone’s battery didn’t harm it, but recharging it did. He addressed this conundrum in a couple of open-access academic publications in 2019.  There was also another problem. When a battery is “fast-charged” — a feature that comes with many of the latest electronics — lithium ions start layering (plating) over the carbon electrode, instead of transporting (intercalating) into the material. Prolonged lithium plating can cause uncontrolled growth of fractal-like dendrites. This can cause short-circuiting, even fires.  In his doctoral research, Das and collaborators have been able to understand the microscopic changes that degrade a battery’s electrodes over its lifetime, and develop multiscale physics-based models to predict them in a robust manner at the macro-scale. Such multiscale models can help battery manufacturers substantially reduce battery health diagnostics costs before a battery is incorporated into a device, and make batteries safer for consumers. In his latest project, he’s using that knowledge to investigate the best way of charging a lithium-ion battery without damaging it. Das hopes his contributions help scientists achieve further breakthroughs in battery science and make batteries safer, especially since the latest technology is often closely guarded by private companies. “What our group is trying to do is improve the quality of open access academic literature,” Das says.
“So that when other people are trying to start their research in batteries, they don’t have to start at the theory from five to 10 years ago.” Das is well-placed to walk between the worlds of academia and industry.  As an undergraduate at the Indian Institute of Technology (IIT) Delhi, Das learned that chemical engineers could use equations and experiments to invent technology like drugs and semiconductors. “Just the fact that here I was in college, learning something that gave me the power to potentially impact the lives of N number of people in a positive manner, was utterly fascinating to me,” Das says. He also interned at a consumer goods company, where he realized that academia would allow him more freedom to pursue ambitious ideas. In his sophomore year, Das wrote to a professor at the Hong Kong University of Science and Technology, seeking an opportunity to do research. He flew out that summer, and spent weeks learning about high-power lithium-ion batteries. “It was an eye-opening experience,” Das recalls. He returned to his coursework, but the idea of working on batteries had taken hold. “I never thought that something I can do with my own hands can potentially make impact at the scale that battery technology does,” Das says. He continued working on research projects and made key contributions in the field of multiphase chemical reaction engineering during his undergraduate degree, and eventually wound up applying to the graduate program at MIT. In his second year of graduate work, Das spent a semester as a technical consultant for Shell in Houston, Texas, and Emirates Global Aluminum in Dubai. There, he learned lessons that would prove invaluable in his graduate work. “It taught me problem formulation,” Das says.
“Identifying what is relevant for stakeholders; what to work on so as to best use the team’s skill sets; how to distribute your time.”  After Das’s experience in the field, he discovered that as a scientist he could share valuable knowledge about battery research and the future of the technology with energy economists. He also realized that policymakers considered their own criteria when investing in technology for the future. Das believed that such a perspective would help him inform policy decisions as a scientist, so he decided that after completing his PhD, he would pursue an MBA focusing on energy economics and policy at MIT’s Sloan School of Management. “It will allow me to contribute more to society if I’m able to act as a bridge between someone who understands the hardcore, microscopic physics of a battery, and someone who understands the economic and policy implications of introducing that battery into a vehicle or a grid,” Das says. Das believes that the program, which begins next fall, will allow him to work with other energy experts who bring their own knowledge and skills to the table. He understands the power of collaboration well: at college, Das was elected president of a dorm of 450-plus residents and worked with students and administration to introduce new facilities and events on campus. After arriving in Cambridge, Massachusetts, Das helped other students manage Ashdown House, represented chemical engineering students on the Graduate Student Advisory Board, and served in the leadership team for the MIT Energy Club, spearheading the organization of MIT EnergyHack 2019. He also launched a community service initiative within the Department of Chemical Engineering; once a week, students mentor school children and volunteer at nonprofits in Cambridge. He was able to attract funding for his initiative and received a department award for successfully mobilizing 80-plus students in the community within the span of a year.
“I’m constantly surprised at what we can achieve when we work with other people,” Das says.  After all, other people have helped Das make it this far. “I owe a lot of success to a number of sacrifices my mom made for me, including giving up her own career,” he says. At MIT, he feels fortunate to have met mentors like his advisor, Martin Bazant, and Practice School directors Robert Fisher and Brian Stutts, and the many colleagues who have offered answers to his questions. “Here, I’ve discovered what it means to synergize with really smart people who are really passionate — and really nice at the same time,” Das says. “Grateful is the one word I’d use.” Supratim Das is determined to demystify lithium-ion batteries, by first understanding their flaws. Photo: Lillie Paquette/School of Engineering Three MIT teams to explore novel ways to reduce operations and maintenance costs of advanced nuclear reactors. Wed, 27 May 2020 17:00:01 -0400 Department of Nuclear Science and Engineering Nuclear energy is a low-carbon energy source that is vital to decreasing carbon emissions. A critical factor in its continued viability as a future energy source is finding novel and innovative ways to improve operations and maintenance (O&M) costs in the next generation of advanced reactors. The U.S. Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E) established the Generating Electricity Managed by Intelligent Nuclear Assets (GEMINA) program to do exactly this. Through $27 million in funding, GEMINA is accelerating research, discovery, and development of new digital technologies that would produce effective and sustainable reductions in O&M costs. Three MIT research teams have received ARPA-E GEMINA awards to generate critical data and strategies to reduce O&M costs for the next generation of nuclear power plants to make them more economical, flexible, and efficient.
The MIT teams include researchers from the Department of Nuclear Science and Engineering (NSE), the Department of Civil and Environmental Engineering, and the MIT Nuclear Reactor Laboratory. By leveraging the state of the art in high-fidelity simulations and unique MIT research reactor capabilities, the MIT-led teams will collaborate with leading industry partners with practical O&M and automation experience to support the development of digital twins. Digital twins are virtual replicas of physical systems that are programmed to have the same properties, specifications, and behavioral characteristics as actual systems. The goal is to apply artificial intelligence, advanced control systems, predictive maintenance, and model-based fault detection within the digital twins to inform the design of O&M frameworks for advanced nuclear power plants. In a project focused on developing high-fidelity digital twins for the critical systems in advanced nuclear reactors, NSE professors Emilio Baglietto and Koroush Shirvan will collaborate with researchers from GE Research and GE Hitachi. The GE Hitachi BWRX-300, a small modular reactor designed to provide flexible energy generation, will serve as a reference design. The BWRX-300 is a promising small modular reactor concept that aims to be competitive with natural gas to realize market penetration in the United States. The team will assemble, validate, and exercise high-fidelity digital twins of the BWRX-300 systems. The digital twins address mechanical and thermal fatigue failure modes that drive O&M activities, extending well beyond selected BWRX-300 components to all advanced reactors where a flowing fluid is present. The role of high-fidelity resolution is central to the approach, as it addresses the unique challenges of the nuclear industry.
NSE will leverage the tremendous advancements it has achieved in recent years to accelerate the transition of the nuclear industry toward high-fidelity simulations in the form of computational fluid dynamics. The high spatial and temporal resolution of the simulations, combined with the AI-enabled digital twins, offers the opportunity to deliver predictive maintenance approaches that can greatly reduce the operating cost of nuclear stations. GE Research is an ideal partner, given its extensive experience in developing digital twins and its close links to GE Hitachi and the BWRX-300 design team. This team is particularly well positioned to tackle the regulatory challenges of applying digital twins to safety-grade components through explicit characterization of uncertainties. This three-year MIT-led project is supported by an award of $1,787,065. MIT Principal Research Engineer and Interim Director of the Nuclear Reactor Lab Gordon Kohse will lead a collaboration with MPR Associates to generate critical irradiation data to be used in digital twinning of molten-salt reactors (MSRs). MSRs produce radioactive materials when nuclear fuel is dissolved in a molten salt at high temperature and undergoes fission as it flows through the reactor core. Understanding the behavior of these radioactive materials is important for MSR design and for predicting and reducing O&M costs — a vital step in bringing safe, clean, next-generation nuclear power to market. The MIT-led team will use the MIT nuclear research reactor’s unique capability to provide data to determine how radioactive materials are generated and transported in MSR components. Digital twins of MSRs will require this critical data, which is currently unavailable. The MIT team will monitor radioactivity during and after irradiation of molten salts containing fuel in materials that will be used in MSR construction.
Along with Kohse, the MIT research team includes David Carpenter and Kaichao Sun from the MIT Nuclear Reactor Laboratory, and Charles Forsberg and Professor Mingda Li from NSE. Storm Kauffman and the MPR Associates team bring a wealth of nuclear industry experience to the project and will ensure that the data generated aligns with the needs of reactor developers. This two-year project is supported by an award of $899,825. In addition to these two MIT-led projects, a third MIT team will work closely with the Electric Power Research Institute (EPRI) on a new paradigm for reducing advanced reactor O&M. This is a proof-of-concept study that will explore how to move away from the traditional maintenance and repair approach. The EPRI-led project will examine a “replace and refurbish” model in which components are intentionally designed and tested for shorter and more predictable lifetimes with the potential for game-changing O&M cost savings. This approach is similar to that adopted by the commercial airline industry, in which multiple refurbishments — including engine replacement — can keep a jet aircraft flying economically over many decades. The study will evaluate several advanced reactor designs with respect to cost savings and other important economic benefits, such as increased sustainability for suppliers. The MIT team brings together Jeremy Gregory from the Department of Civil and Environmental Engineering, Lance Snead from the Nuclear Reactor Laboratory, and professors Jacopo Buongiorno and Koroush Shirvan from NSE.  “This collaborative project will take a fresh look at reducing the operation and maintenance cost by allowing nuclear technology to better adapt to the ever-changing energy market conditions. MIT’s role is to identify cost-reducing pathways that would be applicable across a range of promising advanced reactor technologies. 
Particularly, we need to incorporate latest advancements in material science and engineering along with civil structures in our strategies,” says MIT project lead Shirvan. The advances by these three MIT teams, along with the six other awardees in the GEMINA program, will provide a framework for more streamlined O&M costs for next-generation advanced nuclear reactors — a critical factor to being competitive with alternative energy sources. MIT teams in the GEMINA program will provide a framework for more streamlined operations and maintenance costs for next-generation advanced nuclear reactors. Photo: Yakov Ostrovsky Modeling study shows battery reuse systems could be profitable for both electric vehicle companies and grid-scale solar operations. Fri, 22 May 2020 00:00:01 -0400 David L. Chandler | MIT News Office As electric vehicles rapidly grow in popularity worldwide, there will soon be a wave of used batteries whose performance is no longer sufficient for vehicles that need reliable acceleration and range. But a new study shows that these batteries could still have a useful and profitable second life as backup storage for grid-scale solar photovoltaic installations, where they could perform for more than a decade in this less demanding role. The study, published in the journal Applied Energy, was carried out by six current and former MIT researchers, including postdoc Ian Mathews and professor of mechanical engineering Tonio Buonassisi, who is head of the Photovoltaics Research Laboratory. As a test case, the researchers examined in detail a hypothetical grid-scale solar farm in California.
They studied the economics of several scenarios: building a 2.5-megawatt solar farm alone; building the same array along with a new lithium-ion battery storage system; and building it with a battery array made of repurposed EV batteries that had declined to 80 percent of their original capacity, the point at which they would be considered too weak for continued vehicle use. They found that the new battery installation would not provide a reasonable net return on investment, but that a properly managed system of used EV batteries could be a good, profitable investment as long as the batteries cost less than 60 percent of their original price. Not so easy The process might sound straightforward, and it has occasionally been implemented in smaller-scale projects, but expanding that to grid scale is not simple, Mathews explains. “There are many issues on a technical level. How do you screen batteries when you take them out of the car to make sure they’re good enough to reuse? How do you pack together batteries from different cars in a way that you know that they’ll work well together, and you won’t have one battery that’s much poorer than the others and will drag the performance of the system down?” On the economic side, he says, there are also questions: “Are we sure that there’s enough value left in these batteries to justify the cost of taking them from cars, collecting them, checking them over, and repackaging them into a new application?” For the modeled case under California’s local conditions, the answer seems to be a solid yes, the team found. The study used a semiempirical model of battery degradation, trained using measured data, to predict capacity fade in these lithium-ion batteries under different operating conditions, and found that the batteries could achieve maximum lifetimes and value by operating under relatively gentle charging and discharging cycles — never going above 65 percent of full charge or below 15 percent.
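The study's fitted degradation model isn't reproduced here, but its central intuition — that shallow cycling within a narrow charge window slows capacity fade — can be sketched with a toy model. The fade law and every constant below are illustrative assumptions, not the paper's values:

```python
def cycles_to_eol(dod, fade_per_cycle=4e-5, exponent=1.6, eol_capacity=0.70):
    """Count cycles until capacity (fraction of nominal) drops to
    eol_capacity, assuming per-cycle fade grows with depth of
    discharge (DOD) as fade_per_cycle * dod**exponent.
    Illustrative constants only, not fitted values."""
    capacity, cycles = 1.0, 0
    while capacity > eol_capacity:
        capacity -= fade_per_cycle * dod ** exponent
        cycles += 1
    return cycles

# Full-range cycling vs. the gentler 15-65 percent window from the study.
full = cycles_to_eol(dod=1.0)     # 0-100 percent state of charge
gentle = cycles_to_eol(dod=0.50)  # 15-65 percent state of charge
print(full, gentle)
```

Under these made-up parameters the narrow window roughly triples cycle life, though each shallow cycle also delivers only half the energy of a full one — which is why the revenue question requires the study's full optimization rather than a sketch like this.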
This finding challenges some earlier assumptions that running the batteries at maximum capacity initially would provide the most value. “I’ve talked to people who’ve said the best thing to do is just work your battery really hard, and front-load all your revenue,” Mathews says. “When we looked at that, it just didn’t make sense at all.” It was clear from the analysis that maximizing the lifetime of the batteries would provide the best returns.

How long will they last?

One unknown factor is just how long the batteries can continue to operate usefully in this second application. The study made a conservative assumption: that the batteries would be retired from their solar-farm backup service after they had declined to 70 percent of their rated capacity, from their initial 80 percent (the point when they were retired from EV use). But it may well be, Mathews says, that continuing to operate down to 60 percent of capacity or even lower might prove to be safe and worthwhile. Longer-term pilot studies will be required to determine that, he says. Many electric vehicle manufacturers are already beginning to do such pilot studies. “That’s a whole area of research in itself,” he says, “because the typical battery has multiple degradation pathways. Trying to figure out what happens when you move into this more rapid degradation phase, it’s an active area of research.” In part, the degradation is determined by the way the batteries are controlled. “So, you might actually adapt your control algorithms over the lifetime of the project, to just really push that out as far as possible,” he says. This is one direction the team will pursue in their ongoing research, he says.
“We think this could be a great application for machine-learning methods, trying to figure out the kind of intelligent methods and predictive analytics that adjust those control policies over the life of the project.” The actual economics of such a project could vary widely depending on the local regulatory and rate-setting structures, he explains. For example, some local rules allow the cost of storage systems to be included in the overall cost of a new renewable energy supply, for rate-setting purposes, and others do not. The economics of such systems will be very site-specific, but the California case study is intended to be an illustrative U.S. example. “A lot of states are really starting to see the benefit that storage can provide,” Mathews says. “And this just shows that they should have an allowance that somehow incorporates second-life batteries in those regulations. That could be favorable for them.” A recent report from McKinsey Corp. shows that as demand for backup storage for renewable energy projects grows between now and 2030, second-use EV batteries could potentially meet half of that demand, Mathews says. Some EV companies, he says, including Rivian, founded by an MIT alumnus, are already designing their battery packs specifically to make this end-of-life repurposing as easy as possible. Mathews says that “the point that I made in the paper was that technically, economically, … this could work.” For the next step, he says, “There’s a lot of stakeholders who would need to be involved in this: You need to have your EV manufacturer, your lithium-ion battery manufacturer, your solar project developer, the power electronics guys.” The intent, he says, “was to say, ‘Hey, you guys should actually sit down and really look at this, because we think it could really work.’” The study team included postdocs Bolum Xu and Wei He, MBA student Vanessa Barreto, and research scientist Ian Marius Peters.
The work was supported by the European Union’s Horizon 2020 research program, the DOE-NSF Engineering Research Center for Quantum Energy and Sustainable Solar Technologies (QESST), and the Singapore National Research Foundation through the Singapore-MIT Alliance for Research and Technology (SMART).

A new framework for learning from each other. Thu, 21 May 2020 14:50:01 -0400 Nancy W. Stauffer | MIT Energy Initiative

In recent decades, urban populations in China’s cities have grown substantially, and rising incomes have led to a rapid expansion of car ownership. Indeed, China is now the world’s largest market for automobiles. The combination of urbanization and motorization has led to an urgent need for transportation policies to address urban problems such as congestion, air pollution, and greenhouse gas emissions. For the past three years, an MIT team led by Joanna Moody, research program manager of the MIT Energy Initiative’s Mobility Systems Center, and Jinhua Zhao, the Edward H. and Joyce Linde Associate Professor in the Department of Urban Studies and Planning (DUSP) and director of MIT’s JTL Urban Mobility Lab, has been examining transportation policy and policymaking in China. “It’s often assumed that transportation policy in China is dictated by the national government,” says Zhao.
“But we’ve seen that the national government sets targets and then allows individual cities to decide what policies to implement to meet those targets.” Many studies have investigated transportation policymaking in China’s megacities like Beijing and Shanghai, but few have focused on the hundreds of small- and medium-sized cities located throughout the country. So Moody, Zhao, and their team wanted to consider the process in these overlooked cities. In particular, they asked: How do municipal leaders decide what transportation policies to implement, and can they be better enabled to learn from one another’s experiences? The answers to those questions might provide guidance to municipal decision-makers trying to address the different transportation-related challenges faced by their cities. The answers could also help fill a gap in the research literature. The number and diversity of cities across China have made performing a systematic study of urban transportation policy challenging, yet that topic is of increasing importance. In response to local air pollution and traffic congestion, some Chinese cities are now enacting policies to restrict car ownership and use, and those local policies may ultimately determine whether the unprecedented growth in nationwide private vehicle sales will persist in the coming decades.

Policy learning

Transportation policymakers worldwide benefit from a practice called policy learning: Decision-makers in one city look to other cities to see what policies have and haven’t been effective. In China, Beijing and Shanghai are usually viewed as trendsetters in innovative transportation policymaking, and municipal leaders in other Chinese cities turn to those megacities as role models. But is that an effective approach for them? After all, their urban settings and transportation challenges are almost certainly quite different. Wouldn’t it be better if they looked to “peer” cities with which they have more in common?
Moody, Zhao, and their DUSP colleagues — postdoc Shenhao Wang and graduate students Jungwoo Chun and Xuenan Ni, all in the JTL Urban Mobility Lab — hypothesized an alternative framework for policy learning in which cities that share common urbanization and motorization histories would share their policy knowledge. Similar development of city spaces and travel patterns could lead to the same transportation challenges, and therefore to similar needs for transportation policies. To test their hypothesis, the researchers needed to address two questions. To start, they needed to know whether Chinese cities have a limited number of common urbanization and motorization histories. If they grouped the 287 cities in China based on those histories, would they end up with a moderately small number of meaningful groups of peer cities? And second, would the cities in each group have similar transportation policies and priorities?

Grouping the cities

Cities in China are often grouped into three “tiers” based on political administration, or the types of jurisdictional roles the cities play. Tier 1 includes Beijing, Shanghai, and two other cities that have the same political powers as provinces. Tier 2 includes about 20 provincial capitals. The remaining cities — some 260 of them — all fall into Tier 3. These groupings are not necessarily relevant to the cities’ local urban and transportation conditions. Moody, Zhao, and their colleagues instead wanted to sort the 287 cities based on their urbanization and motorization histories. Fortunately, they had relatively easy access to the data they needed. Every year, the Chinese government requires each city to report well-defined statistics on a variety of measures and to make them public.
Among those measures, the researchers chose four indicators of urbanization — gross domestic product per capita, total urban population, urban population density, and road area per capita — and four indicators of motorization — the number of automobiles, taxis, buses, and subway lines per capita. They compiled those data from 2001 to 2014 for each of the 287 cities. The next step was to sort the cities into groups based on those historical datasets — a task they accomplished using a clustering algorithm. For the algorithm to work well, they needed to select parameters that would summarize trends in the time series data for each indicator in each city. They found that they could summarize the 14-year change in each indicator using the mean value and two additional variables: the slope of change over time and the rate at which the slope changes (the acceleration). Based on those data, the clustering algorithm examined different possible numbers of groupings, and four gave the best outcome in terms of the cities’ urbanization and motorization histories. “With four groups, the cities were most similar within each cluster and most different across the clusters,” says Moody. “Adding more groups gave no additional benefit.” The four groups of similar cities are as follows.

Cluster 1: 23 large, dense, wealthy megacities that have urban rail systems and high overall mobility levels over all modes, including buses, taxis, and private cars. This cluster encompasses most of the government’s Tier 1 and Tier 2 cities, while the Tier 3 cities are distributed among Clusters 2, 3, and 4.

Cluster 2: 41 wealthy cities that don’t have urban rail and therefore are more sprawling, have lower population density, and have auto-oriented travel patterns.

Cluster 3: 134 medium-wealth cities that have a low-density urban form and moderate mobility fairly spread across different modes, with limited but emerging car use.
Cluster 4: 89 low-income cities that have generally lower levels of mobility, with some public transit buses but not many roads. Because people usually walk, these cities are compact in both density and development.

City clusters and policy priorities

The researchers’ next task was to determine whether the cities within a given cluster have transportation policy priorities that are similar to each other — and also different from those of cities in the other clusters. With no quantitative data to analyze, the researchers needed to look for such patterns using a different approach. First, they selected 44 cities at random (with the stipulation that at least 10 percent of the cities in each cluster had to be represented). They then downloaded the 2017 mayoral report from each of the 44 cities. Those reports highlight the main policy initiatives and directions of the city in the past year, so they include all types of policymaking. To identify the transportation-oriented sections of the reports, the researchers performed keyword searches on terms such as transportation, road, car, bus, and public transit. They extracted any sections highlighting transportation initiatives and manually labeled each of the text segments with one of 21 policy types. They then created a spreadsheet organizing the cities into the four clusters. Finally, they examined the outcome to see whether there were clear patterns within and across clusters in terms of the types of policies they prioritize. “We found strikingly clear patterns in the types of transportation policies adopted within city clusters and clear differences across clusters,” says Moody. “That reinforced our hypothesis that different motorization and urbanization trajectories would be reflected in very different policy priorities.” Here are some highlights of the policy priorities within the clusters. The cities in Cluster 1 have urban rail systems and are starting to consider policies around them.
For example, how can they better connect their rail systems with other transportation modes — for instance, by taking steps to integrate them with buses or with walking infrastructure? How can they plan their land use and urban development to be more transit-oriented, such as by providing mixed-use development around the existing rail network? Cluster 2 cities are building urban rail systems, but they’re generally not yet thinking about other policies that can come with rail development. They could learn from Cluster 1 cities about other factors to take into account at the outset. For example, they could develop their urban rail with issues of multi-modality and of transit-oriented development in mind. In Cluster 3 cities, policies tend to emphasize electrifying buses and providing improved and expanded bus service. In these cities with no rail networks, the focus is on making buses work better. Cluster 4 cities are still focused on road development, even within their urban areas. Policy priorities often emphasize connecting the urban core to rural areas and to adjacent cities — steps that will give their populations access to the region as a whole, expanding the opportunities available to them.

Benefits of a “mixed-method” approach

Results of the researchers’ analysis thus support their initial hypothesis. “Different urbanization and motorization trends that we captured in the clustering analysis are reflective of very different transportation priorities,” says Moody. “That match means we can use this approach for further policymaking analysis.” At the outset, she viewed their study as a “proof of concept” for performing transportation policy studies using a mixed-method approach. Mixed-method research involves a blending of quantitative and qualitative approaches. In their case, the former was the mathematical analysis of time series data, and the latter was the in-depth review of city government reports to identify transportation policy priorities.
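The quantitative half of that mixed-method pipeline, compressing each indicator’s 14-year series into a mean, slope, and acceleration and then clustering the cities, can be sketched as follows. The feature construction mirrors the article’s description; the plain k-means step and all of the city data are illustrative assumptions, since the article does not name the clustering algorithm:

```python
def _slope(y):
    """Least-squares slope of y against time index 0..n-1."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    num = sum((t - t_mean) * (v - y_mean) for t, v in enumerate(y))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

def summarize(y):
    """Compress one indicator's yearly series into (mean, slope, acceleration)."""
    diffs = [b - a for a, b in zip(y, y[1:])]
    return (sum(y) / len(y), _slope(y), _slope(diffs))

def kmeans(points, k, iters=100):
    """Minimal k-means; returns one cluster label per point."""
    # Spread the initial centers across the point list (deterministic).
    centers = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return labels

# Synthetic "GDP per capita" histories for six cities: three booming, three stagnant.
booming = [[10 + 2 * t + 0.3 * t * t + 0.1 * s for t in range(14)] for s in range(3)]
stagnant = [[10 + 0.1 * t + 0.1 * s for t in range(14)] for s in range(3)]
features = [summarize(y) for y in booming + stagnant]
labels = kmeans(features, k=2)
print(labels)  # the two growth profiles land in separate clusters
```

In practice the three features would be standardized before clustering (otherwise the mean dominates the distance), and the number of clusters would be chosen by comparing within-cluster versus across-cluster similarity, which is how the article describes arriving at four groups.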
“Mixed-method research is a growing area of interest, and it’s a powerful and valuable tool,” says Moody. She did, however, find the experience of combining the quantitative and qualitative work challenging. “There weren’t many examples of people doing something similar, and that meant that we had to make sure that our quantitative work was defensible, that our qualitative work was defensible, and that the combination of them was defensible and meaningful,” she says. The results of their work confirm that their novel analytical framework could be used in other large, rapidly developing countries with heterogeneous urban areas. “It’s probable that if you were to do this type of analysis for cities in, say, India, you might get a different number of city types, and those city types could be very different from what we got in China,” says Moody. Regardless of the setting, the capabilities provided by this kind of mixed-method framework should prove increasingly important as more and more cities around the world begin innovating and learning from one another how to shape sustainable urban transportation systems. This research was supported by the MIT Energy Initiative’s Mobility of the Future study. Information about the study, its participants and supporters, and its publications is available online.

Using a novel methodology, MITEI researcher Joanna Moody and Associate Professor Jinhua Zhao uncovered patterns in the development trends and transportation policies of China’s 287 cities — including Fengcheng, shown here — that may help decision-makers learn from one another. Photo: blake.thornberry/Flickr

Graduate student Erica Salazar tackles a magnetic engineering challenge.
Thu, 21 May 2020 14:35:01 -0400 Peter Dunn | Department of Nuclear Science and Engineering The promise of fusion energy has grown substantially in recent years, in large part because of novel high-temperature superconducting (HTS) materials that can shrink the size and boost the performance of the extremely powerful magnets needed in fusion reactors. Realizing that potential is a complex engineering challenge, which nuclear science and engineering student Erica Salazar is taking up in her doctoral studies. Salazar works at MIT’s Plasma Science and Fusion Center (PSFC) on the SPARC project, an ambitious fast-track program being conducted in collaboration with MIT spinout Commonwealth Fusion Systems (CFS). The goal is development of a fusion energy experiment to demonstrate net energy gain at unprecedentedly small size and to validate the new magnet technology in a high-field fusion device. Success would be a major accomplishment in the effort to make safe, carbon-free fusion power ready for the world’s electrical grid by the 2030s, as part of the broader push to control climate change. A fundamental challenge is that fusion of nuclei takes place only at extreme temperatures, like those found in the cores of stars. No physical vessel can contain such conditions, so one approach to harnessing fusion involves creating a “bottle” of magnetic fields within a reactor chamber. To succeed, this magnetic-confinement approach must be capable of containing and controlling a super-heated plasma for extended periods, and that in turn requires steady, stable, predictable operation from the magnets involved, even as they deliver unprecedented levels of performance. In pursuit of that goal, Salazar is drawing on knowledge gained during a five-year stint at General Atomics, where she worked on magnet manufacturing for the ITER international fusion reactor project. 
Like SPARC, ITER uses a magnetic-confinement approach, and Salazar commissioned and managed the reaction heat treatment process for ITER’s 120-ton superconducting modules and helped design and operate a cryogenic full-current test station. “That experience is very helpful,” she notes. “Even though the ITER magnets utilize low-temperature superconductors and SPARC is using HTS, there are a lot of similarities in manufacturing, and it gives a sense of which questions to ask. It’s a situation where you know enough to understand what you don’t know, and that’s really exciting. It definitely gives me motivation to work hard, go deep, and expand my efforts.” A central focus of Salazar’s work is a phenomenon called quench. It’s a common abnormality that occurs when part of a magnet’s coil shifts out of a superconducting state, where it has almost no electrical resistance, and into a normal resistive state. The resistance causes the massive current flowing through the coil, and the energy stored in the magnet, to quickly convert to heat in the affected region. That can result in the entire magnet dropping out of its superconducting state and also cause significant physical damage. Many factors can cause quench, and it is seen as unavoidable, so real-time management is essential in a practical fusion reactor. “My PhD thesis work is on understanding quench dynamics, especially in new HTS magnet designs,” explains Salazar, who is advised by Department of Nuclear Science and Engineering Professor Zach Hartwig and started engaging with the CFS team before the company’s 2018 formation. “Those new materials are so good, and they have more temperature margin, but that makes it harder to detect when there’s a localized loss of superconductivity — so it’s a good position for me as a grad student. “I hope to answer questions like, what does the quench look like? How does it propagate, and how fast? How large of a disturbance will cause a thermal runaway?
With more knowledge of what a quench looks like, I can then use that information to help design novel quench-detection systems.” Addressing this type of issue is part of the SPARC program’s strategic transition away from “big plasma physics problems,” says Salazar, and toward a greater focus on the engineering challenges involved in practical implementation. While there is more to be learned from a scientific perspective, a broad consensus has emerged in the U.S. fusion community that construction of a pilot fusion power plant should be a national priority. To this end, the SPARC program takes a systemic approach to ensure broad coordination. As Salazar notes, “to devise an effective detection system, you need to be aware of the implications within the overall systems engineering approach of the project. I really like the way the project teams are designed to be fluid. Everyone knows who’s working on what, and you can sit in on meetings if you want to. We all have a limited amount of time, but the resources are there.” Salazar has helped the process by starting a popular email list that bridges the CFS and MIT social worlds, linking people who would not otherwise be connected and creating opportunities for off-hours activities together. “Working is easy; sometimes the hard part is making sure you have time for personal stuff,” she observes. She’s also active in developing and encouraging a more-inclusive MIT community culture, via involvement with a women’s group at PSFC and the launch of an Institute-wide organization, Hermanas Unidas, for Latina-identifying women students, staff, faculty, and postdocs. “It’s important to find a community with others that share or value similar cultural backgrounds. But it’s also important to see how those with similar backgrounds have done amazing things professionally or academically. 
Hermanas Unidas is a great community of people from all walks of life at MIT who provide mutual support and encouragement as we navigate our careers at MIT and beyond,” explains Salazar. “It’s wonderful to learn from other Latina faculty and staff at MIT — about the hardships they faced when they were in my position as a student or how, as staff members, they work to support students and connect us with other great initiatives. On the flip side, I can share with undergraduates my work experience and my decision to go to graduate school.” Looking ahead, Salazar is encouraged by the growing momentum toward fusion energy. “I had the opportunity to go to the Congressional Fusion Day event in 2016, talk to House and Senate representatives about what fusion does for the economy and technologies, and meet researchers from outside of the ITER program,” she recalls. “I hadn’t realized how big and expansive the fusion community is, and it was interesting to hear how much was going on, and exciting to know that there’s private-sector interest in investing in fusion.” And because fusion energy has such game-changing potential for the world’s electrical grid, says Salazar, “it’s cool to talk to people about it and present it in a way that shows how it will impact them. Throughout my life, I’ve always enjoyed going deep and expending my efforts, and this is such a great area for that. There’s always something new, it’s very interdisciplinary, and it benefits society.”

After delivering novel computational methods for nuclear problems, nuclear science and engineering PhD candidate Pablo Ducru plunges into startup life.
Wed, 20 May 2020 00:00:01 -0400 Leda Zimmerman | Department of Nuclear Science and Engineering Like the atomic particles he studies, Pablo Ducru seems constantly on the move, vibrating with energy. But if he sometimes appears to be headed in an unexpected direction, Ducru, a doctoral candidate in nuclear science and computational engineering, knows exactly where he is going: “My goal is to address climate change as an innovator and creator, whether by pushing the boundaries of science” through research, says Ducru, or pursuing a zero-carbon future as an entrepreneur. It can be hard catching up with Ducru. In January, he returned to Cambridge, Massachusetts, from Beijing, where he was spending a year earning a master’s degree in global affairs as a Schwarzman Scholar at Tsinghua University. He flew out just days before a travel crackdown in response to Covid-19. “This year has been intense, juggling my PhD work and the master’s overseas,” he says. “But I needed to do it, to get a 360-degree understanding of the problem of climate change, which isn’t just a technological problem, but also one involving economics, trade, policy, and finance.” Schwarzman Scholars, an international cohort selected on the basis of academic excellence and leadership potential, among other criteria, focus on critical challenges of the 21st century. While all the students must learn the basics of international relations and China’s role in the world economy, they can tailor their studies according to their interests. Ducru is incorporating nuclear science into his master’s program. “It is at the core of many of the world’s key problems, from climate change to arms controls, and it also impacts artificial intelligence by advancing high-performance computing,” he says. A Franco-Mexican raised in Paris, Ducru arrived at nuclear science by way of France’s selective academic system. He excelled in math, history, and English during his high school years. 
“I realized technology is what drives history,” he says. “I thought that if I wanted to make history, I needed to make technology.” He graduated from Ecole Polytechnique specializing in physics and applied mathematics, with a major in energies of the 21st century.

Creating computational shortcuts

Today, as a member of MIT’s Computational Reactor Physics Group (CRPG), Ducru is deploying his expertise in singular ways to help solve some of the toughest problems in nuclear science. Nuclear engineers, hoping to optimize efficiency and safety in current and next-generation reactor designs, are on a quest for high-fidelity nuclear simulations. At such fine-grained levels of modeling, the behavior of subatomic particles is sensitive to minute uncertainties in temperature change, or differences in reactor core geometry, for instance. To quantify such uncertainties, researchers currently need countless costly hours of supercomputer time to simulate the behaviors of billions of neutrons under varying conditions, estimating and then averaging outcomes. “But with some problems, more computing won’t make a difference,” notes Ducru. “We have to help computers do the work in smarter ways.” To accomplish this task, he has developed new formulations for characterizing basic nuclear physics that make it much easier for a computer to solve problems: “I dig into the fundamental properties of physics to give nuclear engineers new mathematical algorithms that outperform the old ways of computing thousands of times over.” With his novel statistical methods and algorithms, developed with CRPG colleagues and during summer stints at Los Alamos and Oak Ridge National Laboratories, Ducru offers “new ways of looking at problems that allow us to infer trends from uncertain inputs, such as physics, geometries, or temperatures,” he says.
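The brute-force approach Ducru’s algorithms aim to displace, rerunning a particle simulation for many samples of the uncertain inputs and averaging the outcomes, can be illustrated with a toy Monte Carlo sketch (the one-dimensional absorbing slab and every number here are invented for illustration; this is not CRPG’s method):

```python
import math
import random

rng = random.Random(42)

def transmission(sigma_t, thickness, n_neutrons=2000):
    """Fraction of neutrons crossing a purely absorbing 1D slab.

    Each neutron travels an exponentially distributed distance with
    mean free path 1/sigma_t before being absorbed.
    """
    crossed = 0
    for _ in range(n_neutrons):
        path = -math.log(rng.random()) / sigma_t
        if path > thickness:
            crossed += 1
    return crossed / n_neutrons

# Input uncertainty: suppose the cross-section is only known to ~5 percent.
samples = []
for _ in range(200):                  # outer loop over uncertain inputs
    sigma = rng.gauss(1.0, 0.05)      # cm^-1, illustrative value
    samples.append(transmission(sigma, thickness=2.0))

mean = sum(samples) / len(samples)
spread = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(f"transmission ~ {mean:.3f} +/- {spread:.3f}")
```

Every perturbation of the input requires re-simulating thousands of particle histories, which is why formulations that propagate the uncertainty analytically, rather than through the outer sampling loop, can cut the cost by orders of magnitude.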
These innovative tools accommodate other kinds of problems that involve computing average behaviors from billions of individual occurrences, such as bubbles forming in a turbulent flow of reactor coolant. “My solutions are quite fundamental and problem-agnostic — applicable to the design of new reactors, to nuclear imaging systems for tumor detection, or to the plutonium battery of a Mars rover,” he says. “They will be useful anywhere scientists need to lower costs of high-fidelity nuclear simulations.” But Ducru won’t be among the scientists deploying these computational advances. “I think we’ve done a good job, and others will continue in this area of research,” he says. “After six years of delving deep into quantum physics and statistics, I felt my next step should be a startup.”

Scaling up with shrimp

As he pivots away from academia and nuclear science, Ducru remains constant to his mission of addressing the climate problem. The result is Torana, a company Ducru and a partner started in 2018 to develop the financial products and services aquaculture needs to sustainably feed the world. “I thought we could develop a scalable zero-carbon food,” he says. “The world needs high-nutrition proteins to feed growing populations in a climate-friendly way, especially in developing nations.” Land-based protein sources such as livestock can take a heavy toll on the environment. Shrimp, on the other hand, are “very efficient machines, scavenging crud at the bottom of the ocean and converting it into high-quality protein,” notes Ducru, who received the 2018 MIT Water Innovation Prize and the 2019 Rabobank-MIT Food and Agribusiness Prize, and support from MIT Sandbox to help develop his aquaculture startup (then called Velaron). Torana is still in early stages, and Ducru hopes to apply his modeling expertise to build a global system of sustainable shrimp farming.
His Schwarzman master’s thesis examines the role of aquaculture in our future global food system, with a focus on the shrimp supply chain. In response to the Covid-19 pandemic, Ducru relocated to the family farm in southern France, which he helps run while continuing to follow the Tsinghua master’s program online and work on his MIT PhD. He is tweaking his business plans and putting the final touches on his PhD research, including submitting several articles for publication. While it’s been challenging keeping all these balls in the air, he has supportive mentors — “Benoit Forget [CRPG director] has backed almost all my crazy ideas,” says Ducru. “People like him make MIT the best university on Earth.” Ducru is already mapping out his next decade or so: grow his startup, and perhaps create a green fund that could underwrite zero-carbon projects, including nuclear ones. “I don’t have Facebook and don’t watch online series or TV, because I prefer being an actor, creating things through my work,” he says. “I’m a scientific entrepreneur, and will continue to innovate across different realms.”

Abigail Ostriker ’16 and Addison Stark SM ’10, PhD ’15 share how their experiences with MIT’s energy programs connect them to the global energy community. Mon, 18 May 2020 14:20:01 -0400 Turner Jackson | MIT Energy Initiative

Students who engage in energy studies at MIT develop an integrative understanding of energy as well as skills required of tomorrow’s energy professionals, leaders, and innovators in research, industry, policy, management, and governance. Two energy alumni recently shared their experiences as part of MIT’s energy community, and how their work connects to energy today.
Abigail Ostriker ’16, who majored in applied mathematics, is now pursuing a PhD in economics at MIT, where she is conducting research into whether subsidized flood insurance causes overdevelopment. Prior to her graduate studies, she conducted two years of research into health economics with Amy Finkelstein, the John and Jennie S. MacDonald Professor of Economics at MIT. Addison Stark SM ’10, PhD ’15, whose degrees are in mechanical engineering and technology and policy, is the associate director for energy innovation at the Bipartisan Policy Center in Washington, which focuses on implementing effective policy on important topics for American citizens. He also serves as an adjunct professor at Georgetown University, where he teaches a course on clean energy innovation. Prior to these roles, he was a fellow and acting program director at the U.S. Department of Energy’s Advanced Research Projects Agency-Energy.

Q: What experiences did you have that inspired you to pursue energy studies?

Stark: I grew up on a farm in rural Iowa, surrounded by a growing biofuels industry and bearing witness to the potential impacts of climate change on agriculture. I then went to the University of Iowa as an undergrad. While there, I was lucky enough to serve as one of the student representatives on a committee that put together a large decarbonization plan for the university. I recognized at the time that the university not only needed to put together a policy, but also to think about what technologies they had to procure to implement their goals. That experience increased my awareness of the big challenges surrounding climate change.
I was fortunate to have attended the University of Iowa because a large percentage of the students had an environmental outlook, and many faculty members were involved with the Intergovernmental Panel on Climate Change (IPCC) and engaged with climate and sustainability issues at a time when many other science and engineering schools hadn’t to the same degree. Q: How did your time at MIT inform your eventual work in the energy space? Ostriker: I took my first economics class in my freshman fall, but I didn’t really understand what economics could do until I took Energy Economics and Policy [14.44J/15.037] with Professor Christopher Knittel at the Sloan School the following year. That class turned the field from a collection of unrealistic maximizing equations into a framework that could make sense of real people’s decisions and predict how incentives affect outcomes. That experience led me to take a class on econometrics. The combination made me feel like economics was a powerful set of tools for understanding the world — and maybe tweaking it to get a slightly better outcome. Stark: Completing my master’s in the Technology and Policy Program (TPP) and in mechanical engineering at MIT was invaluable. The focus on systems thinking that was being employed in TPP and at the MIT Energy Initiative (MITEI) has been very important in shaping my thinking around the biggest challenges in climate and energy. While pursuing my master’s degree, I worked with Daniel Cohn, a research scientist at MITEI, and Ahmed Ghoniem, a professor of mechanical engineering, who later became my PhD advisor. We looked at a lot of big questions about how to integrate advanced biofuels into today’s transportation and distribution infrastructures: Can you ship it in a pipeline? Can you transport it? Are people able to put it into infrastructure that we’ve already spent billions of dollars building out? 
One of the critical lessons that I learned while at MITEI — and it’s led to a lot of my thinking today — is that in order for us to have an effective energy transition, there need to be ways that we can utilize current infrastructure. Being involved with and becoming a co-president of the MIT Energy Club in 2010 truly helped to shape my experience at MIT. When I came to MIT, one of the first things that I did was attend the MIT Energy Conference. In the early days of the club and of MITEI — in ’07 — there was a certain “energy” around energy at MIT that really got a lot of us thinking about careers in the field. Q: How does your current research connect to energy, and in what ways do the fields of economics and energy connect? Ostriker: Along with my classmate Anna Russo, I am currently studying whether subsidized flood insurance causes over-development. In the U.S., many flood maps are out of date and backward-looking: Flood risk is rising due to climate change, so in many locations, insurance premiums now cost less than expected damages. This creates an implicit subsidy for risky areas that distorts price signals and may cause a high number of homes to be built. We want to estimate the size of the subsidies and the effect they have on development. It’s a challenging question because it’s hard to find a way to compare areas that seem exactly the same except for their insurance premiums. We are hoping to get there by looking at boundaries in the flood insurance maps — areas where true flood risk is the same but premiums are different. We hope that by improving our understanding of how insurance prices affect land use, we can help governments to create more efficient policies for climate resilience. Many economists are studying issues related to both energy and the environment. One definition of economics is the study of trade-offs — how to best allocate scarce resources. 
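The boundary design Ostriker describes — comparing areas where true flood risk is the same but premiums differ — can be sketched as a toy calculation. Everything below (the parcel data, the `boundary_comparison` helper, the bandwidth) is hypothetical and only illustrates the comparison logic, not the study's actual estimator or data.

```python
# Toy illustration of comparing development rates across a flood-map
# boundary. Parcels within `bandwidth` km of the line face roughly the
# same true flood risk, so a gap in development rates can be attributed
# to the difference in insurance premiums. All numbers are made up.

def boundary_comparison(parcels, bandwidth=0.5):
    """Each parcel is (distance_to_boundary_km, in_mapped_zone, developed).
    Returns the development-rate gap: inside-zone minus outside-zone."""
    inside = [p for p in parcels if p[1] and p[0] <= bandwidth]
    outside = [p for p in parcels if not p[1] and p[0] <= bandwidth]
    rate = lambda group: sum(p[2] for p in group) / len(group)
    return rate(inside) - rate(outside)

# Hypothetical sample: (distance, in_mapped_zone, developed)
parcels = [
    (0.1, True, 1), (0.2, True, 1), (0.4, True, 0), (0.3, True, 1),
    (0.1, False, 0), (0.2, False, 1), (0.4, False, 0), (0.3, False, 0),
]
gap = boundary_comparison(parcels)  # inside rate 0.75 minus outside rate 0.25
```

In practice a comparison like this sits inside a regression with controls, but the core idea is the one described above: the two groups differ only in the premiums they face.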
In energy, there are questions such as: How should we design electricity markets so that they automatically meet demand with the lowest-cost mix of generation? As the generation mix moves from almost all fossil fuels to a higher penetration of renewables, will that market design still work, or will it need to be adapted so that renewable energy companies still find it attractive to participate? In addition to theoretical questions about how markets work, economists also study the way real people or companies respond to policies. For example, if retail electricity prices started to change by the hour or by the minute, how would people’s energy use respond to that? To answer this question convincingly, you need to find a situation in which everything is almost identical between two groups, except that one group faces different prices. You can’t always do a randomized experiment, so you must find something almost like an experiment in the real world. This kind of toolkit is also used a lot in environmental economics. For instance, we might study the effect of pollution on students’ test scores. In that setting, economists’ tools of causal inference make it possible to move beyond an observed correlation to a statement that pollution had a causal effect. Q: How do you think we can make the shift toward a clean energy-based economy a more pressing issue for people across the political spectrum? Stark: If we are serious about addressing climate change as a country, we need to recognize that any policy has to be bipartisan; it will need to hit 60 votes in the Senate. Very quickly — within the next few years — we need to develop a set of robust bipartisan policies that can move us toward decarbonization by mid-century. If the IPCC recommendations are to be followed, our ultimate goal is to hit net-zero carbon emissions by 2050. What that means to me is that we need to frame up all of the benefits of a large clean energy program to address climate change. 
When we address climate change, one of the valuable things that’s going to happen is major investment in technology deployment and development, which involves creating jobs — which is a bipartisan issue. As we are looking to build out a decarbonized future, one thing that needs to happen is reinvesting in our national infrastructure, which is an issue that is recognized in a bipartisan sense. It’s going to require more nuance than just the pure Green New Deal approach. In order to get Republicans on board, we need to realize that investment can’t be based only on renewables. There are a lot of people whose economies depend on the continued and smart use of fossil resources. We have to think about how we develop and deploy carbon capture technologies, as these technologies are going to be integral in garnering more support from rural and conservative communities for the energy transition. The Republican Party is embracing the role of nuclear energy more than some Democrats are. The key thing is that today, nuclear is far and away the most prevalent source of zero-carbon electricity that we have. So, expanding nuclear power is a critically important piece of decarbonizing energy, and Republicans have identified that as a place where they would like to invest along with carbon capture, utilization, and storage — another technology with less enthusiasm on the environmental left. Finding ways to bridge party lines on these critical technologies is one of the biggest pieces that I think will be important in bringing about a low-carbon future. Addison Stark (left) and Abigail Ostriker Stark photo: Greg Gibson/Bipartisan Policy Center; Ostriker photo: Thomas Dattilo Chemical engineers take a step toward generating ammonia with small-scale, electrochemical reactors. 
Mon, 04 May 2020 10:59:59 -0400 Anne Trafton | MIT News Office Most of the world’s fertilizer is produced in large manufacturing plants, which require huge amounts of energy to generate the high temperatures and pressures needed to combine nitrogen and hydrogen into ammonia. MIT chemical engineers are working to develop a smaller-scale alternative, which they envision could be used to locally produce fertilizer for farmers in remote, rural areas, such as sub-Saharan Africa. Fertilizer is often hard to obtain in such areas because of the cost of transporting it from large manufacturing facilities. In a step toward that kind of small-scale production, the research team has devised a way to combine hydrogen and nitrogen using electric current to generate a lithium catalyst, where the reaction takes place. “In the future, if we envision how we want this to be used someday, we want a device that can breathe in air, take in water, have a solar panel hooked up to it, and be able to produce ammonia. This could be used by a farmer or a small community of farmers,” says Karthish Manthiram, an assistant professor of chemical engineering at MIT and the senior author of the study. Graduate student Nikifar Lazouski is the lead author of the paper, which appears today in Nature Catalysis. Other authors include graduate students Minju Chung and Kindle Williams, and undergraduate Michal Gala.

Smaller scale

For more than 100 years, fertilizer has been manufactured using the Haber-Bosch process, which combines atmospheric nitrogen with hydrogen gas to form ammonia. The hydrogen gas used for this process is usually obtained from methane derived from natural gas or other fossil fuels. 
Nitrogen is very unreactive, so high temperatures (500 degrees Celsius) and pressures (200 atmospheres) are required to make it react with hydrogen to form ammonia. Using this process, manufacturing plants can produce thousands of tons of ammonia per day, but they are expensive to run and they emit a great deal of carbon dioxide. Among all chemicals produced in large volume, ammonia is the largest contributor to greenhouse gas emissions. The MIT team set out to develop an alternative manufacturing method that could reduce those emissions, with the added benefit of decentralized production. In many parts of the world, there is little infrastructure for distributing fertilizer, making it expensive to obtain fertilizer in those regions. “The ideal characteristic of a next-generation method of making ammonia would be that it’s distributed. In other words, you could make that ammonia close to where you need it,” Manthiram says. “And ideally, it would also eliminate the CO2 footprint that otherwise exists.” While the Haber-Bosch process uses extreme heat and pressure to force nitrogen and hydrogen to react, the MIT team decided to try using electricity to achieve the same effect. Previous research has shown that applying electrical voltage can shift the equilibrium of the reaction so that it favors the formation of ammonia. However, it has been difficult to do this in an inexpensive and sustainable way, the researchers say. Most previous efforts to perform this reaction under normal temperatures and pressures have used a lithium catalyst to break the strong triple bond found in nitrogen gas molecules. The resulting product, lithium nitride, can then react with hydrogen atoms from an organic solvent to produce ammonia. However, the solvent typically used, tetrahydrofuran, or THF, is expensive and is consumed by the reaction, so it needs to be continually replaced. The MIT team came up with a way to use hydrogen gas instead of THF as the source of hydrogen atoms. 
They designed a mesh-like electrode that allows nitrogen gas to diffuse through it and interact with hydrogen, which is dissolved in ethanol, at the electrode surface. This stainless steel mesh structure is coated with the lithium catalyst, produced by plating out lithium ions from solution. Nitrogen gas diffuses throughout the mesh and is converted to ammonia through a series of reaction steps mediated by lithium. This setup allows hydrogen and nitrogen to react at relatively high rates, despite the fact that they are usually not very soluble in any liquids, which makes it more challenging to react them at high rates. “This stainless steel cloth is a way of very effectively contacting nitrogen gas with our catalyst, while also having the electrical and ionic connections that are needed,” Lazouski says.

Splitting water

In most of their ammonia-producing experiments, the researchers used nitrogen and hydrogen gases flowing in from a gas cylinder. However, they also showed that they could use water as a source of hydrogen, by first electrolyzing the water and then flowing that hydrogen into their electrochemical reactor. The overall system is small enough to sit on a lab benchtop, but it could be scaled up to produce larger quantities of ammonia by connecting many modules together, Lazouski says. Another key challenge will be to improve the energy efficiency of the reaction, which now is only about 2 percent, compared to 50 to 80 percent for the Haber-Bosch reaction. “We have an overall reaction that finally looks favorable, which is a big step forward,” he says. “But we know that there’s still an energy loss problem that needs to be solved. That will be one of the major things that we want to address in future work that we’ll undertake.” In addition to serving as a production method for small batches of fertilizer, this approach could also lend itself to energy storage, Manthiram says. 
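The roughly 2 percent energy efficiency quoted above can be reproduced with a back-of-envelope calculation: chemical energy stored per mole of ammonia divided by electrical energy spent per mole. The cell voltage and faradaic efficiency below are illustrative assumptions chosen to land near that figure; they are not values reported in the paper.

```python
# Back-of-envelope energy-efficiency estimate for electrochemical
# ammonia synthesis. The specific voltage and faradaic efficiency are
# hypothetical inputs for illustration only.

F = 96485.0          # Faraday constant, C per mole of electrons
LHV_NH3 = 317e3      # lower heating value of ammonia, J/mol (approx.)
N_ELECTRONS = 3      # electrons transferred per NH3 molecule

def energy_efficiency(cell_voltage, faradaic_efficiency):
    """Chemical energy stored per mole of NH3 divided by the electrical
    energy spent per mole of NH3 (wasted charge raises the denominator)."""
    electrical_in = N_ELECTRONS * F * cell_voltage / faradaic_efficiency
    return LHV_NH3 / electrical_in

# An assumed ~19 V cell voltage at 35% faradaic efficiency lands near
# the ~2 percent overall figure quoted in the article.
eff = energy_efficiency(cell_voltage=19.0, faradaic_efficiency=0.35)
```

Raising either number attacks the energy-loss problem directly, and the energy cost matters just as much for the energy-storage use of ammonia that Manthiram describes.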
This idea, which is now being pursued by some scientists, calls for using electricity produced by wind or solar energy to power ammonia generation. The ammonia could then serve as a liquid fuel that would be relatively easy to store and transport. “Ammonia is such a critical molecule that can wear many different hats, and this same method of ammonia production could be used in very diverse applications,” Manthiram says. The research was funded by the National Science Foundation and the MIT Energy Initiative Seed Fund. Prior research, which was foundational for the present work, was supported by MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). In a step toward small-scale production of ammonia fertilizer, MIT chemical engineers have devised a way to combine hydrogen and nitrogen using an electric current that generates a lithium catalyst. Image: Stock photo Textbook formulas for describing heat flow characteristics, crucial in many industries, are oversimplified, study shows. Tue, 28 Apr 2020 00:44:33 -0400 David L. Chandler | MIT News Office Whether it’s water flowing across a condenser plate in an industrial plant, or air whooshing through heating and cooling ducts, the flow of fluid across flat surfaces is a phenomenon at the heart of many of the processes of modern life. Yet, aspects of this process have been poorly understood, and some have been taught incorrectly to generations of engineering students, a new analysis shows. The study examined several decades of published research and analysis on fluid flows. It found that, while most undergraduate textbooks and classroom instruction in heat transfer describe such flow as having two different zones separated by an abrupt transition, in fact there are three distinct zones. A lengthy transitional zone is just as significant as the first and final zones, the researchers say. The discrepancy has to do with the shift between two different ways that fluids can flow. 
When water or air starts to flow along a flat, solid sheet, a thin boundary layer forms. Within this layer, the part closest to the surface barely moves at all because of friction, the part just above that flows a little faster, and so on, until a point where it is moving at the full speed of the original flow. This steady, gradual increase in speed across a thin boundary layer is called laminar flow. But further downstream, the flow changes, breaking up into the chaotic whirls and eddies known as turbulent flow. The properties of this boundary layer determine how well the fluid can transfer heat, which is key to many cooling processes such as for high-performance computers, desalination plants, or power plant condensers. Students have been taught to calculate the characteristics of such flows as if there was a sudden change from laminar flow to turbulent flow. But John H. Lienhard V, the Abdul Latif Jameel Professor of Water and of mechanical engineering at MIT, made a careful analysis of published experimental data and found that this picture ignores an important part of the process. The findings were just published in the Journal of Heat Transfer. Lienhard’s review of heat transfer data reveals a significant transition zone between the laminar and turbulent flows. This transition zone’s resistance to heat flow varies gradually between those of the two other zones, and the zone is just as long and distinctive as the laminar flow zone that precedes it. The findings could potentially have implications for everything from the design of heat exchangers for desalination or other industrial scale processes, to understanding the flow of air through jet engines, Lienhard says. In fact, though, most engineers working on such systems understand the existence of a long transition zone, even if it’s not in the undergraduate textbooks, Lienhard notes. 
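The abrupt-transition picture students have been taught can be made concrete with the standard flat-plate correlations found in undergraduate heat transfer textbooks. This sketch reproduces only the two-zone textbook model and the discontinuity it implies at the assumed transition point; it does not implement Lienhard's corrected three-zone correlation.

```python
# Textbook two-zone model for local Nusselt number on a flat plate:
# laminar  Nu_x = 0.332 Re_x^0.5 Pr^(1/3)
# turbulent Nu_x = 0.0296 Re_x^0.8 Pr^(1/3)
# with an assumed abrupt switch at a critical Reynolds number of 5e5.

def local_nusselt_abrupt(re_x, pr, re_crit=5e5):
    if re_x < re_crit:                        # laminar boundary layer
        return 0.332 * re_x**0.5 * pr**(1 / 3)
    return 0.0296 * re_x**0.8 * pr**(1 / 3)   # fully turbulent

pr = 0.71  # Prandtl number for air, roughly
just_before = local_nusselt_abrupt(4.99e5, pr)
just_after = local_nusselt_abrupt(5.01e5, pr)

# The predicted heat transfer coefficient jumps by roughly a factor of
# four across the assumed transition point -- the sudden switch that
# the experimental data, per Lienhard's analysis, do not support.
jump_ratio = just_after / just_before
```

A model with a genuine transition zone would instead blend gradually between the two correlations over a long stretch of Reynolds numbers, which is the behavior the new analysis quantifies.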
Now, by clarifying and quantifying the transition, this study will help to bring theory and teaching into line with real-world engineering practice. “The notion of an abrupt transition has been ingrained in heat transfer textbooks and classrooms for the past 60 or 70 years,” he says. The basic formulas for understanding flow along a flat surface are the fundamental underpinnings for all of the more complex flow situations such as airflow over a curved airplane wing or turbine blade, or for cooling space vehicles as they reenter the atmosphere. “The flat surface is the starting point for understanding how any of those things work,” Lienhard says. The theory for flat surfaces was set out by the German researcher Ernst Pohlhausen in 1921. But even so, “lab experiments usually didn’t match the boundary conditions assumed by the theory. A laboratory plate might have a rounded edge or a nonuniform temperature, so investigators in the 1940s, 50s, and 60s often ‘adjusted’ their data to force agreement with this theory,” he says. Discrepancies between otherwise good data and this theory also led to heated disagreements among specialists in the heat transfer literature. Lienhard found that researchers with the British Air Ministry had identified and partially solved the problem of nonuniform surface temperatures in 1931. “But they weren’t able to fully solve the equation they derived,” he says. “That had to wait until digital computers could be used, starting in 1949.” Meanwhile, the arguments between specialists simmered on. Lienhard says that he decided to take a look at the experimental basis for the equations that were being taught, realizing that researchers have known for decades that the transition played a significant role. “I wanted to plot data with these equations. That way, students could see how well the equations did or didn’t work,” he said.  “I looked at the experimental literature all the way back to 1930. 
Collecting these data made something very clear: What we were teaching was terribly oversimplified.” And the discrepancy in the description of fluid flow meant that calculations of heat transfer were sometimes off. Now, with this new analysis, engineers and students will be able to calculate temperature and heat flow accurately across a very wide range of flow conditions and fluids, Lienhard says. “Prediction of the heat transfer coefficient within a region where flow transitions from laminar to turbulent regime has been a big scientific challenge because of a lack of clear first principle understanding of fundamental physics,” says Andrei Fedorov, a professor of mechanical engineering at Georgia Tech, who was not involved in this work. He adds that Lienhard “carefully combed through an array of disparate experimental data for the transition region published over many decades by different researchers and came up with an amazingly effective, in its predictive power, correlation for heat transfer coefficient that spans the full range of flows from laminar to transition to turbulent.” Robert Mahan, emeritus professor of mechanical engineering at Virginia Tech, who also was not associated with this work, says Lienhard “is pointing out — and resolving — inconsistencies in the classical literature that have gone unresolved for more than a generation. When the scholarly dust settles from this brief but powerful whirlwind, it will no doubt be the updated correlations presented in this contribution that serious scholars and practicing engineers will use to predict heat transfer from flat plates.” Fluids that heat or cool surfaces make a transition from a smooth flow to a mixing, turbulent flow. A new MIT analysis shows the importance of the transition region to heat flow and temperature control. Image: Courtesy of the researchers, edited by MIT News The award will support the MIT anthropologist's research on the cultural dimensions of climate denialism.   
Fri, 24 Apr 2020 16:40:01 -0400 School of Humanities, Arts, and Social Sciences Amy Moran-Thomas, the Alfred Henry and Jean Morrison Hayes Career Development Associate Professor of Anthropology at MIT, has been awarded the 2020 Levitan Prize in the Humanities. The $29,500 grant will support Moran-Thomas’s project “Mine: A Family History of Place, Race, and Planetary Health,” which Moran-Thomas writes will “excavate the cultural histories and everyday social fabrics behind the deep sedimentation of American generational identities and fossil fuel legacies.” Moran-Thomas is a cultural anthropologist whose work specializes in the human and material entanglements that shape health in practice. Her first book, “Traveling With Sugar,” was published in late 2019. That project examined the rising world diabetes epidemic as part of a 500-year bodily legacy of plantation landscapes. With “Mine,” Moran-Thomas will again investigate the tangled intersection of human and planetary health, though with a focus closer to the personal.

Racialized divides in the U.S. political theater

“This new project broadens out to acknowledge an interconnected story about complacency and climate change from my own home,” writes Moran-Thomas. She grew up in Pennsylvania, a contentious swing state well-known for its history of extracting coal, oil, and natural gas from the Earth. “Yet today, mining has also become a polarizing flashpoint of racialized divides in U.S. political theater: reanimating reworked tropes of ‘white working class’ voters, on one hand; and iconic of intensifying climate change and its unequal impacts on the health of people and places, on the other.” Part of what shapes the stage for this investigation is the plentiful amount of coal in American soil. The United States contains more coal than any country in the world. 
As experts wrote in 2016, a continued reliance on coal could be fueled by that plentiful American supply, resulting in dire outcomes for human and environmental health. The project “offers a ‘relational ethnography’ that builds outward from my family’s trajectories across generations growing up in a deeply divided swing state,” writes Moran-Thomas. “Following the various ways carbon gets embodied across scales of bodies, lives, homes, towns, infrastructures, eras, atmospheres, and landscapes, the book’s descriptions will uneasily probe larger questions of ‘slow violence’ and segregation, by populating broad terms like settler colonialism and hydrocarbon toxicity with the jarring intimacy of a kinship story.” “Amy’s work is both necessary and urgent in the face of this challenging moment in America,” says Melissa Nobles, Kenan Sahin Dean of the MIT School of Humanities, Arts, and Social Sciences. “In a time of division, such keen and multi-faceted anthropological work opens the door to meaningful insight about our nation’s communities.”

The tension between scientific evidence and cultural heritage

This research speaks to the ongoing tension around the climate crisis: scientific proof, Moran-Thomas notes, continues to accumulate. However, the United States remains notoriously slow to enact policies in response to the urgent data. Underlying the political and economic tensions of this delay, it is also important to acknowledge the less-tangible cultural dimension of climate denialism. While the corporate and political interests of climate denialism may be the most visible aspect in the eyes of the public, those interests may not have the most lasting effects on the nation and the planet. Rather, beyond the hyper-rational debates of “jobs versus environment,” Moran-Thomas will consider the cultural underpinnings that have allowed anti-science propaganda to gain traction in America and continue in the face of mounting evidence. 
“It is a longstanding anthropological truism that the most impactful grids of culture often remain unsaid or implicit,” she writes. “Deep-seated intergenerational memories and historical investments frequently shape human practices long after pragmatic situations change.” Story prepared by MIT SHASS Communications Editorial team: Emily Hiestand and Alison Lanier Beyond the hyper-rational debates of “jobs vs. environment” — and based on her own family heritage and home territory — MIT anthropologist Amy Moran-Thomas illuminates “the cultural underpinnings that have allowed anti-science propaganda to gain traction in America and continue in the face of mounting evidence.” Photo: Jon Sachs/SHASS Communications Senior Research Scientist Marija Ilic is making electric energy systems future-ready. Thu, 23 Apr 2020 14:35:01 -0400 Grace Chua | Laboratory for Information and Decision Systems Marija Ilic — a senior research scientist at the Laboratory for Information and Decision Systems, affiliate of the MIT Institute for Data, Systems, and Society, senior staff in MIT Lincoln Laboratory’s Energy Systems Group, and Carnegie Mellon University professor emerita — is a researcher on a mission: making electric energy systems future-ready. Since the earliest days of streetcars and public utilities, electric power systems have had a fairly standard structure: for a given area, a few large generation plants produce and distribute electricity to customers. It is a one-directional structure, with the energy plants being the only source of power for many end users. Today, however, electricity can be generated from many and varied sources — and move through the system in multiple directions. An electric power system may include stands of huge turbines capturing wild ocean winds, for instance. There might be solar farms of a hundred megawatts or more, or houses with solar panels on their roofs that some days make more electricity than occupants need, some days much less. 
And there are electric cars, their batteries hoarding stored energy overnight. Users may draw electricity from one source or another, or feed it back into the system, all at the same time. Add to that the trend toward open electricity markets, where end users like households can pick and choose the electricity services they buy depending on their needs. How should systems operators integrate all these while keeping the grid stable and ensuring power gets to where it is needed? To explore this question, Ilic has developed a new way to model complex power systems. Electric power systems, even traditional ones, are complex and heterogeneous to begin with. They cover wide geographical areas and have legal and political barriers to contend with, such as state borders and energy policies. In addition, all electric power systems have inherent physical limitations. For instance, power does not flow in a set path in an electric grid, but rather along all possible paths connecting supply to demand. To maintain grid stability and quality of service, then, the system must control for the impact of interconnections: a change in supply and demand at one point in a system changes supply and demand for the other points in the system. This means there is much more complexity to manage as new sources of energy (more interconnections) with sometimes unpredictable supply (such as wind or solar power) come into play. Ultimately, however, to maintain stability and quality of service, and to balance supply and demand within the system, it comes down to a relatively simple concept: the power consumed and the rate at which it is consumed (plus whatever is lost along the way) must always equal the power produced and the rate at which it is produced. 
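That balance principle, together with the minimal per-resource information described in this article (power limits and ramp rate), can be illustrated with a toy feasibility check: each resource reports only its limits, and an operator verifies that a proposed schedule is individually feasible and collectively balanced. All resource names and numbers below are hypothetical.

```python
# Toy bookkeeping sketch of the balance-and-ramp idea: per-resource
# limit checks plus a system-wide supply-equals-demand check.
# Not Ilic's actual models -- just the minimal-information concept.

def feasible(schedule, p_min, p_max, ramp, p_start):
    """True if a resource's period-by-period power schedule respects
    its output limits and its per-period ramp-rate limit."""
    prev = p_start
    for p in schedule:
        if not (p_min <= p <= p_max) or abs(p - prev) > ramp:
            return False
        prev = p
    return True

def balanced(schedules, demand, tol=1e-6):
    """True if summed generation matches demand in every period."""
    return all(abs(sum(s[t] for s in schedules) - d) <= tol
               for t, d in enumerate(demand))

# Two hypothetical resources serving a three-period demand profile (MW).
gas = [40.0, 55.0, 70.0]      # dispatchable, fast-ramping unit
wind = [20.0, 15.0, 10.0]     # variable output, tighter limits
demand = [60.0, 70.0, 80.0]

ok = (feasible(gas, 0, 100, 20, 30.0)
      and feasible(wind, 0, 25, 10, 22.0)
      and balanced([gas, wind], demand))
```

Real operation involves far more than this (losses, network constraints, uncertainty), which is exactly the complexity the modeling work described here is built to manage.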
Using this simpler concept to manage the complexities and limitations of electric power systems, Ilic is taking a non-traditional approach: She models the systems using information about energy, power, and ramp rate (the rate at which power can increase over time) for each part of the system — distributing decision-making calculations into smaller operational chunks. Doing this streamlines the model but retains information about the system’s physical and temporal structure. “That’s the minimal information you need to exchange. It’s simple and technology-agnostic, but we don’t teach systems that way.” She believes regulatory organizations such as the Federal Energy Regulatory Commission and the North American Electric Reliability Corporation should have standard protocols for such information exchanges, just as internet protocols govern how data is exchanged on the internet. “If you were to [use a standard set of] specifications like: what is your capacity, how much does it vary over time, how much energy do you need and within what power range — the system operator could integrate different sources in a much simpler way than we are doing now.” Another important aspect of Ilic’s work is that her models lend themselves to controlling the system with a layer of sensor and communications technologies. This uses a framework she developed called the Dynamic Monitoring and Decision Systems framework, or DyMonDS. The data-enabled decision-making concept has been tested using real data from Portugal’s Azores Islands, and has since been applied to real-world challenges. Over the years, her modeling approach has proven a fitting foundation for the DyMonDS design, including systematic use of many theoretical concepts developed by the LIDS community in their research. One such challenge included work on Puerto Rico’s power grid. 
Ilic was the technical lead on a Lincoln Laboratory project on designing future architectures and software to make Puerto Rico’s electric power grid more resilient without adding much more production capacity or cost. Typically, a power grid’s generation capacity is scheduled in a simple, brute-force way, based on weather forecasts and the hottest and coldest days of the year, that doesn’t respond sensitively to real-time needs. Making such a system more resilient would mean spending a lot more on generation and transmission and distribution capacity, whereas a more dynamic system that integrates distributed microgrids could tame the cost, Ilic says: “What we are trying to do is to have systematic frameworks for embedding intelligence into small microgrids serving communities, and having them interact with large-scale power grids. People are realizing that you can make many small microgrids to serve communities rather than relying only on large scale electrical power generation.” Although this is one of Ilic’s most recent projects, her work on DyMonDS can be traced back four decades, to when she was a student at the University of Belgrade in the former country of Yugoslavia, which sent her to the United States to learn how to use computers to prevent blackouts. She ended up at Washington University in St. Louis, Missouri, studying with applied mathematician John Zaborszky, a legend in the field who was originally chief engineer of Budapest’s municipal power system before moving to the United States. (“The legend goes that in the morning he would teach courses, and in the afternoon he would go and operate Hungarian power system protection by hand.”) Under Zaborszky, a systems and control expert, Ilic learned to think in abstract terms as well as in terms of physical power systems and technologies. She became fascinated by the question of how to model, simulate, monitor, and control power systems — and that’s where she’s been ever since. 
(Although, she admits as she uncoils to her full height from behind her desk, her first love was actually playing basketball.) Ilic first arrived at MIT in 1987 to work with the late professor Fred Schweppe on connecting electricity technologies with electricity markets. She stayed on as a senior research scientist until 2002, when she moved to Carnegie Mellon University (CMU) to lead the multidisciplinary Electric Energy Systems Group there. In 2018, after her consulting work for Lincoln Lab ramped up, she retired from CMU to move back to the familiar environs of Cambridge, Massachusetts. CMU’s loss has been MIT’s gain: In fall 2019, Ilic taught a course in modeling, simulation, and control of electric energy systems, applying her work on streamlined models that use pared-down information. Addressing the evolving needs of electric power systems has not been a “hot” topic, historically. Traditional power systems are often seen by the academic community as legacy technology with no fundamentally new developments. And yet when new software and systems are developed to help integrate distributed energy generation and storage, commercial systems operators regard them as untested and disruptive. “I’ve always been a bit on the sidelines from mainstream power and electrical engineering because I’m interested in some of these things,” she remarks. However, Ilic’s work is becoming increasingly urgent. Much of today’s power system is physically very old and will need to be retired and replaced over the next decade. This presents an opportunity for innovation: the next generation of electric energy systems could be built to integrate renewable and distributed energy resources at scale — addressing the pressing challenge of climate change and making way for further progress. “That’s why I’m still working, even though I should be retired.” She smiles. 
“It supports the evolution of the system to something better.” Marija Ilic — a senior research scientist at the Laboratory for Information and Decision Systems, affiliate of the MIT Institute for Data, Systems, and Society, senior staff in MIT Lincoln Laboratory’s Energy Systems Group, and Carnegie Mellon University professor emerita — is a researcher on a mission: making electric energy systems future-ready. Photo: MIT LIDS The Electricity Strategy Game is a prominent feature in 15.0201/14.43 (Economics of Energy, Innovation, and Sustainability). Mon, 13 Apr 2020 08:50:01 -0400 Kathryn Luu | MIT Energy Initiative Jing Li, an assistant professor of applied economics in the MIT Sloan School of Management, stands at the front of the classroom and encourages her undergraduate students to dig deeper. “Why was this a good idea?” she prompts. “How did people come up with these numbers?” It’s the second-to-last day of class, and the students in 15.0201/14.43 (Economics of Energy, Innovation, and Sustainability) are discussing their teams’ results and the logic behind the decisions they made in the Electricity Strategy Game — a main feature of this elective. “[With] so much magic,” a student quips in response to Li’s question, to a chorus of laughter. The real magic, they all know, is in Li’s approach to teaching: She holds her students accountable for their conclusions and throws them head-first into challenging problems to help them confidently engage with the complexities of energy economics. “She didn’t baby us with tiny data sets. She gave us the real deal,” says Wilbur Li, a senior computer science major and mechanical engineering minor (no relation to Jing Li). He initially took the class to round out his fall semester schedule, unsure if he would keep it due to a rigorous class load. However, just a couple of weeks into the semester, he was sold on the class. 
“It’s one of those classes at MIT that isn’t really a requirement for anyone, but it’s a class that only draws people who are genuinely interested in the subject area,” he says. “That made for really good discussions. You could tell that people were interested beyond an academic sense.” 15.0201/14.43, a part of MITEI’s interdisciplinary Energy Studies minor, is a relatively new course. The class, which is also offered as graduate-level course 15.020, made its debut in the spring 2019 semester and was developed to expand the energy economics offerings at MIT. Part of the motivation for creating 15.0201/14.43 stemmed from the fact that Professor Christopher Knittel’s course, 15.037/15.038 (Energy Economics and Policy), is consistently in high demand, without enough supply to accommodate interested students. “Professor Knittel and I have positioned our two courses so that someone who wants to get a taste of energy economics could take either one and come away with a good mental map of the field, but also that someone who is very serious about a future career in energy would find it useful to take both,” says Li. Li’s class focuses on innovation and employs environmental economics principles and business cases to explore the development and adoption of new technology, and business strategies related to sustainability. “The class has been particularly attractive to students who are interested in the energy landscape, such as how energy markets impact and relate to local environmental issues and how to provide energy to parts of the globe that currently lack access to affordable or reliable energy,” she says. 
“It has also appealed to students interested in applied microeconomics.” In addition to crunching large data sets and bringing in guest speakers, such as Paul Joskow, the Elizabeth and James Killian Professor of Economics Emeritus and chair of MIT’s Department of Economics, a major element of the class — and a runaway favorite of many of the students — is the Electricity Strategy Game. The game was created by professors Severin Borenstein and James Bushnell for the University of California at Berkeley’s Haas School of Business. The game is designed to replicate the world of deregulated wholesale electricity markets. Players are divided into firms and utilize electricity generation portfolios, based on actual portfolios of the largest generation firms in the California market, to compete in a sequence of daily electricity spot markets, in which commodities are traded for immediate delivery. Each portfolio contains differing generation technologies (thermal, nuclear, and hydro), with varying operating costs. Spot market conditions vary from hour to hour and day to day. Players must develop strategies to deploy their assets over a sequence of spot markets while accounting for the cost structure of their portfolio, varying levels of hourly electricity demand, and strategies of other players. The game is conducted in six rounds, with the second half of the game taking into account carbon permits. Winners are determined by the financial performance of their firm and an evaluation of the logic of the firm’s actions, which the teams describe in a series of memos to Li. “I loved the Electricity Strategy Game! It was really fun to have to figure out how to predict demand and then how to price supply accordingly,” says Anupama Phatak, a junior mechanical engineering major and economics minor. “The bid for portfolios was also a really cool process.
I put a lot of time and effort into understanding the game and developing a strategy, so it made the process all the more rewarding when my team won.” Wilbur Li echoed Phatak’s enthusiasm. “My favorite part of the game was definitely the auction — it was the most exciting part,” he says. “Every single group did research on their own to figure out what sort of bidding prices they wanted for each piece of property [power plants] — and when we showed up, every single group had wildly different final prices for what we bid on the plants.” For Isaac Perper, a senior mechanical engineering and computer science double major and economics minor, the value of the game was in getting a glimpse of how energy portfolios would play out in real-life auctions. “We all had different portfolios, so I think that was the most interesting part. We got to see differences between coal, hydro, and gas plants and the different price points at which they are profitable. I think the auction mirrored what you would expect in a real market,” he says. Many of the students who took 14.43 (Economics of Energy, Innovation, and Sustainability) are making it their mission to apply the lessons learned from the class to their career goals. The class helped inspire Wilbur Li to pursue a career in cleantech product development, such as working on smart meters or more efficient transportation for wind turbine blades. “A class like 14.43 definitely helps with understanding how the products that are being worked on can be scaled in terms of figuring out which players in the economy would want to pick up and utilize a product,” he says. “It has given me a deeper understanding of how technology scales on a market level, as well as how to understand and account for the target impact of those technologies.” Phatak says that the class has made her more conscious of the adverse environmental consequences of products such as palm oil. 
“I now understand that even the smallest ingredient in our everyday products can have negative impacts around the world that I might not even see,” she says. Because of the topics covered in Li’s course, Phatak is now actively pursuing internships in sustainability. Perper shared that the class opened his eyes to a lot of inefficiencies that exist in the energy market today. Indeed, he says that his life’s goal is to help to solve some of those inefficiencies. “Going into this class, I had kind of thought that we have our different electricity producers and some pollute more than others, but in terms of the actual market structure and how electricity is distributed, paid for, and expanded into developing areas, all of those things were more complicated and inefficient than I had expected,” he says. When he returns to MIT in the fall to pursue his master’s degree in computer science and electrical engineering, Perper will be thinking more about the bigger questions in terms of energy policy and technology. Li says she hopes that students come away from 14.43 with “more questions than answers,” as well as a honed sense of which questions are worth spending time to answer. She also aims for her students to leave with the knowledge that sustainability and energy touch every organization in some way. “Whatever kind of organization you are a part of and the role you take in that organization — investor, manager, employee, customer, voter — you can contribute to the sustainability goals of your organization with your ideas, voice, and actions,” she says. Jing Li, an assistant professor of applied economics, engages with her students during the Electricity Strategy Game debrief. Photo: Kelley Travers Meet the team of postdocs developing the MIT Energy Initiative's energy life-cycle assessment tool. 
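The deregulated spot markets at the heart of the Electricity Strategy Game can be sketched as a uniform-price, merit-order auction: firms bid price-quantity pairs, the cheapest bids are dispatched first, and the marginal unit sets the clearing price. This is a generic simplification for illustration, not the game's exact rules; the function and variable names are hypothetical.

```python
# Minimal sketch of uniform-price spot market clearing by merit order.
def clear_spot_market(bids, demand_mw):
    """bids: list of (price_per_mwh, quantity_mw) offers.
    Returns (clearing_price, dispatched), where dispatched maps
    bid index -> MW dispatched from that offer."""
    order = sorted(range(len(bids)), key=lambda i: bids[i][0])  # cheapest first
    remaining = demand_mw
    dispatched = {}
    clearing_price = None
    for i in order:
        if remaining <= 0:
            break
        price, qty = bids[i]
        take = min(qty, remaining)
        dispatched[i] = take
        clearing_price = price  # the marginal (last dispatched) unit sets the price
        remaining -= take
    return clearing_price, dispatched
```

A low-cost hydro or nuclear portfolio earns the clearing price set by a more expensive marginal plant, which is why bidding strategy and demand prediction matter so much in the game.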
Thu, 09 Apr 2020 23:59:59 -0400 Kathryn Luu | MIT Energy Initiative For Naga Srujana Goteti, a postdoc at the MIT Energy Initiative (MITEI), finding a meaningful career has required starting from scratch — three times. She first worked as a software engineer after earning a bachelor’s degree in electrical engineering, but after just one year in the role, she felt like something was missing. Her father, a civil engineer, encouraged her to look for meaning and opportunity in the energy space. Goteti dropped her software engineering position to perform energy audits on buildings as an intern at a small startup, where a visiting professor inspired her to move from India to Thailand to pursue a master’s degree in energy. After completing her graduate program, she still wasn’t quite sure where she could best apply her interdisciplinary background. She ended up joining an oil and gas company to work on pipelines. “I really enjoyed the salary, of course, but after one year, I went back to my original question of, ‘What’s the purpose of this?’ I wasn’t finding any meaning in the work I was doing,” she says. She quit her oil and gas job, desperately seeking leads on research opportunities in clean energy. At one point, she was cold-emailing “at least 30 professors per day.” Her efforts paid off when a team at the Rochester Institute of Technology (RIT) informed her that they were seeking someone with an electrical engineering background, plus experience with energy and sustainability, to join them on a special project. Goteti fit the bill and came to the United States to pursue her PhD in sustainability. “After getting my PhD at RIT, I really wanted to work on an interdisciplinary team that was focused on reducing carbon dioxide emissions by looking at multiple renewable energy pathways, instead of just one,” she says. This quest led her to MITEI. 
Goteti has teamed up with Tapajyoti (TJ) Ghosh, a chemical engineer from India, and Maryam Arbabzadeh, a life-cycle assessment (LCA) practitioner from Iran — an interdisciplinary group of postdocs who came to MIT to work on a novel energy assessment tool called the Sustainable Energy System Analysis Modeling Environment (SESAME). Many of the energy assessment tools that exist today zero in on only one slice of the energy pie. They offer a granular analysis of solar, wind, or nuclear, but rarely look at multiple pathways together, which limits the ability of policymakers and industry professionals to see the real-time impacts of various technologies across the energy landscape as a whole. “Today, we are facing a dual challenge: satisfying growing energy demand while reducing emissions,” says Emre Gençer, a research scientist at MITEI and leader of the SESAME project. “The composition and operation of energy systems determine our ability to meet this challenge. We developed SESAME to study all energy sectors at the pathway and system levels.” SESAME, which has been under development at MIT since 2017, enables users to understand the impact of all relevant technological, operational, temporal, and geospatial variables to the evolving energy system. Several existing LCA models require expert help in order to parse the data in a way that is useful for policymakers and industry. The SESAME tool aims to bridge that divide, so that experts and laypeople alike can understand today’s energy landscape and make informed decisions about the best paths forward. Gençer spent almost a year assembling the perfect team of postdocs to help expand the tool. MITEI’s SESAME team is now about 20 people strong, including postdocs Arbabzadeh, Ghosh, and Goteti, plus a mix of undergraduates, graduate students, research scientists, and PhD candidates from across MIT. “The multidisciplinary nature of this project requires a strong mix of research backgrounds,” says Gençer. 
“We weren’t just looking within one specific discipline to build the SESAME team. We were very intentional about bringing together researchers with experience in different areas such as engineering, environmental studies and sustainability, and economics.” MITEI’s goal is to develop SESAME as an open-source web application — one that incorporates energy case studies from around the globe and takes into account heterogeneity of data across regions. These expansion directions are where Arbabzadeh, Ghosh, and Goteti come in.
Pathways to MITEI
As a child, Tapajyoti Ghosh visited oil fields and gas facilities across India with his father — “an oil and gas man.” He remembers wondering, “What’s going to happen when all the oil is gone?” His interest in the environmental impacts of society’s dependence on fossil fuels and finding sustainable alternatives came later. Ghosh earned his bachelor’s degree in chemical engineering in 2014 from Jadavpur University in Kolkata (India), after which he came to the United States for a PhD program at Ohio State University. “I had no idea what research direction I was interested in,” he says. “I spent the first semester of my PhD program deciding what I wanted to focus on.” But then a conversation with his professor set Ghosh down the energy path by introducing him to sustainable engineering. “Sustainable engineering focuses on trying to reduce negative environmental impacts that are caused by engineering processes not considering the external impacts of their activities,” he says. “I was interested in figuring out how we can make our engineering processes take environmental impacts into account during the design process, while also helping industry make profits.” He came to MITEI for the opportunity to work on SESAME, which he saw as a groundbreaking environmental impact assessment tool that will help industry and policymakers. He hopes to return to India someday to become faculty at a university there.
“Completing my postdoctorate at MIT will have a huge impact on my future,” he says. “I received several offers from other universities and research centers, but for me, the lure was getting to be at MIT. I feel like I’m in the Hollywood of academia.” Ghosh’s role at MITEI is to literally “open SESAME” — he is working to convert the tool from an application written in MATLAB, a proprietary programming language developed by MathWorks, to an open-source web application, which will make SESAME available for all to use. This is a Herculean effort; the SESAME platform was designed with a modular structure to allow the analysis of a very large number of conventional and novel pathways — more than 1,000 energy pathways are embedded in the framework, capturing about 90 percent of energy-related emissions data. SESAME’s framework provides multiple functionalities for various energy stakeholders in a single tool. For example, those working in industry or policy can compare technology options, perform technology and system scenario analysis, or explore the impacts of market and policy dynamics; or energy experts can see comprehensive cross-technology comparisons. Ghosh needs to translate all of this into Python for the open-source version of the tool. “I’m also working with an undergraduate student to gather additional environmental impact data that we can add to the tool, and adding new pathways for analysis,” he says. Some of those new pathways include the production of ammonia, cement, iron, and steel. Maryam Arbabzadeh received her bachelor’s in electrical engineering, with a focus on power systems, from Amirkabir University of Technology (Tehran Polytechnic) in Iran. “That’s where I started learning about renewable energy. I did my undergraduate thesis on wind energy and developing tools so users could suggest the most effective locations for installing wind turbines,” she says.
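The modular pathway structure described above can be illustrated with a toy life-cycle calculation: a pathway is a sequence of stages, each contributing an emissions intensity, so different pathways can be compared on a common basis. The stage names and numbers below are invented for illustration only and are not SESAME data.

```python
# Toy sketch of modular pathway accounting: each stage of an energy
# pathway contributes an emissions intensity, and the pathway total is
# the sum over stages. Factors here are illustrative placeholders.
def pathway_emissions(stages):
    """stages: list of (stage_name, gCO2e_per_MJ).
    Returns total emissions intensity in gCO2e per MJ delivered."""
    return sum(factor for _, factor in stages)


natural_gas_power = [
    ("extraction", 6.0),
    ("processing", 4.0),
    ("transport", 2.0),
    ("combustion", 110.0),
]
solar_pv = [
    ("manufacturing", 11.0),
    ("installation", 1.0),
    ("operation", 0.0),
]
```

Because every pathway is expressed the same way, adding a new one (say, cement or ammonia production) means supplying its stage list rather than building a new model.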
When it came time for her master’s degree, Arbabzadeh knew she wanted to continue studying electrical engineering, with a focus on energy systems. She came to the United States to attend the State University of New York at Buffalo (SUNY Buffalo), which sparked her interest in finding ways to reduce the negative environmental impacts of power generation. “At SUNY Buffalo, I took classes on sustainable energy systems and climate change, and that’s where I first discovered that one of the main sources for environmental emissions is the electricity sector/power grid. I became really interested in learning about that aspect of electricity production,” she explains. From there, she applied to multidisciplinary programs for her PhD, landing at the University of Michigan’s School for Environment and Sustainability to work on an energy storage project. “My PhD advisors were looking for someone with a background in electrical engineering and who was also interested in learning about energy and life-cycle analysis, so it was perfect for me,” she says. “My dissertation mainly focused on energy storage and technologies, but at a high level — all about the optimization of the power grid and how addition of emerging technologies such as energy storage would affect the grid.” She was drawn to the SESAME project because, during her experiences as an LCA practitioner and power systems modeler, she faced a number of challenges when it came to gathering all the disparate pieces of data she needed in order to form a comprehensive energy picture. “It is interesting for me to develop a tool that can bring together all of these pieces from different sectors — power, transportation, et cetera — and that will allow policymakers and energy modelers to use the tool to do a system analysis for themselves,” she says. SESAME currently contains data from North American case studies. 
Arbabzadeh’s portion of the project, which is funded by the International Energy Agency Gas and Oil Technology Collaboration Program, involves identifying international case studies that can be incorporated into SESAME to help the tool expand beyond the United States. “Emre asked me to focus specifically on examining global perspectives, so it has been fascinating for me to see how our analysis might apply to other locations.” So far, Arbabzadeh has focused her efforts on identifying potential case studies, including Norway/Northwest Europe and Singapore. “In contrast to the United States, Norway’s energy sector is already clean because a huge amount of its power generation comes from hydro, with some support from wind and thermal,” she says. Norwegian exports to continental Europe can enable displacement of coal with natural gas and reduce greenhouse gas emissions. “Oil and gas production is the main contributor to greenhouse gas emissions in Norway, followed by industry and transportation,” Arbabzadeh says. She is working with an undergraduate student to examine the production activities and their associated emissions from about 90 oil and gas fields across Norway. These emissions can be more than offset by planned innovative offshore CO2 capture and sequestration projects that will permanently store CO2 captured from various sources across Europe. This information could help Norway and other oil and gas producers from around the world to learn about how they can reduce greenhouse gas emissions in their processes and associated systems. Arbabzadeh’s interest in contributing to the SESAME tool spans beyond professional motivation: “Personally, I want to make an impact.
I was very interested in coming to MITEI to work on this tool because when it becomes available, it will really make a difference in various sectors and will be so useful for stakeholders working across the energy space.” Her work on the tool complements that of Goteti, who has been charged with capturing the heterogeneity of the data from across the United States, starting with power systems. “This is a major challenge, because it requires cross-linking all of the databases across the country so that they can be integrated into the SESAME platform,” says Goteti. “I’m aiming to automate this process so that it can be used in SESAME as part of our analysis to show, for example, how California is different from Texas.” Arbabzadeh and Goteti use similar research models for their respective parts of the project. “There is a weird disconnect between different types of energy researchers. LCA people look at energy models differently from complex energy system modelers. Complex energy system models typically examine one area in minute detail, while LCA looks across the spectrum,” says Goteti. “SESAME fills the operations versus lifecycle gap in many of today’s models by offering all energy experts a holistic approach to analyzing all of the systems together.”
Collaborating to “open SESAME”
When asked how their roles work together, they offer a deceptively simple explanation: “TJ is working on the framework and gives us direction in terms of what specific data the tool needs, and that’s what Srujana and I try to collect,” says Arbabzadeh. The reality of the scope of the work with which they’ve each been charged, and the level of collaboration that is required in order to meet those goals, is much more complex. The interdependent nature of their work requires that they check in with each other daily, as progress on the project relies on results from each person.
The information Ghosh receives from Goteti and Arbabzadeh is integral to the SESAME expansion; and, in turn, Goteti and Arbabzadeh rely on direction from Ghosh in order to procure the data and to provide it in a SESAME-compatible format. The team is also working to incorporate other environmental impact categories in the model — beyond just greenhouse gas emissions, which the tool currently focuses on. They are hoping to include factors such as water impacts, air pollutants, and land use. Although they have different research backgrounds, Goteti explains, “We have a common thread of experience with LCA, so we have a basic understanding of what the SESAME team is doing across the board, even though we may not know what each other is working on at a granular level.” In addition to supporting each other as they bring their various areas of expertise to bear on their work at MITEI, they have found support from MITEI and from the MIT postdoc community as a whole. “MITEI is a unique center where people talk to each other about what they are working on, even to others outside their project. Research scientists, postdocs, and students sit together to discuss our research, and we actually provide each other with valuable input,” remarks Goteti. “There’s not a tunnel view of your own projects; everyone is open to helping others, which is not the case at more corporate places, where it’s a spirit of ‘my project versus your project.’” Arbabzadeh concurs: “In addition to collaboration on research, I was surprised to learn how postdocs are acknowledged here at MIT. There are a lot of professional development resources and even a career advisor specifically for postdocs! It was unique for me to see how postdocs are treated here.” Both Goteti and Ghosh also value the proximity to leading faculty, within MITEI and across MIT. “Getting inside access to faculty is a big deal for me, coming from India,” says Goteti. “I feel fortunate to have a peek into this very exclusive world. 
These kinds of opportunities — being able to engage with the world’s leading researchers, CEOs, et cetera — were just not available to me at my previous institutions,” says Ghosh. “The founder of Zipcar sits right across from my cubicle! There is a Nobel Prize-winning lab on the floor below ours.” Each member of the trio will present results from their portions of the SESAME tool expansion at various conferences and has contributed to a series of journal articles about the tool, which they anticipate will be published over the coming year.
Contributing to the low-carbon energy transition
Coming from different backgrounds, Arbabzadeh, Ghosh, and Goteti are united by their desire to devote their areas of expertise to pushing forward cutting-edge clean energy solutions. Ghosh would like to answer the questions he’s had since his childhood, first sparked by his visits to oil fields with his dad. “What’s going to replace the world’s dependence on fossil fuels? What is the cleanest form of energy that humans can depend on for a long amount of time? Is it going to be Tony Stark’s arc reactor or some nuclear fusion reactor like a tokamak, or just solar and wind, or some other miracle solution? I’m interested in answering futuristic questions like these,” says Ghosh. Arbabzadeh is focused on using her experience to solve environmental challenges for the benefit of our planet and its future inhabitants. “During my master’s degree work, I learned about the negative environmental impacts of electricity production, and wondered: ‘How can we improve this? I have the skills and knowledge, but how can I do more?’” adds Arbabzadeh. “Trying to solve these challenges is exciting for me.
I think the energy sector is very important in terms of climate change, and the decarbonization of power production is an area where we can truly make a positive impact for future generations.” In energy, Goteti has found the meaningful work that she craved for all those years; there will be no more starting over. “Governments collapse because of energy; economies are driven by energy — I think energy is the backbone for so many things that we don’t realize,” says Goteti. “That excites me a lot. As a kid, I was never into politics, but now I watch the news all the time and understand that everything is related. At the end of the day, it’s all about energy.” Support for the SESAME tool has been provided by ExxonMobil and the International Energy Agency Gas and Oil Technology Collaboration Program. The SESAME online beta is expected in mid-2020. Sign up to become a beta user at Left to right: Tapajyoti Ghosh, Naga Srujana Goteti, Emre Gençer, and Maryam Arbabzadeh Photo: Kelley Travers Congestion control system could help streaming video, mobile games, and other applications run more smoothly. Thu, 09 Apr 2020 23:59:59 -0400 Rob Matheson | MIT News Office MIT researchers have designed a congestion-control scheme for wireless networks that could help reduce lag times and increase quality in video streaming, video chat, mobile gaming, and other web services. To keep web services running smoothly, congestion-control schemes infer information about a network’s bandwidth capacity and congestion based on feedback from the network routers, which is encoded in data packets. That information determines how fast data packets are sent through the network. Deciding on a good sending rate can be a tough balancing act. Senders don’t want to be overly conservative: If a network’s capacity constantly varies from, say, 2 megabytes per second to 500 kilobytes per second, the sender could always send traffic at the lowest rate.
But then your Netflix video, for example, will be unnecessarily low-quality. On the other hand, if the sender constantly maintains a high rate, even when network capacity dips, it could overwhelm the network, creating a massive queue of data packets waiting to be delivered. Queued packets can increase the network’s delay, causing, say, your Skype call to freeze. Things get even more complicated in wireless networks, which have “time-varying links,” with rapid, unpredictable capacity shifts. Depending on various factors, such as the number of network users, cell tower locations, and even surrounding buildings, capacities can double or drop to zero within fractions of a second. In a paper at the USENIX Symposium on Networked Systems Design and Implementation, the researchers presented “Accel-Brake Control” (ABC), a simple scheme that achieves about 50 percent higher throughput, and about half the network delays, on time-varying links. The scheme relies on a novel algorithm that enables the routers to explicitly communicate how many data packets should flow through a network to avoid congestion but fully utilize the network. It provides that detailed information from bottlenecks — such as packets queued between cell towers and senders — by repurposing a single bit already available in internet packets. The researchers are already in talks with mobile network operators to test the scheme. “In cellular networks, your fraction of data capacity changes rapidly, causing lags in your service. Traditional schemes are too slow to adapt to those shifts,” says first author Prateesh Goyal, a graduate student in CSAIL.
“ABC provides detailed feedback about those shifts, whether it’s gone up or down, using a single data bit.” Joining Goyal on the paper are Anup Agarwal, now a graduate student at Carnegie Mellon University; Ravi Netravali, now an assistant professor of computer science at the University of California at Los Angeles; Mohammad Alizadeh, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and CSAIL; and Hari Balakrishnan, the Fujitsu Professor in EECS. The authors have all been members of the Networks and Mobile Systems group at CSAIL.
Achieving explicit control
Traditional congestion-control schemes rely on either packet losses or information from a single “congestion” bit in internet packets to infer congestion and slow down. A router, such as a base station, will mark the bit to alert a sender — say, a video server — that its sent data packets are in a long queue, signaling congestion. In response, the sender will then reduce its rate by sending fewer packets. The sender also reduces its rate if it detects a pattern of packets being dropped before reaching the receiver. In attempts to provide greater information about bottlenecked links on a network path, researchers have proposed “explicit” schemes that include multiple bits in packets that specify current rates. But this approach would mean completely changing the way the internet sends data, and it has proved impossible to deploy. “It’s a tall task,” Alizadeh says. “You’d have to make invasive changes to the standard Internet Protocol (IP) for sending data packets. You’d have to convince all Internet parties, mobile network operators, ISPs, and cell towers to change the way they send and receive data packets. That’s not going to happen.” With ABC, the researchers still use the available single bit in each data packet, but they do so in such a way that the bits, aggregated across multiple data packets, can provide the needed real-time rate information to senders.
The scheme tracks each data packet in a round-trip loop, from sender to base station to receiver. The base station marks the bit in each packet with “accelerate” or “brake,” based on the current network bandwidth. When the packet is received, the marked bit tells the sender to increase or decrease the “in-flight” packets — packets sent but not received — that can be in the network.

If it receives an accelerate command, it means the packet made good time and the network has spare capacity. The sender then sends two packets: one to replace the packet that was received and another to utilize the spare capacity. When told to brake, the sender decreases its in-flight packets by one — meaning it doesn’t replace the packet that was received.

Used across all packets in the network, that one bit of information becomes a powerful feedback tool that tells senders their sending rates with high precision. Within a couple hundred milliseconds, it can vary a sender’s rate between zero and double. “You’d think one bit wouldn’t carry enough information,” Alizadeh says. “But, by aggregating single-bit feedback across a stream of packets, we can get the same effect as that of a multibit signal.”

Staying one step ahead

At the core of ABC is an algorithm that predicts the aggregate rate of the senders one round-trip ahead to better compute the accelerate/brake feedback.

The idea is that an ABC-equipped base station knows how senders will behave — maintaining, increasing, or decreasing their in-flight packets — based on how it marked the packet it sent to a receiver. The moment the base station sends a packet, it knows how many packets it will receive from the sender in exactly one round-trip’s time in the future.
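A minimal simulation of the sender-side rule just described — grow the in-flight window by one packet per "accelerate," shrink it by one per "brake" — shows both properties the article mentions: the window tracks the link's capacity, and it can at most double (or drain toward zero) within one round-trip. The marking policy and capacity trace here are simplified stand-ins, not the paper's actual algorithm:

```python
def sender_step(in_flight: int, bit: int) -> int:
    """Apply one feedback bit: 1 = accelerate (+1 packet), 0 = brake (-1)."""
    return in_flight + 1 if bit == 1 else max(0, in_flight - 1)

def simulate(capacities, window=10):
    """One outer iteration per round-trip; one feedback bit per in-flight packet.
    The base station marks 'accelerate' only while the window is below the
    link's current capacity (a crude stand-in for ABC's marking rule)."""
    trace = []
    for cap in capacities:
        for _ in range(max(window, 1)):
            bit = 1 if window < cap else 0
            window = sender_step(window, bit)
        trace.append(window)
    return trace

# Capacity holds, crashes, then quadruples. The window follows within one
# packet of capacity, and with 4 packets in flight it can elicit at most
# 8 next round-trip -- i.e., it can no more than double per round-trip.
print(simulate([20, 20, 5, 5, 40]))
```

Running the trace above yields a window of 20 while capacity is 20, about 4 after the crash to 5, and 8 one round-trip after capacity jumps to 40 — the doubling limit in action.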
It uses that information to mark the packets to more accurately match the sender’s rate to the current network capacity.

In simulations of cellular networks, compared to traditional congestion control schemes, ABC achieves around 30 to 40 percent greater throughput for roughly the same delays. Alternatively, it can cut delays by a factor of two to four while maintaining the same throughput as traditional schemes. Compared to existing explicit schemes that were not designed for time-varying links, ABC reduces delays by half for the same throughput. “Basically, existing schemes get low throughput and low delays, or high throughput and high delays, whereas ABC achieves high throughput with low delays,” Goyal says.

Next, the researchers are trying to see if apps and web services can use ABC to better control the quality of content. For example, “a video content provider could use ABC’s information about congestion and data rates to pick the resolution of streaming video more intelligently,” Alizadeh says. “If it doesn’t have enough capacity, the video server could lower the resolution temporarily, so the video will continue playing at the highest possible quality without freezing.”

To reduce lag times and increase quality in video streaming, mobile gaming, and other web services, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have designed a congestion-control scheme for time-varying wireless links, such as cellular networks. Image: José-Luis Olivares, MIT

MIT biochemists can trap and visualize an enzyme as it becomes active — an important development that may aid in future drug development. Mon, 30 Mar 2020 13:40:01 -0400 Raleigh McElvery | Department of Biology

How do you capture a cellular process that transpires in the blink of an eye? Biochemists at MIT have devised a way to trap and visualize a vital enzyme at the moment it becomes active — informing drug development and revealing how biological systems store and transfer energy.
The enzyme, ribonucleotide reductase (RNR), is responsible for converting RNA building blocks into DNA building blocks, in order to build new DNA strands and repair old ones. RNR is a target for anti-cancer therapies, as well as drugs that treat viral diseases like HIV/AIDS. But for decades, scientists struggled to determine how the enzyme is activated because it happens so quickly. Now, for the first time, researchers have trapped the enzyme in its active state and observed how the enzyme changes shape, bringing its two subunits closer together and transferring the energy needed to produce the building blocks for DNA assembly.

Before this study, many believed RNR’s two subunits came together and fit with perfect symmetry, like a key into a lock. “For 30 years, that’s what we thought,” says Catherine Drennan, an MIT professor of chemistry and biology and a Howard Hughes Medical Institute investigator. “But now, we can see the movement is much more elegant. The enzyme is actually performing a ‘molecular square dance,’ where different parts of the protein hook onto and swing around other parts. It’s really quite beautiful.”

Drennan and JoAnne Stubbe, professor emerita of chemistry and biology at MIT, are the senior authors on the study, which appeared in the journal Science on March 26. Former graduate student Gyunghoon “Kenny” Kang PhD ’19 is the lead author.

All proteins, including RNR, are composed of fundamental units known as amino acids. For over a decade, Stubbe’s lab has been experimenting with substituting synthetic amino acids for RNR’s natural ones. In doing so, the lab realized they could trap the enzyme in its active state and slow down its return to normal. However, it wasn’t until the Drennan lab gained access to a key technological advancement — cryo-electron microscopy — that they could snap high-resolution images of these “trapped” enzymes from the Stubbe lab and get a closer look.
“We really hadn’t done any cryo-electron microscopy at the point that we actively started trying to do the impossible: get the structure of RNR in its active state,” Drennan says. “I can’t believe it worked; I’m still pinching myself.” The combination of these techniques allowed the team to visualize the complex molecular dance that allows the enzyme to transport the catalytic “firepower” from one subunit to the next, in order to generate DNA building blocks. This firepower is derived from a highly reactive unpaired electron (a radical), which must be carefully controlled to prevent damage to the enzyme.  According to Drennan, the team “wanted to see how RNR does the equivalent of playing with fire without getting burned.” First author Kang says slowing down the radical transfer allowed them to observe parts of the enzyme no one had been able to see before in full. “Before this study, we knew this molecular dance was happening, but we’d never seen the dance in action,” he says. “But now that we have a structure for RNR in its active state, we have a much better idea about how the different components of the enzyme are moving and interacting in order to transfer the radical across long distances.” Although this molecular dance brings the subunits together, there is still considerable distance between them: The radical must travel 35-40 angstroms from the first subunit to the second. This journey is roughly 10 times farther than the average radical transfer, according to Drennan. The radical must then travel back to its starting place and be stored safely, all within a fraction of a second before the enzyme returns to its normal conformation. Because RNR is a target for drugs treating cancer and certain viruses, knowing its active-state structure could help researchers devise more effective treatments. Understanding the enzyme’s active state could also provide insight into biological electron transport for applications like biofuels. 
Drennan and Kang hope their study will encourage others to capture fleeting cellular events that have been difficult to observe in the past. “We may need to reassess decades of past results,” Drennan says. “This study could open more questions than it answers; it’s more of a beginning than an end.”

This research was funded by the National Institutes of Health, a David H. Koch Graduate Fellowship, and the Howard Hughes Medical Institute.

The ribonucleotide reductase (RNR) enzyme is responsible for converting RNA building blocks into DNA building blocks, and is a critical player in both DNA synthesis and repair in all organisms. Image: Gyunghoon “Kenny” Kang

Fusion energy community makes unified statement on priorities in report for Department of Energy Policy Advisory Group. Wed, 18 Mar 2020 15:00:01 -0400 Peter Dunn | Plasma Science and Fusion Center

The growing sense of urgency around development of fusion technology for energy production in the United States got another boost this week with the release of a community consensus report by a diverse group of researchers from academia, government labs, and industry. High among its recommendations is development of a pilot fusion power plant, an ambitious goal that would be an important step toward an American fusion energy industry.

The report — the first of its kind in almost 20 years and the product of a novel 15-month collaboration process — identifies high-priority scientific needs that can help fill gaps in fusion knowledge and facilitate the drive to make fusion a practical energy source. It will be used by the U.S. Department of Energy’s Fusion Energy Sciences Advisory Committee (FESAC) as it undertakes a new phase of strategic planning for its Fusion Energy Sciences program, the primary U.S. source of fusion research funding. If successfully harnessed, fusion would fundamentally change the world’s energy grid by offering safe, abundant, carbon-free electricity production.
Some 300 members of the fusion community hammered out their consensus during three major workshop meetings and hundreds of online working-group sessions, using an anonymous voting process that gave all participants the chance to express themselves freely. The top energy-related priorities include: development of a shared neutron source facility that can be used for development of critical materials and power plant designs; continued cultivation of burning plasma physics knowledge through ongoing participation in the international ITER program and expanded public-private collaboration in the United States; and immediate pre-conceptual design of a new U.S. tokamak facility, which would begin operation by the end of the decade and support work on power extraction from exhaust heat and plasma sustainment.

Also identified were several “opportunities and research needs” that are broadly applicable across the fusion and plasma fields: use of advanced computing technologies for better understanding and modeling; development of improved plasma diagnostics; enhanced support for public-private partnerships; and embracing diversity, equity, and inclusion, along with development of a more multidisciplinary workforce.

“This is the first time in a generation when the fusion community has been called upon to self-organize and figure out its highest priorities for getting from fusion science to fusion energy,” says Bob Mumgaard, chief executive of MIT spinout Commonwealth Fusion Systems (CFS), one of a growing number of private companies pursuing fusion. “How we can get ready, with data, experience, test facilities — the things that are needed to support the science, and eventually an industry.

“The National Academies of Science (NAS) issued a good report [in late 2018], that said we should be bold and do fusion now and create test facilities,” adds Mumgaard.
“But this is different because it’s the whole community, coming together in a very transparent grassroots effort to answer questions about what we’re doing, what needs to be done, and what we’re willing to not do. It wasn’t done in a back room but by scientists themselves, and they came out with a plan and priorities — it’s kind of cool.”

Nathan Howard, a research scientist at MIT’s Plasma Science and Fusion Center, was one of seven co-chairs who shared development oversight of the report, which will be used in developing long-range strategic plans for fusion science programs in response to a FESAC request issued in November 2018. “The American Physical Society Division of Plasma Physics took the lead and brought together the seven of us to gather data from the community,” explains Howard. In addition to fusion energy, the effort also generated extensive recommendations for Discovery Plasma Science, a diverse field of more-basic research with impact in astrophysics, high energy density plasma physics, and other disciplines.

One important development along the way was the creation of deeper linkages between the group focused on magnetic-confinement fusion and the one focused on fusion-related materials and technologies. “It really didn’t make sense for those to be separate,” notes Howard. “The merger occurred naturally during the process and was motivated in part by the NAS burning plasma report, which said the U.S. should pursue building a fusion pilot plant, a reactor that will demonstrate creation of electricity from fusion and a closed fusion fuel cycle. The fusion community adopted construction of a pilot plant as its mission during the process.”

While additional plasma research is important to achieving that goal, adds Howard, “the community recognized pretty clearly that we need more emphasis in fusion materials and technology.
Where we’re most lacking in the progress towards a power plant is in areas such as design of the blanket [the area surrounding the reactor, used to breed fusion fuel] and fusion-relevant materials.” Many of the outstanding materials issues are applicable not only to magnetic-confinement fusion, including the tokamak-type reactors that have received the most development attention to date, but also to inertial-confinement and other approaches, which offer different opportunities and challenges. The report’s official recipient is a FESAC subcommittee chaired by Troy Carter, professor of physics at the University of California at Los Angeles and director of the university’s Basic Plasma Science Facility and Plasma Science and Technology Institute. He praised Howard and the other co-chairs for “working incredibly hard to organize the effort and bring so many people together. The report is very compelling, and the whole community should be commended — this sets an example for future iterations of the process and makes the job of my subcommittee much easier.” In particular, says Carter, “junior members of the community really stepped up. The co-chairs are junior and mid-career people for the most part, and it’s important that it’s their plan, because given the time scale, they’ll be the ones implementing it.” Carter notes that, while he knew the concept of driving aggressively toward a pilot plant had support, “I was a bit surprised at how strongly it was embraced in the process. It’s ambitious, and it points us in the direction of using innovation to get fusion energy onto the grid much quicker. There’s still a lot of work to do in core plasma physics, but we’ve also got to get working on materials and other technology, which we’re not putting enough effort towards now. It’s refreshing to see that broad support for changing direction.” Carter’s group will now incorporate the report’s findings into strategic plans reflecting several budget scenarios it has been given. 
“We’ll lay it all out to take advantage of the opportunities in science and push towards the goal of realizing a pilot plant. We’ve got really good information about initiatives and guidance on prioritization,” he says. “But a lot of the initiatives aren’t at the level of conceptual design, so we’ll have to do some work to figure out what they will cost. We have project management experts to work with, and also people from the private side — we have three members connected to private fusion companies, and will also engage other external points of view.”

That process is expected to take about eight months, says Carter, with the results being submitted to FESAC around year end. After a vote, it would become FESAC’s official advice to the Department of Energy. “It’s something a lot of folks in Congress are interested in,” notes Carter.

CFS’s Mumgaard says the report’s delivery could prove to be a key moment for the United States, with the potential to lead to a new fusion policy, Congressional action to support the nascent fusion industry and prepare for power plant licensing and regulation, and ongoing funding that would give academic and national laboratory leaders confidence to hire staff and build infrastructure. “It feels like things are going in the right direction,” he says. “The scientific community has to speak with one voice, and this is the process that creates that voice.”

A fusion community report recommends three science drivers and several new facilities to accelerate toward commercially relevant fusion power. Image: MIT Plasma Science and Fusion Center

With support from renewable energy sources, the MIT research scientist says, we can consider hydrogen fuel as a tool for decarbonization.
Thu, 05 Mar 2020 15:05:01 -0500 Nafisa Syed | MIT Energy Initiative

As the world increasingly recognizes the need to develop more sustainable and renewable energy sources, low-carbon hydrogen has reemerged as an energy carrier with the potential to play a key role in sectors from transportation to power. At MITEI’s 2019 Spring Symposium, MIT Energy Initiative Research Scientist Emre Gençer gave a presentation titled “Hydrogen towards Deep Decarbonization,” in which he elaborated on how hydrogen can be used across all energy sectors. Other themes discussed by experts at the symposium included industry’s role in promoting hydrogen, public safety concerns surrounding the hydrogen infrastructure, and the policy landscape required to scale hydrogen around the world. Here, Gençer shares his thoughts on the history of hydrogen and how it could be incorporated into our energy system as a tool for deep decarbonization to address climate change.

Q: How has public perception of hydrogen changed over time?

A: Hydrogen has been in the public imagination since the 1870s. Jules Verne wrote that “water will be the coal of the future” in his novel “The Mysterious Island.” The concept of hydrogen has persisted in the public imagination for over a century, though interest in hydrogen has changed over time. Initial conversations about hydrogen focused on using it to supplement depleting fuel sources on Earth, but the role of hydrogen is evolving. Now we know that there is enough fuel on Earth, especially with the support of renewable energy sources, and that we can consider hydrogen as a tool for decarbonization. The first “hydrogen economy” concept was introduced in the 1970s. The term “hydrogen economy” refers to using hydrogen as an energy carrier, mostly for the transportation sector. In this context, hydrogen can be compared to electricity. Electricity requires a primary energy source and transmission lines to transmit electrons.
In the case of hydrogen, energy sources and transmission infrastructure are required to transport protons. In 2004, there was a big initiative in the U.S. to involve hydrogen in all energy sectors to ensure access to reliable and safe energy sources. That year, the National Research Council and National Academy of Engineering released a report titled “The Hydrogen Economy: Opportunities, Costs, Barriers, and R&D Needs.” This report described how hydrogen could be used to increase energy security and reduce environmental impacts. Because its combustion yields only water vapor, hydrogen does not produce carbon dioxide (CO2) emissions. As a result, we can really benefit from eliminating CO2 emissions in many of its end-use applications. Today, hydrogen is primarily used in industry to remove contaminants from diesel fuel and to produce ammonia. Hydrogen is also used in consumer vehicles with hydrogen fuel cells, and countries such as Japan are exploring its use in public transportation. In the future, there is ample room for hydrogen in the energy space. Some of the work I completed for my PhD in 2015 involved researching efficient hydrogen production via solar thermal and other renewable sources. This application of renewable energy is now coming back to the fore as we think about “deep decarbonization.”

Q: How can hydrogen be incorporated into our energy system?

A: When we consider deep decarbonization, or economy-wide decarbonization, there are some sectors that are hard to decarbonize with electricity alone. They include heavy industries that require high temperatures, heavy-duty transportation, and long-term energy storage. We are now thinking about the role hydrogen can play in decarbonizing these sectors. Hydrogen has a number of properties that make it safer to handle and use than the conventional fuels used in our energy system today. Hydrogen is nontoxic and much lighter than air. In the case of a leak, its lightness allows for relatively rapid dispersal.
All fuels have some degree of danger associated with them, but we can design fuel systems with engineering controls and establish standards to ensure their safe handling and use. As the number of successful hydrogen projects grows, the public will become increasingly confident that hydrogen can be as safe as the fuels we use today. To expand hydrogen’s uses, we first need to explore ways of integrating it into as many energy sectors as possible. This presents a challenge because the entry points can vary for different regions. For example, in colder regions like the northeastern U.S., hydrogen can help provide heating. In California, it can be used for energy storage and light-duty transportation. And in the southern U.S., hydrogen can be used in industry as a feedstock or energy source. Once the most strategic entry points for hydrogen are identified for each region, the supporting infrastructure can be built and used for additional purposes. For example, if the northeastern U.S. implements hydrogen as its primary source of residential heating, other uses for hydrogen will follow, such as for transportation or energy storage. At that point, we hope that the market will shift so that it is profitable to use hydrogen across all energy sectors.

Q: What challenges need to be overcome so that hydrogen can be used to support decarbonization, and what are some solutions to these challenges?

A: The first challenge involves addressing the large capital investment that needs to be made, especially in infrastructure. Once industry and policymakers are convinced that hydrogen will be a critical component for decarbonization, investing in that infrastructure is the next step. Currently, we have many hydrogen plants — we know how to produce hydrogen. But in order to move toward a semi-hydrogen economy, we need to identify the sectors or end users that really require or could benefit from using hydrogen. The way I see it, we need two energy vectors for decarbonization.
One is electricity; we are sure about that. But it’s not enough. The second vector can be, and should be, hydrogen. Another key issue is the nature of hydrogen production itself. Though hydrogen does not generate any emissions directly when used, hydrogen production can have a huge environmental impact. Today, close to 95 percent of its production is from fossil resources. As a result, the CO2 emissions from hydrogen production are quite high. There are two ways to move toward cleaner hydrogen production. One is applying carbon capture and storage to the fossil fuel-based hydrogen production processes. In this case, usually a CO2 emissions reduction of around 90 percent is feasible. The second way to produce cleaner hydrogen is by using electricity to produce hydrogen via electrolysis. Here, the source of electricity is very important. Our source of electricity needs to produce very low levels of CO2 emissions, if not zero. Otherwise, there will not be any environmental benefit. If we start with clean, low-carbon electricity sources such as renewables, our CO2 emissions will be quite low.

Emre Gençer discusses hydrogen at the MIT Energy Initiative’s 2019 Spring Symposium. Photo: Kelley Travers

A five-story mixed-use structure in Roxbury represents a new kind of net-zero-energy building, made from wood. Wed, 04 Mar 2020 23:59:59 -0500 David L. Chandler | MIT News Office

A new building about to take shape in Boston’s Roxbury area could, its designers hope, herald a new way of building residential structures in cities. Designed by architects from MIT and the design and construction firm Placetailor, the five-story building’s structure will be made from cross-laminated timber (CLT), which eliminates most of the greenhouse-gas emissions associated with standard building materials.
It will be assembled on site mostly from factory-built subunits, and it will be so energy-efficient that its net carbon emissions will be essentially zero.

Most attempts to quantify a building’s greenhouse gas contributions focus on the building’s operations, especially its heating and cooling systems. But the materials used in a building’s construction, especially steel and concrete, are also major sources of carbon emissions and need to be included in any realistic comparison of different types of construction.

Wood construction has tended to be limited to single-family houses or smaller apartment buildings with just a few units, narrowing the impact that it can have in urban areas. But recent developments — involving the production of large-scale wood components, known as mass timber; the use of techniques such as cross-laminated timber; and changes in U.S. building codes — now make it possible to extend wood’s reach into much larger buildings, potentially up to 18 stories high.

Several recent buildings in Europe have been pushing these limits, and now a few larger wooden buildings are beginning to take shape in the U.S. as well. The new project in Boston will be one of the largest such residential buildings in the U.S. to date, as well as one of the most innovative, thanks to its construction methods.

Described as a Passive House Demonstration Project, the Boston building will consist of 14 residential units of various sizes, along with a ground-floor co-working space for the community.
The building was designed by Generate Architecture and Technologies, a startup company out of MIT and Harvard University, headed by John Klein, in partnership with Placetailor, a design, development, and construction company that has specialized in building net-zero-energy and carbon-neutral buildings for more than a decade in the Boston area.

Klein, who has been a principal investigator in MIT’s Department of Architecture and now serves as CEO of Generate, says that large buildings made from mass timber and assembled using the kit-of-parts approach he and his colleagues have been developing have a number of potential advantages over conventionally built structures of similar dimensions. For starters, even when factoring in the energy used in felling, transporting, assembling, and finishing the structural lumber pieces, the total carbon emissions produced would be less than half that of a comparable building made with conventional steel or concrete. Klein, along with collaborators from engineering firm BuroHappold Engineering and ecological market development firm Olifant, will be presenting a detailed analysis of these lifecycle emissions comparisons later this year at the annual Passive and Low Energy Architecture (PLEA) conference in A Coruña, Spain, whose theme this year is “planning post-carbon cities.”

For that study, Klein and his co-authors modeled nine different versions of an eight-story mass-timber building, along with one steel and one concrete version of the building, all with the same overall scale and specifications. Their analysis showed that materials for the steel-based building produced the most greenhouse emissions; the concrete version produced 8 percent less than that; and one version of the mass-timber building produced 53 percent less.

The first question people tend to ask about the idea of building tall structures out of wood is: What about fire?
But Klein says this question has been thoroughly studied, and tests have shown that, in fact, a mass-timber building retains its structural strength longer than a comparable steel-framed building. That’s because the large timber elements, typically a foot thick or more, are made by gluing together several layers of conventional dimensioned lumber. These will char on the outside when exposed to fire, but the charred layer actually provides good insulation and protects the wood for an extended period. Steel buildings, by contrast, can collapse suddenly when the temperature of the fire approaches steel’s melting point and causes it to soften.

The kit-based approach that Generate and Placetailor have developed, which the team calls Model-C, means that in designing a new building, it’s possible to use a series of preconfigured modules, assembled in different ways, to create a wide variety of structures of different sizes and for different uses, much like assembling a toy structure out of LEGO blocks. These subunits can be built in factories in a standardized process and then trucked to the site and bolted together. This process can reduce the impact of weather by keeping much of the fabrication process indoors in a controlled environment, while minimizing the construction time on site and thus reducing the construction’s impact on the neighborhood.

Animation depicts the process of assembling the mass-timber building from a set of factory-built components. Courtesy of Generate Architecture and Technologies

“It’s a way to rapidly deploy these kinds of projects through a standardized system,” Klein says. “It’s a way to build rapidly in cities, using an aesthetic that embraces offsite industrial construction.”

Because the thick wood structural elements are naturally very good insulators, the Roxbury building’s energy needs for heating and cooling are reduced compared to conventional construction, Klein says. They also produce very good acoustic insulation for its occupants.
In addition, the building is designed to have solar panels on its roof, which will help to offset the building’s energy use.

The team won a wood innovation grant in 2018 from the U.S. Forest Service, to develop a mass-timber based system for midscale housing developments. The new Boston building will be the first demonstration project for the system they developed.

“It’s really a system, not a one-off prototype,” Klein says. With the on-site assembly of factory-built modules, which includes fully assembled bathrooms with the plumbing in place, he says the basic structure of the building can be completed in only about one week per floor.

“We’re all aware of the need for an immediate transition to a zero-carbon economy, and the building sector is a prime target,” says Andres Bernal SM ’13, Placetailor’s director of architecture. “As a company that has delivered only zero-carbon buildings for over a decade, we’re very excited to be working with CLT/mass timber as an option for scaling up our approach and sharing the kit-of-parts and lessons learned with the rest of the Boston community.”

With U.S. building codes now allowing for mass timber buildings of up to 18 stories, Klein hopes that this building will mark the beginning of a new boom in wood-based or hybrid construction, which he says could help to provide a market for large-scale sustainable forestry, as well as for sustainable, net-zero energy housing.

“We see it as very competitive with concrete and steel for buildings of between eight and 12 stories,” he says. Such buildings, he adds, are likely to have great appeal, especially to younger generations, because “sustainability is very important to them. This provides solutions for developers, that have a real market differentiation.”

He adds that Boston has set a goal of building thousands of new units of housing, and also a goal of making the city carbon-neutral.
“Here’s a solution that does both,” he says.

The project team included Evan Smith and Colin Booth at Placetailor Development; in addition to Klein, Zlatan Sehovic, Chris Weaver, John Fechtel, Jaehun Woo, and Clarence Yi-Hsien Lee at Generate Design; Andres Bernal, Michelangelo LaTona, Travis Anderson, and Elizabeth Hauver at Placetailor Design; Laura Jolly and Evan Smith at Placetailor Construction; Paul Richardson and Wolf Mangelsdorf at Burohappold; Sonia Barrantes and Jacob Staub at Ripcord Engineering; and Brian Kuhn and Caitlin Gamache at Code Red.

Architect’s rendering shows the new mass-timber residential building that will soon begin construction in Boston’s Roxbury neighborhood. Images: Generate Architecture and Technologies

Aerogels for solar devices and windows are more transparent than glass. Tue, 25 Feb 2020 13:10:01 -0500 Nancy W. Stauffer | MIT Energy Initiative

In recent decades, the search for high-performance thermal insulation for buildings has prompted manufacturers to turn to aerogels. Invented in the 1930s, these remarkable materials are translucent, ultraporous, lighter than a marshmallow, strong enough to support a brick, and an unparalleled barrier to heat flow, making them ideal for keeping heat inside on a cold winter day and outside when summer temperatures soar.

Five years ago, researchers led by Evelyn Wang, a professor and head of the Department of Mechanical Engineering, and Gang Chen, the Carl Richard Soderberg Professor in Power Engineering, set out to add one more property to that list. They aimed to make a silica aerogel that was truly transparent.

“We started out trying to realize an optically transparent, thermally insulating aerogel for solar thermal systems,” says Wang. Incorporated into a solar thermal collector, a slab of aerogel would allow sunshine to come in unimpeded but prevent heat from coming back out — a key problem in today’s systems.
And if the transparent aerogel were sufficiently clear, it could be incorporated into windows, where it would act as a good heat barrier but still allow occupants to see out. When the researchers started their work, even the best aerogels weren’t up to those tasks.

“People had known for decades that aerogels are a good thermal insulator, but they hadn’t been able to make them very optically transparent,” says Lin Zhao PhD ’19 of mechanical engineering. “So in our work, we’ve been trying to understand exactly why they’re not very transparent, and then how we can improve their transparency.”

Aerogels: opportunities and challenges

The remarkable properties of a silica aerogel are the result of its nanoscale structure. To visualize that structure, think of holding a pile of small, clear particles in your hand. Imagine that the particles touch one another and slightly stick together, leaving gaps between them that are filled with air. Similarly, in a silica aerogel, clear, loosely connected, nanoscale silica particles form a three-dimensional solid network within an overall structure that is mostly air. Because of all that air, a silica aerogel has an extremely low density — in fact, one of the lowest densities of any known bulk material — yet it’s solid and structurally strong, though brittle.

If a silica aerogel is made of transparent particles and air, why isn’t it transparent? Because the light that enters doesn’t all pass straight through. It is diverted whenever it encounters an interface between a solid particle and the air surrounding it. Figure 1 in the slideshow above illustrates the process. When light enters the aerogel, some is absorbed inside it. Some — called direct transmittance — travels straight through. And some is redirected along the way by those interfaces. It can be scattered many times and in any direction, ultimately exiting the aerogel at an angle.
If it exits from the surface through which it entered, it is called diffuse reflectance; if it exits from the other side, it is called diffuse transmittance.

To make an aerogel for a solar thermal system, the researchers needed to maximize the total transmittance: the direct plus the diffuse components. And to make an aerogel for a window, they needed to maximize the total transmittance and simultaneously minimize the fraction of the total that is diffuse light. “Minimizing the diffuse light is critical because it’ll make the window look cloudy,” says Zhao. “Our eyes are very sensitive to any imperfection in a transparent material.”

Developing a model

The sizes of the nanoparticles and the pores between them have a direct impact on the fate of light passing through an aerogel. But figuring out that interaction by trial and error would require synthesizing and characterizing too many samples to be practical. “People haven’t been able to systematically understand the relationship between the structure and the performance,” says Zhao. “So we needed to develop a model that would connect the two.”

To begin, Zhao turned to the radiative transport equation, which describes mathematically how the propagation of light (radiation) through a medium is affected by absorption and scattering. It is generally used for calculating the transfer of light through the atmospheres of Earth and other planets. As far as Wang knows, it had not been fully explored for the aerogel problem.

Both scattering and absorption can reduce the amount of light transmitted through an aerogel, and light can be scattered multiple times. To account for those effects, the model decouples the two phenomena and quantifies them separately — and for each wavelength of light.
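The decoupled treatment of absorption and scattering can be illustrated with a highly simplified, single-wavelength sketch. This is not the authors’ radiative-transport solver: the coefficient values and the even forward/backward split of scattered light are illustrative assumptions only.

```python
import math

def transmittance(mu_a, mu_s, thickness):
    """Split light leaving a slab into direct and diffuse components.

    mu_a, mu_s: absorption and scattering coefficients (1/m) at ONE wavelength
    thickness:  slab thickness (m)

    Direct (never-scattered) light follows the Beer-Lambert law. The diffuse
    part is a crude estimate: of the light removed from the direct beam, the
    scattered (not absorbed) fraction is split evenly between the forward
    (diffuse transmittance) and backward (diffuse reflectance) directions.
    """
    mu_t = mu_a + mu_s                       # total extinction coefficient
    t_direct = math.exp(-mu_t * thickness)   # Beer-Lambert attenuation
    albedo = mu_s / mu_t                     # scattering fraction of extinction
    scattered = (1.0 - t_direct) * albedo    # removed from beam, not absorbed
    return t_direct, 0.5 * scattered, 0.5 * scattered

# Illustrative 1 cm slab: weak absorption, moderate scattering
t_dir, t_dif, r_dif = transmittance(mu_a=1.0, mu_s=20.0, thickness=0.01)
total_transmittance = t_dir + t_dif
```

The real model repeats a calculation like this for every wavelength and tracks multiple scattering events, but the bookkeeping — direct, diffuse-transmitted, diffuse-reflected, absorbed — is the same.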
Based on the sizes of the silica particles and the density of the sample (an indicator of total pore volume), the model calculates light intensity within an aerogel layer by determining its absorption and scattering behavior using predictions from electromagnetic theory. Using those results, it calculates how much of the incoming light passes directly through the sample and how much of it is scattered along the way and comes out diffuse. The next task was to validate the model by comparing its theoretical predictions with experimental results.

Synthesizing aerogels

Working in parallel, graduate student Elise Strobach of mechanical engineering had been learning how best to synthesize aerogel samples — both to guide development of the model and ultimately to validate it. In the process, she produced new insights on how to synthesize an aerogel with a specific desired structure.

Her procedure starts with a common form of silicon called silane, which chemically reacts with water to form an aerogel. During that reaction, tiny nucleation sites occur where particles begin to form. How fast they build up determines the end structure. To control the reaction, she adds a catalyst, ammonia. By carefully selecting the ammonia-to-silane ratio, she gets the silica particles to grow quickly at first and then abruptly stop growing when the precursor materials are gone — a means of producing particles that are small and uniform. She also adds a solvent, methanol, to dilute the mixture and control the density of the nucleation sites, and thus the pores between the particles.

The reaction between the silane and water forms a gel containing a solid nanostructure with interior pores filled with the solvent. To dry the wet gel, Strobach needs to get the solvent out of the pores and replace it with air — without crushing the delicate structure. She puts the aerogel into the pressure chamber of a critical point dryer and floods liquid CO2 into the chamber.
The liquid CO2 flushes out the solvent and takes its place inside the pores. She then slowly raises the temperature and pressure inside the chamber until the liquid CO2 transforms to its supercritical state, where the liquid and gas phases can no longer be differentiated. Slowly venting the chamber releases the CO2 and leaves the aerogel behind, now filled with air. She then subjects the sample to 24 hours of annealing — a standard heat-treatment process — which slightly reduces scatter without sacrificing the strong thermal insulating behavior. Even with the 24 hours of annealing, her novel procedure shortens the required aerogel synthesis time from several weeks to less than four days.

Validating and using the model

To validate the model, Strobach fabricated samples with carefully controlled thicknesses, densities, and pore and particle sizes — as determined by small-angle X-ray scattering — and used a standard spectrophotometer to measure the total and diffuse transmittance. The data confirmed that, based on measured physical properties of an aerogel sample, the model could calculate total transmittance of light as well as a measure of clarity called haze, defined as the fraction of total transmittance that is made up of diffuse light.

The exercise confirmed simplifying assumptions made by Zhao in developing the model. Also, it showed that the radiative properties are independent of sample geometry, so his model can simulate light transport in aerogels of any shape. And it can be applied not just to aerogels, but to any porous materials.

Wang notes what she considers the most important insight from the modeling and experimental results: “Overall, we determined that the key to getting high transparency and minimal haze — without reducing thermal insulating capability — is to have particles and pores that are really small and uniform in size,” she says. One analysis demonstrates the change in behavior that can come with a small change in particle size.
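The haze metric just defined is simple to compute from the two transmittance components. A minimal sketch, with hypothetical numbers chosen only to contrast a window-grade sample with a cloudy one:

```python
def haze(direct_transmittance, diffuse_transmittance):
    """Haze = diffuse / (direct + diffuse) transmitted light.

    High haze reads as cloudiness, which is why a window aerogel needs
    both high total transmittance and a small diffuse component.
    """
    total = direct_transmittance + diffuse_transmittance
    return diffuse_transmittance / total

# Two hypothetical samples with the same total transmittance (0.95)
clear_sample = haze(direct_transmittance=0.93, diffuse_transmittance=0.02)
cloudy_sample = haze(direct_transmittance=0.65, diffuse_transmittance=0.30)
```

Both samples transmit the same total amount of light, but the second would look milky: haze distinguishes a solar-collector-grade aerogel from a window-grade one.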
Many applications call for using a thicker piece of transparent aerogel to better block heat transfer. But increasing thickness may decrease transparency. With their samples, as long as particle size is small, increasing thickness to achieve greater thermal insulation will not significantly decrease total transmittance or increase haze.

Comparing aerogels from MIT and elsewhere

How much difference does their approach make? “Our aerogels are more transparent than glass because they don’t reflect — they don’t have that glare spot where the glass catches the light and reflects to you,” says Strobach.

To Zhao, a main contribution of their work is the development of general guidelines for material design, as demonstrated by Figure 4 in the slideshow above. Aided by such a “design map,” users can tailor an aerogel for a particular application. Based on the contour plots, they can determine the combinations of controllable aerogel properties — namely, density and particle size — needed to achieve a targeted haze and transmittance outcome for many applications.

Aerogels in solar thermal collectors

The researchers have already demonstrated the value of their new aerogels for solar thermal energy conversion systems, which convert sunlight into thermal energy by absorbing radiation and transforming it into heat. Current solar thermal systems can produce thermal energy at so-called intermediate temperatures — between 120 and 220 degrees Celsius — which can be used for water and space heating, steam generation, industrial processes, and more. Indeed, in 2016, U.S. consumption of thermal energy exceeded the total electricity generation from all renewable sources. However, state-of-the-art solar thermal systems rely on expensive optical systems to concentrate the incoming sunlight, specially designed surfaces to absorb radiation and retain heat, and costly and difficult-to-maintain vacuum enclosures to keep that heat from escaping.
To date, the costs of those components have limited market adoption. Zhao and his colleagues thought that using a transparent aerogel layer might solve those problems. Placed above the absorber, it could let through incident solar radiation and then prevent the heat from escaping. So it would essentially replicate the natural greenhouse effect that’s causing global warming — but to an extreme degree, on a small scale, and with a positive outcome.

To try it out, the researchers designed an aerogel-based solar thermal receiver. The device consists of a nearly “blackbody” absorber (a thin copper sheet coated with black paint that absorbs all radiant energy that falls on it), and above it a stack of optimized, low-scattering silica aerogel blocks, which efficiently transmit sunlight and simultaneously suppress conduction, convection, and radiation heat losses. The nanostructure of the aerogel is tailored to maximize its optical transparency while maintaining its ultralow thermal conductivity. With the aerogel present, there is no need for expensive optics, surfaces, or vacuum enclosures.

After extensive laboratory tests of the device, the researchers decided to test it “in the field” — in this case, on the roof of an MIT building. On a sunny day in winter, they set up their device, fixing the receiver toward the south and tilted 60 degrees from horizontal to maximize solar exposure. They then monitored its performance between 11 a.m. and 1 p.m. Despite the cold ambient temperature (less than 1 degree Celsius) and the presence of clouds in the afternoon, the temperature of the absorber started increasing right away and eventually stabilized above 220 C.

To Zhao, the performance already demonstrated by the artificial greenhouse effect opens up what he calls “an exciting pathway to the promotion of solar thermal energy utilization.” Already, he and his colleagues have demonstrated that it can produce steam at temperatures above 120 C.
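The rooftop result can be understood with a lumped energy balance: the absorber temperature stabilizes where absorbed sunlight equals heat lost to the surroundings. The sketch below uses hypothetical parameters, not the experiment’s measured values; the point is only that shrinking the loss coefficient (the aerogel’s job) raises the stagnation temperature dramatically.

```python
def stagnation_temperature(solar_flux, absorptance, loss_coeff, t_ambient):
    """Steady-state absorber temperature from a lumped energy balance.

    At stagnation, absorbed sunlight equals heat lost to the surroundings:
        absorptance * solar_flux = loss_coeff * (T - t_ambient)
    loss_coeff (W/m^2/K) lumps conduction, convection, and radiation losses.
    """
    return t_ambient + absorptance * solar_flux / loss_coeff

# Illustrative comparison (hypothetical parameters): a bare black absorber
# versus one covered by an aerogel stack that strongly suppresses losses.
bare = stagnation_temperature(solar_flux=900, absorptance=0.95,
                              loss_coeff=20.0, t_ambient=0.0)
covered = stagnation_temperature(solar_flux=900, absorptance=0.90,
                                 loss_coeff=3.5, t_ambient=0.0)
```

With these illustrative numbers, the bare absorber stalls at tens of degrees while the aerogel-covered one climbs past 200 C, consistent in spirit with the behavior reported above.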
In collaboration with researchers at the Indian Institute of Technology Bombay, they are now exploring possible process steam applications in India and performing field tests of a low-cost, completely passive solar autoclave for sterilizing medical equipment in rural communities.

Windows and more

Strobach has been pursuing another promising application for the transparent aerogel — in windows. “In trying to make more transparent aerogels, we hit a regime in our fabrication process where we could make things smaller, but it didn’t result in a significant change in the transparency,” she says. “But it did make a significant change in the clarity,” a key feature for a window.

The availability of an affordable, thermally insulating window would have several impacts, says Strobach. Every winter, windows in the United States lose enough energy to power over 50 million homes. That wasted energy costs the economy more than $32 billion a year and generates about 350 million tons of CO2 — more than is emitted by 76 million cars. Consumers can choose high-efficiency triple-pane windows, but they’re so expensive that they’re not widely used.

Analyses by Strobach and her colleagues showed that replacing the air gap in a conventional double-pane window with an aerogel pane could be the answer. The result could be a double-pane window that is 40 percent more insulating than traditional ones and 85 percent as insulating as today’s triple-pane windows — at less than half the price.

Better still, the technology could be adopted quickly. The aerogel pane is designed to fit within the current two-pane manufacturing process that’s ubiquitous across the industry, so it could be manufactured at low cost on existing production lines with only minor changes.

Guided by Zhao’s model, the researchers are continuing to improve the performance of their aerogels, with a special focus on increasing clarity while maintaining transparency and thermal insulation.
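The stated comparisons can be turned into a back-of-envelope calculation using thermal resistance (R-value), where heat loss scales as 1/R. The double-pane R-value below is a typical illustrative figure, not one from the article; the other two values are derived purely from the article’s stated ratios.

```python
# Thermal resistance R (higher = more insulating); heat loss ~ 1/R.
r_double = 2.0                  # conventional double-pane (illustrative value)
r_aerogel = r_double * 1.40     # "40 percent more insulating" than double-pane
r_triple = r_aerogel / 0.85     # aerogel pane is "85 percent as insulating"
                                # as a triple-pane window

# Relative heat loss for the same indoor/outdoor temperature difference
relative_loss = {
    "double": 1.0 / r_double,
    "aerogel": 1.0 / r_aerogel,
    "triple": 1.0 / r_triple,
}
```

Whatever the absolute R-value of the baseline window, the ordering holds: the aerogel pane loses noticeably less heat than a standard double-pane unit and approaches triple-pane performance.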
In addition, they are considering other traditional low-cost systems that would — like the solar thermal and window technologies — benefit from sliding in an optimized aerogel to create a high-performance heat barrier that lets in abundant sunlight.

This research was supported by the Full-Spectrum Optimized Conversion and Utilization of Sunlight program of the U.S. Department of Energy’s Advanced Research Projects Agency–Energy; the Solid-State Solar Thermal Energy Conversion Center, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences; and the MIT Tata Center for Technology and Design. Elise Strobach received funding from the National Science Foundation Graduate Research Fellowship Program. Lin Zhao PhD ’19 is now an optics design engineer at 3M in St. Paul, Minnesota. This article appears in the Autumn 2019 issue of Energy Futures, the magazine of the MIT Energy Initiative.

MIT Professor Evelyn Wang (right), graduate student Elise Strobach (left), and their colleagues have been performing theoretical and experimental studies of low-cost silica aerogels optimized to serve as a transparent heat barrier in specific devices. Photo: Stuart Darsch

Most materials have a fixed ability to conduct heat, but applying voltage to this thin film changes its thermal properties drastically. Mon, 24 Feb 2020 14:50:23 -0500 David Chandler | MIT News Office

Materials whose electronic and magnetic properties can be significantly changed by applying electrical inputs form the backbone of all of modern electronics. But achieving the same kind of tunable control over the thermal conductivity of any material has been an elusive quest. Now, a team of researchers at MIT has made a major leap forward. They have designed a long-sought device, which they refer to as an “electrical heat valve,” that can vary the thermal conductivity on demand.
They demonstrated that the material’s ability to conduct heat can be “tuned” by a factor of 10 at room temperature. This technique could potentially open the door to new technologies for controllable insulation in smart windows, smart walls, smart clothing, or even new ways of harvesting the energy of waste heat.

The findings are reported today in the journal Nature Materials, in a paper by MIT professors Bilge Yildiz and Gang Chen, recent graduates Qiyang Lu PhD ’18 and Samuel Huberman PhD ’18, and six others at MIT and at Brookhaven National Laboratory.

Thermal conductivity describes how well heat can transfer through a material. For example, it’s the reason you can easily pick up a hot frying pan with a wooden handle, because of wood’s low thermal conductivity, but you might get burned picking up a similar frying pan with a metal handle, which has high thermal conductivity.

The researchers used a material called strontium cobalt oxide (SCO), which can be made in the form of thin films. By adding oxygen to SCO in a crystalline form called brownmillerite, thermal conductivity increased. Adding hydrogen to it caused conductivity to decrease. The process of adding or removing oxygen and hydrogen can be controlled simply by varying a voltage applied to the material. In essence, the process is electrochemically driven.

Overall, at room temperature, the researchers found this process provided a tenfold variation in the material’s heat conduction. Such an order-of-magnitude range of electrically controllable variation has never been seen in any material before, the researchers say.

In most known materials, thermal conductivity is invariable — wood never conducts heat well, and metals never conduct heat poorly. As such, when the researchers found that adding certain atoms into the molecular structure of a material could actually increase its thermal conductivity, it was an unexpected result.
If anything, adding the extra atoms — or, more specifically, ions: atoms stripped of some electrons, or with excess electrons, to give them a net charge — should make conductivity worse (which, it turned out, was the case when adding hydrogen, but not oxygen).

“It was a surprise to me when I saw the result,” Chen says. But after further studies of the system, he says, “now we have a better understanding” of why this unexpected phenomenon happens.

It turns out that inserting oxygen ions into the structure of the brownmillerite SCO transforms it into what’s known as a perovskite structure — one that has an even more highly ordered structure than the original. “It goes from a low-symmetry structure to a high-symmetry one. It also reduces the amount of so-called oxygen vacancy defect sites. These together lead to its higher heat conduction,” Yildiz says.

Heat is conducted readily through such highly ordered structures, while it tends to be scattered and dissipated by highly irregular atomic structures. Introducing hydrogen ions, by contrast, causes a more disordered structure.

“We can introduce more order, which increases thermal conductivity, or we can introduce more disorder, which gives rise to lower conductivity. We could figure this out by performing computational modeling, in addition to our experiments,” Yildiz explains. While the thermal conductivity can be varied by about a factor of 10 at room temperature, at lower temperatures the variation is even greater, she adds.

The new method makes it possible to continuously vary that degree of order, in both directions, simply by varying a voltage applied to the thin-film material. The material is either immersed in an ionic liquid (essentially a liquid salt) or in contact with a solid electrolyte that supplies either negative oxygen ions or positive hydrogen ions (protons) into the material when the voltage is turned on.
In the liquid electrolyte case, the source of the oxygen and hydrogen is the electrochemical splitting of water absorbed from the surrounding air.

“What we have shown here is really a demonstration of the concept,” Yildiz explains. The fact that they require the use of a liquid electrolyte medium for the full range of hydrogenation and oxygenation makes this version of the system “not easily applicable to an all-solid-state device,” which would be the ultimate goal, she says. Further research will be needed to produce a more practical version. “We know there are solid-state electrolyte materials” that could theoretically be substituted for the liquids, she says. The team is continuing to explore these possibilities, and has demonstrated working devices with solid electrolytes as well.

Chen says “there are many applications where you want to regulate heat flow.” For example, for energy storage in the form of heat, such as from a solar-thermal installation, it would be useful to have a container that could be highly insulating to retain the heat until it’s needed, but which then could be switched to be highly conductive when it comes time to retrieve that heat. “The holy grail would be something we could use for energy storage,” he says. “That’s the dream, but we’re not there yet.”

But this finding is so new that there may also be a variety of other potential uses. This approach, Yildiz says, “could open up new applications we didn’t think of before.” And while the work was initially confined to the SCO material, “the concept is applicable to other materials, because we know we can oxygenate or hydrogenate a range of materials electrically, electrochemically,” she says.
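The practical effect of a tenfold conductivity swing follows directly from Fourier’s law for steady conduction through a film. A short sketch with purely illustrative film parameters (not the SCO device’s actual dimensions or conductivities):

```python
def heat_flux(conductivity, thickness, delta_t):
    """Fourier's law for steady conduction through a film: q = k * dT / L (W/m^2)."""
    return conductivity * delta_t / thickness

# Illustrative numbers only: switching the film's conductivity by a factor
# of 10 switches the heat flow by the same factor under identical conditions.
q_open = heat_flux(conductivity=10.0, thickness=1e-6, delta_t=5.0)   # valve "open"
q_closed = heat_flux(conductivity=1.0, thickness=1e-6, delta_t=5.0)  # valve "closed"
```

This is what makes the “heat valve” metaphor apt: for a fixed geometry and temperature difference, heat flow tracks conductivity one-for-one.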
In addition, although this research focused on changing the thermal properties, the same process actually has other effects as well, Chen says: “It not only changes thermal conductivity, but it also changes optical properties.”

“This is a truly innovative and novel way for using ion insertion and extraction in solids to tune or switch thermal conductivity,” says Juergen Fleig, a professor of chemical technology and analytics at the University of Vienna, Austria, who was not involved in this work. “The measured effects (caused by two phase transitions) are not only quite large but also bi-directional, which is exciting. I’m also impressed that the processes work so well at room temperature, since such oxide materials are usually operated at much higher temperatures.”

Yongjie Hu, an associate professor of mechanical and aerospace engineering at the University of California at Los Angeles, who also was not involved in this work, says: “Active control over thermal transport is fundamentally challenging. This is a very exciting study and represents an important step to achieve the goal. It is the first report that has looked in detail at the structures and thermal properties of tri-state phases, and may open up new venues for thermal management and energy applications.”

The research team also included Hantao Zhang, Qichen Song, Jayue Wang, and Gulin Vardar at MIT, and Adrian Hunt and Iradwikanari Waluyo at Brookhaven National Laboratory in Upton, New York. The work was supported by the National Science Foundation and the U.S. Department of Energy.

Researchers found that strontium cobalt oxide (SCO) naturally occurs in an atomic configuration called brownmillerite (center), but when oxygen ions are added to it (right), it becomes more orderly and more heat conductive, and when hydrogen ions are added (left) it becomes less orderly and less heat conductive.
Image: courtesy of the researchers

Investments in energy efficiency projects, sustainable design elements essential as campus transforms. Fri, 21 Feb 2020 14:20:01 -0500 Nicole Morell | Office of Sustainability

At MIT, making a better world often starts on campus. That’s why, as the Institute works to find solutions to complex global problems, MIT has taken important steps to grow and transform its physical campus: adding new capacity, capabilities, and facilities to better support student life, education, and research. But growing and transforming the campus relies on resource and energy use — use that can exacerbate the complex global problem of climate change. This raises the question: How can an institution like MIT grow, and simultaneously work to lessen its greenhouse gas emissions and contributions to climate change? It’s a question — and a challenge — that MIT is committed to tackling.

Tracking toward 2030 goals

Guided by the 2015 Plan for Action on Climate Change, MIT continues to work toward a goal of a minimum of 32 percent reduction in campus greenhouse gas emissions by 2030. As reported in the MIT Office of Sustainability’s (MITOS) climate action plan update, campus greenhouse gas (GHG) emissions rose by 2 percent in 2019, in part due to a longer cooling season as well as the new MIT.nano facility coming fully online. Despite this, overall net emissions are 18 percent below the 2014 baseline, and MIT continues to track toward its 2030 goal.

Joe Higgins, vice president for campus services and stewardship, is optimistic about MIT’s ability to not only meet, but exceed this current goal. “With this growth [to campus], we are discovering unparalleled opportunities to work toward carbon neutrality by collaborating with key stakeholders across the Institute, tapping into the creativity of our faculty, students, and researchers, and partnering with industry experts.
We are committed to making steady progress toward achieving our GHG reduction goal,” he says.

New growth to campus

This past year marked the first full year of operation for the new MIT.nano facility. This facility includes many energy-intensive labs that necessitate high ventilation rates to meet the requirements of a nanotechnology clean-room fabrication laboratory. As a result, the facility’s energy demands and GHG emissions can be much higher than a traditional science building’s. In addition, this facility — among others — uses specialty research gases that can act as potent greenhouse gases. Still, the 214,000-square-foot building has a number of sustainable, high-energy-efficiency design features, including an innovative air filtering process to support clean-room standards while minimizing energy use. For these sustainable design elements, the facility was recognized with an International Institute for Sustainable Laboratories (I2SL) 2019 Go Beyond Award.

In 2020, MIT.nano will be joined by new residential and multi-use buildings in both West Campus and Kendall Square, with the Vassar Street Residence and Kendall Square Sites 4 and 5 set to be completed. In keeping with MIT’s target of LEED v4 Gold certification for new projects, these buildings were designed for high energy efficiency to minimize emissions and include a number of other sustainability measures, from green roofs to high-performance building envelopes.
With new construction on campus, integrated design processes allow sustainability and energy efficiency strategies to be adopted at the outset.

Energy efficiency on an established campus

For years, MIT has been keenly focused on increasing the energy efficiency and reducing the emissions of its existing buildings, but as the campus grows, reducing emissions of current buildings through deep energy enhancements is an increasingly important part of offsetting emissions from new growth.

To best accomplish this, the Department of Facilities — in close collaboration with the Office of Sustainability — has developed and rolled out a governance structure that relies on cross-functional teams to create new standards and policies, identify opportunities, develop projects, and assess progress relevant to building efficiency and emissions reduction. “Engaging across campus and across departments is essential to building out MIT’s full capacity to advance emissions reductions,” explains Director of Sustainability Julie Newman.

These cross-functional teams — which include Campus Construction; Campus Services and Maintenance; Environment, Health, and Safety; Facilities Engineering; the Office of Sustainability; and Utilities — have focused on a number of strategies in the past year, including both building-wide and targeted energy strategies that have revealed priority candidates for energy retrofits to drive efficiency and minimize emissions.

Carlo Fanone, director of facilities engineering, explains that “the cross-functional teams play an especially critical role at MIT, since we are a district energy campus.
We supply most of our own energy, we distribute it, and we are the end users, so the teams represent a holistic approach that looks at all three of these elements equally — supply, distribution, and end use — and considers energy solutions that address any or all of these elements.”

Fanone notes that MIT has also identified 25 facilities on campus that have a high energy-use intensity and a high greenhouse gas emissions footprint. These 25 buildings account for up to 50 percent of energy consumption on the MIT campus. “Going forward,” Fanone says, “we are focusing our energy work on these buildings and on other energy enhancements that could have a measurable impact on the progress toward MIT’s 2030 goal.”

Armed with these data, the Department of Facilities last year led retrofits for smart lighting and mechanical systems upgrades, as well as smart building management systems, in a number of buildings across campus. These building audits will continue to guide future projects focused on improving and optimizing energy elements such as heat recovery, lighting, and building systems controls.

In addition to building-level efficiency improvements, MIT’s Central Utilities Plant upgrade is expected to contribute significantly to the reduction of on-campus emissions in upcoming years. The upgraded plant — set to be completed this year — will incorporate more efficient equipment and state-of-the-art controls.
Between this upgrade, a fuel-switch improvement made in 2015, and the building-level energy improvements, regulated pollutant emissions on campus are expected to drop by more than 25 percent and campus greenhouse gas emissions by 10 percent from 2014 levels, helping to offset a projected 10 percent increase in greenhouse gas emissions due to energy demands created by new growth.

Climate research and action on campus

As MIT explores energy efficiency opportunities, the campus itself plays an important role as an incubator for new ideas. MITOS Director Julie Newman and professor of mechanical engineering Timothy Gutowski, who taught 11.S938 / 2.S999 (Solving for Carbon Neutrality at MIT) in 2019, are once again teaching the course this semester. “The course, along with others that have emerged across campus, provides students the opportunity to devise ideas and solutions for real-world challenges while connecting them back to campus. It also gives the students a sense of ownership on this campus, sharing ideas to chart the course for carbon-neutral MIT,” Newman says.

Also on campus, a new energy storage project is being developed to test the feasibility and scalability of using different battery storage technologies to redistribute electricity provided by variable renewable energy. Funded by a Campus Sustainability Incubator Fund grant and led by Jessika Trancik, associate professor in the Institute for Data, Systems, and Society, the project aims to test software approaches to synchronizing energy demand and supply and to evaluate the performance of different energy-storage technologies against these use cases. It has the benefit of connecting on-campus climate research with climate action.
“Building this storage testbed, and testing technologies under real-world conditions, can inform new algorithms and battery technologies and act as a multiplier, so that the lessons we learn at MIT can be applied far beyond campus,” says Trancik of the project.

Supporting on-campus efforts

MIT’s work toward emissions reductions already extends beyond campus as the Institute continues to benefit from its 25-year commitment to purchase electricity generated through its Summit Farms Power Purchase Agreement (PPA), which enabled the construction of a 650-acre, 60-megawatt solar farm in North Carolina. Through the purchase of 87,300 megawatt-hours of solar power, MIT was able to offset over 30,000 metric tons of greenhouse gas emissions from its on-campus operations in 2019.

The Summit Farms PPA model has provided inspiration for similar projects around the country and has also demonstrated what MIT can accomplish through partnership. MIT continues to explore the possibility of collaborating on similar large power-purchase agreements, possibly involving other local institutions and city governments.

Looking ahead

As the campus continues to work toward reducing emissions, Fanone notes that a comprehensive approach will help MIT address the challenge of growing a campus while reducing emissions. “District-level energy solutions, additional renewables, coupled with energy enhancements within our buildings, will allow MIT to offset growth and meet our 2030 GHG goals,” says Fanone. Adds Newman, “It’s an exciting time that MIT is now positioned to put the steps in place to respond to this global crisis at the local level.”

How can an institution like MIT grow, and simultaneously work to lessen its greenhouse gas emissions and contributions to climate change? Photo: Maia Weinstock

By organizing performance data and predicting problems, Tagup helps energy companies keep their equipment running.
Wed, 12 Feb 2020 09:39:37 -0500 Zach Winn | MIT News Office

Most people only think about the systems that power their cities when something goes wrong. Unfortunately, many people in the San Francisco Bay Area had a lot to think about recently when their utility company began scheduled power outages in an attempt to prevent wildfires. The decision came after devastating fires last year were found to be the result of faulty equipment, including transformers.

Transformers are the links between power plants, power transmission lines, and distribution networks. If something goes wrong with a transformer, entire power plants can go dark. To fix the problem, operators work around the clock to assess various components of the plant, consider disparate data sources, and decide what needs to be repaired or replaced.

Power equipment maintenance and failure is such a far-reaching problem that it’s difficult to attach a dollar sign to. Beyond the lost revenue of the plant, there are businesses that can’t operate, people stuck in elevators and subways, and schools that can’t open.

Now the startup Tagup is working to modernize the maintenance of transformers and other industrial equipment. The company’s platform lets operators view all of their data streams in one place and use machine learning to estimate if and when components will fail.

Founded by CEO Jon Garrity ’11 and CTO Will Vega-Brown ’11, SM ’13 — who recently completed his PhD program in MIT’s Department of Mechanical Engineering and will be graduating this month — Tagup is currently being used by energy companies to monitor approximately 60,000 pieces of equipment around North America and Europe.
That includes transformers, offshore wind turbines, and reverse osmosis systems for water filtration, among other things.

“Our mission is to use AI to make the machines that power the world safer, more reliable, and more efficient,” Garrity says.

A light bulb goes on

Vega-Brown and Garrity crossed paths in a number of ways at MIT over the years. As undergraduates, they took a few of the same courses, with Vega-Brown double majoring in mechanical engineering and physics and Garrity double majoring in economics and physics. They were also fraternity brothers as well as teammates on the football team.

Garrity was first exposed to entrepreneurship as an undergraduate in MIT’s Energy Ventures class and in the Martin Trust Center for Entrepreneurship. Later, when Garrity returned to campus while attending Harvard Business School and Vega-Brown was pursuing his doctorate, they were again classmates in MIT’s New Enterprises course.

Still, the founders didn’t think about starting a company until 2015, after Garrity had worked at GE Energy and Vega-Brown was well into his PhD work at MIT’s Computer Science and Artificial Intelligence Laboratory.

At GE, Garrity discovered an intriguing business model through which critical assets like jet engines were leased by customers — in this case airlines — rather than purchased, and manufacturers held responsibility for remotely monitoring and maintaining them. The arrangement allowed GE and others to leverage their engineering expertise while the customers focused on their own industries.

“When I worked at GE, I always wondered: Why isn’t this service available for any equipment type? The answer is economics,” Garrity says. “It is expensive to set up a remote monitoring center, to instrument the equipment in the field, to staff the 50 or more engineering subject matter experts, and to provide the support required to end customers.
The cost of equipment failure, both in terms of business interruption and equipment breakdown, must be enormous to justify the high average fixed cost.”

“We realized two things,” Garrity continues. “With the increasing availability of sensors and cloud infrastructure, we can dramatically reduce the cost [of monitoring critical assets] from the infrastructure and communications side. And, with new machine-learning methods, we can increase the productivity of engineers who review equipment data manually.”

That realization led to Tagup, though it would take time to prove the founders’ technology. “The problem with using AI for industrial applications is the lack of high-quality data,” Vega-Brown explains. “Many of our customers have giant datasets, but the information density in industrial data is often quite low. That means we need to be very careful in how we hunt for signal and validate our models, so that we can reliably make accurate forecasts and predictions.”

The founders leveraged their MIT ties to get the company off the ground. They received guidance from MIT’s Venture Mentoring Service, and Tagup was in the first cohort of startups accepted into the MIT Industrial Liaison Program’s (ILP) STEX 25 accelerator, which connects high potential startups with members of industry. Tagup has since secured several customers through ILP, and those early partnerships helped the company train and validate some of its machine-learning models.

Making power more reliable

Tagup’s platform combines all of a customer’s equipment data into one sortable master list that displays the likelihood of each asset causing a disruption. Users can click on specific assets to see charts of historic data and trends that feed into Tagup’s models.

The company doesn’t deploy any sensors of its own.
Instead, it combines customers’ real-time sensor measurements with other data sources like maintenance records and machine parameters to improve its proprietary machine-learning models.

The founders also began with a focused approach to building their system. Transformers were one of the first types of equipment they worked with, and they’ve expanded to other groups of assets gradually.

Tagup’s first deployment was in August of 2016 with a power plant that faces the Charles River close to MIT’s campus. Just a few months after it was installed, Garrity was at a meeting overseas when he got a call from the plant manager about a transformer that had just gone offline unexpectedly. From his phone, Garrity was able to inspect real-time data from the transformer and give the manager the information he needed to restart the system. Garrity says it saved the plant about 26 hours of downtime and $150,000 in revenue.

“These are really catastrophic events in terms of business outcomes,” Garrity says, noting transformer failures are estimated to cost $23 billion annually.

Since then they’ve secured partnerships with several large utility companies, including National Grid and Consolidated Edison Company of New York.

Down the line, Garrity and Vega-Brown are excited about using machine learning to control the operation of equipment. For example, a machine could manage itself in the same way an autonomous car can sense an obstacle and steer around it. Those capabilities have major implications for the systems that ensure the lights go on when we flip switches at night.

“Where it gets really exciting is moving toward optimization,” Garrity says. Vega-Brown agrees, adding, “Enormous amounts of power and water are wasted because there aren’t enough experts to tune the controllers on every industrial machine in the world.
If we can use AI to capture some of the expert knowledge in an algorithm, we can cut inefficiency and improve safety at scale.”

Tagup’s industrial equipment monitoring platform is currently being used by energy companies to monitor approximately 60,000 pieces of equipment around North America and Europe. That includes transformers, offshore wind turbines, and reverse osmosis systems for water filtration.

Starting with higher-value niche markets and then expanding could help perovskite-based solar panels become competitive with silicon.

Thu, 06 Feb 2020 10:57:11 -0500 David L. Chandler | MIT News Office

Materials called perovskites show strong potential for a new generation of solar cells, but they’ve had trouble gaining traction in a market dominated by silicon-based solar cells. Now, a study by researchers at MIT and elsewhere outlines a roadmap for how this promising technology could move from the laboratory to a significant place in the global solar market.

The “technoeconomic” analysis shows that by starting with higher-value niche markets and gradually expanding, solar panel manufacturers could avoid the very steep initial capital costs that would be required to make perovskite-based panels directly competitive with silicon for large utility-scale installations at the outset.
Rather than making a prohibitively expensive initial investment, of hundreds of millions or even billions of dollars, to build a plant for utility-scale production, the team found that starting with more specialized applications could be accomplished for a more realistic initial capital investment on the order of $40 million.

The results are described in a paper in the journal Joule by MIT postdoc Ian Mathews, research scientist Marius Peters, professor of mechanical engineering Tonio Buonassisi, and five others at MIT, Wellesley College, and Swift Solar Inc.

Solar cells based on perovskites — a broad category of compounds characterized by a certain arrangement of their molecular structure — could provide dramatic improvements in solar installations. Their constituent materials are inexpensive, and they could be manufactured in a roll-to-roll process like printing a newspaper, and printed onto lightweight and flexible backing material. This could greatly reduce costs associated with transportation and installation, although they still require further work to improve their durability. Other promising new solar cell materials are also under development in labs around the world, but none has yet made inroads in the marketplace.

“There have been a lot of new solar cell materials and companies launched over the years,” says Mathews, “and yet, despite that, silicon remains the dominant material in the industry and has been for decades.”

Why is that the case? “People have always said that one of the things that holds new technologies back is that the expense of constructing large factories to actually produce these systems at scale is just too much,” he says.
“It’s difficult for a startup to cross what’s called ‘the valley of death,’ to raise the tens of millions of dollars required to get to the scale where this technology might be profitable in the wider solar energy industry.”

But there are a variety of more specialized solar cell applications where the special qualities of perovskite-based solar cells, such as their light weight, flexibility, and potential for transparency, would provide a significant advantage, Mathews says. By focusing on these markets initially, a startup solar company could build up to scale gradually, leveraging the profits from the premium products to expand its production capabilities over time.

Describing the literature on perovskite-based solar cells being developed in various labs, he says, “They’re claiming very low costs. But they’re claiming it once your factory reaches a certain scale. And I thought, we’ve seen this before — people claim a new photovoltaic material is going to be cheaper than all the rest and better than all the rest. That’s great, except we need to have a plan as to how we actually get the material and the technology to scale.”

As a starting point, he says, “We took the approach that I haven’t really seen anyone else take: Let’s actually model the cost to manufacture these modules as a function of scale. So if you just have 10 people in a small factory, how much do you need to sell your solar panels at in order to be profitable? And once you reach scale, how cheap will your product become?”

The analysis confirmed that trying to leap directly into the marketplace for rooftop solar or utility-scale solar installations would require very large upfront capital investment, he says. But “we looked at the prices people might get in the internet of things, or the market in building-integrated photovoltaics. People usually pay a higher price in these markets because they’re more of a specialized product.
They’ll pay a little more if your product is flexible or if the module fits into a building envelope.” Other potential niche markets include self-powered microelectronics devices.

Such applications would make the entry into the market feasible without needing massive capital investments. “If you do that, the amount you need to invest in your company is much, much less, on the order of a few million dollars instead of tens or hundreds of millions of dollars, and that allows you to more quickly develop a profitable company,” he says.

“It’s a way for them to prove their technology, both technically and by actually building and selling a product and making sure it survives in the field,” Mathews says, “and also, just to prove that you can manufacture at a certain price point.”

Already, there are a handful of startup companies working to try to bring perovskite solar cells to market, he points out, although none of them yet has an actual product for sale. The companies have taken different approaches, and some seem to be embarking on the kind of step-by-step growth approach outlined by this research, he says. “Probably the company that’s raised the most money is a company called Oxford PV, and they’re looking at tandem cells,” which incorporate both silicon and perovskite cells to improve overall efficiency. Another company is one started by Joel Jean PhD ’17 (who is also a co-author of this paper) and others, called Swift Solar, which is working on flexible perovskites. And there’s a company called Saule Technologies, working on printable perovskites.

Mathews says the kind of technoeconomic analysis the team used in its study could be applied to a wide variety of other new energy-related technologies, including rechargeable batteries and other storage systems, or other types of new solar cell materials.

“There are many scientific papers and academic studies that look at how much it will cost to manufacture a technology once it’s at scale,” he says.
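The kind of cost-versus-scale accounting Mathews describes can be sketched as a toy levelized-cost model. Every number below is an illustrative assumption, not a figure from the Joule paper: amortized capital and fixed overhead are spread over annual output, so per-unit cost falls as volume grows, while a small plant serving premium niches survives by charging more per unit.

```python
def unit_cost(annual_volume_m2, capex,
              capex_life_years=10.0,
              fixed_opex_per_year=1.0e6,
              variable_cost_per_m2=30.0):
    # Levelized manufacturing cost per square meter of module:
    # amortized capital plus fixed overhead spread over output,
    # plus per-unit materials/processing cost.
    fixed = capex / capex_life_years + fixed_opex_per_year
    return fixed / annual_volume_m2 + variable_cost_per_m2

# Hypothetical niche-scale plant (tens of millions in capex,
# modest output) vs. hypothetical utility-scale plant.
niche = unit_cost(annual_volume_m2=1.0e5, capex=4.0e7)    # $/m^2
utility = unit_cost(annual_volume_m2=1.0e7, capex=5.0e8)  # $/m^2
```

Under these made-up inputs the niche plant's cost per square meter is far higher than the utility plant's, which is only workable because niche buyers pay a premium; the utility-scale plant is cheaper per unit but demands an order of magnitude more capital up front.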
“But very few people actually look at how much does it cost at very small scale, and what are the factors affecting economies of scale? And I think that can be done for many technologies, and it would help us accelerate how we get innovations from lab to market.”

The research team also included MIT alumni Sarah Sofia PhD ’19 and Sin Cheng Siah PhD ’15, Wellesley College student Erica Ma, and former MIT postdoc Hannu Laine. The work was supported by the European Union’s Horizon 2020 research and innovation program, the Martin Family Society of Fellows for Sustainability, the U.S. Department of Energy, Shell, through the MIT Energy Initiative, and the Singapore-MIT Alliance for Research and Technology.

Perovskites, a family of materials defined by a particular kind of molecular structure as illustrated here, have great potential for new kinds of solar cells. A new study from MIT shows how these materials could gain a foothold in the solar marketplace. Image: Christine Daniloff, MIT

Researchers are devising new methods of synthesizing chemicals used in goods from clothing, detergents, and antifreeze to pharmaceuticals and plastics.

Wed, 05 Feb 2020 13:40:01 -0500 Nancy W. Stauffer | MIT Energy Initiative

Most efforts to reduce energy consumption and carbon emissions have focused on the transportation and residential sectors. Little attention has been paid to industrial manufacturing, even though it consumes more energy than either of those sectors and emits high levels of CO2 in the process. To help address that situation, Assistant Professor Karthish Manthiram, postdoc Kyoungsuk Jin, graduate students Joseph H. Maalouf and Minju Chung, and their colleagues, all of the MIT Department of Chemical Engineering, have been devising new methods of synthesizing epoxides, a group of chemicals used in the manufacture of consumer goods ranging from polyester clothing, detergents, and antifreeze to pharmaceuticals and plastics.
“We don’t think about the embedded energy and carbon dioxide footprint of a plastic bottle we’re using or the clothing we’re putting on,” says Manthiram. “But epoxides are everywhere!” As solar, wind, and storage technologies mature, it’s time to address what Manthiram calls the “hidden energy and carbon footprints of materials made from epoxides.” And the key, he argues, may be to perform epoxide synthesis using electricity from renewable sources along with specially designed catalysts and an unlikely starting material: water.

The challenge

Epoxides can be made from a variety of carbon-containing compounds known generically as olefins. But regardless of the olefin used, the conversion process generally produces high levels of CO2 or has other serious drawbacks. To illustrate the problem, Manthiram describes processes now used to manufacture ethylene oxide, an epoxide used in making detergents, thickeners, solvents, plastics, and other consumer goods. Demand for ethylene oxide is so high that it has the fifth-largest CO2 footprint of any chemical made today.

The top panel of Figure 1 in the slideshow above illustrates one common synthesis process. The recipe is simple: Combine ethylene molecules and oxygen molecules, subject the mixture to high temperatures and pressures, and separate out the ethylene oxide that forms. However, those ethylene oxide molecules are accompanied by molecules of CO2 — a problem, given the volume of ethylene oxide produced nationwide. In addition, the high temperatures and pressures required are generally produced by burning fossil fuels. And the conditions are so extreme that the reaction must take place in a massive pressure vessel. The capital investment required is high, so epoxides are generally produced in a central location and then transported long distances to the point of consumption.
Another widely synthesized epoxide is propylene oxide, which is used in making a variety of products, including perfumes, plasticizers, detergents, and polyurethanes. In this case, the olefin — propylene — is combined with tert-butyl hydroperoxide, as illustrated in the bottom panel of Figure 1. An oxygen atom moves from the tert-butyl hydroperoxide molecule to the propylene to form the desired propylene oxide. The reaction conditions are somewhat less harsh than in ethylene oxide synthesis, but a side product must be dealt with. And while no CO2 is created, the tert-butyl hydroperoxide is highly reactive, flammable, and toxic, so it must be handled with extreme care.

In short, current methods of epoxide synthesis produce CO2, involve dangerous chemicals, require huge pressure vessels, or call for fossil fuel combustion. Manthiram and his team believed there must be a better way.

A new approach

The goal in epoxide synthesis is straightforward: Simply transfer an oxygen atom from a source molecule onto an olefin molecule. Manthiram and his lab came up with an idea: Could water be used as a sustainable and benign source of the needed oxygen atoms? The concept was counterintuitive. “Organic chemists would say that it shouldn’t be possible because water and olefins don’t react with one another,” he says. “But what if we use electricity to liberate the oxygen atoms in water? Electrochemistry causes interesting things to happen — and it’s at the heart of what our group does.”

Using electricity to split water into oxygen and hydrogen is a standard practice called electrolysis. Usually, the goal of water electrolysis is to produce hydrogen gas for certain industrial applications or for use as a fuel. The oxygen is simply vented to the atmosphere. To Manthiram, that practice seemed wasteful. Why not do something useful with the oxygen? Making an epoxide seemed the perfect opportunity — and the benefits could be significant.
Generating two valuable products instead of one would bring down the high cost of water electrolysis. Indeed, it might become a cheaper, carbon-free alternative to today’s usual practice of producing hydrogen from natural gas. The electricity needed for the process could be generated from renewable sources such as solar and wind. There wouldn’t be any hazardous reactants or undesirable byproducts involved. And there would be no need for massive, costly, and accident-prone pressure vessels. As a result, epoxides could be made at small-scale, modular facilities close to the place they’re going to be used — no need to transport, distribute, or store the chemicals produced.

Will the reaction work?

However, there was a chance that the proposed process might not work. During electrolysis, the oxygen atoms quickly pair up to form oxygen gas. The proposed process — illustrated in Figure 2 in the slideshow above — would require that some of the oxygen atoms move onto the olefin before they combine with one another.

To investigate the feasibility of the process, Manthiram’s group performed a fundamental analysis to find out whether the reaction is thermodynamically favorable. Does the energy of the overall system shift to a lower state by making the move? In other words, is the product more stable than the reactants were?

They started with a thermodynamic analysis of the proposed reaction at various combinations of temperature and pressure — the standard variables used in hydrocarbon processing. As an example, they again used ethylene oxide. The results, shown in Figure 3 in the slideshow above, were not encouraging. As the uniform blue in the left-hand figure shows, even at elevated temperatures and pressures, the conversion of ethylene and water to ethylene oxide plus hydrogen doesn’t happen — just as a chemist’s intuition would predict. But their proposal was to use voltage rather than pressure to drive the chemical reaction.
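The voltage scale involved can be estimated from tabulated standard Gibbs free energies of formation (approximate textbook values; the team's published analysis is more detailed). For the overall reaction C2H4 + H2O → C2H4O + H2, which transfers two electrons per epoxide formed, the minimum cell voltage is the reaction free energy divided by nF:

```python
F = 96485.0  # Faraday constant, C per mole of electrons

# Approximate standard Gibbs free energies of formation, kJ/mol
# (gas-phase ethylene and ethylene oxide, liquid water).
dGf = {"C2H4": 68.4, "H2O": -237.1, "C2H4O": -13.1, "H2": 0.0}

# C2H4 + H2O -> C2H4O + H2
dG_rxn = (dGf["C2H4O"] + dGf["H2"]) - (dGf["C2H4"] + dGf["H2O"])  # kJ/mol
n = 2  # electrons transferred per epoxide formed
E_min = dG_rxn * 1000.0 / (n * F)  # volts
```

The reaction is uphill by about 156 kJ/mol, giving E_min of roughly 0.81 V, consistent with the threshold the analysis found and comfortably below the 1.5 V of an AA battery.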
As the right-hand figure in Figure 3 shows, with that change, the outcome of the analysis looked more promising. Conversion of ethylene to ethylene oxide occurs at around 0.8 volts. So the process is viable at voltages below that of an everyday AA battery and at essentially room temperature.

While a thermodynamic analysis can show that a reaction is possible, it doesn’t reveal how quickly it will occur, and reactions must be fast to be cost-effective. So the researchers needed to design a catalyst — a material that would speed up the reaction without getting consumed. Designing catalysts for specific electrochemical reactions is a focus of Manthiram’s group. For this reaction, they decided to start with manganese oxide, a material known to catalyze the water-splitting reaction. And to increase the catalyst’s effectiveness, they fabricated it into nanoparticles — a particle size that would maximize the surface area on which reactions can take place.

Figure 4 in the slideshow above shows the special electrochemical cell they designed. Like all such cells, it has two electrodes — in this case, an anode where oxygen is transferred to make an olefin into an epoxide, and a cathode where hydrogen gas forms. The anode is made of carbon paper decorated with the nanoparticles of manganese oxide (shown in yellow). The cathode is made of platinum. Between the anode and the cathode is an electrolyte that ferries electrically charged ions between them. In this case, the electrolyte is a mixture of a solvent, water (the oxygen source), and the olefin.

The magnified views in Figure 4 show what happens at the two electrodes. The right-hand view shows the olefin and water (H2O) molecules arriving at the anode surface. Encouraged by the catalyst, the water molecules break apart, sending two electrons (negatively charged particles, e–) into the anode and releasing two protons (positively charged hydrogen ions, H+) into the electrolyte.
The leftover oxygen atom (O) joins the olefin molecule on the surface of the electrode, forming the desired epoxide molecule. The two liberated electrons travel through the anode and around the external circuit (shown in red), where they pass through a power source — ideally, fueled by a renewable source such as wind or solar — and gain extra energy. When the two energized electrons reach the cathode, they join the two protons arriving in the electrolyte and — as shown in the left-hand magnified view — they form hydrogen gas (H2), which exits the top of the cell.

Experimental results

Experiments with that setup have been encouraging. Thus far, the work has involved an olefin called cyclooctene, a well-known molecule that’s been widely used by people studying oxidation reactions. “Ethylene and the like are structurally more important and need to be solved, but we’re developing a foundation on a well-known molecule just to get us started,” says Manthiram.

Results have already allayed a major concern. In one test, the researchers applied 3.8 volts across their mixture at room temperature, and, after four hours, about half of the cyclooctene had converted into its epoxide counterpart, cyclooctene oxide. “So that result confirms that we can split water to make hydrogen and oxygen and then intercept the oxygen atoms so they move onto the olefin and convert it into an epoxide,” says Manthiram.

But how efficiently does the conversion happen? If this reaction is perfectly efficient, one oxygen atom will move onto an olefin for every two electrons that go into the anode. Thus, one epoxide molecule will form for each hydrogen molecule that forms. Using special equipment, the researchers counted the number of epoxide molecules formed for each pair of electrons passing through the external circuit to form hydrogen. That analysis showed that their conversion efficiency was 30 percent of the maximum theoretical efficiency.
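That electron-counting measurement is a Faradaic efficiency: the fraction of charge passed through the circuit that the measured product accounts for, given that each epoxide consumes two electrons. A minimal sketch (the charge and product amounts below are made-up illustrative numbers, not the team's data):

```python
F = 96485.0  # Faraday constant, C per mole of electrons

def faradaic_efficiency(mol_product, charge_coulombs, electrons_per_product=2):
    # Charge that the measured product accounts for, divided by
    # the total charge passed through the external circuit.
    return mol_product * electrons_per_product * F / charge_coulombs

# Illustrative run: 100 C passed, 1.55e-4 mol of epoxide detected,
# giving an efficiency of about 0.30, i.e. 30 percent.
fe = faradaic_efficiency(1.55e-4, 100.0)
```

An efficiency of exactly 1 would mean every electron pair produced one epoxide molecule (and one hydrogen molecule); measured values fall below that whenever some of the charge is consumed elsewhere.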
“That’s because the electrons are also doing other reactions — maybe making oxygen, for instance, or oxidizing some of the solvent,” says Manthiram. “But for us, 30 percent is a remarkable number for a new reaction that was previously unknown. For that to be the first step, we’re very happy about it.”

Manthiram recognizes that the efficiency might need to be twice as high, or even higher, for the process to be commercially viable. “Techno-economics will ultimately guide where that number needs to be,” he says. “But I would say that the heart of our discoveries so far is the realization that there is a catalyst that can make this happen. That’s what has opened up everything that we’ve explored since the initial discovery.”

Encouraging results and future challenges

Manthiram is cautious not to overstate the potential implications of the work. “We know what the outcome is,” he says. “We put olefin in, and we get epoxide out.” But to optimize the conversion efficiency they need to know at a molecular level all the steps involved in that conversion. For example, does the electron transfer first by itself, or does it move with a proton at the same time? How does the catalyst bind the oxygen atom? And how does the oxygen atom transfer to the olefin on the surface of the catalyst?

According to Manthiram, he and his group have hypothesized a reaction sequence, and several analytical techniques have provided a “handful of observables” that support it. But he admits that there is much more theoretical and experimental work to do to develop and validate a detailed mechanism that they can use to guide the optimization process. And then there are practical considerations, such as how to extract the epoxides from the electrochemical cell and how to scale up production.

Manthiram believes that this work on epoxides is just “the tip of the iceberg” for his group. There are many other chemicals they might be able to make using voltage and specially designed catalysts.
And while some attempts may not work, with each one they’ll learn more about how voltages and electrons and surfaces influence the outcome. He and his team predict that the face of the chemical industry will change dramatically in the years to come. The need to reduce CO2 emissions and energy use is already pushing research on chemical manufacturing toward using electricity from renewable sources. And that electricity will increasingly be made at distributed sites. “If we have solar panels and wind turbines everywhere, why not do chemical synthesis close to where the power is generated, and make commercial products close to the communities that need them?” says Manthiram. The result will be a distributed, electrified, and decarbonized chemical industry — and a dramatic reduction in both energy use and CO2 emissions.

This research was supported by MIT’s Department of Chemical Engineering and by National Science Foundation Graduate Research Fellowships. This article appears in the Autumn 2019 issue of Energy Futures, the magazine of the MIT Energy Initiative.

Assistant Professor Karthish Manthiram (center), postdoc Kyoungsuk Jin (right), graduate student Joseph Maalouf (left), and their colleagues are working to help decarbonize the chemical industry by finding ways to drive critical chemical reactions using electricity from renewable sources. Photo: Stuart Darsch

An MIT team has devised a lithium metal anode that could improve the longevity and energy density of future batteries.

Mon, 03 Feb 2020 10:59:59 -0500 David L. Chandler | MIT News Office

New research by engineers at MIT and elsewhere could lead to batteries that can pack more power per pound and last longer, based on the long-sought goal of using pure lithium metal as one of the battery’s two electrodes, the anode.

The new electrode concept comes from the laboratory of Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering.
It is described today in the journal Nature, in a paper co-authored by Yuming Chen and Ziqiang Wang at MIT, along with 11 others at MIT and in Hong Kong, Florida, and Texas.

The design is part of a concept for developing safe all-solid-state batteries, dispensing with the liquid or polymer gel usually used as the electrolyte material between the battery’s two electrodes. An electrolyte allows lithium ions to travel back and forth during the charging and discharging cycles of the battery, and an all-solid version could be safer than liquid electrolytes, which have high volatility and have been the source of explosions in lithium batteries.

“There has been a lot of work on solid-state batteries, with lithium metal electrodes and solid electrolytes,” Li says, but these efforts have faced a number of issues.

One of the biggest problems is that when the battery is charged up, atoms accumulate inside the lithium metal, causing it to expand. The metal then shrinks again during discharge, as the battery is used. These repeated changes in the metal’s dimensions, somewhat like the process of inhaling and exhaling, make it difficult for the solids to maintain constant contact, and tend to cause the solid electrolyte to fracture or detach.

Another problem is that none of the proposed solid electrolytes are truly chemically stable while in contact with the highly reactive lithium metal, and they tend to degrade over time.

Most attempts to overcome these problems have focused on designing solid electrolyte materials that are absolutely stable against lithium metal, which turns out to be difficult.
Instead, Li and his team adopted an unusual design that utilizes two additional classes of solids, “mixed ionic-electronic conductors” (MIEC) and “electron and Li-ion insulators” (ELI), which are absolutely chemically stable in contact with lithium metal.

The researchers developed a three-dimensional nanoarchitecture in the form of a honeycomb-like array of hexagonal MIEC tubes, partially infused with the solid lithium metal to form one electrode of the battery, but with extra space left inside each tube. When the lithium expands in the charging process, it flows into the empty space in the interior of the tubes, moving like a liquid even though it retains its solid crystalline structure. This flow, entirely confined inside the honeycomb structure, relieves the pressure from the expansion caused by charging, but without changing the electrode’s outer dimensions or the boundary between the electrode and electrolyte. The other material, the ELI, serves as a crucial mechanical binder between the MIEC walls and the solid electrolyte layer.

“We designed this structure that gives us three-dimensional electrodes, like a honeycomb,” Li says. The void spaces in each tube of the structure allow the lithium to “creep backward” into the tubes, “and that way, it doesn’t build up stress to crack the solid electrolyte.” The expanding and contracting lithium inside these tubes moves in and out, sort of like a car engine’s pistons inside their cylinders. Because these structures are built at nanoscale dimensions (the tubes are about 100 to 300 nanometers in diameter, and tens of microns in height), the result is like “an engine with 10 billion pistons, with lithium metal as the working fluid,” Li says.

Because the walls of these honeycomb-like structures are made of chemically stable MIEC, the lithium never loses electrical contact with the material, Li says. Thus, the whole solid battery can remain mechanically and chemically stable as it goes through its cycles of use.
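The "10 billion pistons" figure can be sanity-checked with a quick back-of-envelope calculation. The tube pitch used below (200 nanometers, the midpoint of the quoted 100-to-300-nanometer diameters) and the hexagonal-packing assumption are illustrative guesses, not values from the paper:

```python
import math

# Hypothetical geometry: hexagonally packed tubes at a 200 nm
# center-to-center pitch (midpoint of the quoted 100-300 nm diameters).
pitch_m = 200e-9
cell_area_m2 = (math.sqrt(3) / 2) * pitch_m ** 2  # area of one hexagonal unit cell

n_tubes = 10e9  # the "10 billion pistons"
electrode_area_cm2 = n_tubes * cell_area_m2 * 1e4  # convert m^2 to cm^2

print(f"{electrode_area_cm2:.1f} cm^2")  # roughly 3.5 cm^2
```

Ten billion nanoscale tubes thus fit in only a few square centimeters of electrode area, consistent with the footprint of a practical cell.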
The team has proved the concept experimentally, putting a test device through 100 cycles of charging and discharging without producing any fracturing of the solids.

Reversible Li metal plating and stripping in a carbon tubule with an inner diameter of 100 nm. Courtesy of the researchers.

Li says that though many other groups are working on what they call solid batteries, most of those systems actually work better with some liquid electrolyte mixed with the solid electrolyte material. “But in our case,” he says, “it’s truly all solid. There is no liquid or gel in it of any kind.”

The new system could lead to safe anodes that weigh only a quarter as much as their conventional counterparts in lithium-ion batteries, for the same amount of storage capacity. If combined with new concepts for lightweight versions of the other electrode, the cathode, this work could lead to substantial reductions in the overall weight of lithium-ion batteries. For example, the team hopes it could lead to cellphones that could be charged just once every three days, without making the phones any heavier or bulkier.

One new concept for a lighter cathode was described by another team led by Li, in a paper that appeared last month in the journal Nature Energy, co-authored by MIT postdoc Zhi Zhu and graduate student Daiwei Yu. The material would reduce the use of nickel and cobalt, which are expensive and toxic and used in present-day cathodes. The new cathode does not rely only on the capacity contribution from these transition metals in battery cycling. Instead, it would rely more on the redox capacity of oxygen, which is much lighter and more abundant. But in this process the oxygen ions become more mobile, which can cause them to escape from the cathode particles.
The researchers used a high-temperature surface treatment with molten salt to produce a protective surface layer on particles of manganese- and lithium-rich metal-oxide, so the amount of oxygen loss is drastically reduced.

Even though the surface layer is very thin, just 5 to 20 nanometers thick on a 400 nanometer-wide particle, it provides good protection for the underlying material. “It’s almost like immunization,” Li says, against the destructive effects of oxygen loss in batteries used at room temperature. The present versions provide at least a 50 percent improvement in the amount of energy that can be stored for a given weight, with much better cycling stability.

The team has only built small lab-scale devices so far, but “I expect this can be scaled up very quickly,” Li says. The materials needed, mostly manganese, are significantly cheaper than the nickel or cobalt used by other systems, so these cathodes could cost as little as a fifth as much as the conventional versions.

The research teams included researchers from MIT, Hong Kong Polytechnic University, the University of Central Florida, the University of Texas at Austin, and Brookhaven National Laboratory in Upton, New York. The work was supported by the National Science Foundation.

New research by engineers at MIT and elsewhere could lead to batteries that can pack more power per pound and last longer. Credit: MIT News

Fikile Brushett and his team are designing electrochemical technology to secure the planet’s energy future. Wed, 29 Jan 2020 09:00:00 -0500 Zain Humayun | School of Engineering

Before Fikile Brushett wanted to be an engineer, he wanted to be a soccer player. Today, however, Brushett is the Cecil and Ida Green Career Development Associate Professor in the Department of Chemical Engineering. Building 66 might not look much like a soccer field, but Brushett says the sport taught him a fundamental lesson that has proved invaluable in his scientific endeavors.
“The teams that are successful are the teams that work together,” Brushett says. That philosophy inspires the Brushett Research Group, which draws on disciplines as diverse as organic chemistry and economics to create new electrochemical processes and devices. As the world moves toward cleaner and more sustainable sources of energy, one of the major challenges is converting efficiently between electrical and chemical energy. This is the challenge undertaken by Brushett and his colleagues, who are trying to push the frontiers of electrochemical technology. Brushett’s research focuses on ways to improve redox flow batteries, which are potentially low-cost alternatives to conventional batteries and a viable way of storing energy from renewable sources like wind and the sun. His group also explores means to recycle carbon dioxide — a greenhouse gas — into fuels and useful chemicals, and to extract energy from biomass. In his work, Brushett is helping to transform every stage of the energy pipeline: from unlocking the potential of solar and wind energy to replacing combustion engines with fuel cells, and even enabling greener industrial processes. “A lot of times, electrochemical technologies work in some areas, but we’d like them to work much more broadly than we’ve asked them to do beforehand,” Brushett says. “A lot of that is now driving the need for new innovation in the area, and that’s where we come in.”

Fikile Brushett is the Cecil and Ida Green Career Development Associate Professor in the Department of Chemical Engineering. Photo: Lillie Paquette/School of Engineering

Solar panel costs have dropped lately, but slimming down silicon wafers could lead to even lower costs and faster industry expansion. Sun, 26 Jan 2020 23:59:59 -0500 David L. Chandler | MIT News Office

Costs of solar panels have plummeted over the last several years, leading to rates of solar installations far greater than most analysts had expected.
But with most of the potential areas for cost savings already pushed to the extreme, further cost reductions are becoming more challenging to find.

Now, researchers at MIT and at the National Renewable Energy Laboratory (NREL) have outlined a pathway to slashing costs further, this time by slimming down the silicon cells themselves.

Thinner silicon cells have been explored before, especially around a dozen years ago when the cost of silicon peaked because of supply shortages. But this approach suffered from some difficulties: The thin silicon wafers were too brittle and fragile, leading to unacceptable levels of losses during the manufacturing process, and they had lower efficiency. The researchers say there are now ways to begin addressing these challenges through the use of better handling equipment and some recent developments in solar cell architecture.

The new findings are detailed in a paper in the journal Energy and Environmental Science, co-authored by MIT postdoc Zhe Liu, professor of mechanical engineering Tonio Buonassisi, and five others at MIT and NREL.

The researchers describe their approach as “technoeconomic,” stressing that at this point economic considerations are as crucial as the technological ones in achieving further improvements in affordability of solar panels.

Currently, 90 percent of the world’s solar panels are made from crystalline silicon, and the industry continues to grow at a rate of about 30 percent per year, the researchers say.
Today’s silicon photovoltaic cells, the heart of these solar panels, are made from wafers of silicon that are 160 micrometers thick, but with improved handling methods, the researchers propose this could be shaved down to 100 micrometers — and eventually as little as 40 micrometers or less, which would only require one-fourth as much silicon for a given size of panel.

That could not only reduce the cost of the individual panels, they say, but even more importantly it could allow for rapid expansion of solar panel manufacturing capacity. That’s because the expansion can be constrained by limits on how fast new plants can be built to produce the silicon crystal ingots that are then sliced like salami to make the wafers. These plants, which are generally separate from the solar cell manufacturing plants themselves, tend to be capital-intensive and time-consuming to build, which could lead to a bottleneck in the rate of expansion of solar panel production. Reducing wafer thickness could potentially alleviate that problem, the researchers say.

The study looked at the efficiency levels of four variations of solar cell architecture, including PERC (passivated emitter and rear contact) cells and other advanced high-efficiency technologies, comparing their outputs at different thickness levels. The team found there was in fact little decline in performance down to thicknesses as low as 40 micrometers, using today’s improved manufacturing processes.

“We see that there’s this area (of the graphs of efficiency versus thickness) where the efficiency is flat,” Liu says, “and so that’s the region where you could potentially save some money.” Because of these advances in cell architecture, he says, “we really started to see that it was time to revisit the cost benefits.”

Changing over the huge panel-manufacturing plants to adapt to the thinner wafers will be a time-consuming and expensive process, but the analysis shows the benefits can far outweigh the costs, Liu says.
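The claimed silicon savings follow directly from the wafer geometry. A minimal sketch, assuming (hypothetically) a standard 156-millimeter square wafer and the density of crystalline silicon; the wafer size is an illustrative choice, not a figure from the study:

```python
SI_DENSITY_G_PER_CM3 = 2.33  # density of crystalline silicon

def wafer_silicon_mass_g(thickness_um, side_mm=156):
    """Mass of silicon in one square wafer; the 156 mm side length is an
    assumed typical wafer size, not a value from the paper."""
    area_cm2 = (side_mm / 10) ** 2
    volume_cm3 = area_cm2 * thickness_um * 1e-4  # micrometers to centimeters
    return volume_cm3 * SI_DENSITY_G_PER_CM3

# The three thicknesses discussed: today's 160 um, near-term 100 um, eventual 40 um.
for t_um in (160, 100, 40):
    print(f"{t_um:3d} um -> {wafer_silicon_mass_g(t_um):.1f} g of silicon")
```

Since mass scales linearly with thickness, going from 160 to 40 micrometers cuts the silicon per wafer by exactly a factor of four, matching the "one-fourth as much silicon" figure.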
It will take time to develop the necessary equipment and procedures to allow for the thinner material, but with existing technology, he says, “it should be relatively simple to go down to 100 micrometers,” which would already provide some significant savings. Further improvements in technology such as better detection of microcracks before they grow could help reduce thicknesses further.

In the future, the thickness could potentially be reduced to as little as 15 micrometers, he says. New technologies that grow thin wafers of silicon crystal directly rather than slicing them from a larger cylinder could help enable such further thinning, he says.

Development of thin silicon has received little attention in recent years because the price of silicon has declined from its earlier peak. But, because of cost reductions that have already taken place in solar cell efficiency and other parts of the solar panel manufacturing process and supply chain, the cost of the silicon is once again a factor that can make a difference, he says.

“Efficiency can only go up by a few percent. So if you want to get further improvements, thickness is the way to go,” Buonassisi says. But the conversion will require large capital investments for full-scale deployment.

The purpose of this study, he says, is to provide a roadmap for those who may be planning expansion in solar manufacturing technologies. By making the path “concrete and tangible,” he says, it may help companies incorporate this in their planning. “There is a path,” he says. “It’s not easy, but there is a path. And for the first movers, the advantage is significant.”

What may be required, he says, is for the different key players in the industry to get together and lay out a specific set of steps forward and agreed-upon standards, as the integrated circuit industry did early on to enable the explosive growth of that industry.
“That would be truly transformative,” he says.

Andre Augusto, an associate research scientist at Arizona State University who was not connected with this research, says, “Refining silicon and wafer manufacturing is the most capital-expense (capex) demanding part of the process of manufacturing solar panels. So in a scenario of fast expansion, the wafer supply can become an issue. Going thin solves this problem in part as you can manufacture more wafers per machine without increasing significantly the capex.” He adds that “thinner wafers may deliver performance advantages in certain climates,” performing better in warmer conditions.

Renewable energy analyst Gregory Wilson of Gregory Wilson Consulting, who was not associated with this work, says, “The impact of reducing the amount of silicon used in mainstream cells would be very significant, as the paper points out. The most obvious gain is in the total amount of capital required to scale the PV industry to the multi-terawatt scale required by the climate change problem. Another benefit is in the amount of energy required to produce silicon PV panels. This is because the polysilicon production and ingot growth processes that are required for the production of high efficiency cells are very energy intensive.”

Wilson adds, “Major PV cell and module manufacturers need to hear from credible groups like Prof. Buonassisi’s at MIT, since they will make this shift when they can clearly see the economic benefits.”

The team also included Sarah Sofia, Hannu Laine, Sarah Wieghold, and Marius Peters at MIT and Michael Woodhouse at NREL. The work was partly supported by the U.S. Department of Energy, the Singapore-MIT Alliance for Research and Technology (SMART), and by a Total Energy Fellowship through the MIT Energy Initiative.

Currently, 90 percent of the world’s solar panels are made from crystalline silicon, and the industry continues to grow at a rate of about 30 percent per year.
Assistant Professor Sili Deng is on a quest to understand the chemistry involved in combustion and develop strategies to make it cleaner. Thu, 23 Jan 2020 15:15:01 -0500 Mary Beth Gallagher | Department of Mechanical Engineering

Much of the conversation around energy sustainability is dominated by clean-energy technologies like wind, solar, and thermal. However, with roughly 80 percent of energy use in the United States coming from fossil fuels, combustion remains the dominant method of energy conversion for power generation, electricity, and transportation. “People think of combustion as a dirty technology, but it’s currently the most feasible way to produce electricity and power,” explains Sili Deng, assistant professor of mechanical engineering and the Brit (1961) & Alex (1949) d’Arbeloff Career Development Professor. Deng is working toward understanding the chemistry and flow that interact in combustion in an effort to improve technologies for current or near-future energy conversion applications. “My goal is to find out how to make the combustion process more efficient, reliable, safe, and clean,” she adds. Deng’s interest in combustion stemmed from a conversation she had with a friend before applying to Tsinghua University for undergraduate study. “One day, I was talking about my dream school and major with a friend and she said ‘What if you could increase the efficiency of energy utilization by just 1 percent?’” recalls Deng. “Considering how much energy we use globally each year, you could make a huge difference.” This discussion inspired Deng to study combustion. After graduating with a bachelor’s degree in thermal engineering, she received her master’s and PhD from Princeton University. At Princeton, Deng focused on how the coupling effects of chemistry and flow influence combustion and emissions.
“The details of combustion are much more complicated than our general understanding of fuel and air combining to form water, carbon dioxide, and heat,” Deng explains. “There are hundreds of chemical species and thousands of reactions involved, depending on the type of fuel, fuel-air mixing, and flow dynamics.” Along with her team at the Deng Energy and Nanotechnology Group at MIT, she hopes that understanding chemically reacting flow in the combustion process will result in new strategies to control the process of combustion and reduce or eliminate the soot generated in combustion. “My group utilizes both experimental and computational tools to build a fundamental understanding of the combustion process that can guide the design of combustors for high performance and low emissions,” Deng adds. Her team is also utilizing artificial intelligence algorithms along with physical models to predict — and hopefully control — the combustion process. By understanding and controlling the combustion process, Deng is uncovering more about how soot, combustion’s most notorious by-product, is created. “Once soot leaves the site of combustion, it is difficult to contain. There isn’t much you can do to prevent haze or smog from developing,” she explains. The production of soot starts within the flame itself — even on a small scale, such as burning a candle. As Deng describes it, a “chemical soup” of hydrocarbons, vapor, melting wax, and oxygen interacts to create soot particles visible as the yellow glow of candlelight. “By understanding exactly how this soot is generated within a flame, we’re hoping to develop methods to reduce or eliminate it before it gets out of the combustion channel,” says Deng. Deng’s research on flames extends beyond the formation of soot. By developing a technology called flame synthesis, she is working on producing nanomaterials that can be used for renewable energy applications.
The process of synthesizing nanomaterials via flames shares similarities with the soot formation in flames. Instead of generating the byproducts of incomplete combustion, certain precursors are added to the flame, which result in the production of nanomaterials. One common example of using flame synthesis to create nanomaterials is the production of titanium dioxide, a white pigment often used in paint and sunscreen. “I’m hoping to create a similar type of reaction to develop new materials that can be used for things like renewable energy, water treatment, pollution reduction, and catalysts,” she explains. Her team has been tweaking the various parameters of combustion — from temperature to the type of fuel used — to create nanomaterials that could eventually be used to clean up other, more nefarious byproducts created in combustion. To be successful in her quest to make combustion cleaner, Deng acknowledges that collaboration will be key. “There’s an opportunity to combine the fundamental research on combustion that my lab is doing with the materials, devices, and products being developed across areas like materials science and automotive engineering,” she says. Since we may be decades away from transitioning to a grid powered by renewable resources like solar, wave, and wind, Deng is helping carve out an important role for fellow combustion scientists. “While clean-energy technologies are continuing to be developed, it’s crucial that we continue to work toward finding ways to improve combustion technologies,” she adds.

“My goal is to find out how to make the combustion process more efficient, reliable, safe, and clean,” says Sili Deng, assistant professor of mechanical engineering at MIT. Photo: Tony Pulsone

Workshop highlights how MIT research can guide adaptation at local, regional, and national scales. Thu, 23 Jan 2020 15:15:01 -0500 Mark Dwortzan | Joint Program on the Science and Policy of Global Change

Five-hundred-year floods.
Persistent droughts and heat waves. More devastating wildfires. As these and other planetary perils become more commonplace, they pose serious risks to natural, managed, and built environments around the world. Assessing the magnitude of these risks over multiple decades and identifying strategies to prepare for them at local, regional, and national scales will be essential to making societies and economies more resilient and sustainable. With that goal in mind, the MIT Joint Program on the Science and Policy of Global Change in 2019 launched its Adaptation-at-Scale initiative (AS-MIT), which seeks evidence-based solutions to global change-driven risks. Using its Integrated Global System Modeling (IGSM) framework, as well as a suite of resource and infrastructure assessment models, AS-MIT targets, diagnoses, and projects changing risks to life-sustaining resources under impending societal and environmental stressors, and evaluates the effectiveness of potential risk-reduction measures. In pursuit of these objectives, MIT Joint Program researchers are collaborating with other adaptation-at-scale thought leaders across MIT. And at a conference on Jan. 10 on the MIT campus, they showcased some of their most promising efforts in this space. Part of a series of MIT Joint Program workshops aimed at providing decision-makers with actionable information on key global change concerns, the conference covered risks and resilience strategies for food, energy, and water systems; urban-scale solutions; predicting the evolving risk of extreme events; and decision-making and early warning capabilities — and featured a lunch seminar on renewable energy for resilience and adaptation by an expert from the National Renewable Energy Laboratory.
Food, energy, and water systems

Greg Sixt, research manager in the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), described the work of J-WAFS’ Alliance for Climate Change and Food Systems Research, an emerging alliance of premier research institutions and key stakeholders to collaboratively frame challenges, identify research paths, and fund and pursue convergence research on building more resilience across the food system, from production to supply chains to consumption. MIT Joint Program Deputy Director Sergey Paltsev, also a senior research scientist at the MIT Energy Initiative (MITEI), explored climate-related risks to energy systems. He highlighted physical risks, such as potential impacts of permafrost degradation on roads, airports, natural gas pipelines, and other infrastructure in the Arctic, and of an increase in extreme temperature, wind, and icing events on power distribution infrastructure in the U.S. Northeast. “No matter what we do in terms of climate mitigation, the physical risks will remain the same for decades because of inertia in the climate system,” says Paltsev. “Even with very aggressive emissions-reduction policies, decision-makers must take physical risks into consideration.” They must also account for transition risks — long-term financial and investment risks to fossil fuel infrastructure posed by climate policies. Paltsev showed how energy scenarios developed at MIT and elsewhere can enable decision-makers to assess the physical and financial risks of climate change and of efforts to transition to a low-carbon economy. MIT Joint Program Deputy Director Adam Schlosser discussed MIT Joint Program (JP) efforts to assess risks to, and optimal adaptation strategies for, water systems subject to drought, flooding, and other challenges to water availability and quality posed by a changing environment.
Schlosser noted that in some cases, efficiency improvements can go a long way in meeting these challenges, as shown in one JP study that found improving municipal and industrial efficiencies was just as effective as climate mitigation in confronting projected water shortages in Asia. Finally, he introduced a new JP project funded by the U.S. Department of Energy that will explore how, in U.S. floodplains, foresight could increase resilience to future forces, stressors, and disturbances imposed by nature and human activity. “In assessing how we avoid and adapt to risk, we need to think about all plausible futures,” says Schlosser. “Our approach is to take all [of those] futures, put them into our [integrated global] system of human and natural systems, and think about how we use water optimally.”

Urban-scale solutions

Brian Goldberg, assistant director of the MIT Office of Sustainability, detailed MIT’s plans to sustain MIT campus infrastructure amid intensifying climate disruptions and impacts over the next 100 years. Toward that end, the MIT Climate Resiliency Committee is working to shore up multiple, interdependent layers of resiliency that include the campus site, infrastructure and utilities, buildings, and community, and creating modeling tools to evaluate flood risk. “We’re using the campus as a testbed to develop solutions, advance research, and ultimately grow a more climate-resilient campus,” says Goldberg. “Perhaps the models we develop and engage with at the campus scale can then influence the city or region scale and then be shared globally.” MIT Joint Program/MITEI Research Scientist Mei Yuan described an upcoming study to assess the potential of the building sector to reduce its greenhouse gas emissions through more energy-efficient design and intelligent telecommunications — and thereby lower climate-related risk to urban infrastructure. Yuan aims to achieve this objective by linking the program’s U.S.
Regional Energy Policy (USREP) model with a detailed building sector model that explicitly represents energy-consuming technologies (e.g., for heating, cooling, lighting, and household appliances). “Incorporating this building sector model within an integrated framework that combines USREP with an hourly electricity dispatch model (EleMod) could enable us to simulate the supply and demand of electricity at finer spatial and temporal resolution,” says Yuan, “and thereby better understand how the power sector will need to adapt to future energy needs.”

Renewable energy for resilience and adaptation

Jill Engel-Cox, director of NREL’s Joint Institute for Strategic Energy Analysis, presented several promising adaptation measures for energy resilience that incorporate renewables. These include placing critical power lines underground; increasing demand-side energy efficiency to decrease energy consumption and power system instability; diversifying generation so electric power distribution can be sustained when one power source is down; deploying distributed generation (e.g., photovoltaics, small wind turbines, energy storage systems) so that if one part of the grid is disconnected, other parts continue to function; and implementing smart grids and micro-grids. “Adaptation and resilience measures tend to be very localized,” says Engel-Cox. “So we need to come up with strategies that will work for particular locations and risks.” These include storm-proofing photovoltaics and wind turbine systems, deploying hydropower with greater flexibility to account for variability in water flow, incorporating renewables in planning for natural gas system outages, and co-locating wind and PV systems on agricultural land.
Extreme events

MIT Joint Program Principal Research Scientist Xiang Gao showed how a statistical method that she developed has produced predictions of the risk of heavy precipitation, heat waves, and other extreme weather events that are more consistent with observations than those of conventional climate models. Known as the “analog method,” the technique detects extreme events based on large-scale atmospheric patterns associated with such events. “Improved prediction of extreme weather events enabled by the analog method offers a promising pathway to provide meaningful climate mitigation and adaptation actions,” says Gao. Sai Ravela, a principal research scientist at MIT’s Department of Earth, Atmospheric and Planetary Sciences, showed how artificial intelligence could be exploited to predict extreme events. Key methods that Ravela and his research group are developing combine climate statistics, atmospheric modeling, and physics to assess the risk of future extreme events. The group’s long-range predictions draw upon deep learning and small-sample statistics using local sensor data and global oscillations. Applying these methods, Ravela and his co-investigators are developing a model to assess the risk of extreme weather events to infrastructure, such as wind and flooding damage to a nuclear plant or a city.

Decision-making and early warning capabilities

MIT Joint Program/MITEI Research Scientist Jennifer Morris explored uncertainty and decision-making for adaptation to global change-driven challenges ranging from coastal adaptation to grid resilience. Morris described the MIT Joint Program approach as a four-step process: quantify stressors and influences, evaluate vulnerabilities, identify response options and transition pathways, and develop decision-making frameworks. She then used the following Q&A to show how this four-pronged approach can be applied to the case of grid resilience.
Q: Do human-induced changes in damaging weather events present a rising, widespread risk of premature failure in the nation’s power grid — and, if so, what are the cost-effective near-term actions to hedge against that risk?

A: First, identify critical junctures within the power grid, starting with large power transformers (LPTs). Next, use the analog approach (described above) to construct a distribution of expected changes in extreme heat wave events that would be damaging to LPTs under different climate scenarios. Next, use energy-economic and electric power models to assess electricity demand and economic costs related to LPT failure. And finally, make decisions under uncertainty to identify near-term actions to mitigate risks of LPT failure (e.g., upgrading or replacing LPTs).

John Aldridge, assistant leader of the Humanitarian Assistance and Disaster Relief Systems Group at MIT Lincoln Laboratory, highlighted the group’s efforts to combine advanced remote sensing and decision support systems to assess the impacts of natural disasters, support hurricane evacuation decision-making, and guide proactive climate adaptation and resilience. Lincoln Laboratory is collaborating with MIT campus partners to develop the Climate Resilience Early Warning System Network (CREWSNET), which draws on MIT strengths in cutting-edge climate forecasting, impact models, and applied decision support tools to empower climate resilience and adaptation on a global scale. “From extreme event prediction to scenario-based risk analysis, this workshop showcased the core capabilities of the joint program and its partners across MIT that can advance scalable solutions to adaptation challenges across the globe,” says Adam Schlosser, who coordinated the day’s presentations.
“Applying leading-edge modeling tools, our research is well-positioned to provide decision-makers with guidance and strategies to build a more resilient future.”

An Army Corps of Engineers flood model depicting the Ala Wai watershed after a 100-year rain event. The owner of a local design firm described the Ala Wai Flood Control Project as the largest climate impact project in Hawaii’s modern history. Image: U.S. Army Corps of Engineers-Honolulu District

Students in class 2.S999 (Solving for Carbon Neutrality at MIT) are charged with developing plans to make MIT’s campus carbon neutral by 2060. Fri, 17 Jan 2020 09:50:01 -0500 Mary Beth Gallagher | Department of Mechanical Engineering

While so many faculty and researchers at MIT are developing technologies to reduce carbon emissions and increase energy sustainability, one class puts the power in students’ hands. In 2.S999 (Solving for Carbon Neutrality at MIT), teams of students are tasked with developing a plan to achieve carbon neutrality on MIT’s campus by 2060. “It’s a ‘roll up your sleeves and solve a real problem’ kind of class,” says Timothy Gutowski, professor of mechanical engineering and co-instructor for the class. In nearly every class, students hear from guest lecturers who offer their own expert views on energy sustainability and carbon emissions. In addition to faculty and staff from across MIT, guest lecturers include local government officials, industry specialists, and economists. Whether it’s the science and ethics behind climate change, the evolution of the electric grid, or the development of MIT’s upgraded Central Utilities Plant, these experts introduce students to considerations on a campus, regional, national, and global level. “It’s essential to expose students to these different perspectives so they understand the complexity and the multidisciplinary nature of this challenge,” says Julie Newman, director of MIT’s Office of Sustainability and co-instructor.
In one class, students get the opportunity to embody different perspectives through a debate about the installation of an offshore wind farm near a small coastal town. Each student is given a particular role to play in a debate. Caroline Boone, a junior studying mechanical engineering, played the role of a beachfront property owner who objected to the installation. “It was a really good way of grasping how those negotiations happen in the real world,” recalls Boone. “The fact of the matter is, you’re going to have to work with groups who have their own interests — that requires compromise and negotiation.” Armed with these negotiation skills, along with insights from different experts, students are divided into teams and charged with developing a strategy that outlines year-by-year how MIT can achieve carbon neutrality by 2060. “The final project uses the campus as a test bed for engaging and exposing students to the complexity of solving for these global issues in their own backyard,” Newman adds. Student teams took a number of approaches in their strategies to achieve carbon neutrality. Tom Hubschman’s team focused on the immediate impact MIT could have through power purchase agreements — also known as PPAs. “Our team quickly realized that, given the harsh New England environment and the limited space on campus, building a giant solar or wind farm in the middle of Cambridge wasn’t a sound strategy,” says Hubschman, a mechanical engineering graduate student. Instead, his team built their strategy around replicating MIT’s current PPA that has resulted in the construction of a 650-acre solar farm in North Carolina.  Boone’s team, meanwhile, took a different approach, developing a plan that didn’t include PPAs. “Our team was a bit contrarian in not having any PPAs, but we thought it was important to have that contrasting perspective,” she explains. Boone’s role within her team was to examine building energy use on campus. 
One takeaway from her research was the need for better controls and sensors to ensure campus buildings are running more efficiently. Regardless of their approach, each team had to deal with uncertainty about the efficiency of New England’s electric grid. “Right now, the electricity produced by MIT’s own power plant emits less carbon than the current grid,” adds Gutowski. “But the question is, as new regulations are put in place and new technologies are developed, when will there be a crossover in the grid emitting less carbon than our own power plant?” Students have to build this uncertainty into the predictive modeling for their proposed solutions. In the two years that the class has been offered, student projects have been helpful in shaping the Office of Sustainability’s own strategy. “These projects have reinforced our calculations and confirmed our strategy of using PPAs to contribute to greenhouse gas reduction off-site as we work toward developing on-site solutions,” explains Newman. This spring, Gutowski and Newman will work with a number of universities in South America on launching similar classes for their curricula. They will visit Ecuador, Chile, and Colombia, encouraging university administrators to task their students with solving for carbon neutrality on their own campuses.

Julie Newman, director of sustainability at MIT, says the final project for course 2.S999 “uses the campus as a test bed for engaging and exposing students to the complexity of solving [for] global issues in their own backyard.” Photo: Ken Richardson

Wielding complex algorithms, nuclear science and engineering doctoral candidate Nestor Sepulveda spins out scenarios for combating climate change. Wed, 15 Jan 2020 00:00:00 -0500 Leda Zimmerman | Department of Nuclear Science and Engineering

To avoid the most destructive consequences of climate change, the world’s electric energy systems must stop producing carbon by 2050.
It seems like an overwhelming technological, political, and economic challenge — but not to Nestor Sepulveda. “My work has shown me that we do have the means to tackle the problem, and we can start now,” he says. “I am optimistic.” Sepulveda’s research, first as a master’s student and now as a doctoral candidate in the MIT Department of Nuclear Science and Engineering (NSE), involves complex simulations that describe potential pathways to decarbonization. In work published last year in the journal Joule, Sepulveda and his co-authors made a powerful case for using a mix of renewable and “firm” electricity sources, such as nuclear energy, as the least costly, and most likely, route to a low- or no-carbon grid. These insights, which flow from a unique computational framework blending optimization and data science, operations research, and policy methodologies, have attracted interest from The New York Times and The Economist, as well as from such notable players in the energy arena as Bill Gates. For Sepulveda, the attention could not come at a more vital moment. “Right now, people are at extremes: on the one hand worrying that steps to address climate change might weaken the economy, and on the other advocating a Green New Deal to transform the economy that depends solely on solar, wind, and battery storage,” he says. “I think my data-based work can help bridge the gap and enable people to find a middle point where they can have a conversation.” An optimization tool The computational model Sepulveda is developing to generate this data, the centerpiece of his dissertation research, was sparked by classroom experiences at the start of his NSE master’s degree. “In courses like Nuclear Technology and Society [22.16], which covered the benefits and risks of nuclear energy, I saw that some people believed the solution for climate change was definitely nuclear, while others said it was wind or solar,” he says. 
“I began wondering how to determine the value of different technologies.” Recognizing that “absolutes exist in people’s minds, but not in reality,” Sepulveda sought to develop a tool that might yield an optimal solution to the decarbonization question. His inaugural effort in modeling focused on weighing the advantages of utilizing advanced nuclear reactor designs against exclusive use of existing light-water reactor technology in the decarbonization effort. “I showed that in spite of their increased costs, advanced reactors proved more valuable to achieving the low-carbon transition than conventional reactor technology alone,” he says. This research formed the basis of Sepulveda’s master’s thesis in 2016, for a degree spanning NSE and the Technology and Policy Program. It also informed the MIT Energy Initiative’s report, “The Future of Nuclear Energy in a Carbon-Constrained World.” The right stuff Sepulveda comes to the climate challenge armed with a lifelong commitment to service, an appetite for problem-solving, and grit. Born in Santiago, he enlisted in the Chilean navy, completing his high school and college education at the national naval academy. “Chile has natural disasters every year, and the defense forces are the ones that jump in to help people, which I found really attractive,” he says. He opted for the most difficult academic specialty, electrical engineering, over combat and weaponry. Early in his career, the climate change issue struck him, he says, and for his senior project, he designed a ship powered by hydrogen fuel cells. After he graduated, the Chilean navy rewarded his performance with major responsibilities in the fleet, including outfitting a $100 million amphibious ship intended for moving marines and for providing emergency relief services. But Sepulveda was anxious to focus fully on sustainable energy, and petitioned the navy to allow him to pursue a master’s at MIT in 2014. 
It was while conducting research for this degree that Sepulveda confronted a life-altering health crisis: a heart defect that led to open-heart surgery. “People told me to take time off and wait another year to finish my degree,” he recalls. Instead, he decided to press on: “I was deep into ideas about decarbonization, which I found really fulfilling.” After graduating in 2016, he returned to naval life in Chile, but “couldn’t stop thinking about the potential of informing energy policy around the world and making a long-lasting impact,” he says. “Every day, looking in the mirror, I saw the big scar on my chest that reminded me to do something bigger with my life, or at least try.” Convinced that he could play a significant role in addressing the critical carbon problem if he continued his MIT education, Sepulveda successfully petitioned naval superiors to sanction his return to Cambridge, Massachusetts. Simulating the energy transition Since resuming studies here in 2018, Sepulveda has wasted little time. He is focused on refining his modeling tool to play out the potential impacts and costs of increasingly complex energy technology scenarios on achieving deep decarbonization. This has meant rapidly acquiring knowledge in fields such as economics, math, and law. “The navy gave me discipline, and MIT gave me flexibility of mind — how to look at problems from different angles,” he says. With mentors and collaborators such as Associate Provost and Japan Steel Industry Professor Richard Lester and MIT Sloan School of Management professors Juan Pablo Vielma and Christopher Knittel, Sepulveda has been tweaking his models. His simulations, which can involve more than 1,000 scenarios, factor in existing and emerging technologies, uncertainties such as the possible emergence of fusion energy, and different regional constraints, to identify optimal investment strategies for low-carbon systems and to determine what pathways generate the most cost-effective solutions. 
“The idea isn’t to say we need this many solar farms or nuclear plants, but to look at the trends and value the future impact of technologies for climate change, so we can focus money on those with the highest impact, and generate policies that push harder on those,” he says. Sepulveda hopes his models won’t just lead the way to decarbonization, but do so in a way that minimizes social costs. “I come from a developing nation, where there are other problems like health care and education, so my goal is to achieve a pathway that leaves resources to address these other issues.” As he refines his computations with the help of MIT’s massive computing clusters, Sepulveda has been building a life in the United States. He has found a vibrant Chilean community at MIT and discovered local opportunities for venturing out on the water, such as summer sailing on the Charles. After graduation, he plans to leverage his modeling tool for the public benefit, through direct interactions with policy makers (U.S. congressional staffers have already begun to reach out to him), and with businesses looking to bend their strategies toward a zero-carbon future. It is a future that weighs even more heavily on him these days: Sepulveda is expecting his first child. “Right now, we’re buying stuff for the baby, but my mind keeps going into algorithmic mode,” he says. “I’m so immersed in decarbonization that I sometimes dream about it.”

“In courses like Nuclear Technology and Society, which covered the benefits and risks of nuclear energy, I saw that some people believed the solution for climate change was definitely nuclear, while others said it was wind or solar,” says doctoral student Nestor Sepulveda. “I began wondering how to determine the value of different technologies.” Photo: Gretchen Ertl

A new study looks at how the global energy mix could change over the next 20 years.
Thu, 09 Jan 2020 13:30:01 -0500 Mark Dwortzan | Joint Program on the Science and Policy of Global Change

When it comes to fulfilling ambitious energy and climate commitments, few nations successfully walk their talk. A case in point is the Paris Agreement initiated four years ago. Nearly 200 signatory nations submitted voluntary pledges to cut their contribution to the world’s greenhouse gas emissions by 2030, but many are not on track to fulfill these pledges. Moreover, only a small number of countries are now pursuing climate policies consistent with keeping global warming well below 2 degrees Celsius, the long-term target recommended by the Intergovernmental Panel on Climate Change (IPCC).

This growing discrepancy between current policies and long-term targets — combined with uncertainty about individual nations’ ability to fulfill their commitments due to administrative, technological, and cultural challenges — makes it increasingly difficult for scientists to project the future of the global energy system and its impact on the global climate. Nonetheless, these projections remain essential for decision-makers to assess the physical and financial risks of climate change and of efforts to transition to a low-carbon economy. Toward that end, several expert groups continue to produce energy scenarios and analyze their implications for the climate. In a study in the journal Economics of Energy & Environmental Policy, Sergey Paltsev, deputy director of the MIT Joint Program on the Science and Policy of Global Change and a senior research scientist at the MIT Energy Initiative, collected projections of the global energy mix over the next two decades from several major energy-scenario producers.
Aggregating results from scenarios developed by the MIT Joint Program, International Energy Agency, Shell, BP, and ExxonMobil, and contrasting them with scenarios assessed by the IPCC that would be required to follow a pathway that limits global warming to 1.5 C, Paltsev arrived at three notable findings:

1. Fossil fuels decline, but still dominate. Assuming current Paris Agreement pledges are maintained beyond 2030, the share of fossil fuels in the global energy mix declines from approximately 80 percent today to 73-76 percent in 2040. In scenarios consistent with the 2 C goal, this share decreases to 56-61 percent in 2040. Meanwhile, the share of wind and solar rises from 2 percent today to 6-13 percent (current pledges) and further to 17-26 percent (2 C scenarios) in 2040.

2. Carbon capture waits in the wings. The multiple scenarios also show a mixed future for fossil fuels as the globe shifts away from carbon-intensive energy sources. Coal use does not have a sustainable future unless combined with carbon capture and storage (CCS) technology, and most near-term projections show no large-scale deployment of CCS in the next 10-15 years. Natural gas consumption is likely to increase in the next 20 years, but is projected to decline thereafter without CCS. For pathways consistent with the “well below 2 C” goal, CCS scale-up by midcentury is essential for all carbon-emitting technologies.

3. Solar and wind thrive, but storage challenges remain. The scenarios show the critical importance of energy-efficiency improvements on the pace of the low-carbon transition but little consensus on the magnitude of such improvements. They do, however, unequivocally point to successful upcoming decades for solar and wind energy. This positive outlook is due to declining costs and escalating research and innovation in addressing intermittency and long-term energy storage challenges.
While the scenarios considered in this study project an increased share of renewables in the next 20 years, they do not indicate anything close to a complete decarbonization of the energy system during that time frame. To assess what happens beyond 2040, the study concludes that decision-makers should draw upon a range of projections of plausible futures, because the dominant technologies of the near term may not prevail over the long term. “While energy projections are becoming more difficult because of the widening gulf between current policies and stated goals, they remain stakeholders’ sharpest tool in assessing the near- and long-term physical and financial risks associated with climate change and the world’s ongoing transition to a low-carbon energy system,” says Paltsev. “Combining the results from multiple sources provides additional insight into the evolution of the global energy mix.”

The AES Corporation, based in Virginia, installed the world’s largest solar-plus-storage system on the southern end of the Hawaiian island of Kauai. A scaled-down version was first tested at the National Renewable Energy Laboratory. Photo: Dennis Schroeder/NREL

Mechanical engineers are developing technologies that could prevent heat from entering or escaping windows, potentially preventing a massive loss of energy. Mon, 06 Jan 2020 15:30:01 -0500 Mary Beth Gallagher | Department of Mechanical Engineering

In the quest to make buildings more energy efficient, windows present a particularly difficult problem. According to the U.S. Department of Energy, heat that either escapes or enters windows accounts for roughly 30 percent of the energy used to heat and cool buildings. Researchers are developing a variety of window technologies that could prevent this massive loss of energy. “The choice of windows in a building has a direct influence on energy consumption,” says Nicholas Fang, professor of mechanical engineering.
“We need an effective way of blocking solar radiation.” Fang is part of a large collaboration that is working together to develop smart adaptive control and monitoring systems for buildings. The research team, which includes researchers from the Hong Kong University of Science and Technology and Leon Glicksman, professor of building technology and mechanical engineering at MIT, has been tasked with helping Hong Kong achieve its ambitious goal to reduce carbon emissions by 40 percent by 2025. “Our idea is to adapt new sensors and smart windows in an effort to help achieve energy efficiency and improve thermal comfort for people inside buildings,” Fang explains. His contribution is the development of a smart material that can be placed on a window as a film that blocks heat from entering. The film remains transparent when the surface temperature is under 32 degrees Celsius, but turns milky when it exceeds 32 C. This change in appearance is due to thermochromic microparticles that change phases in response to heat. The smart window’s milky appearance can block up to 70 percent of solar radiation from passing through the window, translating to a 30 percent reduction in cooling load.  In addition to this thermochromic material, Fang’s team is hoping to embed windows with sensors that monitor sunlight, luminance, and temperature. “Overall, we want an integral solution to reduce the load on HVAC systems,” he explains. Like Fang, graduate student Elise Strobach is working on a material that could significantly reduce the amount of heat that either escapes or enters through windows. She has developed a high-clarity silica aerogel that, when placed between two panes of glass, is 50 percent more insulating than traditional windows and lasts up to a decade longer. 
“Over the course of the past two years, we’ve developed a material that has demonstrated performance and is promising enough to start commercializing,” says Strobach, who is a PhD candidate in MIT’s Device Research Laboratory. To help in this commercialization, Strobach has co-founded the startup AeroShield Materials.

Lighter than a marshmallow, AeroShield’s material comprises 95 percent air. The rest of the material is made up of silica nanoparticles just 1 to 2 nanometers in size. This structure blocks all three modes of heat loss: conduction, convection, and radiation. Gas molecules trapped inside the material’s small voids can no longer collide with one another and transfer energy through convection. Meanwhile, the silica nanoparticles absorb radiation and re-emit it back in the direction it came from. “The material’s composition allows for a really intense temperature gradient that keeps the heat where you want it, whether it’s hot or cold outside,” explains Strobach, who, along with AeroShield co-founder Kyle Wilke, was named one of Forbes’ 30 Under 30 in Energy. Commercialization of this research is being supported by the MIT Deshpande Center for Technological Innovation. Strobach also sees possibilities for combining AeroShield technologies with other window solutions being developed at MIT, including Fang’s work and research being conducted by Gang Chen, Carl Richard Soderberg Professor of Power Engineering, and research scientist Svetlana Boriskina. “Buildings represent one third of U.S. energy usage, so in many ways windows are low-hanging fruit,” explains Chen. Chen and Boriskina previously worked with Strobach on the first iteration of the AeroShield material for their project developing a solar thermal aerogel receiver. More recently, they have developed polymers that could be used in windows or building facades to trap or reflect heat, regardless of color. These polymers were partially inspired by stained-glass windows.
“I have an optical background, so I’m always drawn to the visual aspects of energy applications,” says Boriskina. “The problem is, when you introduce color it affects whatever energy strategy you are trying to pursue.” Using a mix of polyethylene and a solvent, Chen and Boriskina added various nanoparticles to provide color. Once stretched, the material becomes translucent and its composition changes. Previously disorganized carbon chains reform as parallel lines, which are much better at conducting heat. While these polymers need further development for use in transparent windows, they could possibly be used in colorful, translucent windows that reflect or trap heat, ultimately leading to energy savings. “The material isn’t as transparent as glass, but it’s translucent. It could be useful for windows in places you don’t want direct sunlight to enter — like gyms or classrooms,” Boriskina adds. Boriskina is also using these materials for military applications. Through a three-year project funded by the U.S. Army, she is developing lightweight, custom-colored, and unbreakable polymer windows. These windows can provide passive temperature control and camouflage for portable shelters and vehicles. For any of these technologies to have a meaningful impact on energy consumption, researchers must improve scalability and affordability. “Right now, the cost barrier for these technologies is too high — we need to look into more economical and scalable versions,” Fang adds. If researchers are successful in developing manufacturable and affordable solutions, their window technologies could vastly improve building efficiency and lead to a substantial reduction in building energy consumption worldwide.

A smart window developed by Professor Nicholas Fang includes thermochromic material that turns frosty when exposed to temperatures of 32 C or higher, such as when a researcher touches the window with her hand. Photo courtesy of the researchers.


    Assessing the value of battery energy storage in future power grids

    In the transition to a decarbonized electric power system, variable renewable energy (VRE) resources such as wind and solar photovoltaics play a vital role due to their availability, scalability, and affordability. However, the degree to which VRE resources can be successfully deployed to decarbonize the electric power system hinges on the future availability and cost of energy storage technologies.
    In a paper recently published in Applied Energy, researchers from MIT and Princeton University examine battery storage to determine the key drivers that impact its economic value, how that value might change with increasing deployment over time, and the implications for the long-term cost-effectiveness of storage.
    “Battery storage helps make better use of electricity system assets, including wind and solar farms, natural gas power plants, and transmission lines, and that can defer or eliminate unnecessary investment in these capital-intensive assets,” says Dharik Mallapragada, the paper’s lead author. “Our paper demonstrates that this ‘capacity deferral,’ or substitution of batteries for generation or transmission capacity, is the primary source of storage value.”
    Other sources of storage value include providing operating reserves to electricity system operators, avoiding fuel cost and wear and tear incurred by cycling on and off gas-fired power plants, and shifting energy from low price periods to high value periods — but the paper showed that these sources are secondary in importance to value from avoiding capacity investments.
    For their study, the researchers — Mallapragada, a research scientist at the MIT Energy Initiative; Nestor Sepulveda SM ’16, PhD ’20, a postdoc at MIT who was a MITEI researcher and nuclear science and engineering student at the time of the study; and fellow former MITEI researcher Jesse Jenkins SM ’14, PhD ’18, an assistant professor of mechanical and aerospace engineering and the Andlinger Center for Energy and the Environment at Princeton University — use a capacity expansion model called GenX to find the least expensive ways of integrating battery storage into a hypothetical low-carbon power system. They studied the role of storage in two variants of the power system, populated with load and VRE availability profiles consistent with the U.S. Northeast (North) and Texas (South) regions. The paper found that in both regions, the value of battery energy storage generally declines with increasing storage penetration.
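    GenX itself is a detailed open-source model; purely as a sketch of what a capacity expansion model does, the toy linear program below co-optimizes gas, solar, and battery power/energy capacity together with hourly dispatch over four representative hours. All costs, loads, and solar profiles here are invented for illustration and are not taken from the paper; the battery is lossless for simplicity.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 4-hour system (all numbers hypothetical, not from the study):
T = 4
load = np.array([50.0, 60.0, 100.0, 70.0])   # MW demand per hour
cf = np.array([0.5, 1.0, 0.2, 0.0])          # solar output per MW installed

# Variable layout: [gas_cap, solar_cap, stor_pow, stor_en,
#                   gas_gen[t], charge[t], discharge[t], soc[t]]
n = 4 + 4 * T
G_CAP, S_CAP, P_CAP, E_CAP = 0, 1, 2, 3
GEN, CH, DIS, SOC = 4, 4 + T, 4 + 2 * T, 4 + 3 * T

# Annualized capacity costs ($/MW or $/MWh) plus gas fuel cost ($/MWh)
c = np.zeros(n)
c[[G_CAP, S_CAP, P_CAP, E_CAP]] = [70.0, 50.0, 30.0, 10.0]
c[GEN:GEN + T] = 40.0

A_eq, b_eq = [], []
for t in range(T):
    # Hourly energy balance: gas + solar + discharge - charge = load
    row = np.zeros(n)
    row[GEN + t], row[S_CAP] = 1.0, cf[t]
    row[DIS + t], row[CH + t] = 1.0, -1.0
    A_eq.append(row); b_eq.append(load[t])
    # State of charge: soc[t] = soc[t-1] + charge[t] - discharge[t]
    row = np.zeros(n)
    row[SOC + t], row[CH + t], row[DIS + t] = 1.0, -1.0, 1.0
    if t > 0:
        row[SOC + t - 1] = -1.0
    A_eq.append(row); b_eq.append(0.0)

A_ub, b_ub = [], []
for t in range(T):
    # Each dispatch decision is limited by its installed capacity
    for var, cap in ((GEN, G_CAP), (CH, P_CAP), (DIS, P_CAP), (SOC, E_CAP)):
        row = np.zeros(n)
        row[var + t], row[cap] = 1.0, -1.0
        A_ub.append(row); b_ub.append(0.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print("total system cost:", round(res.fun, 1))
print("gas / solar / storage-power / storage-energy capacity (MW, MWh):",
      np.round(res.x[:4], 1))
```

    A gas-only plan (100 MW of gas running every hour) would cost 70 × 100 + 40 × 280 = 18,200 in this toy, so the optimizer's mix must come in at or below that; real models like GenX solve the same kind of problem over thousands of hours with many more technologies and constraints.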
    “As more and more storage is deployed, the value of additional storage steadily falls,” explains Jenkins. “That creates a race between the declining cost of batteries and their declining value, and our paper demonstrates that the cost of batteries must continue to fall if storage is to play a major role in electricity systems.”
    The study’s key findings include:
    The economic value of storage rises as VRE generation provides an increasing share of the electricity supply.
    The economic value of storage declines as storage penetration increases, due to competition between storage resources for the same set of grid services.
    As storage penetration increases, most of its economic value is tied to its ability to displace the need for investing in both renewable and natural gas-based energy generation and transmission capacity.
    Without further cost reductions, a relatively small magnitude (4 percent of peak demand) of short-duration (energy capacity of two to four hours of operation at peak power) storage is cost-effective in grids with 50-60 percent of electricity supply that comes from VRE generation. “The picture is more favorable to storage adoption if future cost projections ($150 per kilowatt-hour for four-hour storage) are realized,” notes Mallapragada.
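    The first two findings can be illustrated with a toy merit-order model (entirely hypothetical numbers, not results from the paper): if the marginal electricity price rises linearly with net demand, each additional megawatt of storage narrows the very peak/off-peak price spread it arbitrages, so the marginal value of storage falls as penetration grows.

```python
# Toy merit-order curve: marginal price rises linearly with net demand.
# All parameters are invented for illustration.
def price(demand_mw, base=20.0, slope=0.5):
    return base + slope * demand_mw

def marginal_storage_value(k_mw, peak=1000.0, offpeak=400.0):
    """Arbitrage spread earned by the k-th MW of storage.

    Each MW already deployed shaves 1 MW off net peak demand and adds
    1 MW of off-peak charging demand, narrowing the price spread."""
    return price(peak - k_mw) - price(offpeak + k_mw)

for k in [0, 100, 200, 300]:
    print(f"MW deployed: {k:4d}  marginal value: "
          f"${marginal_storage_value(k):6.1f}/MWh")
# marginal value falls: $300.0, $200.0, $100.0, $0.0 per MWh cycled
```

    The same diminishing-returns logic applies to the other services storage provides: the first units capture the most valuable opportunities, and later units compete for what remains.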
    Relevance to policymakers
    The results of the study highlight the importance of reforming electricity market structures or contracting practices to enable storage developers to monetize the value from substituting generation and transmission capacity — a central component of their economic viability.
    “In practice, there are few direct markets to monetize the capacity substitution value that is provided by storage,” says Mallapragada. “Depending on their administrative design and market rules, capacity markets may or may not adequately compensate storage for providing energy during peak load periods.”
    In addition, Mallapragada notes that developers and integrated utilities in regulated markets can implicitly capture capacity substitution value through integrated development of wind, solar, and energy storage projects. Recent project announcements support the observation that this may be a preferred method for capturing storage value.
    Implications for the low-carbon energy transition
    The economic value of energy storage is closely tied to other major trends impacting today’s power system, most notably the increasing penetration of wind and solar generation. However, in some cases, the continued decline of wind and solar costs could negatively impact storage value, which could create pressure to reduce storage costs in order to remain cost-effective. 
    “It is a common perception that battery storage and wind and solar power are complementary,” says Sepulveda. “Our results show that is true, and that all else equal, more solar and wind means greater storage value. That said, as wind and solar get cheaper over time, that can reduce the value storage derives from lowering renewable energy curtailment and avoiding wind and solar capacity investments. Given the long-term cost declines projected for wind and solar, I think this is an important consideration for storage technology developers.” 
    The relationship between wind and solar cost and storage value is even more complex, the study found.
    “Since storage derives much of its value from capacity deferral, going into this research, my expectation was that the cheaper wind and solar gets, the lower the value of energy storage will become, but our paper shows that is not always the case,” explains Mallapragada. “There are some scenarios where other factors that contribute to storage value, such as increases in transmission capacity deferral, outweigh the reduction in wind and solar deferral value, resulting in higher overall storage value.”
    Battery storage is increasingly competing with natural gas-fired power plants to provide reliable capacity for peak demand periods, but the researchers also find that adding 1 megawatt (MW) of storage power capacity displaces less than 1 MW of natural gas generation. The reason: To shut down 1 MW of gas capacity, storage must not only provide 1 MW of power output, but also be capable of sustaining production for as many hours in a row as the gas capacity operates. That means you need many hours of energy storage capacity (megawatt-hours) as well. The study also finds that this capacity substitution ratio declines as storage tries to displace more gas capacity.
    “The first gas plant knocked offline by storage may only run for a couple of hours, one or two times per year,” explains Jenkins. “But the 10th or 20th gas plant might run 12 or 16 hours at a stretch, and that requires deploying a large energy storage capacity for batteries to reliably replace gas capacity.”
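    One way to see this effect is a toy peak-shaving calculation over a net-load duration curve (illustrative numbers only; it ignores recharging between peak events and treats hours as sorted rather than consecutive): with a fixed four-hour duration, the battery's energy constraint starts to bind as it tries to displace gas plants that must run for longer stretches, so each added megawatt of storage displaces less than a megawatt of gas.

```python
import numpy as np

# Hypothetical top of a net-load duration curve (MW), highest hours first.
peak_hours = np.array([100, 97, 95, 93, 92, 91, 90, 89, 88.5, 88,
                       87.5, 87, 86.5, 86, 85.5, 85])

def gas_displaced(power_mw, duration_h):
    """Gas capacity displaced by peak shaving with energy-limited storage.

    Lower the residual peak to the deepest level L such that (a) the
    energy above L fits in the battery and (b) peak - L <= rated power.
    Feasibility is monotone in L, so a simple scan suffices."""
    energy = power_mw * duration_h
    best = 0.0
    for L in np.arange(peak_hours.max(), 0.0, -0.5):
        excess = np.clip(peak_hours - L, 0.0, None).sum()
        if excess <= energy and peak_hours.max() - L <= power_mw:
            best = peak_hours.max() - L
    return best

for p in [5, 10, 20, 30]:
    d = gas_displaced(p, duration_h=4)
    print(f"{p:2d} MW of 4-hour storage displaces {d:4.1f} MW of gas "
          f"(substitution ratio {d / p:.2f})")
# substitution ratio starts at 1.0 and falls toward ~0.6 as the
# battery tries to displace gas plants with longer running hours
```

    In this toy curve the first units of storage displace gas one-for-one, because the plants at the very top of the curve run only briefly; deeper into the curve, the hours above the shaved level multiply and the four-hour energy budget runs out first, which is the declining substitution ratio the study describes.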
    Given the importance of energy storage duration to gas capacity substitution, the study finds that longer storage durations (the number of hours storage can operate at peak power) of eight hours generally displace more gas capacity at the margin than two-hour durations. However, the additional system value from longer durations does not outweigh their additional cost. 
    “From the perspective of power system decarbonization, this suggests the need to develop cheaper energy storage technologies that can be cost-effectively deployed for much longer durations, in order to displace dispatchable fossil fuel generation,” says Mallapragada.
    To address this need, the team is preparing to publish a follow-up paper that provides the most extensive evaluation to date of the potential role and value of long-duration energy storage technologies.
    “We are developing novel insights that can guide the development of a variety of different long-duration energy storage technologies and help academics, private-sector companies and investors, and public policy stakeholders understand the role of these technologies in a low-carbon future,” says Sepulveda.
    This research was supported by General Electric through the MIT Energy Initiative’s Electric Power Systems Low-Carbon Energy Center.


    MIT researchers and Wyoming representatives explore energy and climate solutions

    The following is a joint release from the MIT Environmental Solutions Initiative and the office of Wyoming Governor Mark Gordon.
    The State of Wyoming supplies 40 percent of the country’s coal used to power electric grids. The production of coal and other energy resources contributes over half of the state’s revenue, funding the government and many of the social services — including K-12 education — that residents rely on. With the consumption of coal in a long-term decline, decreased revenues from oil and natural gas, and growing concerns about carbon dioxide (CO2) emissions, the state is actively looking at how to adapt to a changing marketplace.
    Recently, representatives from the Wyoming Governor’s Office, University of Wyoming School of Energy Resources, and Wyoming Energy Authority met with faculty and researchers from MIT in a virtual, two-day forum to discuss avenues for the state to strengthen its energy economy while lowering CO2 emissions.
    “This moment in time presents us with an opportunity to seize: creating a strong economic future for the people of Wyoming while protecting something we all care about — the climate,” says Wyoming Governor Mark Gordon. “Wyoming has tremendous natural resources that create thousands of high-paying jobs. This conversation with MIT allows us to consider how we use our strengths and adapt to the changes that are happening nationally and globally.”
    The two dozen participants from Wyoming and MIT discussed pathways for long-term economic growth in Wyoming, given the global need to reduce carbon dioxide emissions. The wide-ranging and detailed conversation covered topics such as the future of carbon capture technology, hydrogen, and renewable energy; using coal for materials and advanced manufacturing; climate policy; and how communities can adapt and thrive in a changing energy marketplace.
    The discussion paired MIT’s global leadership in technology development, economic modeling, and low-carbon energy research with Wyoming’s unique competitive advantages: its geology that provides vast underground storage potential for CO2; its existing energy and pipeline infrastructure; and the tight bonds between business, government, and academia.
    “Wyoming’s small population and statewide support of energy technology development is an advantage,” says Holly Krutka, executive director of the University of Wyoming’s School of Energy Resources. “Government, academia, and industry work very closely together here to scale up technologies that will benefit the state and beyond. We know each other, so we can get things done and get them done quickly.”
    “There’s strong potential for MIT to work with the State of Wyoming on technologies that could not only benefit the state, but also the country and rest of the world as we combat the urgent crisis of climate change,” says Bob Armstrong, director of the MIT Energy Initiative, who attended the forum. “It’s a very exciting conversation.”
    The event was convened by the MIT Environmental Solutions Initiative as part of its Here & Real project, which works with regions in the United States to help further initiatives that are both climate-friendly and economically just.
    “At MIT, we are focusing our attention on technologies that combat the challenge of climate change — but also, with an eye toward not leaving people behind,” says Maria Zuber, MIT’s vice president for research and the E. A. Griswold Professor of Geophysics.
    “It is inspiring to see Wyoming’s state leadership seriously committed to finding solutions for adapting the energy industry, given what we know about the risks of climate change,” says Laur Hesse Fisher, director of the Here & Real project. “Their determination to build an economically and environmentally sound future for the people of Wyoming has been evident in our discussions, and I am excited to see this conversation continue and deepen.”
