
MIT News – Environment | Climate

- Two projects receive funding for technologies that avoid carbon emissions
- Mobility Systems Center awards four projects for low-carbon transportation research
- MIT researchers and Wyoming representatives explore energy and climate solutions
- 3 Questions: Asegun Henry on five “grand thermal challenges” to stem the tide of global warming
- MIT Energy Conference goes virtual
- Shrinking deep learning’s carbon footprint
- When the chemical industry met modern architecture
- Study: A plunge in incoming sunlight may have triggered “Snowball Earths”
- $25 million gift launches ambitious new effort tackling poverty and climate change
- Letter from President Reif: Tackling the grand challenges of climate change
- Covid-19 shutdown led to increased solar power output
- Building a more sustainable MIT — from home
- Decarbonize and diversify
- A new approach to carbon capture
- Innovations in environmental training for the mining industry
- D-Lab moves online, without compromising on impact
- IdeaStream 2020 goes virtual
- Researchers find benefits of solar photovoltaics outweigh costs
- Ice, ice, maybe
- Why the Mediterranean is a climate change hotspot
- What moves people?
- Startup with MIT roots develops lightweight solar panels
- Tiny sand grains trigger massive glacial surges
- Unlocking the secrets of a plastic-eater
- Peatland drainage in Southeast Asia adds to climate change
- Study: Reflecting sunlight to cool the planet will cause other global changes
- Machine learning helps map global ocean communities
- Making nuclear energy cost-competitive
- Solve at MIT builds partnerships to tackle complex challenges during Covid-19 crisis
- Solar energy farms could offer second life for electric vehicle batteries
- Transportation policymaking in Chinese cities
- The quest for practical fusion energy sources
- Towable sensor free-falls to measure vertical slices of ocean conditions
- A scientist turns to entrepreneurship
- Q&A: Energy studies at MIT and the next generation of energy leaders
- Melting glaciers cool the Southern Ocean

https://news.mit.edu/rss/topic/environment MIT news feed about: Environment | Climate | Climate change en Thu, 20 Aug 2020 18:00:00 +0000 https://news.mit.edu/2020/two-research-projects-receive-funding-advance-technologies-avoid-carbon-emissions-0820 Asegun Henry, Paul Barton, and Matěj Peč will lead research supported by the MIT Energy Initiative's Carbon Capture, Utilization, and Storage Center. Thu, 20 Aug 2020 14:00:00 -0400 https://news.mit.edu/2020/two-research-projects-receive-funding-advance-technologies-avoid-carbon-emissions-0820 Emily Dahl | MIT Energy Initiative <p>The <a href=”http://energy.mit.edu/ccus/” target=”_blank”>Carbon Capture, Utilization, and Storage Center</a>, one of the MIT Energy Initiative (MITEI)’s <a href=”http://energy.mit.edu/lcec/”>Low-Carbon Energy Centers</a>, has awarded $900,000 in funding to two new research projects to advance technologies that avoid carbon dioxide (CO<sub>2</sub>) emissions into the atmosphere and help address climate change. The winning project receives $750,000, and the additional project receives $150,000.</p> <p>The winning project, led by principal investigator Asegun Henry, the Robert N. Noyce Career Development Professor in the Department of Mechanical Engineering, and co-principal investigator Paul Barton, the Lammot du Pont Professor of Chemical Engineering, aims to produce hydrogen without CO<sub>2</sub> emissions while creating a second revenue stream of solid carbon. The additional project, led by principal investigator Matěj Peč, the Victor P. Starr Career Development Chair in the Department of Earth, Atmospheric and Planetary Sciences, seeks to expand understanding of new processes for storing CO<sub>2</sub> in basaltic rocks by converting it from an aqueous solution into carbonate minerals.</p> <p>Carbon capture, utilization, and storage (CCUS) technologies have the potential to play an important role in limiting or reducing the amount of CO<sub>2</sub> in the atmosphere, as part of a suite of approaches to mitigating climate change that includes renewable energy and energy efficiency technologies, as well as policy measures. While some CCUS technologies are being deployed at the million-ton-of-CO<sub>2</sub> per year scale, there is substantial need to improve the costs and performance of those technologies and to advance more nascent technologies. MITEI’s CCUS center is working to meet these challenges with a cohort of industry members that are supporting promising MIT research, such as these newly funded projects.</p> <p><strong>A new process for producing hydrogen without CO<sub>2</sub> emissions</strong></p> <p>Henry and Barton’s project, “Lower cost, CO<sub>2</sub>-free, H<sub>2</sub> production from CH<sub>4</sub> using liquid tin,” investigates the use of methane pyrolysis instead of steam methane reforming (SMR) for hydrogen production.</p> <p>Currently, hydrogen production accounts for approximately 1 percent of global CO<sub>2</sub> emissions, and the predominant production method is SMR. The SMR process relies on the formation of CO<sub>2</sub>, so replacing it with another economically competitive approach to making hydrogen would avoid emissions.&nbsp;</p> <p>“Hydrogen is essential to modern life, as it is primarily used to make ammonia for fertilizer, which plays an indispensable role in feeding the world’s 7.5 billion people,” says Henry. 
“But we need to be able to feed a growing population and take advantage of hydrogen’s potential as a carbon-free fuel source by eliminating CO<sub>2</sub> emissions from hydrogen production. Our process results in a solid carbon byproduct, rather than CO<sub>2</sub> gas. The sale of the solid carbon lowers the minimum price at which hydrogen can be sold to break even with the current, CO<sub>2</sub> emissions-intensive process.”</p> <p>Henry and Barton’s work is a new take on an existing process, pyrolysis of methane. Like SMR, methane pyrolysis uses methane as the source of hydrogen, but follows a different pathway. SMR uses the oxygen in water to liberate the hydrogen by preferentially bonding oxygen to the carbon in methane, producing CO<sub>2</sub> gas in the process. In methane pyrolysis, the methane is heated to such a high temperature that the molecule itself becomes unstable and decomposes into hydrogen gas and solid carbon — a much more valuable byproduct than CO<sub>2</sub> gas. Although the idea of methane pyrolysis has existed for many years, it has been difficult to commercialize because of the formation of the solid byproduct, which can deposit on the walls of the reactor, eventually plugging it up. This issue makes the process impractical. Henry and Barton’s project uses a new approach in which the reaction is facilitated with inert molten tin, which prevents the plugging from occurring. The proposed approach is enabled by recent advances in Henry’s lab that enable the flow and containment of liquid metal at extreme temperatures without leakage or material degradation.&nbsp;</p> <p><strong>Studying CO<sub>2</sub> storage in basaltic reservoirs</strong></p> <p>With his project, “High-fidelity monitoring for carbon sequestration: integrated geophysical and geochemical investigation of field and laboratory data,” Peč plans to conduct a comprehensive study to gain a holistic understanding of the coupled chemo-mechanical processes that accompany CO<sub>2</sub> storage in basaltic reservoirs, with hopes of increasing adoption of this technology.</p> <p>The Intergovernmental Panel on Climate Change <a href=”https://report.ipcc.ch/sr15/pdf/sr15_spm_final.pdf” target=”_blank”>estimates</a> that 100 to 1,000 gigatonnes of CO<sub>2</sub> must be removed from the atmosphere by the end of the century. Such large volumes can only be stored below the Earth’s surface, and that storage must be accomplished safely and securely, without allowing any leakage back into the atmosphere.</p> <p>One promising storage strategy is CO<sub>2</sub>&nbsp;mineralization — specifically by dissolving gaseous CO<sub>2</sub>&nbsp;in water, which then reacts with reservoir rocks to form carbonate minerals. Of the technologies proposed for carbon sequestration, this approach is unique in that the sequestration is permanent: the CO<sub>2</sub>&nbsp;becomes part of an inert solid, so it cannot escape back into the environment. 
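In simplified overall form, the chemistry behind the two funded projects can be written in a few reactions. The following is a sketch for orientation only, not a statement of the teams' actual reaction models; real pathways involve intermediate species and side reactions:

```latex
% Simplified overall stoichiometry (illustrative).
\begin{align*}
\text{Steam methane reforming (incumbent):} \quad
    & \mathrm{CH_4 + 2\,H_2O \rightarrow CO_2 + 4\,H_2} \\
\text{Methane pyrolysis (solid-carbon route):} \quad
    & \mathrm{CH_4 \rightarrow C(s) + 2\,H_2} \\
\text{CO}_2\text{ dissolution:} \quad
    & \mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3} \\
\text{Mineral carbonation (Mg analogous):} \quad
    & \mathrm{Ca^{2+} + H_2CO_3 \rightarrow CaCO_3(s) + 2\,H^+}
\end{align*}
```

The last two reactions are why the divalent-cation content of the host rock matters, as the next paragraph notes.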
Basaltic rocks, the most common volcanic rock on Earth, present good sites for CO<sub>2</sub> injection due to their widespread&nbsp;occurrence&nbsp;and high concentrations of&nbsp;divalent cations such as calcium and magnesium that can form carbonate minerals.&nbsp;In one study, more than 95 percent of the CO<sub>2</sub>&nbsp;injected into a pilot site in Iceland was precipitated as carbonate minerals in less than two years.</p> <p>However, ensuring the subsurface integrity of geological formations during fluid injection and accurately evaluating the reaction rates in such reservoirs require targeted studies such as Peč’s.</p> <p>“The funding by MITEI’s Low-Carbon Energy Center for Carbon Capture, Utilization, and Storage allows me to start a new research direction, bringing together a group of experts from a range of disciplines to tackle climate change, perhaps the greatest scientific challenge our generation is facing,” says Peč.</p> <p>The two projects were selected from a call for proposals that resulted in 15 entries by MIT researchers. “The application process revealed a great deal of interest from MIT researchers in advancing carbon capture, utilization, and storage processes and technologies,” says Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences, who co-directs the CCUS center with T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering. “The two projects funded through the center will result in fundamental, higher-risk research exploring novel approaches that have the potential to have high impact in the longer term. Given the short-term focus of the industry, projects like this might not have otherwise been funded, so having support for this kind of early-stage fundamental research is crucial.”</p> Postdoc Tiange Xing conducts an experiment in the Peč Lab related to the group’s newly funded project to expand understanding of new processes for storing CO2 in basaltic rocks by converting it from an aqueous solution into carbonate minerals. Photo courtesy of the Peč Lab. https://news.mit.edu/2020/mitei-mobility-systems-center-awards-four-new-projects-low-carbon-transportation-research-0818 Topics include Covid-19 and urban mobility, strategies for electric vehicle charging networks, and infrastructure and economics for hydrogen-fueled transportation. Tue, 18 Aug 2020 14:00:00 -0400 https://news.mit.edu/2020/mitei-mobility-systems-center-awards-four-new-projects-low-carbon-transportation-research-0818 Turner Jackson | MIT Energy Initiative <p>The <a href=”http://energy.mit.edu/msc/”>Mobility Systems Center</a> (MSC), one of the MIT Energy Initiative (MITEI)’s <a href=”http://energy.mit.edu/lcec/”>Low-Carbon Energy Centers</a>, will fund four new research projects that will allow for deeper insights into achieving a decarbonized transportation sector.</p> <p>”Based on input from&nbsp;our Mobility Systems Center members, we have selected an excellent and diverse set of projects to initiate this summer,” says <a href=”http://energy.mit.edu/profile/randall-field/”>Randall Field</a>, the center’s executive director. 
“The awarded projects will address a variety of pressing topics including the impacts of Covid-19 on urban mobility, strategies for electric vehicle charging networks, and infrastructure and economics for hydrogen-fueled transportation.” The projects are spearheaded by faculty and researchers from across the Institute, with experts in several fields including economics, urban planning, and energy systems.</p> <p>In addition to pursuing new avenues of research, the Mobility Systems Center also welcomes <a href=”http://energy.mit.edu/profile/jinhua-zhao/”>Jinhua Zhao</a> as co-director. Zhao serves alongside Professor <a href=”http://energy.mit.edu/profile/william-green/”>William H. Green,</a> the Hoyt C. Hottel Professor in Chemical Engineering. Zhao is an associate professor in the Department of Urban Studies and Planning and the director of the <a href=”http://mobility.mit.edu/”>JTL Urban Mobility Lab</a>. He succeeds <a href=”http://energy.mit.edu/profile/sanjay-sarma/”>Sanjay Sarma</a>, the vice president for open learning and the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor of Mechanical Engineering.</p> <p>“Jinhua already has a strong relationship with mobility research at MITEI, having been a major contributor to MITEI’s Mobility of the Future study and serving as a principal investigator for MSC projects. He will provide excellent leadership to the center,” says MITEI Director <a href=”http://energy.mit.edu/profile/robert-armstrong/”>Robert C. Armstrong</a>, the Chevron Professor of Chemical Engineering. “We also thank Sanjay for his valuable leadership during the MSC’s inaugural year, and look forward to collaborating with him in his role as vice president for open learning — an area that is vitally important in MIT’s response to research and education in the Covid-19 era.”</p> <p><strong>The impacts of Covid-19 on urban mobility</strong></p> <p>The Covid-19 pandemic has transformed all aspects of life in a remarkably short amount of time, including how, when, and why people travel. In addition to becoming the center’s new co-director, Zhao will lead one of the MSC’s new projects to identify how Covid-19 has impacted use of, preferences toward, and energy consumption of different modes of urban transportation, including driving, walking, cycling, and most dramatically, ridesharing services and public transit.</p> <p>Zhao describes four primary objectives for the project. The first is to quantify large-scale behavioral and preference changes in response to the pandemic, tracking how these change from the beginning of the outbreak through the medium-term recovery period. Next, the project will break down these changes by sociodemographic groups, with a particular emphasis on low-income and marginalized communities.</p> <p>The project will then use these insights to posit how changes to infrastructure, equipment, and policies could help shape travel recovery to be more sustainable and equitable. Finally, Zhao and his research team will translate these behavioral changes into energy consumption and carbon dioxide emissions estimates.</p> <p>“We make two distinctions: first, between impacts on amount of travel (e.g., number of trips) and&nbsp;impacts on type of travel (e.g., mixture of different travel modes); and second, between temporary shocks and longer-term structural changes,” says Zhao. “Even when the coronavirus is no longer a threat to public health, we expect to see lasting effects on activity, destination, and mode preferences. 
These changes, in turn, affect energy consumption and emissions from the transportation sector.”</p> <p><strong>The economics of electric vehicle charging</strong></p> <p>In the transition toward a low-carbon transportation system, refueling infrastructure is crucial for the viability of any alternative fuel vehicle. Jing Li, an assistant professor in the MIT Sloan School of Management, aims to develop a model of consumer vehicle and travel choices based on data regarding travel patterns, electric vehicle (EV) charging demand, and EV adoption.</p> <p>Li’s research team will implement a two-pronged approach. First, they will quantify the value that each charging location provides to the rest of the refueling network, which may be greater than that location’s individual profitability due to network spillovers. Second, they will simulate the profits of EV charging networks and the adoption rates of EVs using different pricing and location strategies.</p> <p>“We hypothesize that some charging locations may not be privately profitable, but would be socially valuable. If so, then a charging network may increase profits by subsidizing entry at ‘missing’ locations that are underprovided by the market,” she says. If proven correct, this research could be valuable in making EVs accessible to broader portions of the population.&nbsp;</p> <p><strong>Cost reduction and emissions savings strategies for hydrogen mobility systems</strong></p> <p>Hydrogen-based transportation and other energy services have long been discussed, but what role will they play in a clean energy transition? <a href=”http://energy.mit.edu/profile/jessika-trancik/”>Jessika Trancik</a>, an associate professor of energy studies in the Institute for Data, Systems, and Society, will examine and identify cost-reducing and emissions-saving mechanisms for hydrogen-fueled mobility services. She plans to analyze production and distribution scenarios, evolving technology costs, and the lifecycle greenhouse gas emissions of hydrogen-based mobility systems, considering both travel activity patterns and fluctuations in the primary energy supply for hydrogen production.</p> <p>“Modeling the mechanisms through which the design of hydrogen-based mobility systems can achieve lower costs and emissions can help inform the development of future infrastructure,” says Trancik. “Models and theory to inform this development can have a significant impact on whether or not hydrogen-based systems succeed in contributing measurably to the decarbonization of the transportation sector.”</p> <p>The goals for the project are threefold: quantifying the emissions and costs of hydrogen production and storage pathways, with a focus on the potential use of excess renewable energy; modeling costs and requirements of the distribution and refueling infrastructure for different forms of transportation, from personal vehicles to long-haul trucking based on existing and projected demand; and modeling the costs and emissions associated with the use of hydrogen-fueled mobility services.</p> <p><strong>Analysis of forms of hydrogen for use in transportation</strong></p> <p>MITEI research scientist <a href=”http://energy.mit.edu/profile/emre-gencer/”>Emre Gençer</a> will lead a team including <a href=”http://energy.mit.edu/profile/yang-shao-horn/”>Yang Shao-Horn</a>, the W.M. 
Keck Professor of Energy in the Department of Materials Science and Engineering, and <a href=”http://energy.mit.edu/profile/dharik-mallapragada/”>Dharik Mallapragada</a>, a MITEI research scientist, to assess the alternative forms of hydrogen that could serve the transportation sector. This project will develop an end-to-end techno-economic and greenhouse gas emissions analysis of hydrogen-based energy supply chains for road transportation.</p> <p>The analysis will focus on two classes of supply chains: pure hydrogen (transported as a compressed gas or cryogenic liquid) and cyclic supply chains (based on liquid organic hydrogen carriers for powering on-road transportation). The low energy density of gaseous hydrogen is currently a barrier to the large-scale deployment of hydrogen-based transportation; liquid carriers are a potential solution in enabling an energy-dense means for storing and delivering hydrogen fuel. The scope of the analysis will include the generation, storage, distribution, and use of hydrogen, as well as the carrier molecules that are used in the supply chain. Additionally, the researchers will estimate the economic and environmental performance of various technology options across the entire supply chain.</p> <p>“Hydrogen has long been discussed as a fuel of the future,” says Shao-Horn. “As the energy transition progresses, opportunities for carbon-free fuels will only grow throughout the energy sector. Thorough analyses of hydrogen-based technologies are vital for providing information necessary to a greener transportation and energy system.”</p> <p><strong>Broadening MITEI’s mobility research portfolio</strong></p> <p>The mobility sector needs a multipronged approach to mitigate its increasing environmental impact. The four new projects will complement the MSC’s current portfolio of research projects, which includes an evaluation of operational designs for highly responsive urban last-mile delivery services; a techno-economic assessment of options surrounding long-haul road freight; an investigation of tradeoffs between data privacy and performance in shared mobility services; and an examination of mobility-as-a-service and its implications for private car ownership in U.S. cities.&nbsp;</p> <p>“The pressures to adapt our transportation systems have never been greater with the Covid-19 crisis and increasing environmental concerns. While new technologies, business models, and governmental policies present opportunities to advance, research is needed to understand how they interact with one another and help to shape our mobility patterns,” says Field. “We are very excited to have such a strong breadth of projects to contribute multidisciplinary insights into the evolution of a cleaner, more sustainable mobility future.”</p> The MIT Energy Initiative’s Mobility Systems Center has selected four new low-carbon transportation research projects to add to its growing portfolio. Photo: Benjamin Cruz https://news.mit.edu/2020/mit-researchers-wyoming-representatives-explore-energy-climate-solutions-0811 Members of Wyoming’s government and public university met with MIT researchers to discuss climate-friendly economic growth. 
Tue, 11 Aug 2020 00:00:00 -0400 https://news.mit.edu/2020/mit-researchers-wyoming-representatives-explore-energy-climate-solutions-0811 Environmental Solutions Initiative <p><em>The following is a joint release from the MIT Environmental Solutions Initiative and the office of Wyoming Governor Mark Gordon.</em></p> <p>The State of Wyoming supplies 40 percent of the country’s coal used to power electric grids. The production of coal and other energy resources contributes over half of the state’s revenue, funding the government and many of the social services — including K-12 education — that residents rely on. With the consumption of coal in a long-term decline, decreased revenues from oil and natural gas, and growing concerns about carbon dioxide (CO<sub>2</sub>) emissions, the state is actively looking at how to adapt to a changing marketplace.</p> <p>Recently, representatives from the Wyoming Governor’s Office, University of Wyoming School of Energy Resources, and Wyoming Energy Authority met with faculty and researchers from MIT in a virtual, two-day discussion of avenues for the state to strengthen its energy economy while lowering CO<sub>2</sub>&nbsp;emissions.</p> <p>“This moment in time presents us with an opportunity to seize: creating a strong economic future for the people of Wyoming while protecting something we all care about — the climate,” says Wyoming Governor Mark Gordon. “Wyoming has tremendous natural resources that create thousands of high-paying jobs. This conversation with MIT allows us to consider how we use our strengths and adapt to the changes that are happening nationally and globally.”</p> <p>The two dozen participants from Wyoming and MIT discussed pathways for long-term economic growth in Wyoming, given the global need to reduce carbon dioxide emissions. The wide-ranging and detailed conversation covered topics such as the future of carbon capture technology, hydrogen, and renewable energy; using coal for materials and advanced manufacturing; climate policy; and how communities can adapt and thrive in a changing energy marketplace.</p> <p>The discussion paired MIT’s global leadership in technology development, economic modeling, and low-carbon energy research with Wyoming’s unique competitive advantages: its geology that provides vast underground storage potential for CO<sub>2</sub>; its existing energy and pipeline infrastructure; and the tight bonds between business, government, and academia.</p> <p>“Wyoming’s small population and statewide support of energy technology development is an advantage,” says Holly Krutka, executive director of the University of Wyoming’s School of Energy Resources. “Government, academia, and industry work very closely together here to scale up technologies that will benefit the state and beyond. We know each other, so we can get things done and get them done quickly.”</p> <p>“There’s strong potential for MIT to work with the State of Wyoming on technologies that could not only benefit the state, but also the country and rest of the world as we combat the urgent crisis of climate change,” says Bob Armstrong, director of the MIT Energy Initiative, who attended the forum. 
“It’s a very exciting conversation.”</p> <p>The event was convened by the MIT Environmental Solutions Initiative as part of its Here &amp; Real project, which works with regions in the United States to help further initiatives that are both climate-friendly and economically just.</p> <p>“At MIT, we are focusing our attention on technologies that combat the challenge of climate change — but also, with an eye toward not leaving people behind,” says Maria Zuber, MIT’s vice president for research and the E. A. Griswold Professor of Geophysics.</p> <p>“It is inspiring to see Wyoming’s state leadership seriously committed to finding solutions for adapting the energy industry, given what we know about the risks of climate change,” says Laur Hesse Fisher, director of the Here &amp; Real project. “Their determination to build an economically and environmentally sound future for the people of Wyoming has been evident in our discussions, and I am excited to see this conversation continue and deepen.”</p> The Wyoming State Capitol in Cheyenne https://news.mit.edu/2020/asegun-henry-thermal-challenges-global-warming-0810 “Our mission here is to save humanity from extinction due to climate change,” says MIT professor. Mon, 10 Aug 2020 11:00:00 -0400 https://news.mit.edu/2020/asegun-henry-thermal-challenges-global-warming-0810 Jennifer Chu | MIT News Office <p><em>More than 90 percent of the world’s energy use today involves heat, whether for producing electricity, heating and cooling buildings and vehicles, manufacturing steel and cement, or other industrial activities. Collectively, these processes emit a staggering amount of greenhouse gases into the environment each year. </em></p> <p><em>Reinventing the way we transport, store, convert, and use thermal energy would go a long way toward avoiding a global rise in temperature of more than 2 degrees Celsius — a critical increase that is predicted to tip the planet into a cascade of catastrophic climate scenarios. </em></p> <p><em>But, as three thermal energy experts write in a letter published today in Nature Energy, “Even though this critical need exists, there is a significant disconnect between current research in thermal sciences and what is needed for deep decarbonization.”</em></p> <p><em>In an effort to motivate the scientific community to work on climate-critical thermal issues, the authors have laid out five thermal energy “grand challenges,” or broad areas where significant innovations need to be made in order to stem the rise of global warming. MIT News spoke with Asegun Henry, the lead author and the Robert N. Noyce Career Development Associate Professor in the Department of Mechanical Engineering, about this grand vision.</em></p> <p><strong>Q: </strong>Before we get into the specifics of the five challenges you lay out, can you say a little about how this paper came about, and why you see it as a call to action?</p> <p><strong>A:</strong> This paper was born out of this really interesting meeting, where my two co-authors and I were asked to meet with Bill Gates and teach him about thermal energy. We did a several-hour session with him in October of 2018, and when we were leaving, at the airport, we all agreed that the message we shared with Bill needs to be spread much more broadly.</p> <p>This particular paper is about thermal science and engineering specifically, but it’s an interdisciplinary field with lots of intersections. 
The way we frame it, this paper is about five grand challenges that, if solved, would literally alter the course of humanity. It’s a big claim — but we back it up.</p> <p>And we really need this to be declared as a mission, similar to the declaration that we were going to put a man on the moon, where you saw this concerted effort among the scientific community to achieve that mission. Our mission here is to save humanity from extinction due to climate change. The mission is clear. And this is a subset of five problems that will get us the majority of the way there, if we can solve them. Time is running out, and we need all hands on deck.&nbsp;</p> <p><strong>Q: </strong>What are the five thermal energy challenges you outline in your paper?</p> <p><strong>A: </strong>The first challenge is developing thermal storage systems for the power grid, electric vehicles, and buildings. Take the power grid: There is an international race going on to develop a grid storage system to store excess electricity from renewables so you can use it at a later time. This would allow renewable energy to penetrate the grid. If we can get to a place of fully decarbonizing the grid, that alone reduces carbon dioxide emissions from electricity production by 25 percent. And the beauty of that is, once you decarbonize the grid you open up decarbonizing the transportation sector with electric vehicles. Then you’re talking about a 40 percent reduction of global carbon emissions.</p> <p>The second challenge is decarbonizing industrial processes, which contribute 15 percent of global carbon dioxide emissions. The big actors here are cement, steel, aluminum, and hydrogen. Some of these industrial processes intrinsically involve the emission of carbon dioxide, because the reaction itself has to release carbon dioxide for it to work, in the current form. The question is, is there another way? Either we think of another way to make cement, or come up with something different. It’s an extremely difficult challenge, but there are good ideas out there, and we need way more people thinking about this.</p> <p>The third challenge is solving the cooling problem. Air conditioners and refrigerators have chemicals in them that are very harmful to the environment, 2,000 times more harmful than carbon dioxide on a molar basis. If the seal breaks and that refrigerant gets out, that little bit of leakage will cause global warming to shift significantly. When you account for India and other developing nations that are now getting access to electricity infrastructures to run AC systems, the leakage of these refrigerants will become responsible for 15 to 20 percent of global warming by 2050.</p> <p>The fourth challenge is long-distance transmission of heat. We transmit electricity because it can be transmitted with low loss, and it’s cheap. The question is, can we transmit heat like we transmit electricity? There is an overabundance of waste heat available at power plants, and the problem is, where the power plants are and where people live are two different places, and we don’t have a connector to deliver heat from these power plants, which is literally wasted. You could satisfy the entire residential heating load of the world with a fraction of that waste heat. What we don’t have is the wire to connect them. And the question is, can someone create one?</p> <p>The last challenge is variable conductance building envelopes. 
There are some demonstrations that show it is physically possible to create a thermal material, or a device that will change its conductance, so that when it’s hot, it can block heat from getting through a wall, but when you want it to, you could change its conductance to let the heat in or out. We’re far away from having a functioning system, but the foundation is there.</p> <p><strong>Q: </strong>You say that these five challenges represent a new mission for the scientific community, similar to the mission to land a human on the moon, which came with a clear deadline. What sort of timetable are we talking about here, in terms of needing to solve these five thermal problems to mitigate climate change?</p> <p><strong>A:</strong> In short, we have about 20 to 30 years of business as usual, before we end up on an inescapable path to an average global temperature rise of over 2 degrees Celsius. This may seem like a long time, but it’s not when you consider that it took natural gas 70 years to become 20 percent of our energy mix. So imagine that now we have to not just switch fuels, but do a complete overhaul of the entire energy infrastructure in less than one third the time. We need dramatic change, not yesterday, but years ago. So every day I fear we will do too little too late, and we as a species may not survive Mother Earth’s clapback.</p> MIT’s Asegun Henry on tackling five “grand thermal challenges” to stem the global warming tide: “Our mission here is to save humanity from extinction due to climate change.” Portrait photo courtesy of MIT MechE. https://news.mit.edu/2020/mit-energy-conference-goes-virtual-0807 Annual student-run energy conference pivots to successful online event with short notice in response to the coronavirus. Fri, 07 Aug 2020 16:55:00 -0400 https://news.mit.edu/2020/mit-energy-conference-goes-virtual-0807 Turner Jackson | MIT Energy Initiative <p>For the past 14 years, the <a href=”https://www.mitenergyconference.org/”>MIT Energy Conference </a>— a two-day event organized by energy students — has united students, faculty, researchers, and industry representatives from around the world to discuss cutting-edge developments in energy.</p> <p>Under the supervision of Thomas “Trey” Wilder, an MBA candidate at the MIT Sloan School of Management, and a large team of student event organizers, the final pieces for the 2020 conference were falling into place by early March — and then the Covid-19 pandemic hit the United States. As the Institute canceled in-person events to reduce the spread of the virus, much of the planning that had gone into hosting the conference in its initial format was upended.</p> <p>The Energy Conference team had less than a month to transition the entire event — scheduled for early April — online.</p> <p>During the conference’s opening remarks, Wilder recounted the month leading up to the event. “Coincidently, the same day that we received the official notice that all campus events were canceled, we had a general body Energy Club meeting,” says Wilder. “All the leaders looked at each other in disbelief — seeing a lot of the work that we had put in for almost a year now, seemingly go down the drain. We decided that night to retain whatever value we could find from this event.”</p> <p>The team immediately started contacting vendors and canceling orders, issuing refunds to guests, and informing panelists and speakers about the conference’s new format.</p> <p>“One of the biggest issues was getting buy-in from the speakers. 
Everyone was new to this virtual world back at the end of March. Our speakers didn’t know what this was going to look like, and many backed out,” says Wilder. The team worked hard to find new speakers, with one even being brought on 12 hours before the start of the event.</p> <p>Another challenge posed by taking the conference virtual was learning the ins and outs of running a Zoom webinar in a remarkably short time frame. “With the webinar, there are so many functions that the host controls that really affect the outcome of the event. Similarly, the speakers didn’t quite know how to operate it, either.”</p> <p>In spite of the multitude of challenges posed by switching to an online format on a tight deadline, this year’s coordinating team managed to pull off an incredibly informative and timely conference that reached a much larger audience than those in years past. This was the first year the conference was offered for free online, which allowed for over 3,500 people globally to tune in — a marked increase from the 500 attendees planned for the original, in-person event.</p> <p>Over the course of two days, panelists and speakers discussed a wide range of energy topics, including electric vehicles, energy policy, and the future of utilities. The three keynote speakers were Daniel M. Kammen, a professor of energy and the chair of the Goldman School of Public Policy at the University of California at Berkeley; Rachel Kyte, the dean of the Tufts Fletcher School of Law and Diplomacy; and <a href=”http://energy.mit.edu/profile/john-deutch/”>John Deutch</a>, the Institute Professor of Chemistry at MIT.</p> <p>Many speakers modified their presentations to address Covid-19 and how it relates to energy and the environment. For example, Kammen adjusted his address to cover what those who are working to address the climate emergency can learn from the Covid-19 pandemic. He emphasized the importance of individual actions for both the climate crisis and Covid-19; how global supply chains are vulnerable in a crowded, denuded planet; and how there is no substitute for thorough research and education when tackling these issues.</p> <p>Wilder credits the team of dedicated, hardworking energy students as the most important contributors to the conference’s success. A couple of notable examples include Joe Connelly, an MBA candidate, and Leah Ellis, a materials science and engineering postdoc, who together managed the Zoom operations during the conference. They ensured that the panels and presentations flowed seamlessly.</p> <p>Anna Sheppard, another MBA candidate, live-tweeted throughout the conference, managed the YouTube stream, and responded to emails during the event, with assistance from Michael Cheng, a graduate student in the Technology and Policy Program.</p> <p>Wilder says MBA candidate Pervez Agwan “was the Swiss Army knife of the group”; he worked on everything from marketing to tickets to operations — and, because he had a final exam on the first day of the conference, Agwan even pulled an all-nighter to ensure that the event and team were in good shape.</p> <p>“What I loved most about this team was that they were extremely humble and happy to do the dirty work,” Wilder says. “Everyone was content to put their head down and grind to make this event great. They did not desire praise or accolades, and are therefore worthy of both.”</p> The 2020 MIT Energy Conference organizers. 
Thomas “Trey” Wilder (bottom row, fourth from left), an MBA candidate at the MIT Sloan School of Management, spearheaded the organization of this year’s conference, which had less than a month to transition to a virtual event. Image: Trey Wilder https://news.mit.edu/2020/shrinking-deep-learning-carbon-footprint-0807 Through innovation in software and hardware, researchers move to reduce the financial and environmental costs of modern artificial intelligence. Fri, 07 Aug 2020 17:00:00 -0400 https://news.mit.edu/2020/shrinking-deep-learning-carbon-footprint-0807 Kim Martineau | MIT Quest for Intelligence <p>In June, OpenAI unveiled the largest language model in the world, a text-generating tool called GPT-3 that can&nbsp;<a href=”https://www.gwern.net/GPT-3″>write creative fiction</a>, translate&nbsp;<a href=”https://twitter.com/michaeltefula/status/1285505897108832257″>legalese into plain English</a>, and&nbsp;<a href=”https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html”>answer obscure trivia</a>&nbsp;questions. It’s the latest feat of intelligence achieved by deep learning, a machine learning method patterned after the way neurons in the brain process and store information.</p> <p>But it came at a hefty price: at least $4.6 million and&nbsp;<a href=”https://lambdalabs.com/blog/demystifying-gpt-3/”>355 years in computing time</a>, assuming the model&nbsp;was trained on a standard neural network chip, or GPU.&nbsp;The model’s colossal size — 1,000 times larger than&nbsp;<a href=”https://arxiv.org/pdf/1810.04805.pdf”>a typical</a>&nbsp;language model — is the main factor in&nbsp;its high cost.</p> <p>“You have to throw a lot more computation at something to get a little improvement in performance,” says&nbsp;<a href=”http://ide.mit.edu/about-us/people/neil-thompson”>Neil Thompson</a>, an MIT researcher who has tracked deep learning’s unquenchable thirst for computing. “It’s unsustainable. We have to find more efficient ways to scale deep learning or develop other technologies.”</p> <p>Some of the excitement over AI’s recent progress has shifted to alarm. In a&nbsp;<a href=”https://arxiv.org/abs/1906.02243″>study last year</a>, researchers at the University of Massachusetts at Amherst estimated that training&nbsp;a large deep-learning model&nbsp;produces 626,000 pounds of planet-warming carbon dioxide, equal to the lifetime emissions of five cars. As models grow bigger, their demand for computing is outpacing improvements in hardware efficiency.&nbsp;Chips specialized for neural-network processing, like GPUs (graphics processing units) and TPUs (tensor processing units), have offset the demand for more computing, but not by enough.&nbsp;</p> <p>“We need to rethink the entire stack — from software to hardware,” says&nbsp;<a href=”http://olivalab.mit.edu/audeoliva.html”>Aude Oliva</a>, MIT director of the MIT-IBM Watson AI Lab and co-director of the MIT Quest for Intelligence.&nbsp;“Deep learning has made the recent AI revolution possible, but its growing cost in energy and carbon emissions is untenable.”</p> <p>Computational limits have dogged neural networks from their earliest incarnation —&nbsp;<a href=”https://news.cornell.edu/stories/2019/09/professors-perceptron-paved-way-ai-60-years-too-soon”>the perceptron</a>&nbsp;— in the 1950s.&nbsp;As computing power exploded, and the internet unleashed a tsunami of data, they evolved into powerful engines for pattern recognition and prediction. 
But each new milestone brought an explosion in cost, as data-hungry models demanded increased computation. GPT-3, for example, trained on half a trillion words and ballooned to 175 billion parameters&nbsp;— the mathematical operations, or weights, that tie the model together —&nbsp;making it 100 times bigger than its predecessor, itself just a year old.</p> <p>In&nbsp;<a href=”https://arxiv.org/pdf/2007.05558.pdf”>work posted</a>&nbsp;on the pre-print server arXiv,&nbsp;Thompson and his colleagues show that the ability of deep learning models to surpass key benchmarks tracks their nearly exponential rise in computing power use. (Like others seeking to track AI’s carbon footprint, the team had to guess at many models’ energy consumption due to a lack of reporting requirements.) At this rate, the researchers argue, deep nets will survive only if they, and the hardware they run on, become radically more efficient.</p> <p><strong>Toward leaner, greener algorithms</strong></p> <p>The human perceptual system is extremely efficient at using data. Researchers have borrowed this idea for recognizing actions in video and in real life to make models more compact.&nbsp;In a paper at the&nbsp;<a href=”https://eccv2020.eu/”>European Conference on Computer Vision</a> (ECCV) in August, researchers at the&nbsp;<a href=”https://mitibmwatsonailab.mit.edu/”>MIT-IBM Watson AI Lab</a>&nbsp;describe a method for unpacking a scene from a few glances, as humans do, by cherry-picking the most relevant data.</p> <p>Take a video clip of someone making a sandwich. Under the method outlined in the paper, a policy network strategically picks frames of the knife slicing through roast beef, and meat being stacked on a slice of bread, to represent at high resolution. Less-relevant frames are skipped over or represented at lower resolution. A second model then uses the abbreviated CliffsNotes version of the movie to label it “making a sandwich.” The approach leads to faster video classification at half the computational cost of the next-best model, the researchers say.</p> <p>“Humans don’t pay attention to every last detail — why should our models?” says the study’s senior author,&nbsp;<a href=”http://rogerioferis.com/”>Rogerio Feris</a>, research manager at the MIT-IBM Watson AI Lab. “We can use machine learning to adaptively select the right data, at the right level of detail, to make deep learning models more efficient.”</p> <p>In a complementary approach, researchers are using deep learning itself to design more economical models through an automated process known as neural architecture search.&nbsp;<a href=”https://songhan.mit.edu/”>Song Han</a>, an assistant professor at MIT, has used automated search to design models with fewer weights, for language understanding and scene recognition, where picking out looming obstacles quickly is acutely important in driving applications.&nbsp;</p> <p>In&nbsp;<a href=”https://hanlab.mit.edu/projects/spvnas/papers/spvnas_eccv.pdf”>a paper at ECCV</a>, Han and his colleagues propose a model architecture for three-dimensional scene&nbsp;recognition that can spot safety-critical details like road signs, pedestrians, and cyclists with relatively less computation. 
They used&nbsp;an evolutionary-search algorithm to evaluate 1,000 architectures before settling on a model they say is three times faster and uses eight times less computation than the next-best method.&nbsp;</p> <p>In&nbsp;<a href=”https://arxiv.org/pdf/2005.14187.pdf”>another recent paper</a>, they use evolutionary search within an augmented design space to find the most efficient architectures for machine translation on a specific device, be it a GPU, smartphone, or tiny&nbsp;Raspberry Pi.&nbsp;Separating the search and training process leads to huge reductions in computation, they say.</p> <p>In a third approach, researchers are probing the essence of deep nets to see if it might be possible to&nbsp;train a small part of even hyper-efficient networks like those above.&nbsp;Under their proposed <a href=”https://arxiv.org/abs/1803.03635″>lottery ticket hypothesis</a>, PhD student&nbsp;<a href=”http://www.jfrankle.com/”>Jonathan Frankle</a>&nbsp;and MIT Professor&nbsp;<a href=”https://people.csail.mit.edu/mcarbin/”>Michael Carbin</a>&nbsp;proposed that within each model lies a tiny subnetwork that could have been trained in isolation with as few as one-tenth as many weights — what they call a “winning ticket.”&nbsp;</p> <p>They showed that an algorithm could retroactively&nbsp;find these winning subnetworks in&nbsp;small image-classification models. Now,&nbsp;<a href=”https://arxiv.org/abs/1912.05671″>in a paper</a>&nbsp;at the International Conference on Machine Learning (ICML), they show that the algorithm finds winning tickets in large models, too; the models just need to be rewound to an early, critical point in training when the order of the training data no longer&nbsp;influences the training outcome.&nbsp;</p> <p>In less than two years, the lottery ticket idea has been cited&nbsp;<a href=”https://scholar.google.com/citations?user=MlLJapIAAAAJ&amp;hl=en”>more than 400 times</a>, including by Facebook researcher Ari Morcos, who has&nbsp;<a href=”https://ai.facebook.com/blog/understanding-the-generalization-of-lottery-tickets-in-neural-networks/”>shown</a>&nbsp;that winning tickets can be transferred from one vision task to another, and that winning tickets exist in language and reinforcement learning models, too.&nbsp;</p> <p>“The standard explanation for why we need such large networks is that overparameterization aids the learning process,” says Morcos. “The lottery ticket hypothesis disproves that — it’s all about finding an appropriate starting point. The big downside, of course, is that, currently, finding these ‘winning’ starting points requires training the full overparameterized network anyway.”</p> <p>Frankle says he’s hopeful that an efficient way to find winning tickets will be found. In the meantime, recycling those winning tickets, as Morcos suggests, could lead to big savings.</p> <p><strong>Hardware designed for efficient deep net algorithms</strong></p> <p>As deep nets push classical computers to the limit, researchers are pursuing alternatives, from optical computers that transmit and store data with photons instead of electrons, to quantum computers, which have the potential to increase computing power exponentially by representing data in multiple states at once.</p> <p>Until a new paradigm emerges, researchers have focused on adapting the modern chip to the demands of deep learning. 
The trend began with&nbsp;the discovery that video-game graphical chips, or GPUs, could turbocharge deep-net training with their ability to perform massively parallelized matrix computations. GPUs are now one of the workhorses of modern AI, and have spawned new ideas for boosting deep net efficiency through specialized hardware.&nbsp;</p> <p>Much of this work hinges on finding ways to&nbsp;store and reuse data locally, across the chip’s processing cores,&nbsp;rather than waste time and energy shuttling data to and from&nbsp;a designated memory site. Processing data locally not only speeds up model training but improves inference, allowing deep learning applications to run more smoothly on smartphones and other mobile devices.</p> <p><a href=”https://www.rle.mit.edu/eems/”>Vivienne Sze</a>, a professor at MIT, has literally written&nbsp;<a href=”http://www.morganclaypoolpublishers.com/catalog_Orig/product_info.php?products_id=1530″>the book</a>&nbsp;on efficient deep nets. In collaboration with book co-author Joel Emer, an MIT professor and researcher at NVIDIA, Sze has designed a chip that’s flexible enough to process the widely-varying shapes of both large and small deep learning models. Called&nbsp;<a href=”https://ieeexplore.ieee.org/document/8686088″>Eyeriss 2</a>, the chip uses 10 times less energy than a mobile GPU.</p> <p>Its versatility lies in its on-chip network, called a hierarchical mesh, that adaptively reuses data and adjusts to the bandwidth requirements of different deep learning models. After reading from memory, it reuses the data across as many processing elements as possible to minimize data transportation costs and maintain high throughput.&nbsp;</p> <p>“The goal is to translate small and sparse networks into energy savings and fast inference,” says Sze. “But the hardware should be flexible enough to also efficiently support large and dense deep neural networks.”</p> <p>Other hardware innovators are focused on reproducing the brain’s energy efficiency. Former Go world champion Lee Sedol may have lost his title to a computer, but his performance&nbsp;<a href=”https://jacquesmattheij.com/another-way-of-looking-at-lee-sedol-vs-alphago/”>was fueled</a>&nbsp;by a mere 20 watts of power. AlphaGo, by contrast, burned an estimated megawatt of energy, or 500,000 times more.</p> <p>Inspired by the brain’s frugality, researchers are experimenting with replacing the binary, on-off switch of classical transistors with analog devices that mimic the way that synapses in the brain grow stronger and weaker during learning and forgetting.</p> <p>An electrochemical device, developed at MIT and recently&nbsp;<a href=”https://www.nature.com/articles/s41467-020-16866-6″>published in <em>Nature Communications</em></a>, is modeled after the way resistance between two neurons grows or subsides as calcium, magnesium or potassium ions flow across the synaptic membrane dividing them.&nbsp;The device uses the flow of protons — the smallest and fastest ion in solid state — into and out of a crystalline lattice of tungsten trioxide to tune its resistance along a continuum, in an analog fashion.</p> <p>“Even though the device is not yet optimized, it gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain,” says&nbsp;the study’s senior author, <a href=”https://web.mit.edu/nse/people/faculty/yildiz.html”>Bilge Yildiz</a>, a professor at MIT.</p> <p>Energy-efficient algorithms and hardware can shrink AI’s environmental impact. 
But there are other reasons to innovate, says Sze, listing them off: Efficiency will allow computing to move from data centers to edge devices like smartphones, making AI accessible to more people around the world; shifting computation from the cloud to personal devices reduces the flow, and potential leakage, of sensitive data; and processing data on the edge eliminates transmission costs, leading to faster inference with a shorter reaction time, which is key for interactive driving and augmented/virtual reality applications.</p> <p>“For all of these reasons, we need to embrace efficient AI,” she says.</p> Deep learning has driven much of the recent progress in artificial intelligence, but as demand for computation and energy to train ever-larger models increases, many are raising concerns about the financial and environmental costs. To address the problem, researchers at MIT and the MIT-IBM Watson AI Lab are experimenting with ways to make software and hardware more energy efficient, and in some cases, more like the human brain. Image: Niki Hinkle/MIT Spectrum
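To make the lottery-ticket procedure described in the article above concrete, here is a minimal sketch of iterative magnitude pruning with weight rewinding. The network, the SGD settings, the pruning schedule, and the helper names are illustrative assumptions, not the authors' published code:

```python
# Minimal sketch of "lottery ticket" search: train, prune the
# smallest-magnitude weights, rewind the survivors, repeat.
# All hyperparameters here are illustrative placeholders.
import copy
import torch
import torch.nn as nn

def train(model, data_loader, epochs, masks=None):
    """Train the model; if masks are given, keep pruned weights at zero."""
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()  # assumes (input, label) batches
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            if masks is not None:
                with torch.no_grad():  # re-apply sparsity after each step
                    for name, p in model.named_parameters():
                        p.mul_(masks[name])

def find_winning_ticket(model, data_loader, keep=0.1, rounds=5, epochs=3):
    """Prune down to `keep` fraction of weights over several train/prune
    cycles, rewinding surviving weights to their saved values each round."""
    rewind = copy.deepcopy(model.state_dict())  # the rewind point
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    frac_per_round = 1.0 - keep ** (1.0 / rounds)  # fraction pruned per round
    for _ in range(rounds):
        train(model, data_loader, epochs, masks)
        with torch.no_grad():
            for name, p in model.named_parameters():
                alive = p[masks[name].bool()].abs()
                cutoff = torch.quantile(alive, frac_per_round)
                masks[name].mul_((p.abs() > cutoff).float())  # prune smallest
            model.load_state_dict(rewind)        # rewind the survivors...
            for name, p in model.named_parameters():
                p.mul_(masks[name])              # ...and zero the pruned weights
    return masks
```

A real experiment would then retrain the masked network from the rewind point and check whether it matches the dense network's accuracy; that comparison is the test for a winning ticket.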
As models grow bigger, their demand for computing is outpacing improvements in hardware efficiency.&nbsp;Chips specialized for neural-network processing, like GPUs (graphics processing units) and TPUs (tensor processing units), have offset the demand for more computing, but not by enough.&nbsp;</p> <p>“We need to rethink the entire stack — from software to hardware,” says&nbsp;<a href=”http://olivalab.mit.edu/audeoliva.html”>Aude Oliva</a>, MIT director of the MIT-IBM Watson AI Lab and co-director of the MIT Quest for Intelligence.&nbsp;“Deep learning has made the recent AI revolution possible, but its growing cost in energy and carbon emissions is untenable.”</p> <p>Computational limits have dogged neural networks from their earliest incarnation —&nbsp;<a href=”https://news.cornell.edu/stories/2019/09/professors-perceptron-paved-way-ai-60-years-too-soon”>the perceptron</a>&nbsp;— in the 1950s.&nbsp;As computing power exploded, and the internet unleashed a tsunami of data, they evolved into powerful engines for pattern recognition and prediction. But each new milestone brought an explosion in cost, as data-hungry models demanded increased computation. GPT-3, for example, trained on half a trillion words and ballooned to 175 billion parameters&nbsp;— the mathematical operations, or weights, that tie the model together —&nbsp;making it 100 times bigger than its predecessor, itself just a year old.</p> <p>In&nbsp;<a href=”https://arxiv.org/pdf/2007.05558.pdf”>work posted</a>&nbsp;on the pre-print server arXiv,&nbsp;Thompson and his colleagues show that the ability of deep learning models to surpass key benchmarks tracks their nearly exponential rise in computing power use. (Like others seeking to track AI’s carbon footprint, the team had to guess at many models’ energy consumption due to a lack of reporting requirements). At this rate, the researchers argue, deep nets will survive only if they, and the hardware they run on, become radically more efficient.</p> <p><strong>Toward leaner, greener algorithms</strong></p> <p>The human perceptual system is extremely efficient at using data. Researchers have borrowed this idea for recognizing actions in video and in real life to make models more compact.&nbsp;In a paper at the&nbsp;<a href=”https://eccv2020.eu/”>European Conference on Computer Vision</a> (ECCV) in August, researchers at the&nbsp;<a href=”https://mitibmwatsonailab.mit.edu/”>MIT-IBM Watson AI Lab</a>&nbsp;describe a method for unpacking a scene from a few glances, as humans do, by cherry-picking the most relevant data.</p> <p>Take a video clip of someone making a sandwich. Under the method outlined in the paper, a policy network strategically picks frames of the knife slicing through roast beef, and meat being stacked on a slice of bread, to represent at high resolution. Less-relevant frames are skipped over or represented at lower resolution. A second model then uses the abbreviated CliffsNotes version of the movie to label it “making a sandwich.” The approach leads to faster video classification at half the computational cost as the next-best model, the researchers say.</p> <p>“Humans don’t pay attention to every last detail — why should our models?” says the study’s senior author,&nbsp;<a href=”http://rogerioferis.com/”>Rogerio Feris</a>, research manager at the MIT-IBM Watson AI Lab. 
“We can use machine learning to adaptively select the right data, at the right level of detail, to make deep learning models more efficient,” Feris adds.</p> <p>In a complementary approach, researchers are using deep learning itself to design more economical models through an automated process known as neural architecture search. <a href=”https://songhan.mit.edu/”>Song Han</a>, an assistant professor at MIT, has used automated search to design models with fewer weights for language understanding and for scene recognition, where quickly picking out looming obstacles is acutely important in driving applications.</p> <p>In <a href=”https://hanlab.mit.edu/projects/spvnas/papers/spvnas_eccv.pdf”>a paper at ECCV</a>, Han and his colleagues propose a model architecture for three-dimensional scene recognition that can spot safety-critical details like road signs, pedestrians, and cyclists with relatively less computation. They used an evolutionary-search algorithm to evaluate 1,000 architectures before settling on a model they say is three times faster and uses eight times less computation than the next-best method.</p> <p>In <a href=”https://arxiv.org/pdf/2005.14187.pdf”>another recent paper</a>, they use evolutionary search within an augmented design space to find the most efficient architectures for machine translation on a specific device, be it a GPU, smartphone, or tiny Raspberry Pi. Separating the search and training process leads to huge reductions in computation, they say.</p> <p>In a third approach, researchers are probing the essence of deep nets to see if it might be possible to train a small part of even hyper-efficient networks like those above. In their proposed <a href=”https://arxiv.org/abs/1803.03635″>lottery ticket hypothesis</a>, PhD student <a href=”http://www.jfrankle.com/”>Jonathan Frankle</a> and MIT Professor <a href=”https://people.csail.mit.edu/mcarbin/”>Michael Carbin</a> argued that within each model lies a tiny subnetwork that could have been trained in isolation with as few as one-tenth as many weights — what they call a “winning ticket.”</p> <p>They showed that an algorithm could retroactively find these winning subnetworks in small image-classification models. Now, <a href=”https://arxiv.org/abs/1912.05671″>in a paper</a> at the International Conference on Machine Learning (ICML), they show that the algorithm finds winning tickets in large models, too; the models just need to be rewound to an early, critical point in training when the order of the training data no longer influences the training outcome.</p> <p>In less than two years, the lottery ticket idea has been cited <a href=”https://scholar.google.com/citations?user=MlLJapIAAAAJ&amp;hl=en”>more than 400 times</a>, including by Facebook researcher Ari Morcos, who has <a href=”https://ai.facebook.com/blog/understanding-the-generalization-of-lottery-tickets-in-neural-networks/”>shown</a> that winning tickets can be transferred from one vision task to another, and that winning tickets exist in language and reinforcement learning models, too.</p> <p>“The standard explanation for why we need such large networks is that overparameterization aids the learning process,” says Morcos. “The lottery ticket hypothesis disproves that — it’s all about finding an appropriate starting point.”
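<p>The core loop behind these results — train, prune the smallest weights, rewind the survivors, repeat — can be sketched in a few lines. The version below is a toy stand-in with a no-op in place of real gradient descent, not Frankle and Carbin’s actual procedure:</p> <pre><code>
import numpy as np

# Toy sketch of iterative magnitude pruning with weight rewinding.
# `train` randomly nudges surviving weights; a real implementation
# would run SGD on data.

def train(weights, mask, steps):
    rng = np.random.default_rng(0)
    return weights + 0.01 * steps ** 0.5 * rng.standard_normal(weights.shape) * mask

def find_winning_ticket(w_init, rounds=5, prune_frac=0.2,
                        rewind_steps=500, total_steps=10_000):
    mask = np.ones_like(w_init)
    w_rewind = train(w_init, mask, rewind_steps)  # early "rewind point"
    w = w_rewind.copy()
    for _ in range(rounds):
        w = train(w, mask, total_steps)           # train to completion
        # Prune the smallest-magnitude weights that are still alive...
        alive = np.abs(w[mask == 1])
        cutoff = np.quantile(alive, prune_frac)
        mask = mask * (np.abs(w) > cutoff)
        # ...then rewind the survivors to their early values, not to init.
        w = w_rewind * mask
    return mask, w

mask, ticket = find_winning_ticket(np.random.randn(10_000))
print(f"{mask.mean():.0%} of weights survive")  # about 33% after 5 rounds
</code></pre>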
“The big downside, of course,” Morcos adds, “is that, currently, finding these ‘winning’ starting points requires training the full overparameterized network anyway.”</p> <p>Frankle says he’s hopeful that a more efficient way to find winning tickets will emerge. In the meantime, recycling those winning tickets, as Morcos suggests, could lead to big savings.</p> <p><strong>Hardware designed for efficient deep net algorithms</strong></p> <p>As deep nets push classical computers to the limit, researchers are pursuing alternatives, from optical computers that transmit and store data with photons instead of electrons, to quantum computers, which have the potential to increase computing power exponentially by representing data in multiple states at once.</p> <p>Until a new paradigm emerges, researchers have focused on adapting the modern chip to the demands of deep learning. The trend began with the discovery that video-game graphics chips, or GPUs, could turbocharge deep-net training with their ability to perform massively parallelized matrix computations. GPUs are now one of the workhorses of modern AI, and have spawned new ideas for boosting deep net efficiency through specialized hardware.</p> <p>Much of this work hinges on finding ways to store and reuse data locally, across the chip’s processing cores, rather than waste time and energy shuttling data to and from a designated memory site. Processing data locally not only speeds up model training but improves inference, allowing deep learning applications to run more smoothly on smartphones and other mobile devices.</p> <p><a href=”https://www.rle.mit.edu/eems/”>Vivienne Sze</a>, a professor at MIT, has literally written <a href=”http://www.morganclaypoolpublishers.com/catalog_Orig/product_info.php?products_id=1530″>the book</a> on efficient deep nets. In collaboration with book co-author Joel Emer, an MIT professor and researcher at NVIDIA, Sze has designed a chip that’s flexible enough to process the widely varying shapes of both large and small deep learning models. Called <a href=”https://ieeexplore.ieee.org/document/8686088″>Eyeriss 2</a>, the chip uses 10 times less energy than a mobile GPU.</p> <p>Its versatility lies in its on-chip network, called a hierarchical mesh, which adaptively reuses data and adjusts to the bandwidth requirements of different deep learning models. After reading from memory, it reuses the data across as many processing elements as possible to minimize data transportation costs and maintain high throughput.</p> <p>“The goal is to translate small and sparse networks into energy savings and fast inference,” says Sze. “But the hardware should be flexible enough to also efficiently support large and dense deep neural networks.”</p> <p>Other hardware innovators are focused on reproducing the brain’s energy efficiency. Former Go world champion Lee Sedol may have lost a match to a computer, but his performance <a href=”https://jacquesmattheij.com/another-way-of-looking-at-lee-sedol-vs-alphago/”>was fueled</a> by a mere 20 watts of power.
AlphaGo, by contrast, drew an estimated megawatt of power, roughly 50,000 times more.</p> <p>Inspired by the brain’s frugality, researchers are experimenting with replacing the binary, on-off switch of classical transistors with analog devices that mimic the way that synapses in the brain grow stronger and weaker during learning and forgetting.</p> <p>An electrochemical device, developed at MIT and recently <a href=”https://www.nature.com/articles/s41467-020-16866-6″>published in <em>Nature Communications</em></a>, is modeled after the way resistance between two neurons grows or subsides as calcium, magnesium, or potassium ions flow across the synaptic membrane dividing them. The device uses the flow of protons — the smallest and fastest ion in the solid state — into and out of a crystalline lattice of tungsten trioxide to tune its resistance along a continuum, in an analog fashion.</p> <p>“Even though the device is not yet optimized, it gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain,” says the study’s senior author, <a href=”https://web.mit.edu/nse/people/faculty/yildiz.html”>Bilge Yildiz</a>, a professor at MIT.</p> <p>Energy-efficient algorithms and hardware can shrink AI’s environmental impact. But there are other reasons to innovate, says Sze, listing them off: Efficiency will allow computing to move from data centers to edge devices like smartphones, making AI accessible to more people around the world; shifting computation from the cloud to personal devices reduces the flow, and potential leakage, of sensitive data; and processing data on the edge eliminates transmission costs, leading to faster inference with a shorter reaction time, which is key for interactive driving and augmented/virtual reality applications.</p> <p>“For all of these reasons, we need to embrace efficient AI,” she says.</p> Deep learning has driven much of the recent progress in artificial intelligence, but as demand for computation and energy to train ever-larger models increases, many are raising concerns about the financial and environmental costs. To address the problem, researchers at MIT and the MIT-IBM Watson AI Lab are experimenting with ways to make software and hardware more energy efficient, and in some cases, more like the human brain. Image: Niki Hinkle/MIT Spectrum https://news.mit.edu/2020/jessica-varner-chemical-architecture-0806 PhD student Jessica Varner traces the way synthetic building materials have transformed our environment. Thu, 06 Aug 2020 00:00:00 -0400 https://news.mit.edu/2020/jessica-varner-chemical-architecture-0806 Sofia Tong | MIT News correspondent <p>Just months before starting her PhD, Jessica Varner and her partner bought a small house built in 1798. Located on tidal wetlands along Connecticut’s Patchogue River, the former residence of an ironworker had endured over two centuries of history and neglect.</p> <p>As Varner began to slowly restore the house — discovering its nail-less construction and thin horsehair plaster walls, learning plumbing skills, and burning oyster shells to make lime wash — she discovered a deep connection between her work inside and outside academia.</p> <p>For her dissertation in MIT’s History, Theory and Criticism of Architecture and Art program, Varner had been investigating how the chemical industry wooed the building and construction industry with the promise of “invisible,” “new,” and “durable” synthetic materials at the turn of the 20th century.
In the process, these companies helped transform modern architecture while also disregarding or actively obscuring the health and environmental risks posed by these materials. While researching the history of these dyes, additives, and foams, Varner was also considering the presence of similar synthetics in her own new home.</p> <p>Coming into closer contact with these types of materials as a builder herself gave Varner a new perspective on the widespread implications of her research. “I think with my hands … and both projects began to inform each other,” she says. “Making and writing at the same time, I’m amazed how much this house is a part of the work.”</p> <p>The reverse proved true as well. Next year Varner will launch the <a href=”https://www.smallerlarge.org/activism” target=”_blank”>Black House Project</a>, an interdisciplinary artist-in-residence space on the Connecticut property. Artists who participate will be asked to engage with a seasonal theme relating to the intersection of history, environment, and community. The inaugural theme will be “building from the ashes,” with a focus on burning and invasive species.</p> <p><strong>A personal chemical history</strong></p> <p>The chemical industry has a longer history in Varner’s life than she initially understood: She comes from a long line of farming families in Nebraska, a state with a complex relationship with the agricultural-chemical industry.</p> <p>“That was just our way of life and we never questioned it,” she says of the way farm life became entwined with the chemical necessities and economic hardships of American industrial agriculture. She recalls spraying herbicide, without a mask, on thistles on the farm after her family received government letters threatening daily fines if the plants were not removed. She also remembers how their farm, and much of the region, depended on seeds and other products from DeKalb.</p> <p>“Coming from a place that depends so much on the economy of an industry, there are nuances and deeper layers to the story” of modern agriculture, she says, noting that subsistence farming and industrial farming often go hand in hand.</p> <p>At MIT, Varner has continued to probe beneath the surface of how chemical products are promoted and adopted. For her thesis, with the help of a Fulbright scholarship, she began digging through the chemical companies’ corporate archives. Her research has revealed how these companies generated research strategies, advertising, and publicity to transform the materials of the “modern interior and exterior.”</p> <p>Underneath a veneer of technological innovation and promises of novelty, Varner argues, these companies carefully masked their supply chains, adjusted building codes, and created marketing teams known as “truth squads,” which monitored and reshaped conversations around these products and growing concerns about their environmental harms. The result, she writes in her dissertation, was “one of the most successful, and toxic, material transformations in modern history.”</p> <p><strong>Bridging activism and academia</strong></p> <p>Varner has a long-running interest in environmental activism, from the conservation and restoration efforts in her home state, to vegetarianism, to studying glaciers in Alaska, to her current conception of the Black House Project. “At every point I feel like my life has had environmental activism in it,” she says.</p> <p>Environmental concerns have always been an integral part of her studies as well.
After her undergraduate education at the University of Nebraska, Varner went on to study architecture and environmental design at Yale University, where she examined the debates between climate scientists and architects in the 1970s. Then she headed to Los Angeles as a practicing architect and professor.</p> <p>Working as a designer with Michael Maltzan Architecture while teaching seminars and studios at the University of Southern California and Woodbury University, she realized her students had bigger historical questions, such as the origin of sustainability catchphrases like “passive cooling,” “circular economy,” and “net-zero.” “There were deeper questions behind what environmentalism was, how you can enact it, how you know what the rules of sustainability are, and I realized I didn’t have answers,” Varner says. “It was taken for granted.”</p> <p>Those questions brought her to MIT, where she says the cross-cutting nature of her work benefited from the Institute’s strengths in chemistry, engineering, and the history of technology. “The questions I was asking were interdisciplinary questions, so it was helpful to have those people around to bounce ideas off of,” she says.</p> <p>This fall, Varner will return to MIT as a lecturer while also working with the Environmental Data and Governance Initiative. At EDGI, she is the assistant curator for the EPA Interviewing Working Group, an ongoing oral history project chronicling the inner workings of the EPA and the way the organization has been affected by the current administration.</p> <p>“I’m excited to get back in the classroom,” she says, and she looks forward to finding a new way to take her academic interests into a more activist and policy-oriented sphere at EDGI. “I definitely think that’s what MIT brought to me in my education, other ways to carry your knowledge and your expertise to engage at different levels. It’s what I want to keep, going forward as a graduate.”</p> MIT graduate student Jessica Varner has explored how the chemical industry wooed the building and construction industry with new synthetic materials at the turn of the 20th century. The result, she writes in her dissertation, was “one of the most successful, and toxic, material transformations in modern history.” Photo: Sarah Cal https://news.mit.edu/2020/sunlight-triggered-snowball-earths-ice-ages-0729 Findings also suggest exoplanets lying within habitable zones may be susceptible to ice ages. Wed, 29 Jul 2020 09:51:35 -0400 https://news.mit.edu/2020/sunlight-triggered-snowball-earths-ice-ages-0729 Jennifer Chu | MIT News Office <p>At least twice in Earth’s history, nearly the entire planet was encased in a sheet of snow and ice. These dramatic “Snowball Earth” events occurred in quick succession, somewhere around 700 million years ago, and evidence suggests that the consecutive global ice ages set the stage for the subsequent explosion of complex, multicellular life on Earth.</p><p>Scientists have considered multiple scenarios for what may have tipped the planet into each ice age.
While no single driving process has been identified, it’s assumed that whatever triggered the temporary freeze-overs must have done so in a way that pushed the planet past a critical threshold, such as reducing incoming sunlight or atmospheric carbon dioxide to levels low enough to set off a global expansion of ice.</p><p>But MIT scientists now say that Snowball Earths were likely the product of “rate-induced glaciations.” That is, they found the Earth can be tipped into a global ice age when the level of solar radiation it receives changes quickly over a geologically short period of time. The amount of solar radiation doesn’t have to drop to a particular threshold point; as long as the decrease in incoming sunlight occurs faster than a critical rate, a temporary glaciation, or Snowball Earth, will follow.</p><p>These findings, published today in the <em>Proceedings of the Royal Society A</em>, suggest that whatever triggered the Earth’s ice ages most likely involved processes that quickly reduced the amount of solar radiation coming to the surface, such as widespread volcanic eruptions or biologically induced cloud formation that could have significantly blocked out the sun’s rays.</p><p>The findings may also apply to the search for life on other planets. Researchers have been keen on finding exoplanets within the habitable zone — a distance from their star that would be within a temperature range that could support life. The new study suggests that these planets, like Earth, could also ice over temporarily if their climate changes abruptly. Even if they lie within a habitable zone, Earth-like planets may be more susceptible to global ice ages than previously thought.</p><p>“You could have a planet that stays well within the classical habitable zone, but if incoming sunlight changes too fast, you could get a Snowball Earth,” says lead author Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “What this highlights is the notion that there’s so much more nuance in the concept of habitability.”</p><p>Arnscheidt has co-authored the paper with Daniel Rothman, EAPS professor of geophysics and co-founder and co-director of the Lorenz Center.</p><p><strong>A runaway snowball</strong></p><p>Regardless of the particular processes that triggered past glaciations, scientists generally agree that Snowball Earths arose from a “runaway” effect involving an ice-albedo feedback: As incoming sunlight is reduced, ice expands from the poles to the equator. As more ice covers the globe, the planet becomes more reflective, or higher in albedo, which further cools the surface for more ice to expand. Eventually, if the ice reaches a certain extent, this becomes a runaway process, resulting in a global glaciation.</p><p><img alt=”” src=”/sites/default/files/images/inline/images/snow-ball-earth.gif” style=”width: 500px; height: 281px;” /></p> <p>Global ice ages on Earth are temporary in nature, due to the planet’s carbon cycle. When the planet is not covered in ice, levels of carbon dioxide in the atmosphere are somewhat controlled by the weathering of rocks and minerals.
When the planet is covered in ice, weathering is vastly reduced, so that carbon dioxide builds up in the atmosphere, creating a greenhouse effect that eventually thaws the planet out of its ice age.</p><p>Scientists generally agree that the formation of Snowball Earths has something to do with the balance between incoming sunlight, the ice-albedo feedback, and the global carbon cycle.</p><p>“There are lots of ideas for what caused these global glaciations, but they all really boil down to some implicit modification of solar radiation coming in,” Arnscheidt says. “But generally it’s been studied in the context of crossing a threshold.”</p><p>He and Rothman had previously studied other periods in Earth’s history where the speed, or rate, at which certain changes in climate occurred had a role in triggering events, such as past mass extinctions.</p><p>“In the course of this exercise, we realized there was an immediate way to make a serious point by applying such ideas of rate-induced tipping, to Snowball Earth and habitability,” Rothman says.</p><p><strong>“Be wary of speed”</strong></p><p>The researchers developed a simple mathematical model of the Earth’s climate system that includes equations to represent relations between incoming and outgoing solar radiation, the surface temperature of the Earth, the concentration of carbon dioxide in the atmosphere, and the effects of weathering in taking up and storing atmospheric carbon dioxide. The researchers were able to tune each of these parameters to observe which conditions generated a Snowball Earth.</p><p>Ultimately, they found that a planet was more likely to freeze over if incoming solar radiation decreased quickly, at a rate that was faster than a critical rate, rather than to a critical threshold, or particular level of sunlight. There is some uncertainty in exactly what that critical rate would be, as the model is a simplified representation of the Earth’s climate. Nevertheless, Arnscheidt estimates that the Earth would have to experience about a 2 percent drop in incoming sunlight over a period of about 10,000 years to tip into a global ice age.</p><p>“It’s reasonable to assume past glaciations were induced by geologically quick changes to solar radiation,” Arnscheidt says.</p><p>The particular mechanisms that may have quickly darkened the skies over tens of thousands of years are still up for debate. One possibility is that widespread volcanoes may have spewed aerosols into the atmosphere, blocking incoming sunlight around the world. Another is that primitive algae may have evolved mechanisms that facilitated the formation of light-reflecting clouds. The results from this new study suggest scientists may consider processes such as these, which quickly reduce incoming solar radiation, as more likely triggers for Earth’s ice ages.</p><p>“Even though humanity will not trigger a snowball glaciation on our current climate trajectory, the existence of such a ‘rate-induced tipping point’ at the global scale may still remain a cause for concern,” Arnscheidt points out. “For example, it teaches us that we should be wary of the speed at which we are modifying Earth’s climate, not just the magnitude of the change. There could be other such rate-induced tipping points that might be triggered by anthropogenic warming.
Identifying these and constraining their critical rates is a worthwhile goal for further research.”</p><p>This research was funded, in part, by the MIT Lorenz Center.</p> The trigger for “Snowball Earth” global ice ages may have been drops in incoming sunlight that happened quickly, in geological terms, according to an MIT study. Image: Wikimedia, Oleg Kuznetsov
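<p>The flavor of such a rate-induced tipping point can be captured in a few lines of Python. The sketch below uses the textbook saddle-node normal form for rate-induced tipping — not the authors’ climate model — in which a state x tracks a moving stable equilibrium under a ramped forcing lam; all parameter values are illustrative.</p> <pre><code>
# Minimal illustration of rate-induced tipping (standard normal form,
# not the Arnscheidt-Rothman climate model). The state x has a stable
# equilibrium at x = -lam - 1 that drifts as the forcing lam ramps up.

def ramp_experiment(rate, lam_max=6.0, dt=1e-4, t_max=40.0):
    """Ramp the forcing from 0 to lam_max at the given rate, then hold."""
    x = -1.0                              # start on the stable branch
    for step in range(int(t_max / dt)):
        lam = min(rate * step * dt, lam_max)
        x += ((x + lam) ** 2 - 1.0) * dt  # saddle-node normal form
        if x > 10.0:                      # lost the stable branch: tipped
            return "tips into runaway"
    return "tracks the moving equilibrium"

# The total change in forcing is identical; only the speed differs.
for rate in (0.5, 1.5):
    print(f"ramp rate {rate}: {ramp_experiment(rate)}")
</code></pre> <p>With these toy numbers, only the faster ramp loses the stable state; the same total change in forcing, applied slowly, is tracked safely, just as a slow enough dimming of the sun would leave the model Earth unglaciated.</p>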
https://news.mit.edu/2020/gift-tackling-poverty-climate-change-0729 The King Climate Action Initiative at J-PAL will develop large-scale climate-response programs for some of the world’s most vulnerable populations.
Wed, 29 Jul 2020 09:12:48 -0400 https://news.mit.edu/2020/gift-tackling-poverty-climate-change-0729 Peter Dizikes | MIT News Office <p>With a founding $25 million gift from King Philanthropies, MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) is launching a new initiative to solve problems at the nexus of climate change and global poverty.</p><p>The new program, the King Climate Action Initiative (K-CAI), was announced today by King Philanthropies and J-PAL, and will start immediately. K-CAI plans to rigorously study programs reducing the effects of climate change on vulnerable populations, and then work with policymakers to scale up the most successful interventions.</p><p>“To protect our well-being and improve the lives of people living in poverty, we must be better stewards of our climate and our planet,” says Esther Duflo, director of J-PAL and the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics at MIT. “Through K-CAI, we will work to build a movement for evidence-informed policy at the nexus of climate change and poverty alleviation similar to the movement J-PAL helped build in global development. The moment is perhaps unique: The only silver lining of this global pandemic is that it reminds us that nature is sometimes stronger than us. It is a moment to act decisively to change behavior to stave off a much larger catastrophe in the future.”</p><p>K-CAI constitutes an ambitious effort: The initiative intends to help improve the lives of at least 25 million people over the next decade. K-CAI will announce a call for proposals this summer and select its first funded projects by the end of 2020.</p><p>“We are short on time to take action on climate change,” says Robert King, co-founder of King Philanthropies. “K-CAI reflects our commitment to confront this global crisis by focusing on solutions that benefit people in extreme poverty. They are already the hardest hit by climate change, and if we fail to act, their circumstances will become even more dire.”</p><p>There are currently an estimated 736 million people globally living in extreme poverty, on as little as $1.90 per day. The World Bank estimates that climate change could push roughly another 100 million into extreme poverty by 2030.</p><p>As vast as its effects may be, climate change also presents a diverse set of problems to tackle. Among other things, climate change, as well as fossil-fuel pollution, is expected to reduce crop yields, raise food prices, and generate more malnutrition; increase the prevalence of respiratory illness, heat stress, and numerous other diseases; and increase extreme weather events, wiping out homes, livelihoods, and communities.</p><p>With this in mind, the initiative will focus on specific projects within four areas: climate change mitigation, to reduce carbon emissions; pollution reduction; adaptation to ongoing climate change; and shifting toward cleaner, reliable, and more affordable sources of energy. In each area, K-CAI will study smaller-scale programs, evaluate their impact, and work with partners to scale up the projects with the most effective solutions.</p><p>Projects backed by J-PAL have already had an impact in these areas. In one recent study, J-PAL-affiliated researchers found that changing the emissions audit system in Gujarat, India, reduced industrial-plant pollution by 28 percent; the state then implemented the reforms.
In another study in India, J-PAL-affiliated researchers found that farmers using a flood-resistant rice variety called Swarna-Sub1 increased their crop yields by 41 percent.</p><p>In Zambia, a study by researchers in the J-PAL network showed that lean-season loans for farmers increased agricultural output by 8 percent; in Uganda, J-PAL-affiliated researchers found that a payment system to landowners cut deforestation nearly in half and is a cost-effective way to lower carbon emissions.</p><p>Other J-PAL field experiments in progress include one providing cash payments that stop farmers in Punjab, India, from burning crops, a practice that generates half the air pollution in Delhi; another implementing an emissions-trading plan in India; and a new program to harvest rainwater more effectively in Niger. All told, J-PAL researchers have evaluated over 40 programs focused on climate, energy, and the environment.</p><p>By conducting these kinds of field experiments, and implementing some widely, K-CAI aims to apply the same approach J-PAL has directed toward multiple aspects of poverty alleviation, including food production, health care, education, and transparent governance.</p><p>A unique academic enterprise, J-PAL emphasizes randomized controlled trials to identify useful poverty-reduction programs, then works with governments and nongovernmental organizations to implement them. In all, programs evaluated by J-PAL-affiliated researchers and found to be effective have been scaled up to reach 400 million people worldwide since the lab’s founding in 2003.</p><p>“J-PAL has distinctive core competencies that equip it to achieve outsized impact over the long run,” says Kim Starkey, president and CEO of King Philanthropies. “Its researchers excel at conducting randomized evaluations to figure out what works, its leadership is tremendous, and J-PAL as an organization has a rare, demonstrated ability to partner with governments and other organizations to scale up proven interventions and programs.”</p><p>K-CAI aims to conduct an increasing number of field experiments over the initial five-year period and focus on implementing the highest-quality programs at scale over the subsequent five years. As Starkey observes, this approach may generate increasing interest from additional partners.</p><p>“There is an immense need for a larger body of evidence about what interventions work at this nexus of climate change and extreme poverty,” Starkey says. “The findings of the King Climate Action Initiative will inform policymakers and funders as they seek to prioritize opportunities with the highest impact.”</p><p>King Philanthropies was founded by Robert E. (Bob) King and Dorothy J. (Dottie) King in 2016. The organization has a goal of making “a meaningful difference in the lives of the world’s poorest people” by developing and supporting a variety of antipoverty initiatives.</p><p>J-PAL was co-founded by Duflo; Abhijit Banerjee, the Ford International Professor of Economics at MIT; and Sendhil Mullainathan, now a professor at the University of Chicago’s Booth School of Business. It has over 200 affiliated researchers at more than 60 universities across the globe. J-PAL is housed in the Department of Economics in MIT’s School of Humanities, Arts, and Social Sciences.</p><p>Last fall, Duflo and Banerjee, along with long-time collaborator Michael Kremer of Harvard University, were awarded the Nobel Prize in economic sciences.
The Nobel citation observed that their work has “dramatically improved our ability to fight poverty in practice” and provided a “new approach to obtaining reliable answers about the best ways to fight global poverty.”</p><p>K-CAI will be co-chaired by two professors, Michael Greenstone and Kelsey Jack, who have extensive research experience in environmental economics. Both are already affiliated researchers with J-PAL.</p><p>Greenstone is the Milton Friedman Distinguished Service Professor in Economics at the University of Chicago. He is also director of the Energy Policy Institute at the University of Chicago. Greenstone, who was a tenured faculty member in MIT’s Department of Economics from 2003 to 2014, has published high-profile work on energy access, the consequences of air pollution, and the effectiveness of policy measures, among other topics.</p><p>Jack is an associate professor in the Bren School of Environmental Science and Management at the University of California at Santa Barbara. She is an expert on environment-related programs in developing countries, with a focus on incentives that encourage the private-sector development of environmental goods. Jack was previously a faculty member at Tufts University, and a postdoc at MIT in 2010-11, working on J-PAL’s Agricultural Technology Adoption Initiative.</p> Over the next decade, the King Climate Action Initiative (K-CAI) intends to help improve the lives of at least 25 million people hard hit by poverty and climate change. Image: MIT News https://news.mit.edu/2020/letter-reif-grand-challenges-climate-change-0723 Thu, 23 Jul 2020 15:18:29 -0400 https://news.mit.edu/2020/letter-reif-grand-challenges-climate-change-0723 MIT News Office <p><em>The following letter was sent to the MIT community today by President L. Rafael Reif.</em></p> <p>To the members of the MIT community,</p> <p>I am delighted to share an important step in MIT’s ongoing efforts to take action against climate change.</p> <p>Thanks to the thoughtful leadership of Vice President for Research Maria Zuber, Associate Provost Richard Lester and a <a href=”https://web.mit.edu/vpr/www/cgc/Climate-Grand-Challenges-Faculty-Advisory-Committee.pdf”>committee of 26 faculty leaders</a> representing all five schools and the college, today we are committing to an ambitious new research effort called <a href=”http://climategrandchallenges.mit.edu/”>Climate Grand Challenges</a>.</p> <p>MIT’s <a href=”http://web.mit.edu/climateaction/ClimateChangeStatement-2015Oct21.pdf”>Plan for Action on Climate Change</a> stressed the need for breakthrough innovations and underscored MIT’s responsibility to lead. Since then, the escalating climate crisis and lagging global response have only intensified the need for action.</p> <p>With this letter, we invite all principal investigators (PIs) from across MIT to help us define a new agenda of transformative research. The threat of climate change demands a host of interlocking solutions; to shape a research program worthy of MIT, we seek bold faculty proposals that address the most difficult problems in the field, problems whose solutions would make the most decisive difference.</p> <p>The focus will be on those hard questions where progress depends on advancing and applying frontier knowledge in the physical, life and social sciences, or advancing and applying cutting-edge technologies, or both; solutions may require the wisdom of many disciplines. 
Equally important will be to advance the humanistic and scientific understanding of how best to inspire 9 billion humans to adopt the technologies and behaviors the crisis demands.</p> <p>We encourage interested PIs to submit a letter of interest. A group of MIT faculty and outside experts will choose the most compelling – the five or six ideas that offer the most effective levers for rapid, large-scale change. MIT will then focus intensely on securing the funds for the work to succeed. To meet this great rolling emergency for the species, we are seeking and expecting big ideas for sharpening our understanding, combatting climate change itself and adapting constructively to its impacts.</p> <p>You can <a href=”http://climategrandchallenges.mit.edu/”>learn much more about the overall concept as well as specific deadlines and requirements here</a>.</p> <p>This invitation is geared specifically for MIT PIs – but the climate problem deserves wholehearted attention from every one of us. Whatever your role, I encourage you to find ways to be part of the <a href=”https://climate.mit.edu/resources-mit”>broad range of climate events, courses and research and other work</a> already under way at MIT.&nbsp;</p> <p>For decades, MIT students, staff, postdocs, faculty and alumni have poured their energy, insight and ingenuity into countless aspects of the climate problem; in this new work, your efforts are our inspiration and our springboard.&nbsp;</p> <p>We will share next steps in the Climate Grand Challenges process later in the fall semester.</p> <p>Sincerely,</p> <p>L. Rafael Reif</p> https://news.mit.edu/2020/covid-19-solar-output-smog-0722 As the air cleared after lockdowns, solar installations in Delhi produced 8 percent more power, study shows. Wed, 22 Jul 2020 00:00:00 -0400 https://news.mit.edu/2020/covid-19-solar-output-smog-0722 David L. Chandler | MIT News Office <p>As the Covid-19 shutdowns and stay-at-home orders brought much of the world’s travel and commerce to a standstill, people around the world started noticing clearer skies as a result of lower levels of air pollution. Now, researchers have been able to demonstrate that those clearer skies had a measurable impact on the output from solar photovoltaic panels, leading to a more than 8 percent increase in the power output from installations in Delhi.</p> <p>While such an improved output was not unexpected, the researchers say this is the first study to demonstrate and quantify the impact of the reduced air pollution on solar output. The effect should apply to solar installations worldwide, but would normally be very difficult to measure against a background of natural variations in solar panel output caused by everything from clouds to dust on the panels. The extraordinary conditions triggered by the pandemic, with its sudden cessation of normal activities, combined with high-quality air-pollution data from one of the world’s smoggiest cities, afforded the opportunity to harness data from an unprecedented, unplanned natural experiment.</p> <p>The findings are reported today in the journal <em>Joule</em>, in a paper by MIT professor of mechanical engineering Tonio Buonassisi, research scientist Ian Marius Peters, and three others in Singapore and Germany.</p> <p>The study was an extension of previous research the team has been conducting in Delhi for several years. 
The impetus for the work came after an unusual weather pattern in 2013 swept a concentrated plume of smoke from forest fires in Indonesia across a vast swath of Indonesia, Malaysia, and Singapore, where Peters, who had just arrived in the region, found “it was so bad that you couldn’t see the buildings on the other side of the street.”</p> <p>Since he was already doing research on solar photovoltaics, Peters decided to investigate what effects the air pollution was having on solar panel output. The team had good long-term data on both solar panel output and solar insolation, gathered at the same time by monitoring stations set up adjacent to the solar installations. They saw that during the 18-day-long haze event, the performance of some types of solar panels decreased, while others stayed the same or increased slightly. That distinction proved useful in teasing apart the effects of pollution from other variables that could be at play, such as weather conditions.</p> <p>Peters later learned that a high-quality, years-long record of actual measurements of fine particulate air pollution (particles less than 2.5 micrometers in size) had been collected every hour, year after year, at the U.S. Embassy in Delhi. That provided the necessary baseline for determining the actual effects of pollution on solar panel output; the researchers compared the air pollution data from the embassy with meteorological data on cloudiness and the solar irradiation data from the sensors.</p> <p>They identified a roughly 10 percent overall reduction in output from the solar installations in Delhi because of pollution — enough to make a significant dent in the facilities’ financial projections.</p> <p>To see how the Covid-19 shutdowns had affected the situation, they were able to use the mathematical tools they had developed, along with the embassy’s ongoing data collection, to see the impact of reductions in travel and factory operations. They compared the data from before and after India went into mandatory lockdown on March 24, and also compared this with data from the previous three years.</p> <p>Pollution levels were down by about 50 percent after the shutdown, they found. As a result, the total output from the solar panels increased by 8.3 percent in late March, and by 5.9 percent in April, they calculated.</p> <p>“These deviations are much larger than the typical variations we have” within a year or from year to year, Peters says — three to four times greater. “So we can’t explain this with just fluctuations.” The amount of difference, he says, is roughly the difference between the expected performance of a solar panel in Houston versus one in Toronto.</p> <p>An 8 percent increase in output might not sound like much, Buonassisi says, but “the margins of profit are very small for these businesses.” If a solar company was expecting to get a 2 percent profit margin out of their expected 100 percent panel output, and suddenly they are getting 108 percent output, that means their margin has increased fivefold, from 2 percent to 10 percent, he points out.</p> <p>The findings provide real data on what can happen in the future as emissions are reduced globally, he says. “This is the first real quantitative evaluation where you almost have a switch that you can turn on and off for air pollution, and you can see the effect,” he says.
“You have an opportunity to baseline these models with and without air pollution.”</p> <p>By doing so, he says, “it gives a glimpse into a world with significantly less air pollution.” It also demonstrates that the very act of increasing the usage of solar electricity, and thus displacing fossil-fuel generation that produces air pollution, makes those panels more efficient all the time.</p> <p>Putting solar panels on one’s house, he says, “is helping not only yourself, not only putting money in your pocket, but it’s also helping everybody else out there who already has solar panels installed, as well as everyone else who will install them over the next 20 years.” In a way, a rising tide of solar panels raises all solar panels.</p> <p>Though the focus was on Delhi, because the effects there are so strong and easy to detect, this effect “is true anywhere where you have some kind of air pollution. If you reduce it, it will have beneficial consequences for solar panels,” Peters says.</p> <p>Even so, not every claim of such effects is necessarily real, he says, and the details do matter. For example, clearer skies were also noted across much of Europe as a result of the shutdowns, and some news reports described exceptional output levels from solar farms in Germany and in the U.K. But the researchers say that just turned out to be a coincidence.</p> <p>“The air pollution levels in Germany and Great Britain are generally so low that most PV installations are not significantly affected by them,” Peters says. After checking the data, what contributed most to those high levels of solar output this spring, he says, turned out to be just “extremely nice weather,” which produced record numbers of sunlight hours.</p> <p>The research team included C. Brabec and J. Hauch at the Helmholtz-Institute Erlangen-Nuremberg for Renewable Energies, in Germany, where Peters also now works, and A. Nobre at Cleantech Solar in Singapore. The work was supported by the Bavarian State Government.</p> Shutdowns in response to the Covid-19 pandemic have resulted in lowered air pollution levels around the world. Researchers at MIT and in Germany and Singapore have found that this resulted in a significant increase in the output from solar photovoltaic installations in Delhi, normally one of the world’s smoggiest cities. Image: Jose-Luis Olivares, MIT https://news.mit.edu/2020/building-more-sustainable-mit-at-home-0715 MIT’s Office of Sustainability puts lessons of resiliency into practice. Wed, 15 Jul 2020 13:55:01 -0400 https://news.mit.edu/2020/building-more-sustainable-mit-at-home-0715 Nicole Morell | MIT Office of Sustainability <p>Like most offices across MIT, the Office of Sustainability (MITOS) has in recent months worked to pivot projects while seeking to understand and participate in the emergence of a new normal as the result of the Covid-19 pandemic. Though the team now works off campus, the MITOS methodology — one grounded in collective engagement, commitment to innovative problem solving, and robust data collection — has continued.</p><p><strong>An expanded look at resiliency</strong></p> <p>When the MIT community transitioned off campus, many began to use the word “resilient” for good reason — it’s one way to describe a community of thousands that quickly learned how to study, research, work, and teach from afar in the face of a major disruption.
In the field of sustainability, resiliency frequently refers to how communities can not only continue to function but also thrive during and after flooding or extreme heat events caused by climate change.</p> <p>In recent months, the term has taken on expanded meaning. “The challenges associated with Covid-19 and its impact on MIT and the greater community has provided a moment to explore what a sustainable, resilient campus and community looks like in practice,” says Director of Sustainability Julie Newman.</p><p>The MIT campus climate resiliency framework, codified by MITOS in response to a changing climate, has long been organized around the interdependencies of four core systems: community (academic, research, and student life), buildings, utilities, and landscape systems. This same framework is now being applied in part to the MIT response to Covid-19. “The MIT campus climate resiliency framework has enabled us to understand the vulnerabilities and capacities within each core system that inhibit or enable fulfillment of MIT’s mission,” explains Brian Goldberg, MITOS assistant director. “The pandemic’s disruption of the community layer provides us with a remarkable test in progress of this adaptive capacity.”</p><p>The campus response to the pandemic has, in fact, informed future modeling and demonstrated how the community can advance its important work even when displaced. “MIT has been able to offer countless virtual resources to maintain a connected community,” Goldberg explains. “While a future major flood could physically displace segments of our community, we’ve now seen that the ability to quickly evacuate and regroup virtually demonstrates a remarkable adaptive capacity.”</p><p><strong>Taking the hive home</strong></p> <p>Also resilient are the flowering plants growing in the Hive Garden — the Institute’s <a href=”http://news.mit.edu/2019/mit-sustainability-garden-creates-buzz-1125″>student-supported pollinator</a> garden. The garden is maintained by MIT Grounds Services alongside students, but the closure of campus meant many would miss the first spring bloom. To make up for this, a group of UA Sustainability Committee (UA Sustain) students began to brainstorm ways to bring sustainable gardening to the MIT community if they couldn’t come to campus. Working with MITOS, students hatched the idea for the Hive@Home — a project that empowers students and staff to try their hands (and green thumbs) at growing a jalapeño or two, while building community.</p><p>“The Hive@Home is designed to link students and staff through gardening — continuing to strengthen the relationships built between MIT Grounds and the community since the Hive Garden started,” says Susy Jones, senior project manager who is leading the effort for MITOS. With funding from UA Sustain and MindHandHeart, the Hive@Home pilot launched in April with more than four dozen community members receiving vegetable seeds and growing supplies. Now the community is sharing their sprouts and lessons learned on Slack with guidance from MIT Grounds experts like Norm Magnusson and Mike Seaberg, who helped bring the campus garden to life, along with professor of ocean and mechanical engineering Alexandra Techet, who is also an experienced home gardener.</p><p><strong>Lessons learned from Covid-19 response&nbsp;</strong></p> <p>The impacts of Covid-19 continue to provide insights into community behavior and views.
Seeing an opportunity to better understand these views, the Sustainability Leadership Committee, in collaboration with the Office of Sustainability, the Environmental Solutions Initiative, Terrascope, and the MIT Energy Initiative, hosted a community sustainability forum where more than 100 participants — including staff, students, and faculty — shared ideas on how they thought the response to Covid-19 could inform sustainability efforts at MIT and beyond. Common themes of human health and well-being, climate action, food security, consumption and waste, sustainability education, and bold leadership emerged from the forum. “The event gave us a view into how MIT can be a sustainability leader in a post Covid-19 world, and how our community would like to see this accomplished,” says Newman.</p><p>Community members also shared a renewed focus on the impacts of consumption and single-use plastics, as well as the idea that remote work can decrease the carbon footprint of the Institute. The Sustainability Leadership Committee is now working to share these insights to drive action and launch new ideas with sustainability partners across campus.&nbsp;</p> <p>These actions are just the beginning, as plans for campus are updated and the MIT community learns and adapts to a new normal. “We are looking at these ideas as a starting place,” explains Newman. “As we look to a future return to campus, we know the sustainability challenges and opportunities faced will continue to shift thinking about our mobility choices, where we eat, what we buy, and more. We will continue to have these community conversations and work across campus to support a sustainable, safe MIT.”</p> MIT’s campus response to the pandemic has informed future modeling and demonstrated how the community can advance its important work even when displaced. Photo: Christopher Harting https://news.mit.edu/2020/decarbonize-and-diversify-0715 How energy-intensive economies can survive and thrive as the globe ramps up climate action. Wed, 15 Jul 2020 13:40:01 -0400 https://news.mit.edu/2020/decarbonize-and-diversify-0715 Mark Dwortzan | MIT Joint Program on the Science and Policy of Global Change <p>Today, Russia’s economy depends heavily upon its abundant fossil fuel resources. Russia is one of the world’s largest exporters of fossil fuels, and a number of its key exporting industries — including metals, chemicals, and fertilizers — draw on fossil resources. The nation also consumes fossil fuels at a relatively high rate; it’s the world’s fourth-largest emitter of carbon dioxide.
As the world shifts away from fossil fuel production and consumption and toward low-carbon development aligned with the near- and long-term goals of the Paris Agreement, how might countries like Russia reshape their energy-intensive economies to avoid financial peril and capitalize on this clean energy transition?</p> <p>In a new <a href=”https://www.tandfonline.com/doi/abs/10.1080/14693062.2020.1781047?journalCode=tcpo20″>study</a> in the journal <em>Climate Policy</em>, researchers at the MIT Joint Program on the Science and Policy of Global Change and Russia’s National Research University Higher School of Economics assess how the Russian economy will be affected as the main importers of Russian fossil fuels act to comply with the Paris Agreement.</p> <p>The researchers project that expected climate-related actions by importers of Russia’s fossil fuels will lower demand for these resources considerably, thereby reducing the country’s GDP growth rate by nearly 0.5 percent between 2035 and 2050. The study also finds that the Paris Agreement will heighten Russia’s risks of facing market barriers for its exports of energy-intensive goods, and of lagging behind in developing increasingly popular low-carbon energy technologies.</p> <p>Using the Joint Program’s Economic Projection and Policy Analysis model, a multi-region, multi-sector model of the world economy, the researchers evaluated the impact on Russian energy exports and GDP of scenarios representing global climate policy ambition ranging from non-implementation of national Paris pledges to collective action aligned with keeping global warming well below 2 degrees Celsius.</p> <p>The bottom line: Global climate policies will make it impossible for Russia to sustain its current path of fossil fuel export-based development.</p> <p>To maintain and enhance its economic well-being, the study’s co-authors recommend that Russia both decarbonize and diversify its economy in alignment with climate goals. In short, by taxing fossil fuels (e.g., through a production tax or carbon tax), the country could redistribute that revenue to the development of human capital to boost other economic sectors (primarily manufacturing, services, agriculture, and food production), thereby making up for energy-sector losses due to global climate policies. The study projects that, with diversification, GDP could be on the order of 1 to 4 percent higher than it would be without it.</p> <p>“Many energy-exporting countries have tried to diversify their economies, but with limited success,” says <a href=”https://globalchange.mit.edu/about-us/personnel/paltsev-sergey”>Sergey Paltsev</a>, deputy director of the MIT Joint Program, senior research scientist at the MIT Energy Initiative (MITEI) and director of the MIT Joint Program/MITEI Energy-at-Scale Center. “Our study quantifies the dynamics of efforts to achieve economic diversification in which reallocation of funds leads to higher labor productivity and economic growth — all while enabling more aggressive emissions reduction targets.”&nbsp;&nbsp;</p> <p>The study was supported by the Basic Research Program of the National Research University Higher School of Economics and the MIT Skoltech Seed Fund Program.</p> Human capital development in Russia through increased per-student expenditure could lead to long-term benefits in manufacturing, services, agriculture, food production, and other sectors. Seen here: Russian students from Tyumen State University.
Photo courtesy of the United Nations Development Program. https://news.mit.edu/2020/new-approach-to-carbon-capture-0709 Researchers design an effective treatment for both exhaust and ambient air. Thu, 09 Jul 2020 15:25:01 -0400 https://news.mit.edu/2020/new-approach-to-carbon-capture-0709 Nancy W. Stauffer | MIT Energy Initiative <p>An essential component of any&nbsp;climate change mitigation plan is cutting carbon dioxide (CO<sub>2</sub>) emissions from human activities. Some power plants now have CO<sub>2</sub>&nbsp;capture equipment that grabs CO<sub>2</sub>&nbsp;out of their exhaust. But those systems are each the size of a chemical plant, cost hundreds of millions of dollars, require a lot of energy to run, and work only on exhaust streams that contain high concentrations of CO<sub>2</sub>. In short, they’re not a solution for airplanes, home heating systems, or automobiles.</p> <p>To make matters worse, capturing CO<sub>2</sub>&nbsp;emissions from all anthropogenic sources may not solve the climate problem. “Even if all those emitters stopped tomorrow morning, we would still have to do something about the amount of CO<sub>2 </sub>in the air if we’re going to restore preindustrial atmospheric levels at a rate relevant to humanity,” says Sahag Voskian SM ’15, PhD ’19, co-founder and chief technology officer at Verdox, Inc. And developing a technology that can capture the CO<sub>2</sub>&nbsp;in the air is a particularly hard problem, in part because the CO<sub>2</sub>&nbsp;occurs in such low concentrations.</p> <p><strong>The CO<sub>2</sub></strong>&nbsp;<strong>capture challenge</strong></p> <p>A key problem with CO<sub>2</sub>&nbsp;capture is finding a “sorbent” that will pick up CO<sub>2</sub>&nbsp;in a stream of gas and then release it so the sorbent is clean and ready for reuse and the released CO<sub>2</sub>&nbsp;stream can be utilized or sent to a sequestration site for long-term storage. Research has mainly focused on sorbent materials present as small particles whose surfaces contain “active sites” that capture CO<sub>2</sub>&nbsp;— a process called adsorption. When the system temperature is lowered (or pressure increased), CO<sub>2</sub>&nbsp;adheres to the particle surfaces. When the temperature is raised (or pressure reduced), the CO<sub>2</sub>&nbsp;is released.
But achieving those temperature or pressure “swings” takes considerable energy, in part because it requires treating the whole mixture, not just the CO<sub>2</sub>-bearing sorbent.</p> <p>In 2015, Voskian, then a PhD candidate in chemical engineering, and&nbsp;<a href=”http://energy.mit.edu/profile/t-alan-hatton/”>T. Alan Hatton</a>, the Ralph Landau Professor of Chemical Engineering and co-director of the MIT Energy Initiative’s&nbsp;<a href=”https://energy.mit.edu/ccus/”>Low-Carbon Energy Center for Carbon Capture, Utilization, and Storage</a>, began to take a closer look at the temperature- and pressure-swing approach. “We wondered if we could get by with using only a renewable resource — like renewably sourced electricity — rather than heat or pressure,” says Hatton. Using electricity to elicit the chemical reactions needed for CO<sub>2</sub>&nbsp;capture and conversion had been studied for several decades, but Hatton and Voskian had a new idea about how to engineer a more efficient adsorption device.</p> <p>Their work focuses on a special class of molecules called quinones. When quinone molecules are forced to take on extra electrons — which means they’re negatively charged — they have a high chemical affinity for CO<sub>2</sub>&nbsp;molecules and snag any that pass. When the extra electrons are removed from the quinone molecules, the quinone’s chemical affinity for CO<sub>2</sub> instantly disappears, and the molecules release the captured CO<sub>2</sub>.&nbsp;</p> <p>Others have investigated the use of quinones and an electrolyte in a variety of electrochemical devices. In most cases, the devices involve two electrodes — a negative one where the dissolved quinone is activated for CO<sub>2</sub>&nbsp;capture, and a positive one where it’s deactivated for CO<sub>2</sub>&nbsp;release. But moving the solution from one electrode to the other requires complex flow and pumping systems that take up considerable space, limiting where the devices can be used.&nbsp;</p> <p>As an alternative, Hatton and Voskian decided to use the quinone as a solid electrode and — by applying what Hatton calls “a small change in voltage” — vary the electrical charge of the electrode itself to activate and deactivate the quinone. In such a setup, there would be no need to pump fluids around or to raise and lower the temperature or pressure, and the CO<sub>2</sub>&nbsp;would end up as an easy-to-separate attachment on the solid quinone electrode. They dubbed their concept “electro-swing adsorption.”</p> <p><strong>The electro-swing cell</strong></p> <p>To put their concept into practice, the researchers designed an electrochemical cell with the following layout. To maximize exposure, they put two quinone electrodes on the outside of the cell, thereby doubling its geometric capacity for CO<sub>2</sub>&nbsp;capture. To switch the quinone on and off, they needed a component that would supply electrons and then take them back. For that job, they used a single ferrocene electrode, sandwiched between the two quinone electrodes but isolated from them by electrolyte membrane separators to prevent short circuits. They connected both quinone electrodes to the ferrocene electrode using a circuit of wires at the top of the cell, with a power source along the way.</p> <p>The power source creates a voltage that causes electrons to flow from the ferrocene to the quinone through the wires. The quinone is now negatively charged.
When CO<sub>2</sub>-containing air or exhaust is blown past these electrodes, the quinone will capture the CO<sub>2</sub>&nbsp;molecules until all the active sites on its surface are filled up. During the discharge cycle, the direction of the voltage on the cell is reversed, and electrons flow from the quinone back to the ferrocene. The quinone is no longer negatively charged, so it has no chemical affinity for CO<sub>2</sub>. The CO<sub>2</sub>&nbsp;molecules are released and swept out of the system by a stream of purge gas for subsequent use or disposal. The quinone is now regenerated and ready to capture more CO<sub>2</sub>.</p>
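That charge-and-release logic is simple enough to caricature in code. The Python sketch below is a toy illustration, not the researchers’ model: the site count and gas-stream size are invented, and the roughly 90 percent capture efficiency is the figure reported in the experiments described below.

```python
import random

# Toy model of one electro-swing cycle (illustrative only: site count and
# stream size are made up; ~90% efficiency is the article's reported figure).
ACTIVE_SITES = 1_000        # CO2-binding quinone sites on the electrode
CAPTURE_EFFICIENCY = 0.90   # ~1 CO2 captured per electron, less parasitic losses

def electro_swing_cycle(co2_in_stream):
    """One charge/discharge cycle; returns (co2_released, electrons_used)."""
    captured = 0
    electrons = 0
    # Charge phase: the negatively charged quinone snags passing CO2
    # until all of its active sites are filled.
    for _ in range(co2_in_stream):
        if captured == ACTIVE_SITES:
            break                        # electrode is saturated
        electrons += 1                   # each activation costs one electron
        if random.random() < CAPTURE_EFFICIENCY:
            captured += 1                # molecule adsorbed onto the quinone
    # Discharge phase: reversing the voltage erases the quinone's affinity,
    # so every captured molecule is swept out by the purge gas.
    released, captured = captured, 0
    return released, electrons

released, electrons = electro_swing_cycle(co2_in_stream=5_000)
print(f"capture efficiency ~ {released / electrons:.0%}")   # about 90%
```

In this caricature, the entire “swing” is the sign of the applied voltage; no heat or pressure change is involved.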
<p>Two additional components are key to successful operation. First is an electrolyte, in this case a liquid salt, that moistens the cell with positive and negative ions (electrically charged particles). Since electrons only flow through the external wires, those charged ions must travel within the cell from one electrode to the other to close the circuit for continued operation.</p> <p>The second special ingredient is carbon nanotubes. In the electrodes, the quinone and ferrocene are both present as coatings on the surfaces of carbon nanotubes. Nanotubes are both strong and highly conductive, so they provide good support and serve as an efficient conduit for electrons traveling into and out of the quinone and ferrocene.</p> <p>To fabricate a cell, researchers first synthesize a quinone- or ferrocene-based polymer, specifically, polyanthraquinone or polyvinylferrocene. They then make an “ink” by combining the polymer with carbon nanotubes in a solvent. The polymer immediately wraps around the nanotubes, connecting with them on a fundamental level.</p> <p>To make the electrode, they use a non-woven carbon fiber mat as a substrate. They dip the mat into the ink, allow it to dry slowly, and then dip it again, repeating the procedure until they’ve built up a uniform coating of the composite on the substrate. The result of the process is a porous mesh that provides a large surface area of active sites and easy pathways for CO<sub>2</sub>&nbsp;molecules to move in and out.</p> <p>Once the researchers have prepared the quinone and ferrocene electrodes, they assemble the electrochemical cell by laminating the pieces together in the correct order — the quinone electrode, the electrolyte separator, the ferrocene electrode, another separator, and the second quinone electrode. Finally, they moisten the assembled cell with their liquid salt electrolyte.</p> <p><strong>Experimental results</strong></p> <p>To test the behavior of their system, the researchers placed a single electrochemical cell inside a custom-made, sealed box and wired it for electricity input. They then cycled the voltage and measured the key responses and capabilities of the device. The simultaneous trends in charge density put into the cell and CO<sub>2</sub>&nbsp;adsorption per mole of quinone showed that when the quinone electrode is negatively charged, the amount of CO<sub>2</sub>&nbsp;adsorbed goes up. And when that charge is reversed, CO<sub>2</sub>&nbsp;adsorption declines.</p> <p>For experiments under more realistic conditions, the researchers also fabricated full capture units — open-ended modules in which a few cells were lined up, one beside the other, with gaps between them where CO<sub>2</sub>-containing gases could travel, passing the quinone surfaces of adjacent cells.</p> <p>In both experimental systems, the researchers ran tests using inlet streams with CO<sub>2</sub>&nbsp;concentrations ranging from 10 percent down to 0.6 percent. The former is typical of power plant exhaust, the latter closer to concentrations in ambient indoor air. Regardless of the concentration, the efficiency of capture was essentially constant at about 90 percent. (An efficiency of 100 percent would mean that one molecule of CO<sub>2</sub>&nbsp;had been captured for every electron transferred — an outcome that Hatton calls “highly unlikely” because other parasitic processes could be going on simultaneously.) The system used about 1 gigajoule of energy per ton of CO<sub>2</sub>&nbsp;captured. Other methods consume between 1 and 10 gigajoules per ton, depending on the CO<sub>2 </sub>concentration of the incoming gases. Finally, the system was exceptionally durable. Over more than 7,000 charge-discharge cycles, its CO<sub>2</sub>&nbsp;capture capacity dropped by only 30 percent — a loss of capacity that can readily be overcome with further refinements in the electrode preparation, say the researchers.&nbsp;</p> <p>The remarkable performance of their system stems from what Voskian calls the “binary nature of the affinity of quinone to CO<sub>2</sub>.” The quinone has either a high affinity or no affinity at all. “The result of that binary affinity is that our system should be equally effective at treating fossil fuel combustion flue gases and confined or ambient air,” he says.&nbsp;</p> <p><strong>Practical applications</strong></p> <p>The experimental results confirm that the electro-swing device should be applicable in many situations. The device is compact and flexible; it operates at room temperature and normal air pressure; and it requires no large-scale, expensive ancillary equipment — only the direct current power source. Its simple design should enable “plug-and-play” installation in many processes, say the researchers.</p> <p>It could, for example, be retrofitted in sealed buildings to remove CO<sub>2</sub>. In most sealed buildings, ventilation systems bring in fresh outdoor air to dilute the CO<sub>2</sub>&nbsp;concentration indoors. “But making frequent air exchanges with the outside requires a lot of energy to condition the incoming air,” says Hatton. “Removing the CO<sub>2</sub>&nbsp;indoors would reduce the number of exchanges needed.” The result could be large energy savings. Similarly, the system could be used in confined spaces where air exchange is impossible — for example, in submarines, spacecraft, and aircraft — to ensure that occupants aren’t breathing too much CO<sub>2</sub>.</p> <p>The electro-swing system could also be teamed up with renewable sources, such as solar and wind farms, and even rooftop solar panels. Such sources sometimes generate more electricity than is needed on the power grid.
Instead of shutting them off, the excess electricity could be used to run a CO<sub>2</sub>&nbsp;capture plant.</p> <p>The researchers have also developed a concept for using their system at power plants and other facilities that generate a continuous flow of exhaust containing CO<sub>2</sub>. At such sites, pairs of units would work in parallel. “One is emptying the pure CO<sub>2</sub>&nbsp;that it captured, while the other is capturing more CO<sub>2</sub>,” explains Voskian. “And then you swap them.” A system of valves would switch the airflow to the freshly emptied unit, while a purge gas would flow through the full unit, carrying the CO<sub>2</sub>&nbsp;out into a separate chamber.</p> <p>The captured CO<sub>2</sub>&nbsp;could be chemically processed into fuels or simply compressed and sent underground for long-term disposal. If the purge gas were also CO<sub>2</sub>, the result would be a steady stream of pure CO<sub>2</sub>&nbsp;that soft-drink makers could use for carbonating drinks and farmers could use for feeding plants in greenhouses. Indeed, rather than burning fossil fuels to get CO<sub>2</sub>, such users could employ an electro-swing unit to generate their own CO<sub>2</sub>&nbsp;while simultaneously removing CO<sub>2</sub> from the air.&nbsp;</p> <p><strong>Costs and scale-up</strong></p> <p>The researchers haven’t yet published a full technoeconomic analysis, but they project capital plus operating costs at $50 to $100 per ton of CO<sub>2</sub>&nbsp;captured. That range is in line with costs using other, less-flexible carbon capture systems. Methods for fabricating the electro-swing cells are also manufacturing-friendly: The electrodes can be made using standard chemical processing methods and assembled using a roll-to-roll process similar to a printing press.&nbsp;</p> <p>And the system can be scaled up as needed. According to Voskian, it should scale linearly: “If you need 10 times more capture capacity, you just manufacture 10 times more electrodes.” Together, he and Hatton, along with Brian M. Baynes PhD ’04, have formed a company called Verdox, and they’re planning to demonstrate that ease of scale-up by developing a pilot plant within the next few years.</p> <p>This research was supported by an MIT Energy Initiative (MITEI)&nbsp;<a href=”http://energy.mit.edu/news-tag/seed-fund/”>Seed Fund</a>&nbsp;grant and by Eni S.p.A. through MITEI. Sahag Voskian was an Eni-MIT Energy Fellow in 2016-17 and 2017-18.</p> <p><em>This article appears in the&nbsp;<a href=”http://energy.mit.edu/energy-futures/spring-2020/”>Spring 2020</a>&nbsp;issue of&nbsp;</em>Energy Futures,<em>&nbsp;the magazine of the MIT Energy Initiative.&nbsp;</em></p> Sahag Voskian SM ’15, PhD ’19 (left) and Professor T. Alan Hatton have developed an electrochemical cell that can capture and release carbon dioxide with just a small change in voltage. Photo: Stuart Darsch https://news.mit.edu/2020/innovations-environmental-training-mining-industry-0707 MIT Environmental Solutions Initiative and multinational mining company Vale bring sustainability education to young engineering professionals in Brazil. Tue, 07 Jul 2020 14:15:00 -0400 https://news.mit.edu/2020/innovations-environmental-training-mining-industry-0707 Aaron Krol | Environmental Solutions Initiative <p>For the mining industry, efforts to achieve sustainability are moving from local to global. In the past, mining companies focused sustainability initiatives more on their social license to operate — treating workers fairly and operating safe and healthy facilities. However, concerns over climate change have put mining operations and supply chains in the global spotlight, leading to various carbon-neutral promises by mining companies in recent months.</p> <p>Heading in this direction is <a href=”http://www.vale.com/en/Pages/default.aspx”>Vale</a>, a global mining company and the world’s largest iron ore and nickel producer. It is a publicly traded company headquartered in Brazil with operations in 30 countries. In the wake of two major tailings dam failures, as well as continued pressure to reduce carbon emissions, <a href=”https://www.reuters.com/article/us-vale-sa-emissions/brazil-miner-vale-to-spend-2-billion-to-cut-carbon-emissions-33-by-2030-idUSKBN22P06C”>Vale has committed to spend $2 billion</a> to cut both its direct and indirect carbon emissions 33 percent by 2030. To meet these ambitions, a broad cultural change is required — and MIT is one of the partners invited by Vale to help with the challenge.</p> <p>Stephen Potter, global strategy director for Vale, knows that local understanding of sustainability is fundamental to reaching its goals. “We need to attract the best and brightest young people to work in the Brazilian mining sector, and young people want to work for companies with a strong sustainability program,” Potter says.</p> <p>To that end, Vale created the Mining Innovation in a New Environment (MINE) program in 2019, in collaboration with the MIT Environmental Solutions Initiative (ESI); the Imperial College London Consultants; The Bakery, a start-up accelerator; and SENAI CIMATEC, a Brazilian technical institute. The program provides classes and sustainability training to young professionals with degrees relevant to mining engineering. Students in the MINE program get hands-on experience working with a real challenge the company is facing, while also expanding their personal leadership and technical skills. “Instilling young people with an entrepreneurial and innovative mindset is a core tenet of this program, whether they ultimately work at Vale or elsewhere,” says Potter.</p> <p>ESI’s role in the MINE program is to provide expert perspectives on sustainability that students wouldn’t receive in ordinary engineering training courses.
“MIT offers a unique blend of scientific and engineering expertise, as well as entrepreneurial spirit, that can inspire young professionals in the Brazilian mining sector to work toward sustainable practices,” says ESI Director John Fernández. Drawing on a deep, multidisciplinary portfolio of MIT research on the extraction and processing of metals and minerals, MIT can support the deployment of innovative technologies and environmentally and socially conscious business strategies throughout a global supply chain.</p> <p>Since December 2019, the inaugural class of 30 MINE students has had a whirlwind of experiences. To kick off the program, MIT offered six weeks of online training, building up to an immersive training session in January 2020. Hosted by SENAI CIMATEC at its academic campus in Salvador, Brazil, the event featured in-person sessions with five MIT faculty: professors Jessika Trancik, Roberto Rigobon, Andrew Whittle, and Rafi Segal, and Principal Research Scientist Randolph Kirchain.</p> <p>The two-week event was coordinated by Suzanne Greene, who leads the MINE program for ESI as part of her role with the MIT <a href=”http://sustainable.mit.edu/”>Sustainable Supply Chains</a> program. “What I loved about this program,” Greene says, “was the breadth of topics MIT’s lecturers were able to offer students. Students could take a deep dive on clean energy technology one day and tailings dams the next.”</p> <p>The courses were designed to give the students a common grounding in sustainability concepts and management tools to prepare them for the next phase of the program: a hands-on research project within Vale. Immersion projects in this next phase align with Vale’s core sustainability strategies around worker and infrastructure safety and the low-carbon energy transition.</p> <p>“This project is a great opportunity for Vale to reconfigure their supply chain and also improve the social and environmental performance,” says Marina Mattos, a postdoc working with ESI in the <a href=”https://environmentalsolutions.mit.edu/metals-minerals-the-environment-program/”>Metals, Minerals, and the Environment</a> program. “As a Brazilian, I’m thrilled to be part of the MIT team helping to develop next-generation engineers with the values, attitudes, and skills necessary to understand and address challenges of the mining industry.”</p> <p>“We expect this program will lead to interest from other extractive companies, not only for education, but for research as well,” adds Greene. “This is just the beginning.”</p> MINE Program students and other program participants at a hackathon in Salvador, Brazil, are pictured here before the Covid-19 pandemic interrupted such gatherings. https://news.mit.edu/2020/d-lab-moves-online-without-compromising-impact-0701 With the campus shut down by Covid-19, the spring D-Lab class Water, Climate Change, and Health had to adapt.
Wed, 01 Jul 2020 12:05:01 -0400 https://news.mit.edu/2020/d-lab-moves-online-without-compromising-impact-0701 Jessie Hendricks | Environmental Solutions Initiative <p>It’s not a typical sentence you’d find on a class schedule, but on April 2, the first action item for one MIT course read: “Check in on each other’s health and well-being.” The revised schedule was for Susan Murcott and Julie Simpson’s spring D-Lab class&nbsp;EC.719 / EC.789 (Water, Climate Change, and Health), just one of hundreds of classes at MIT that had to change course after the novel coronavirus sparked a campus-wide shutdown.</p> <p><strong>D-Lab at home</strong></p> <p>The dust had only begun to settle two weeks later, after a week of canceled classes followed by the established spring break, when students and professors reconvened in their new virtual classrooms. In Murcott and Simpson’s three-hour, once-a-week D-Lab class, the 20 students had completed only half of the subject’s 12 classes before the campus shut down. Those who could attend the six remaining classes would do so remotely for the first time in the five-year history of the class.</p> <p>Typically, students would have gathered at D-Lab, an international design and development center next to the MIT Museum on Massachusetts Avenue in Cambridge, Massachusetts. Within the center, D-Lab provides project-based and hands-on learning for undergraduate and graduate students in collaboration with international non-governmental organizations, governments, and industry. Many of the projects involve design solutions in low-income countries around the world. Murcott, an MIT lecturer who has worked with low-income populations for over 30 years in 25 countries, including Nepal and Ghana, was a natural fit to teach the class.</p> <p>Murcott’s background is in civil and environmental engineering, wastewater management, and climate. Her co-teacher, Research Engineer Julie Simpson of the Sea Grant College Program, has a PhD in coastal and marine ecology and a strong climate background. “It’s typical to find courses in climate change and energy, climate change and policy, or maybe climate change and human behavior,” Murcott says. But when she first began planning her D-Lab subject, she could find no class anywhere in the world that married climate change and water.&nbsp;</p> <p>Murcott and Simpson refer to the class as transdisciplinary. “[Transdisciplinary] is about having as broad a sample of humanity as you can teaching and learning together on the topics that you care about,” Murcott says. But transdisciplinary also means attracting a wide range of students from various walks of life. This spring, Murcott and Simpson’s class had undergraduates, graduate students, and young professionals from MIT, Wellesley College, and Harvard University, studying architecture, chemistry, mechanical engineering, biochemistry, microbiology, computer science, math, food and agriculture, law, and public health, plus a Knight Science Journalism at MIT Fellow.</p> <p>After campus closed, these students scattered to locations across the country and the world, including France, Hong Kong, Rwanda, and South Korea. Student Sun Kim sent a five-page document with pictures to the class after returning to her home in South Korea, detailing her arrival in a Covid-19 world.
Kim was tested in the airport after landing, given free room and board in a nearby hotel until she received her result (a “negative” came back within eight hours), and quarantined in her parents’ house for two weeks, just in case she had picked up the virus during her travels. “I have been enjoying my Zoom classes during the wee hours of the night and sleeping during the day — ignoring the sunlight and pretending I am still in the U.S.,” Kim wrote.</p> <p><strong>Future generation climate action plans </strong></p> <p>Usually, the class has three or four field trips over the course of the semester, to places like the Blue Hill Meteorological Observatory, home of the longest climate record in the United States, and the Charles River Dam Infrastructure, which helps control flooding along Memorial Drive. With these physical trips closed off during the pandemic, Murcott and Simpson had to find new virtual spaces in which to convene. Four student teams took part in a climate change simulation using a program developed by Climate Interactive called <a href=”https://www.climateinteractive.org/tools/en-roads/”>En-ROADS</a>, in which they were challenged to create scenarios that aimed for the limit, set out in the 2015 Paris Agreement, of a 1.5-degree-Celsius rise in global average temperature above pre-industrial levels. Each team developed unique scenarios and managed to reach that target by adjusting energy options, agricultural and land-use practices, economic levers, and policy options.</p> <p>The teams then used their En-ROADS scenario planning findings to evaluate the climate action plans of Cambridge, Boston, and Massachusetts, with virtual visits from experts on the plans. They also evaluated MIT’s climate plan, which was written in 2015 and which will be updated by the end of this year. Students found that MIT has one of the least ambitious greenhouse gas reduction targets among the institutions the D-Lab class reviewed. Teams of students were then challenged to improve upon what MIT had done to date by coming up with their own future generation climate action plans. “I wanted them to find their voice,” says Murcott. She is co-chair of MIT’s Water Sustainability Working Group, an official committee designated to come up with a water plan for MIT, and she and Simpson are now working with a subset of eight students from the class over the summer, together with the MIT Environmental Solutions Initiative, the MIT Office of Sustainability, and the Office of the Vice President for Research, to collaborate on a new water and climate action plan.</p> <p><strong>Final projects</strong></p> <p>The spring 2020 D-Lab final presentations were as diverse as the students’ fields of study. Over two Zoom sessions, teams and individual students presented a total of eight final projects.</p> <p>The first project aimed to lower the number of Covid-19 transmissions among Cambridge residents and update access to food programs in light of the pandemic. At the time of the presentation, Massachusetts had the third-highest reported number of cases of the new coronavirus. Students reviewed what was already being done in Cambridge and expanded on that with recommendations such as an assistive phone line for sick residents, an N95 mask exchange program, increased transportation for medical care, and lodging options for positive cases to prevent household transmission. Another team working on the Covid-19 project presented their recommendations to update the city’s food policy.
They suggested programs to increase awareness of the Supplemental Nutrition Assistance Program (SNAP) and the Women, Infants, and Children program (WIC) through municipal mailings, help vendors at farmers markets enroll in SNAP/EBT so that users could purchase local produce and goods, and promote local community gardens to help with future food security.</p> <p>Another team proposed an extensive rainwater harvesting project for the Memorial Drive dormitories, which also have a high photovoltaic potential, in which the nearby MIT recreational fields would benefit from self-sufficient rainwater irrigation driven by a solar-powered pump. Another student developed a machine learning method to detect and count the river herring that migrate into Boston each year, training a computer program to identify the fish using existing cameras installed by fish ladders.&nbsp;</p> <p>Student Lowry Yankwich wrote a long-form science journalism piece about the effect of climate change on local fisheries, and a team of three students created a six-unit climate change course called “Surviving and Thriving in the 21st Century” for upper-high-school to first-year college students.</p> <p>Two global water projects were presented. In the first, student Ade Dapo-Famodu’s study compared a newly manufactured water test, the ECC Vial, to other leading global products that measure two major indicators of contaminated water: <em>E. coli</em> and coliforms. The second was the Butaro Water Project, presented by Carene Umubyeyi and Naomi Lutz. Their project is a collaboration between faculty and students at MIT, Tufts University, the University of Rwanda, and the University of Global Health Equity in Butaro, a small district in the northern part of Rwanda, where a number of villages lack access to safe drinking water.</p> <p><strong>The end is just the beginning</strong></p> <p>For many, the D-Lab projects aren’t just a semester-long endeavor. It’s typical for some D-Lab term projects to turn into either a January Independent Activities Period project or a summer research or field project. Of the 20 students in the class, 10 are continuing to work on their term projects over the summer. Umubyeyi is Rwandan. Having returned home after the MIT shutdown, she will be coordinating the team’s design and construction of the village water system over the summer, with technical support from her teammate, Lutz, remotely from Illinois.</p> <p>The Future Generations Climate Action Planning process resulted in five students eager to take the D-Lab class work forward. They will be working with Jim Gomes, senior advisor in the Office of the Vice President, who is responsible for coordinating MIT’s 2020 Climate Action Plan, together with one other student intern, Grace Moore.</p> <p>The six-unit online course for teens, Surviving and Thriving in the 21st Century, is being taught by Clara Gervaise-Volaire and Gabby Cazares and will be live through July 3. Policy work on Covid-19 will continue with contacts on the Cambridge City Council. Finally, Yankwich will be sending out his full-length article for publication and starting his next piece.&nbsp;</p> <p>“Students have done so well in the face of the MIT shutdown and coronavirus pandemic challenge,” says Murcott. “Scattered around the country and around the world, they have come together through this online D-Lab class to embrace MIT’s mission of ‘creating a better world.’ In the process, they have deepened themselves and are actively serving others.
<p>Student Lowry Yankwich wrote a long-form science journalism piece about the effect of climate change on local fisheries, and a team of three students created a six-unit climate change course called “Surviving and Thriving in the 21st Century” for upper-high-school to first-year college students.</p> <p>Two global water projects were presented. In the first, student Ade Dapo-Famodu compared a newly manufactured water test, the ECC Vial, with other leading global products that measure two major indicators of contaminated water: <em>E. coli</em> and coliforms. The second was the Butaro Water Project, presented by Carene Umubyeyi and Naomi Lutz. Their project is a collaboration among faculty and students at MIT, Tufts University, the University of Rwanda, and the University of Global Health Equity in Butaro, a small district in northern Rwanda where a number of villages lack access to safe drinking water.</p> <p><strong>The end is just the beginning</strong></p> <p>For many students, the D-Lab projects aren’t just a semester-long endeavor. It’s typical for D-Lab term projects to turn into January Independent Activities Period projects or into summer research or field projects. Of the 20 students in the class, 10 are continuing to work on their term projects over the summer. Umubyeyi is Rwandan; having returned home after the MIT shutdown, she will be coordinating the team’s design and construction of the village water system over the summer, with remote technical support from her teammate, Lutz, in Illinois.</p> <p>The Future Generations Climate Action Planning process resulted in five students eager to take the D-Lab class work forward. They will be working with Jim Gomes, senior advisor in the Office of the Vice President, who is responsible for coordinating MIT’s 2020 Climate Action Plan, together with one other student intern, Grace Moore.</p> <p>The six-unit online course for teens, Surviving and Thriving in the 21st Century, is being taught by Clara Gervaise-Volaire and Gabby Cazares and will be live through July 3. Policy work on Covid-19 will continue with contacts in the Cambridge City Council. Finally, Yankwich will be sending out his full-length article for publication and starting his next piece.&nbsp;</p> <p>“Students have done so well in the face of the MIT shutdown and coronavirus pandemic challenge,” says Murcott. “Scattered around the country and around the world, they have come together through this online D-Lab class to embrace MIT’s mission of ‘creating a better world.’ In the process, they have deepened themselves and are actively serving others. What could be better in these hard times?”</p> Lecturer Susan Murcott met many members of her EC.719 / EC.789 (Water, Climate Change, and Health) D-Lab class for the first time at the Boston climate strike on Sept. 20, 2019. Photo: Susan Murcott https://news.mit.edu/2020/ideastream-showcases-breakthrough-technologies-across-mit-0624 From machine learning to devices to Covid-19 testing, Deshpande Center projects aim to make a positive impact on the world. Wed, 24 Jun 2020 14:40:01 -0400 https://news.mit.edu/2020/ideastream-showcases-breakthrough-technologies-across-mit-0624 Deshpande Center for Technological Innovation <p>MIT’s <a href=”http://deshpande.mit.edu/”>Deshpande Center for Technological Innovation</a> hosted IdeaStream, an annual showcase of technologies being developed across MIT, online for the first time in the event’s 18-year history. Last month, more than 500 people worldwide tuned in each day to view the breakthrough research and to chat with the researchers.</p> <p>Speakers from 19 MIT teams that received Deshpande grants presented their work, from learned control of manufacturing processes by Professor Brian Anthony, to what Hyunwoo Yuk of the Xuanhe Zhao Lab colloquially calls “surgical duct tape,” to artificial axons as a myelination assay for drug screening in neurological diseases by Anna Jagielska, a postdoc in the Krystyn Van Vliet Laboratory for Material Chemomechanics.</p> <p>“Innovation at MIT never stops,” said Deshpande Center Faculty Director Timothy Swager in a welcome address. He underscored how essential it was to keep innovation going, saying the work at the heart of the Deshpande Center’s current and future spinout companies will become part of essential businesses aiding in the pandemic response.</p> <p>“These will be key for us … to emerge as a stronger ecosystem, both locally and globally, with different types of innovation,” he said.</p> <p><strong>A virtual format includes far-flung attendees</strong></p> <p>IdeaStream is known not only as an exhibition of MIT projects, but also as a bridge between academic research and the business community, where researchers, faculty, investors, and industry leaders meet and build connections. This year physical distancing prevented in-person meetings, but the conference’s virtual format facilitated thoughtful discussion and introductions and extended IdeaStream’s reach.</p> <p>Attendees from Greater Boston and as far away as Ireland, India, Cyprus, Australia, and Brazil engaged with IdeaStream speakers in Zoom breakout sessions following the presentations. Jérôme Michon, who presented the Juejun Hu Research Group’s work on on-chip Raman spectroscopic sensors for chemical and biological sensing, conducted his breakout session from France, where he had returned as MIT closed its campus.</p> <p><strong>Technology for a better world</strong></p> <p>Many of the presenters said their projects would have a positive impact on the environment or society. Svetlana Boriskina of the Department of Mechanical Engineering said clothing production is one of the world’s biggest polluters, requiring large amounts of energy and water and emitting greenhouse gases. Her SmartPE fabrics use far less water in production and are made from sustainable materials.
The polyethylene fabrics are also antimicrobial and stain-resistant, and they wick away body moisture.</p> <p>Postdoc Francesco Benedetti likewise pointed to the massive amounts of energy used for gas separation and purification in the chemical industry, and presented his team’s project as a cleaner alternative. In a collaboration of the Zachary Smith Lab and the Swager Group, they are creating membranes from polymers with a flexible backbone connected to tunable, rigid, pore-generating side chains. The membranes require no heat or toxic solvents for separation, and could replace distillation, saving significant amounts of energy.</p> <p>Another team has adapted its project to aid in the coronavirus response. Postdoc Eric Miller said that prior to the pandemic, the Hadley D. Sikes Lab had developed immunoassays using engineered binding proteins that successfully identified markers for malaria, tuberculosis, and dengue. Now they have applied that technology to develop a rapid Covid-19 diagnostic test. The paper-based tests would be easily administered by anyone, with results expected within 10 minutes.</p> <p>Some of the projects addressed food and water challenges and were sponsored by MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Maxwell Robinson, a postdoc in the Karen Gleason Lab, presented an early detection system for Huanglongbing, or citrus greening disease. The incurable disease has cost Florida’s citrus industry $3 billion, and now threatens California’s $3.3 billion industry. The system would identify affected trees using sensors attuned to volatile organic compounds emitted by citrus trees. These compounds change in concentration during early-stage infection, when trees show no visible symptoms.</p> <p>In another J-WAFS project, Professor Kripa Varanasi of the Department of Mechanical Engineering built on a statistic — only about 2 percent of applied pesticides adhere to their targets — and demonstrated how the drops he formulated help pesticides better adhere to leaves. The result is a drastically lower volume of agricultural spray needed for application, and an overall reduction in chemical runoff.</p> <p><strong>Taking a chance on early-stage research</strong></p> <p>Innovation does not come without risk, said Deshpande Center Executive Director Leon Sandler following the event. Much university research is not ready to spin out into companies, and investors won’t put money into it because it’s still too risky.</p> <p>“By supporting early-stage research with funding, connecting researchers with deep-domain experts who can help shape it, and maturing it to a point where it starts to be an attractive investment, you give it a chance to spin out,” he said. “You give it a chance to commercialize and make an impact on the world.”</p> <p>All the presentations may be viewed on the <a href=”http://deshpande.mit.edu/news/ideastream/2020″>Deshpande Center’s website</a>.</p> IdeaStream 2020 featured presentations from 19 research projects across MIT, as well as Q&A sessions with speakers via Zoom. Image: Shirley Goh/Deshpande Center for Technological Innovation https://news.mit.edu/2020/researchers-find-solar-photovoltaics-benefits-outweigh-costs-0623 Over a seven-year period, decline in PV costs outpaced decline in value; by 2017, market, health, and climate benefits outweighed the cost of PV systems.
Tue, 23 Jun 2020 14:15:01 -0400 https://news.mit.edu/2020/researchers-find-solar-photovoltaics-benefits-outweigh-costs-0623 Nancy Stauffer | MIT Energy Initiative <p>Over the past decade,&nbsp;the cost of solar photovoltaic (PV) arrays has fallen rapidly. But at the same time, the value of PV power has declined in areas that have installed significant PV generating capacity. Operators of utility-scale PV systems have seen electricity prices drop as more PV generators come online. Over the same time period, many coal-fired power plants were required to install emissions-control systems, resulting in declines in air pollution nationally and regionally. The result has been improved public health — but also a decrease in the potential health benefits from offsetting coal generation with PV generation.</p> <p>Given those competing trends, do the benefits of PV generation outweigh the costs? Answering that question requires balancing the up-front capital costs against the lifetime benefits of a PV system. Determining the former is fairly straightforward. But assessing the latter is challenging because the benefits differ across time and place. “The differences aren’t just due to variation in the amount of sunlight a given location receives throughout the year,” says&nbsp;<a href=”http://energy.mit.edu/profile/patrick-brown/”>Patrick R. Brown</a> PhD ’16, a postdoc at the MIT Energy Initiative. “They’re also due to variability in electricity prices and pollutant emissions.”</p> <p>The drop in the price paid for utility-scale PV power stems in part from how electricity is bought and sold on wholesale electricity markets. On the “day-ahead” market, generators and customers submit bids specifying how much they’ll sell or buy at various price levels at a given hour on the following day. The lowest-cost generators are chosen first. Since the variable operating cost of PV systems is near zero, they’re almost always chosen, taking the place of the most expensive generator then in the lineup. The price paid to every selected generator is set by the highest-cost operator on the system, so as more PV power comes on, more high-cost generators come off, and the price drops for everyone. As a result, in the middle of the day, when solar is generating the most, prices paid to electricity generators are at their lowest.</p>
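<p>This merit-order mechanism is easy to make concrete. The sketch below is a minimal illustration with invented bids, not code from the study: offers are sorted by price, the cheapest are accepted until demand is met, and every accepted generator is paid the bid of the marginal (last-accepted) unit. Adding near-zero-cost solar pushes the expensive marginal unit out of the stack and lowers the clearing price for everyone.</p> <pre><code># Minimal merit-order clearing for one hour (Python; invented numbers).
def clear_market(offers, demand_mw):
    """offers: list of (name, capacity_mw, price_per_mwh) supply bids.
    Returns (clearing_price, dispatched) under uniform-price rules."""
    dispatched, remaining, price = [], demand_mw, 0.0
    for name, cap, bid in sorted(offers, key=lambda o: o[2]):  # cheapest first
        if remaining <= 0:
            break
        take = min(cap, remaining)
        dispatched.append((name, take))
        remaining -= take
        price = bid  # uniform clearing price = bid of the marginal unit
    return price, dispatched

offers = [("nuclear", 500, 25.0), ("coal", 400, 30.0), ("gas_peaker", 200, 80.0)]
print(clear_market(offers, demand_mw=1000))  # gas peaker sets the price: 80 $/MWh
# Add 300 MW of near-zero-cost solar: the peaker is displaced, price falls to 30.
print(clear_market(offers + [("solar", 300, 0.0)], demand_mw=1000))</code></pre>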
<p>Brown notes that some generators may even bid negative prices. “They’re effectively paying consumers to take their power to ensure that they are dispatched,” he explains. For example, inflexible coal and nuclear plants may bid negative prices to avoid frequent shutdown and startup events that would result in extra fuel and maintenance costs. Renewable generators may also bid negative prices to obtain larger subsidies that are awarded based on production.&nbsp;</p> <p>Health benefits also differ over time and place. The health effects of deploying PV power are greater in a heavily populated area that relies on coal power than in a less-populated region that has access to plenty of clean hydropower or wind. And the local health benefits of PV power can be higher when there’s congestion on transmission lines that leaves a region stuck with whatever high-polluting sources are available nearby. The social costs of air pollution are largely “externalized,” that is, they are mostly unaccounted for in electricity markets. But they can be quantified using statistical methods, so health benefits resulting from reduced emissions can be incorporated when assessing the cost-competitiveness of PV generation.</p> <p>The contribution of fossil-fueled generators to climate change is another externality not accounted for by most electricity markets. Some U.S. markets, particularly in California and the Northeast, have implemented cap-and-trade programs, but the carbon dioxide (CO<sub>2</sub>) prices in those markets are much lower than estimates of the social cost of CO<sub>2</sub>, and other markets don’t price carbon at all. A full accounting of the benefits of PV power thus requires determining the CO<sub>2</sub>&nbsp;emissions displaced by PV generation and then multiplying that value by a uniform carbon price representing the damage that those emissions would have caused.</p> <p><strong>Calculating PV costs and benefits</strong></p> <p>To examine the changing value of solar power, Brown and his colleague Francis M. O’Sullivan, the senior vice president of strategy at Ørsted Onshore North America and a senior lecturer at the MIT Sloan School of Management, developed a methodology to assess the costs and benefits of PV power across the U.S. power grid annually from 2010 to 2017.&nbsp;</p> <p>The researchers focused on six “independent system operators” (ISOs) in California, Texas, the Midwest, the Mid-Atlantic, New York, and New England. Each ISO sets electricity prices at hundreds of “pricing nodes” along the transmission network in its region. The researchers performed analyses at more than 10,000 of those pricing nodes.</p> <p>For each node, they simulated the operation of a utility-scale PV array that tilts to follow the sun throughout the day. They calculated how much electricity it would generate and the benefits that each kilowatt would provide, factoring in energy and “capacity” revenues as well as avoided health and climate change costs associated with the displacement of fossil fuel emissions. (Capacity revenues are paid to generators for being available to deliver electricity at times of peak demand.) They focused on emissions of CO<sub>2</sub>, which contributes to climate change, and of nitrogen oxides (NO<sub>x</sub>), sulfur dioxide (SO<sub>2</sub>), and particulate matter called PM<sub>2.5</sub> — fine particles that can cause serious health problems and can be emitted or formed in the atmosphere from NO<sub>x</sub>&nbsp;and SO<sub>2</sub>.</p>
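<p>In spirit, that per-node valuation is an hourly sum: each kilowatt-hour the simulated array produces is credited with the nodal energy price plus the monetized health and climate damages it avoids (capacity revenues are layered on separately). The minimal sketch below uses three hours of invented toy numbers, not the study’s data, which covered full years at more than 10,000 real nodes:</p> <pre><code># Toy per-node valuation of PV output (Python; all numbers invented).
gen_kwh      = [0.0, 0.45, 0.80]    # hourly output per kW of PV capacity
price_mwh    = [32.0, 28.0, 24.0]   # nodal wholesale energy price, $/MWh
health_mwh   = [18.0, 18.0, 12.0]   # avoided SO2/NOx/PM2.5 damages, $/MWh displaced
co2_ton_mwh  = [0.6, 0.6, 0.4]      # tons of CO2 displaced per MWh of PV output
CARBON_PRICE = 50.0                 # assumed $/ton social cost of carbon

value = sum(g / 1000 * (p + h + c * CARBON_PRICE)   # convert kWh to MWh
            for g, p, h, c in zip(gen_kwh, price_mwh, health_mwh, co2_ton_mwh))
print(f"value over these three hours: ${value:.3f} per kW of PV")</code></pre>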
<p>The results of the analysis showed that the wholesale energy value of PV generation varied significantly from place to place, even within the region of a given ISO. For example, in New York City and Long Island, where population density is high and adding transmission lines is difficult, the market value of solar was at times 50 percent higher than across the state as a whole.&nbsp;</p> <p>The public health benefits associated with SO<sub>2</sub>, NO<sub>x</sub>, and PM<sub>2.5</sub>&nbsp;emissions reductions declined over the study period but were still substantial in 2017. Monetizing the health benefits of PV generation in 2017 would add almost 75 percent to energy revenues in the Midwest and New York and fully 100 percent in the Mid-Atlantic, thanks to the large amount of coal generation in the Midwest and Mid-Atlantic and the high population density on the Eastern Seaboard.&nbsp;</p> <p>Based on the calculated energy and capacity revenues and health and climate benefits for 2017, the researchers asked: Given that combination of private and public benefits, what upfront PV system cost would be needed to make the PV installation “break even” over its lifetime, assuming that grid conditions in that year persist for the life of the installation? In other words, says Brown, “At what capital cost would an investment in a PV system be paid back in benefits over the lifetime of the array?”&nbsp;</p>
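<p>That break-even question reduces to a present-value calculation: find the highest upfront cost per kilowatt that a constant stream of annual benefits (market revenues, monetized health benefits, and displaced CO<sub>2</sub> valued at an assumed carbon price) can exactly repay over the array’s life. The sketch below uses invented placeholder values, not the study’s figures:</p> <pre><code># Break-even PV capital cost (Python; placeholder numbers, not study figures).
def breakeven_capex(annual_benefit_per_kw, lifetime_yr=25, discount_rate=0.05):
    """Present value of a constant annual benefit stream: the largest
    upfront cost ($/kW) that the benefits exactly repay."""
    r = discount_rate
    annuity_factor = (1 - (1 + r) ** -lifetime_yr) / r
    return annual_benefit_per_kw * annuity_factor

# Hypothetical annual benefits per kW of PV at one node:
market_revenue = 55.0   # $/kW-yr, energy plus capacity payments
health_benefit = 25.0   # $/kW-yr, monetized SO2/NOx/PM2.5 reductions
co2_displaced  = 0.9    # tons CO2 displaced per kW per year
carbon_price   = 50.0   # assumed $/ton

total = market_revenue + health_benefit + co2_displaced * carbon_price
print(f"break-even capital cost: ${breakeven_capex(total):,.0f} per kW")</code></pre>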
<p>Assuming 2017 values for energy and capacity market revenues alone, an unsubsidized PV investment at 2017 costs doesn’t break even. Add in the health benefit, and PV breaks even at 30 percent of the pricing nodes modeled. Assuming a carbon price of $50 per ton, the investment breaks even at about 70 percent of the nodes, and with a carbon price of $100 per ton (which is still less than the price estimated to be needed to limit global temperature rise to under 2 degrees Celsius), PV breaks even at all of the modeled nodes.&nbsp;</p> <p>That wasn’t the case just two years earlier: At 2015 PV costs, PV would have broken even in 2017 at only about 65 percent of the nodes, even counting market revenues, health benefits, and a $100 per ton carbon price. “Since 2010, solar has gone from one of the most expensive sources of electricity to one of the cheapest, and it now breaks even across the majority of the U.S. when considering the full slate of values that it provides,” says Brown.&nbsp;</p> <p>Based on their findings, the researchers conclude that the decline in PV costs over the studied period outpaced the decline in value, such that in 2017 the market, health, and climate benefits outweighed the cost of PV systems at the majority of locations modeled. “So the amount of solar that’s competitive is still increasing year by year,” says Brown.&nbsp;</p> <p>The findings underscore the importance of considering health and climate benefits as well as market revenues. “If you’re going to add another megawatt of PV power, it’s best to put it where it’ll make the most difference, not only in terms of revenues but also health and CO<sub>2</sub>,” says Brown.&nbsp;</p> <p>Unfortunately, today’s policies don’t reward that behavior. Some states do provide renewable energy subsidies for solar investments, but they reward generation equally everywhere. Yet in states such as New York, the public health benefits would have been far higher at some nodes than at others. State-level or regional reward mechanisms could be tailored to reflect such variation in node-to-node benefits of PV generation, providing incentives for installing PV systems where they’ll be most valuable. Providing time-varying price signals (including the cost of emissions) not only to utility-scale generators, but also to residential and commercial electricity generators and customers, would similarly guide PV investment to areas where it provides the most benefit.&nbsp;</p> <p><strong>Time-shifting PV output to maximize revenues&nbsp;</strong></p> <p>The analysis provides some guidance that might help would-be PV installers maximize their revenues. For example, it identifies certain “hot spots” where PV generation is especially valuable. At some high-electricity-demand nodes along the East Coast, for instance, persistent grid congestion has meant that the projected revenue of a PV generator has been high for more than a decade. The analysis also shows that the sunniest site may not always be the most profitable choice. A PV system in Texas would generate about 20 percent more power than one in the Northeast, yet energy revenues were greater at nodes in the Northeast than in Texas in some of the years analyzed.&nbsp;</p> <p>To help potential PV owners maximize their future revenues, Brown and O’Sullivan performed a follow-on study focusing on ways to shift the output of PV arrays to align with times of higher prices on the wholesale market. For this analysis, they considered the value of solar on the day-ahead market and also on the “real-time market,” which dispatches generators to correct for discrepancies between supply and demand. They explored three options for shaping the output of PV generators, with a focus on the California real-time market in 2017, when high PV penetration led to a large reduction in midday prices compared to morning and evening prices. (A toy revenue comparison of these strategies follows the list below.)</p> <ul> <li>Curtailing output when prices are negative:&nbsp;During negative-price hours,&nbsp;a PV operator can simply turn off generation. In California in 2017, curtailment would have increased revenues by 9 percent on the real-time market compared to “must-run” operation.</li> <li>Changing the orientation of “fixed-tilt” (stationary) solar panels:&nbsp;The general rule of thumb in the Northern Hemisphere is to orient solar panels toward the south, maximizing production over the year. But peak production then occurs at about noon, when electricity prices in markets with high solar penetration are at their lowest. Pointing panels toward the west moves generation further into the afternoon. On the California real-time market in 2017, optimizing the orientation would have increased revenues by 13 percent, or 20 percent in conjunction with curtailment.</li> <li>Using 1-axis tracking:&nbsp;For larger utility-scale installations, solar panels are frequently installed on automatic solar trackers, rotating throughout the day from east in the morning to west in the evening. Using such 1-axis tracking on the California system in 2017 would have increased revenues by 32 percent over a fixed-tilt installation, and using tracking plus curtailment would have increased revenues by 42 percent.</li> </ul>
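<p>The toy comparison below makes the logic of the first two options concrete. The hourly prices and output profiles are invented for illustration (they are not the study’s California data): midday prices dip negative, so curtailment recovers revenue, and a west-facing profile shifts output into the higher-priced evening hours.</p> <pre><code># Toy comparison of PV dispatch strategies over one day (Python; all numbers
# invented). Hours run 7 a.m. to 6 p.m.; prices in $/MWh, output in MWh.
prices = [40, 35, 30, 20, 5, -10, -15, -5, 10, 40, 60, 70]
south  = [1, 3, 6, 8, 9, 10, 9, 8, 6, 4, 2, 1]    # south-facing: peaks at noon
west   = [0, 1, 3, 5, 7, 9, 10, 10, 9, 7, 4, 2]   # west-facing: peaks mid-afternoon

def revenue(output, prices, curtail=False):
    """Sum hourly revenue; optionally switch off during negative-price hours."""
    return sum(0 if (curtail and p < 0) else q * p
               for q, p in zip(output, prices))

print("south, must-run: ", revenue(south, prices))                # loses money midday
print("south, curtailed:", revenue(south, prices, curtail=True))  # negative hours clipped
print("west,  curtailed:", revenue(west, prices, curtail=True))   # output meets high prices</code></pre>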
<p>The researchers were surprised to see how much the optimal orientation changed in California over the period of their study. “In 2010, the best orientation for a fixed array was about 10 degrees west of south,” says Brown. “In 2017, it’s about 55 degrees west of south.” That adjustment is due to changes in market prices that accompany significant growth in PV generation — changes that will occur in other regions as they start to ramp up their solar generation.</p> <p>The researchers stress that conditions are constantly changing on power grids and electricity markets. With that in mind, they made their database and computer code openly available so that others can readily use them to calculate updated estimates of the net benefits of PV power and other distributed energy resources.</p> <p>They also emphasize the importance of getting time-varying prices to all market participants and of adapting installation and dispatch strategies to changing power system conditions. A law set to take effect in California in 2020 will require all new homes to have solar panels. Installing the usual south-facing panels with uncurtailable output could further saturate the electricity market at times when other PV installations are already generating.</p> <p>“If new rooftop arrays instead use west-facing panels that can be switched off during negative price times, it’s better for the whole system,” says Brown. “Rather than just adding more solar at times when the price is already low and the electricity mix is already clean, the new PV installations would displace expensive and dirty gas generators in the evening. Enabling that outcome is a win all around.”</p> <p>Patrick Brown and this research were supported by a U.S. Department of Energy Office of Energy Efficiency and Renewable Energy (EERE) Postdoctoral Research Award through the EERE Solar Energy Technologies Office. The computer code and data repositories are available <a href=”https://zenodo.org/record/3562896#.XrQ74RNKg_U”>here</a> and <a href=”https://zenodo.org/record/3368397#.XrQ79BNKg_U”>here</a>.</p> <p><em>This article appears in the&nbsp;<a href=”http://energy.mit.edu/energy-futures/spring-2020/” target=”_blank”>Spring 2020</a>&nbsp;issue&nbsp;of&nbsp;</em>Energy Futures,<em> the magazine of the MIT Energy Initiative.&nbsp;</em></p> Utility-scale photovoltaic arrays are an economic investment across most of the United States when health and climate benefits are taken into account, concludes an analysis by MITEI postdoc Patrick Brown and Senior Lecturer Francis O’Sullivan. Their results show the importance of providing accurate price signals to generators and consumers and of adopting policies that reward installation of solar arrays where they will bring the most benefit. Photo courtesy of SunEnergy1.
https://news.mit.edu/2020/ice-ice-maybe-meghana-ranganathan-0617 EAPS graduate student Meghana Ranganathan zooms into the microstructure of ice streams to better understand the impacts of climate change. Wed, 17 Jun 2020 15:50:01 -0400 https://news.mit.edu/2020/ice-ice-maybe-meghana-ranganathan-0617 Laura Carter | School of Science <p>From above, Antarctica appears as a massive sheet of white. But if you were to zoom in, you would find that an ice sheet is a complex and dynamic system. In the <a href=”http://eapsweb.mit.edu”>Department of Earth, Atmospheric and Planetary Sciences</a> (EAPS), graduate student Meghana Ranganathan studies what controls the speed of ice streams — narrow, fast-flowing sections of an ice sheet that funnel ice into the ocean. When ice streams meet the ocean and lose the support of the ground beneath them, they calve and break off into icebergs. This is the fastest route of ice mass loss in a changing climate.</p> <p>At the microstructural level, many components can affect the speed with which the ice flows, Ranganathan explains, including the ice’s interaction with the land it sits on, its crystalline structure, and the orientation and size of its grains. Unfortunately, many models do not take these minute factors into consideration, which can impact their predictions. That is what she hopes to improve, modifying the mathematics and building models that eliminate assumptions by fleshing out the details of exactly what is happening down to a microscopic level.</p> <p>Ranganathan is equipped to handle such a topic, holding a bachelor’s degree in mathematics from Swarthmore College, where she generated food chain models to investigate extinction levels. She left her undergraduate studies with a “desire to save the world” and knew she wanted to apply her knowledge to climate science for her graduate degree. “We’re one of the first generations that grew up hearing about the climate crisis, and I think that made quite an impact on me,” she says. It’s also a “sweet spot,” she claims, in terms of being both a scientifically invigorating problem, with a lot of mathematical complexities, and a societal issue: “My desire to use math to discover things about the world, and my desire to help the world, intersect in climate science.”</p> <p><strong>A climate of opportunity</strong></p> <p>EAPS allowed Ranganathan the flexibility to choose her field of focus within the wide range of climate science. “EAPS is a great department in diversity of fields,” she says. “It’s rare for one department to encompass so many aspects of earth and planetary sciences.” She lists faculty addressing everything from hurricanes to climate variability to biological oceanography and even exoplanetary studies. “Even now that I’ve found a research focus, I get to learn about other fields and stay in touch with current research being done across the earth sciences,” she adds.</p> <p>Flexibility is something she also attributes to her fellowship. Currently, Ranganathan is sponsored by the Sven Treitel Fellowship, and it’s this support that has allowed her the opportunity to develop and grow her independence, transitioning from student to researcher.
“Graduate school is arguably not necessarily to learn a field, but rather to learn how to build on your own ideas,” she explains. Without having her time consumed by writing grant proposals or working on other people’s funded projects, she can devote her full attention to the topic she chooses. “This fellowship has really enabled me to focus on what I’m here to do: learn to be a scientist.”</p> <p>The Sven Treitel Graduate Student Support Fund was established in 2016 by EAPS alumnus Arthur Cheng ScD ’78 to honor Sven Treitel ’53, SM ’55, PhD ’58. “Sven Treitel was a visiting professor at MIT when I was a graduate student, and he was a great role model for me,” says Cheng. Treitel’s contributions to making seismograms more accurate are considered instrumental to bringing about the “digital revolution” of seismology.</p> <p><strong>Years of change</strong></p> <p>Currently in her third year, Ranganathan has passed her qualifying exam and is now fully devoted to her project. That includes facing some challenges in her research, like producing new models or, at least, new additions to preexisting models to make them suitable for ice streams. She also worries about what she calls a dearth of data needed to provide her model with benchmarks. She isn’t deterred, though, and is invigorated by the prospect of self-directing how she tackles these technical obstacles with input from her advisor, Cecil and Ida Green Career Development Professor Brent Minchew.</p> <p>During the Covid-19 crisis, Ranganathan has appreciated the EAPS department and her advisor for ensuring that events and check-ins remain a regular occurrence and for prioritizing mental health. Although she has adjusted her hours and workflow, Ranganathan believes she has been relatively lucky while access to the MIT campus has been limited. “My work is quite easy to take remote, since it is entirely computer-based work. So, my days haven’t changed too much, with the exception of my physical location,” she notes. “The biggest trick I’ve learned is to be OK with everything not being exactly the same as it would have been if we were working in person.”</p> <p>Ranganathan still meets with her office mate every morning for coffee, albeit virtually, and continues to find encouragement in her fellow lab group-mates, whom she describes as smart, driven, and diverse, and brought together by a love for ice and glaciers. She considers the EAPS students in general a welcoming part of being at MIT. “They’re passionate and friendly. I love how active our students are in science communication, outreach, and climate activism,” she comments.</p> <p><strong>Ice sheets of paper</strong></p> <p>Co-president of WiXII, the Women in Course 12 group, Ranganathan is well-versed in communication and outreach herself. She enjoys writing — fiction as well as journalism — and has previously contributed articles to <em>Scientific American</em>. She uses her writing as a means to elevate awareness of climate issues and generally focuses on the interplay between climate and society. Her 2019 TEDx talk focused on human relationships with ice — how the last two decades of scientific study have completely changed how society understands ice sheets.</p> <p>Amazingly, Ranganathan has learned everything she knows about earth science, climate science, and glaciology since joining MIT in 2017.
“I never realized how much you learn so quickly in graduate school.” She hopes to continue down a similar track in her future career, addressing important aspects of glaciology that still need answers. She might want to try field work someday. When asked what’s left to accomplish, she joked, “Do the thesis! Write the thesis!”&nbsp;</p> EAPS graduate student Meghana Ranganathan studies glaciers to better calibrate climate models. Photo courtesy of Meghana Ranganathan. https://news.mit.edu/2020/why-mediterranean-climate-change-hotspot-0617 MIT analysis uncovers the basis of the severe rainfall declines predicted by many models. Wed, 17 Jun 2020 09:55:48 -0400 https://news.mit.edu/2020/why-mediterranean-climate-change-hotspot-0617 David L. Chandler | MIT News Office <p>Although global climate models vary in many ways, they agree on this: The Mediterranean region will be significantly drier in coming decades, potentially seeing 40 percent less precipitation during the winter rainy season.</p><p>An analysis by researchers at MIT has now identified the underlying mechanisms that explain the anomalous effects in this region, especially in the Middle East and in northwest Africa. The analysis could help refine the models and add certainty to their projections, which have significant implications for the management of water resources and agriculture in the region.</p><p><a href=”https://journals.ametsoc.org/jcli/article/33/14/5829/347612/Why-Is-the-Mediterranean-a-Climate-Change-Hot-Spot” target=”_blank”>The study</a>, published last week in the <em>Journal of Climate</em>, was carried out by MIT graduate student Alexandre Tuel and professor of civil and environmental engineering Elfatih Eltahir.</p><p>The different global circulation models of the Earth’s changing climate agree that temperatures virtually everywhere will increase, and in most places so will rainfall, in part because warmer air can carry more water vapor. However, “There is one major exception, and that is the Mediterranean area,” Eltahir says. The region shows the greatest decline in projected rainfall of any landmass on Earth.</p><p>“With all their differences, the models all seem to agree that this is going to happen,” he says, although they differ on the amount of the decline, ranging from 10 percent to 60 percent. But nobody had previously been able to explain why.</p><p>Tuel and Eltahir found that this projected drying of the Mediterranean region is a result of the confluence of two different effects of a warming climate: a change in the dynamics of upper atmosphere circulation and a reduction in the temperature difference between land and sea. Neither factor by itself would be sufficient to account for the anomalous reduction in rainfall, but in combination the two phenomena can fully account for the unique drying trend seen in the models.</p><p>The first effect is a large-scale phenomenon, related to powerful high-altitude winds called the midlatitude jet stream, which drive a strong, steady west-to-east weather pattern across Europe, Asia, and North America.
Tuel says the models show that “one of the robust things that happens with climate change is that as you increase the global temperature, you’re going to increase the strength of these midlatitude jets.”</p><p>But in the Northern Hemisphere, those winds run into obstacles: mountain ranges including the Rockies, Alps, and Himalayas collectively impart a kind of wave pattern onto this steady circulation, resulting in alternating zones of higher and lower air pressure. High pressure is associated with clear, dry air, and low pressure with wetter air and storm systems. But as the air gets warmer, this wave pattern gets altered.</p><p>“It just happened that the geography of where the Mediterranean is, and where the mountains are, impacts the pattern of air flow high in the atmosphere in a way that creates a high pressure area over the Mediterranean,” Tuel explains. That high-pressure area creates a dry zone with little precipitation.</p><p>However, that effect alone can’t account for the projected Mediterranean drying. That requires the addition of a second mechanism, the reduction of the temperature difference between land and sea. That difference, which helps to drive winds, will also be greatly reduced by climate change, because the land is warming up much faster than the seas.</p><p>“What’s really different about the Mediterranean compared to other regions is the geography,” Tuel says. “Basically, you have a big sea enclosed by continents, which doesn’t really occur anywhere else in the world.” While models show the surrounding landmasses warming by 3 to 4 degrees Celsius over the coming century, the sea itself will only warm by about 2 degrees or so. “Basically, the difference between the water and the land becomes smaller with time,” he says.</p><p>That, in turn, amplifies the pressure differential, adding to the high-pressure area that drives a clockwise circulation pattern of winds surrounding the Mediterranean basin. And because of the specifics of local topography, projections show the two areas hardest hit by the drying trend will be northwest Africa, including Morocco, and the eastern Mediterranean region, including Turkey and the Levant.</p><p>That trend is not just a projection, but has already become apparent in recent climate trends across the Middle East and western North Africa, the researchers say. “These are areas where we already detect declines in precipitation,” Eltahir says. It’s possible that these rainfall declines may even have contributed to political unrest in the already parched region, he says.</p><p>“We document from the observed record of precipitation that this eastern part has already experienced a significant decline of precipitation,” Eltahir says. The fact that the underlying physical processes are now understood will help to ensure that these projections are taken seriously by planners in the region, he says, and will provide much greater confidence by enabling them “to understand the exact mechanisms by which that change is going to happen.”</p><p>Eltahir has been working with government agencies in Morocco to help them translate this information into concrete planning. “We are trying to take these projections and see what would be the impacts on availability of water,” he says.
“That potentially will have a lot of impact on how Morocco plans its water resources, and also how they could develop technologies that could help them alleviate those impacts through better management of water at the field scale, or maybe through precision agriculture using higher technology.”</p><p>The work was supported by the collaborative research program between Université Mohamed VI Polytechnique in Morocco and MIT.</p> Global climate models agree that the Mediterranean area will be significantly drier, potentially seeing 40 percent less precipitation during the winter rainy season in the already parched regions of the Middle East and North Africa. https://news.mit.edu/2020/professor-jinhua-zhao-0616 Associate Professor Jinhua Zhao, who will direct the new MIT Mobility Initiative, brings behavioral science to urban transportation. Mon, 15 Jun 2020 23:59:59 -0400 https://news.mit.edu/2020/professor-jinhua-zhao-0616 Peter Dizikes | MIT News Office <p>It’s easy to think of urban mobility strictly in terms of infrastructure: Does an area have the right rail lines, bus lanes, or bike paths? How much parking is available? How well might autonomous vehicles work? MIT Associate Professor Jinhua Zhao views matters a bit differently, however.</p><p>To understand urban movement, Zhao believes, we also need to understand people. How do people choose to use transport? Why do they move around, and when? How does their self-image influence their choices?</p><p>“The main part of my own thinking is the recognition that transportation systems are half physical infrastructure, and half human beings,” Zhao says.</p><p>Now, after two decades as a student and professor at MIT, he has built up an impressive body of research flowing from this approach. A bit like the best mobility systems, Zhao’s work is multimodal. He divides his scholarship into three main themes. The first covers the behavioral foundations of urban mobility: the attitudinal and emotional aspects of transportation, such as the pride people take in vehicle ownership, the experience of time spent in transit, and the decision making that results in large-scale mobility patterns within urban regions.</p><p>Zhao’s second area of scholarship applies these kinds of insights to design work, exploring how to structure mobility systems with behavioral concepts in mind. What are people’s risk preferences concerning autonomous vehicles? Will people use them in concert with existing transit? How do people’s individual characteristics affect their willingness to take ride-sharing opportunities?</p><p>Zhao’s third theme is policy-oriented: Do mobility systems provide access and fairness? Are they met with acceptance? Here Zhao’s work ranges across countries, including China, Singapore, the U.K., and the U.S., examining topics like access to rail, compliance with laws, and the public perception of transportation systems.</p><p>Within these themes, a tour of Zhao’s research reveals specific results across a wide swath of transportation issues. He has studied how multimodal smartcards affect passenger behavior (they distinctly help commuters); examined the effects of off-peak discounts on subway ridership (they reduce crowding); quantified “car pride,” the sense in which car ownership stems from social status concerns (it’s prevalent in developing countries, as well as in the U.S.).
He has also observed how a legacy of rail transit relates to car-ownership rates even after rail lines vanish, and discovered how potential discriminatory attitudes with respect to class and race influence preferences toward ridesharing.</p><p>“People make decisions in all sorts of different ways,” Zhao says. “The notion that people wake up and calculate the utility of taking the car versus taking the bus — or walking, or cycling — and find the one that maximizes their utility doesn’t speak to reality.”</p><p>Zhao also wants to make sure that decision makers recognize the importance of these personal factors in the overall success of their mobility systems.</p><p>“I study policy from the individual subject’s point of view,” says Zhao. “I’m a citizen. How do I think about it? Do I think this is fair? Do I understand it enough? Do I comply with the policy? It is more of a behavioral approach to policy studies.”</p><p>To be sure, Zhao is more than a researcher; he is an active mentor of MIT students, having been director of the JTL Urban Mobility Lab and the MIT Transit Lab, and chair of the PhD program in the Department of Urban Studies and Planning (DUSP). At the MIT Energy Initiative (MITEI), Zhao is also co-director of the Mobility Systems Center. For his research and teaching, Zhao was awarded tenure last year at MIT.</p><p>This May, Zhao added another important role to his brief: He was named director of the new <a href=”https://www.mobilityinitiative.mit.edu/”>MIT Mobility Initiative</a>, an Institute-wide effort designed to cultivate a dynamic intellectual community on mobility and transportation, redefine the interdisciplinary education program, and effect fundamental changes in the long-term trajectory of mobility development in the world.</p><p>“We are at the dawn of the most profound changes in transportation: an unprecedented combination of new technologies, such as autonomy, electrification, computation and AI, and new objectives, including decarbonization, public health, economic vibrancy, data security and privacy, and social justice,” says Zhao. “The timeframe for these changes — decarbonization in particular — is short in a system with massive amounts of fixed, long-life assets and entrenched behavior and culture. It’s this combination of new technologies, new purposes, and urgent timeframes that makes an MIT-led Mobility Initiative critical at this moment.”</p><p><strong>How much can preferences be shaped?</strong></p><p>Zhao says this is an “exhilarating” age for transportation scholarship. And questions surrounding the shape of mobility systems will likely only grow due to the uncertainties introduced by the ongoing Covid-19 pandemic.</p><p>“If in the 1980s you asked people what the [mobility] system would look like 20 years in the future, they would say it would probably be the same,” Zhao says. “Now, really nobody knows what it will look like.”</p><p>Zhao grew up in China and attended Tongji University in Shanghai, graduating with a bachelor’s degree in planning in 2001.
He then came to MIT for his graduate studies, emerging with three degrees from DUSP: a master’s in city planning and a master’s in transportation, in 2004, and a PhD in 2009.</p><p>For his doctoral dissertation, working with Joseph Ferreira of DUSP and Nigel Wilson of the Department of Civil and Environmental Engineering, Zhao examined what he calls “preference-accommodating versus preference-shaping” approaches to urban mobility.</p><p>The preference-accommodating approach, Zhao says, assumes that “people know what they want, and no one else has any right to say” what those tastes should be. But the preference-shaping approach asks, “To the degree preferences can be shaped, should they?” Tastes that we think of as almost instinctual, like the love of cars in the U.S., are much more the result of commercial influence than we usually recognize, he believes.</p><p>While that distinction was already important to Zhao when he was a student, the acceleration of climate change has made it a more urgent issue now: Can people be nudged toward a lifestyle that centers more around sustainable modes of transportation?</p><p>“People like cars today,” Zhao says. “But the auto industry spends hundreds of millions of dollars annually to construct those preferences. If every one of the 7.7 billion human beings strives to have a car as part of a successful life, no technical solutions exist today to satisfy this desire without destroying our planet.”</p><p>For Zhao, this is not an abstract discussion. A few years ago, Zhao and his colleagues Fred Salvucci, John Attanucci, and Julie Newman helped work on reforms to MIT’s own acclaimed transportation policy. Those changes <a href=”http://news.mit.edu/2016/access-mit-program-offers-free-public-transit-to-mit-employees-0614″>fully subsidized</a> mass transit for employees and altered campus parking fees, resulting in fewer single-occupant vehicles commuting to the Institute, reduced parking demand, and greater employee satisfaction.</p><p><strong>Pursuing “joyful” time in the classroom</strong></p><p>For all his research productivity, Zhao considers teaching to be at the core of his MIT responsibilities; he has received the “<a href=”http://news.mit.edu/2020/faithfully-supporting-wellbeing-0417″>Committed to Caring</a>” award from MIT’s Office of Graduate Education and finds classroom discussions to be the most energizing part of his job.</p><p>“That’s really the most joyful time I have here,” Zhao says.</p><p>Indeed, Zhao emphasizes, students are the essential fuel powering MIT’s notably interdisciplinary activities.</p><p>“I find that students are often the intermediaries that connect faculty,” Zhao says. “Most of my PhD students construct a dissertation committee that, beyond me as a supervisor, has faculty from other departments. That student will get input from economists, computer scientists, business professors. And that student brings three to four faculty together that would otherwise rarely talk to each other. I explicitly encourage students to do that, and they really enjoy it.”</p><p>His own research will always be a work in progress, Zhao says. Cities are complex, mobility systems are intricate, and the needs of people are ever-changing. So there will always be new problems for planners to study — and perhaps answer.</p><p>“Urban mobility is not something that a few brilliant researchers can work on for a year and solve,” Zhao concludes.
“We have to have some degree of humility to accept its complexity.”</p> “If in the 1980s you asked people what would the [mobility] system look like 20 years in the future, they would say it would probably be the same,” Associate Professor Jinhua Zhao says. “Now, really nobody knows what it will look like.” Image: Illustration by Jose-Luis Olivares, MIT. Based on a photo by Martin Dee. https://news.mit.edu/2020/swift-solar-startup-mit-roots-develops-lightweight-solar-panels-0615 “The inventions and technical advancements of Swift Solar have the opportunity to revolutionize the format of solar photovoltaic technology.” Mon, 15 Jun 2020 14:10:01 -0400 https://news.mit.edu/2020/swift-solar-startup-mit-roots-develops-lightweight-solar-panels-0615 Kathryn M. O’Neill | MIT Energy Initiative <p>Joel Jean PhD ’17 spent two years working on <a href=”http://energy.mit.edu/research/future-solar-energy/” target=”_blank”><em>The Future of Solar Energy</em></a>, a report published by the MIT Energy Initiative (MITEI) in 2015. Today, he is striving to create that future as CEO of Swift Solar, a startup that is developing lightweight solar panels based on perovskite semiconductors.</p> <p>It hasn’t been a straight path, but Jean says his motivation — one he shares with his five co-founders — is the drive to address climate change. “The whole world is finally starting to see the threat of climate change and that there are many benefits to clean energy. That’s why we see such huge potential for new energy technologies,” he says.</p> <p>Max Hoerantner, co-founder and Swift Solar’s vice president of engineering, agrees. “It’s highly motivating to have the opportunity to put a dent into the climate change crisis with the technology that we’ve developed during our PhDs and postdocs.”</p> <p>The company’s international team of founders — from the Netherlands, Austria, Australia, the United Kingdom, and the United States — has developed a product with the potential to greatly increase the use of solar power: a very lightweight, super-efficient, inexpensive, and scalable solar cell.</p> <p>Jean and Hoerantner also have experience building a solar research team, gained working at <a href=”https://gridedgesolar.org/” target=”_blank”>GridEdge Solar</a>, an interdisciplinary MIT research program that works toward scalable solar and is funded by the Tata Trusts and run out of MITEI’s <a href=”https://tatacenter.mit.edu/” target=”_blank”>Tata Center for Technology and Design</a>.</p> <p>“The inventions and technical advancements of Swift Solar have the opportunity to revolutionize the format of solar photovoltaic technology,” says <a href=”http://energy.mit.edu/profile/vladimir-bulovic/” target=”_blank”>Vladimir Bulović</a>, the Fariborz Maseeh (1990) Professor of Emerging Technology in MIT’s Department of Electrical Engineering and Computer Science, director of MIT.nano, and a science advisor for Swift Solar.</p> <p><strong>Tandem photovoltaics</strong></p> <p>The product begins with perovskites — a class of materials that are cheap, abundant, and great at absorbing and emitting light, making them good semiconductors for solar energy conversion.</p> <p>Using perovskites for solar generation took off about 10 years ago because the materials can be much more efficient at converting sunlight to electricity than the crystalline silicon typically used in solar panels today.
They are also lightweight and flexible, whereas crystalline silicon is so brittle it needs to be protected by rigid glass, making most solar panels today about as large and heavy as a patio door.</p> <p>Many researchers and entrepreneurs have rushed to capitalize on those advantages, but Swift Solar has two core technologies that its founders see as their competitive edge. First, they are using two layers of perovskites in tandem to boost efficiency. “We’re putting two perovskite solar cells stacked on top of each other, each absorbing different parts of the spectrum,” Hoerantner says. Second, Swift Solar employs a proprietary scalable deposition process to create its perovskite films, which drives down manufacturing costs.</p> <p>“We’re the only company focusing on high-efficiency all-perovskite tandems. They’re hard to make, but we believe that’s where the market is ultimately going to go,” Jean says.</p> <p>“Our technologies enable much cheaper and more ubiquitous solar power through cheaper production, reduced installation costs, and more power per unit area,” says Sam Stranks, co-founder and lead scientific advisor for Swift Solar as well as an assistant professor in the Department of Chemical Engineering and Biotechnology at the University of Cambridge in the United Kingdom. “Other commercial solar photovoltaic technologies can do one or the other [providing either high power or light weight and flexibility], but not both.”</p> <p>Bulović says technology isn’t the only reason he expects the company to make a positive impact on the energy sector. “The success of a startup is initiated by the quality of the first technical ideas, but is sustained by the quality of the team that builds and grows the technology,” he says. “Swift Solar’s team is extraordinary.”</p> <p>Indeed, Swift Solar’s six co-founders together have six PhDs, four Forbes 30 Under 30 fellowships, and more than 80,000 citations. Four of them — Tomas Leijtens, Giles Eperon, Hoerantner, and Stranks — earned their doctorates at Oxford University in the United Kingdom, working with one of the pioneers of perovskite photovoltaics, Professor Henry Snaith. Stranks then came to MIT to work with Bulović, who is also widely recognized as a leader in next-generation photovoltaics and an experienced entrepreneur. (Bulović is a co-inventor of some of the patents the business is licensing from MIT.)</p> <p>Stranks met Jean at MIT, where Hoerantner later completed a postdoc working at GridEdge Solar. And the sixth co-founder, Kevin Bush, completed his PhD at Stanford University, where Leijtens did a postdoc with Professor Michael McGehee, another leading perovskite researcher and advisor to Swift. What ultimately drew them all together was the desire to address climate change.</p> <p>“We were all independently thinking about how we could have an impact on climate change using solar technology, and a startup seemed like the only real direction that could have an impact at the scale the climate demands,” Jean says. The team first met in a Google Hangouts session spanning three time zones in early 2016. Swift Solar was officially launched in November 2017.</p> <p><strong>MITEI study</strong></p> <p>Interestingly, Jean says it was his work on <em>The Future of Solar Energy</em> — rather than his work in the lab — that most contributed to his role in the founding of Swift Solar. 
The study team of more than 30 experts, including Jean and Bulović, investigated the potential for expanding solar generating capacity to the multi-terawatt scale by mid-century. They determined that the main goal of U.S. solar policy should be to build the foundation for a massive scale-up of solar generation over the next few decades.</p> <p>“I worked on quantum dot and organic solar cells for most of my PhD, but I also spent a lot of time looking at energy policy and economics, talking to entrepreneurs, and thinking about what it would take to succeed in tomorrow’s solar market. That made me less wedded to a single technology,” Jean says.</p> <p>Jean’s work on the study led to a much-cited publication, “<a href=”https://pubs.rsc.org/en/content/articlelanding/2015/ee/c4ee04073b#!divAbstract” target=”_blank”>Pathways for Solar Photovoltaics</a>” in <em>Energy &amp; Environmental Science</em> (2015), and to his founding leadership role with GridEdge Solar. “Technical advancements and insights gained in this program helped launch Swift Solar as a hub for novel lightweight solar technology,” Bulović says.</p> <p>Swift Solar has also benefited from MIT’s entrepreneurial ecosystem, Jean says, noting that he took 15.366 (MIT Energy Ventures), a class on founding startups, and got assistance from the Venture Mentoring Service. “There were a lot of experiences like that that have really informed where we’re going as a company,” he says.</p> <p>Stranks adds, “MIT provided a thriving environment for exploring commercialization ideas in parallel to our tech development. Very few places could combine both so dynamically.”</p> <p>Swift Solar raised its first seed round of funding in 2018 and moved to the Bay Area of California last summer after incubating for a year at the U.S. Department of Energy’s National Renewable Energy Laboratory in Golden, Colorado. The team is now working to develop its manufacturing processes so that it can scale its technology up from the lab to the marketplace.</p> <p>The founders say their first goal is to develop specialized high-performance products for applications that require high efficiency and light weight, such as unmanned aerial vehicles and other mobile applications. “Wherever there is a need for solar energy and lightweight panels that can be deployed in a flexible way, our products will find a good use,” Hoerantner says.</p> <p>Scaling up will take time, but team members say the high stakes associated with climate change make all the effort worthwhile.</p> <p>“My vision is that we will be able to grow quickly and efficiently to realize our first products within the next two years, and to supply panels for rooftop and utility-scale solar applications in the longer term, helping the world rapidly transform to an electrified, low-carbon future,” Stranks says.</p> <p><em>This article appears in the&nbsp;</em><a href=”http://energy.mit.edu/energy-futures/spring-2020/” target=”_blank”>Spring 2020</a><em> issue&nbsp;of&nbsp;</em>Energy Futures<em>, the magazine of the MIT Energy Initiative.</em></p> Joel Jean PhD ’17, co-founder of Swift Solar, stands in front of the company’s sign at its permanent location in San Carlos, California. Photo courtesy of Joel Jean.
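The intuition behind tandem stacking can be made concrete with a short calculation. The sketch below is not Swift Solar's design or data; it is a toy "ultimate efficiency" estimate under deliberately crude assumptions: sunlight is approximated by a 5800 K blackbody photon flux, every absorbed photon delivers exactly the band-gap energy of the cell that absorbs it, and all other losses are ignored. The band-gap values are illustrative placeholders, not the company's.

```python
import numpy as np

# Toy comparison of one junction versus a two-junction stack.
# Crude assumptions (see text): a 5800 K blackbody photon flux stands in
# for sunlight, and each absorbed photon yields exactly the band-gap energy.
KB = 1.381e-23   # Boltzmann constant, J/K
EV = 1.602e-19   # joules per electronvolt
T_SUN = 5800.0   # effective solar temperature, K (assumed)

E = np.linspace(0.1, 6.0, 5000) * EV          # photon energy grid (J)
dE = E[1] - E[0]
flux = E**2 / (np.exp(E / (KB * T_SUN)) - 1)  # blackbody photon flux, arb. units

def usable_fraction(gaps_ev):
    """Fraction of incident power delivered when each photon is absorbed
    by the widest-gap cell whose gap lies at or below the photon energy."""
    out, claimed = 0.0, np.zeros_like(E, dtype=bool)
    for gap in sorted(gaps_ev, reverse=True):   # top (widest-gap) cell first
        absorbed = (E >= gap * EV) & ~claimed
        out += gap * EV * flux[absorbed].sum() * dE
        claimed |= absorbed
    return out / ((flux * E).sum() * dE)        # normalize by incident power

print(f"single junction, 1.34 eV gap:  {usable_fraction([1.34]):.1%}")
print(f"tandem, 1.8 eV over 1.2 eV:    {usable_fraction([1.8, 1.2]):.1%}")
```

Even in this idealized accounting, splitting the spectrum between a wide-gap top cell and a narrow-gap bottom cell recovers energy that a single junction must throw away as heat, which is the advantage the founders describe.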
https://news.mit.edu/2020/sand-grains-massive-glacial-surges-0612 New model answers longstanding question of how these sudden flows happen; may expand understanding of Antarctic ice sheets. Fri, 12 Jun 2020 15:17:33 -0400 https://news.mit.edu/2020/sand-grains-massive-glacial-surges-0612 Jennifer Chu | MIT News Office <p>About 10 percent of the Earth’s land mass is covered in glaciers, most of which slip slowly across the land over years, carving fjords and trailing rivers in their wake. But about 1 percent of glaciers can suddenly surge, spilling over the land at 10 to 100 times their normal speed.&nbsp;</p><p>When this happens, a glacial surge can set off avalanches, flood rivers and lakes, and overwhelm downstream settlements.
What triggers the surges themselves has been a longstanding question in the field of glaciology.&nbsp;</p><p>Now scientists at MIT and Dartmouth College have developed a model that pins down the conditions that would trigger a glacier to surge. Through their model, the researchers find that glacial surge is driven by the conditions of the underlying sediment, and specifically by the tiny grains of sediment that lie beneath a towering glacier.</p><p>“There’s a huge separation of scales: Glaciers are these massive things, and it turns out that their flow, this incredible amount of momentum, is somehow driven by grains of millimeter-scale sediment,” says Brent Minchew, the Cecil and Ida Green Assistant Professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “That’s a hard thing to get your head around. And it’s exciting to open up this whole new line of inquiry that nobody had really considered before.”</p><p>The new model of glacial surge may also help scientists better understand the behavior of larger masses of moving ice.&nbsp;</p><p>“We think of glacial surges as natural laboratories,” Minchew says. “Because they’re this extreme, transient event, glacial surges give us this window into how other systems work, such as the fast-flowing streams in Antarctica, which are the things that matter for sea-level rise.”</p><p>Minchew and his co-author Colin Meyer of Dartmouth have published their results this month in the journal&nbsp;<em>Proceedings of the Royal Society A</em>.&nbsp;</p><p><strong>A glacier breaks loose</strong></p><p>While he was still a PhD student, Minchew was reading through “The Physics of Glaciers,” the standard textbook in the field of glaciology, when he came across a rather bleak passage on the prospect of modeling a glacial surge. The passage outlined the basic requirements of such a model and closed with a pessimistic outlook, noting that “such a model has not been established, and none is in view.”</p><p>Rather than be discouraged, Minchew took this statement as a challenge, and as part of his thesis began to lay out the framework for a model to describe the triggering events for a glacial surge.&nbsp;</p><p>As he quickly realized, the handful of models that existed at the time were based on the assumption that most surge-type glaciers lay atop bedrock — rough and impermeable surfaces that the models assumed remained unchanged as glaciers flowed across. But scientists have since observed that glacial surges often occur not over solid rock, but instead across shifting sediment.</p><p>Minchew’s model simulates a glacier’s movement over a permeable layer of sediment, made up of individual grains, the size of which he can adjust in the model to study both the interactions of the grains within the sediment, and ultimately, the glacier’s movement in response.&nbsp;</p><p>The new model shows that as a glacier moves at a normal rate across a sediment bed, the grains at the top of the sediment layer, in direct contact with the glacier, are dragged along with the glacier at the same speed, while the grains toward the middle move slower, and those at the bottom stay put.&nbsp;</p><p>This layered shifting of grains creates a shearing effect within the sediment layer. At the microscale, the model shows that this shearing occurs in the form of individual sediment grains that roll up and over each other. 
As grains roll up, over, and away with the glacier, they open up spaces within the water-saturated sediment layer that expand, providing pockets for the water to seep into. This creates a decrease in water pressure, which acts to strengthen the sedimentary material as a whole, creating a sort of resistance against the sediment’s grains and making it harder for them to roll along with the moving glacier.&nbsp;</p><p>However, as a glacier accumulates snowfall, it thickens and its surface steepens, which increases the shear forces acting on the sediment. As the sediment weakens, the glacier starts flowing faster and faster.&nbsp;</p><p>“The faster it flows, the more the glacier thins, and as you start to thin, you’re decreasing the load to the sediment, because you’re decreasing the weight of the ice. So you’re bringing the weight of the ice closer to the sediment’s water pressure. And that ends up weakening the sediment,” Minchew explains. “Once that happens, everything starts to break loose, and you get a surge.”&nbsp;</p><p><strong>Antarctic shearing</strong></p><p>As a test of their model, the researchers compared predictions of their model to observations of two glaciers that recently experienced surges, and found that the model was able to reproduce the flow rates of both glaciers with reasonable precision.&nbsp;</p><p>In order to predict which glaciers will surge and when, the researchers say scientists will have to know something about the strength of the underlying sediment, and in particular, the size distribution of the sediment’s grains. If these measurements can be made of a particular glacier’s environment, the new model can be used to predict when and by how much that glacier will surge.&nbsp;</p><p>Beyond glacial surges, Minchew hopes the new model will help to illuminate the mechanics of ice flow in other systems, such as the ice sheets in West Antarctica.&nbsp;</p><p>“It’s within the realm of possibility that we could get 1 to 3 meters of sea-level rise from West Antarctica within our lifetimes,” Minchew says. “This type of shearing mechanism in glacial surges could play a major role in determining the rates of sea-level rise you’d get from West Antarctica.”</p><p>This research was funded, in part, by the U.S. National Science Foundation and NASA.</p> A surging glacier in the St. Elias Mountains, Canada. Credit: Gwenn Flowers
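The feedback Minchew describes lends itself to a compact numerical caricature. The sketch below is not the published model; it is a minimal illustration assuming a Mohr-Coulomb bed, where sediment strength is proportional to the ice overburden minus a fixed pore-water pressure, and a driving stress set by ice thickness and surface slope. All parameter values are chosen only for illustration.

```python
import math

# Minimal caricature of the surge feedback described above (not the
# published model). Assumptions: Mohr-Coulomb sediment strength
#   tau_s = mu * max(ice overburden - pore-water pressure, 0)
# and glacier driving stress
#   tau_d = rho_ice * g * H * sin(alpha).
RHO_ICE = 917.0              # ice density, kg/m^3
G = 9.81                     # gravity, m/s^2
MU = 0.5                     # sediment friction coefficient (assumed)
P_WATER = 2.4e6              # pore-water pressure in the bed, Pa (assumed)
ALPHA = math.radians(2.0)    # surface slope (assumed)

def driving_stress(h):
    """Shear stress the glacier applies to its bed, in Pa."""
    return RHO_ICE * G * h * math.sin(ALPHA)

def bed_strength(h):
    """Frictional strength of the water-saturated sediment, in Pa."""
    effective_stress = max(RHO_ICE * G * h - P_WATER, 0.0)
    return MU * effective_stress

for h in (340, 320, 300, 280, 260):  # a thinning glacier, thickness in meters
    tau_d, tau_s = driving_stress(h), bed_strength(h)
    state = "SURGE" if tau_d > tau_s else "stable"
    print(f"H = {h} m: driving {tau_d/1e3:5.0f} kPa vs strength {tau_s/1e3:5.0f} kPa -> {state}")
```

In this toy version, thinning pulls the ice overburden toward the fixed pore-water pressure, so the bed's frictional strength collapses much faster than the driving stress declines; the stable branch gives way to a surge, mirroring the weakening Minchew describes.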
https://news.mit.edu/2020/secrets-of-plastic-eating-enzyme-petase-improve-recycling-linda-zhong-0608 Graduate student Linda Zhong and professor of biology Anthony Sinskey are studying the plastic-devouring enzyme PETase as a way to improve recycling. Mon, 08 Jun 2020 14:15:01 -0400 https://news.mit.edu/2020/secrets-of-plastic-eating-enzyme-petase-improve-recycling-linda-zhong-0608 Fernanda Ferreira | School of Science <p>It was during a cruise in Alaska that Linda Zhong realized that the world didn’t have to be full of plastic. “I grew up in cities, so you’re very used to seeing all kinds of trash everywhere,” says the graduate student in microbiology. Zhong, who is Canadian and lived in Ottawa growing up and in Toronto during college, routinely saw trash in the waters of the Ottawa River and on the beaches around Lake Ontario. “You never see it as anything other than normal.”</p> <p>Alaska changed that. Seeing the pristine, plastic-free landscape, Zhong decided to find a way to get rid of plastic waste. “I’m a biologist, so I approached it from a biological standpoint,” she says.</p> <p>Plastic pollution is a global problem. According to the United Nations Environment Program, an estimated 8.3 billion tons of plastic have been produced since the 1950s.
More than 60 percent of that has ended up in landfills and the environment. A major type of plastic is polyethylene terephthalate, or PET, which most water bottles are made of. Even though PET is easier to recycle than other types of plastic, in practice little of it actually gets recycled.</p> <p>“There are two ways to recycle PET: one is mechanical, and the other’s chemical,” says Zhong. Chemical recycling, which converts PET back to its original raw materials, is theoretically a closed loop in terms of material flow, but not so in practice. “For the most part, no one uses it right now because it’s so costly,” explains Zhong.</p> <p>Mechanical recycling involves melting PET into small pellets that can be used to make new products. It’s a much cheaper process but, as Zhong says, it can’t be done infinitely. “Companies can recycle a bottle into another a handful of times before the material is too degraded to make bottles,” she says. When this happens, the degraded material is thrown out, ending up in landfills or in the ocean. Zhong’s ultimate goal is to reduce that massive material loss.</p> <p>Before arriving at MIT, Zhong began looking for organisms that could degrade plastic and learned that a group in Japan had published a paper on <em>Ideonella sakaiensis</em>. “It’s this weird environmental microbe that really likes digesting weird compounds,” says Zhong. One of those weird compounds is PET.</p> <p>With the organism found, Zhong set her sights on the enzyme produced by <em>Ideonella sakaiensis</em> that digests plastic: PETase. When Zhong got into MIT, she brought the project with her and proposed it to her advisor, professor of biology Anthony Sinskey.</p> <p>As Zhong delved into the project, her aims changed. “At the beginning, I really wanted to do a screen and rapidly evolve this enzyme to make it better,” she says. That is still ongoing, but as Zhong learned more about PETase, she realized that there was a huge gap in the field’s understanding of how it works. “I keep finding myself stumbling over what the literature says and what my results show,” says Zhong.</p> <p>This led Zhong to shift her experiments to more fundamental questions. “I started developing methods to study this enzyme in more detail,” says Zhong. Previous assays that looked at PETase would measure the breakdown of plastic 24 hours after the enzyme was added. “My method allows me to start taking measurements within 30 minutes, and it shows me a lot more about what the enzyme does over time.” Zhong explains that understanding how PETase truly works is essential before engineering it to digest plastic more efficiently. “So, getting that fundamental picture of the enzyme and establishing good methods to study it are what I’m focusing on.”</p> <p>Right now, Zhong is working from home due to the Covid-19 health crisis, balancing her time between reading papers and cooking. “It’s sort of my replacement for experiments, since it’s something I do with my hands at a bench,” says Zhong. But cooking isn’t a perfect substitute and she still can’t wait to get back to the lab. “I really want to find the answers to the questions I’ve just started exploring,” Zhong says.</p> <p>Zhong and Sinskey received a grant from the Ally of Nature Fund to help fund the PETase project.
The fund was established in 2007 by MIT alumni Audrey Buyrn ’58, SM ’63, PhD ’66 and her late husband Alan Phillips ’57, PhD ’61 to provide support for projects whose purpose is to prevent, reduce, and repair humanity’s impact on the natural environment.</p> <p>In 2019, according to Zhong, the fund was a boon. “Because it was a new project in the lab, we had no funding,” she says. The Ally of Nature grant also has no spending restrictions, which is ideal for a project that has moved beyond bioengineering to encompass biochemistry and fundamental biology. “I didn’t have a budget, because I didn’t know what I needed,” says Zhong. “But now I can buy what I need when I need it.”</p> Graduate student Linda Zhong works in Professor Anthony Sinskey’s biology lab on an answer for plastic pollution. Photo courtesy of Linda Zhong. https://news.mit.edu/2020/peatland-drainage-southeast-asia-climate-change-0604 Study reveals drainage, deforestation of the region’s peatlands, which leads to fires, greenhouse emissions, land subsidence. Thu, 04 Jun 2020 11:00:00 -0400 https://news.mit.edu/2020/peatland-drainage-southeast-asia-climate-change-0604 David L. Chandler | MIT News Office <p>In less than three decades, most of Southeast Asia’s peatlands have been wholly or partially deforested, drained, and dried out. This has released carbon that accumulated over thousands of years from dead plant matter, and has led to rampant wildfires that spew air pollution and greenhouse gases into the atmosphere.</p><p>The startling prevalence of such rapid destruction of the peatlands, and their resulting subsidence, is revealed in a new satellite-based study conducted by researchers at MIT and in Singapore and Oregon. The research was published today in the journal <em>Nature Geoscience</em>, in a paper by Alison Hoyt PhD ’17, a postdoc at the Max Planck Institute for Biogeochemistry; MIT professor of civil and environmental engineering Charles Harvey; and two others.</p> <em><span style=”font-size:10px;”>Video courtesy of Colin Harvey.</span></em> <p>Tropical peatlands are permanently flooded forest lands, where the debris of fallen leaves and branches is preserved by the wet environment and continues to accumulate for centuries, rather than continually decomposing as it does in dryland forests. When drained and dried, either to create plantations or to build roads or canals to extract the timber, the peat becomes highly flammable. Even when unburned it rapidly decomposes, releasing its accumulated store of carbon. This loss of stored carbon leads to subsidence, the sinking of the ground surface, in vulnerable coastal areas.</p><p>Until now, measuring the progression of this draining and drying process has required arduous treks through dense forests and wet land, and help from local people who know their way through the remote trackless swampland. There, poles are dug into the ground to provide a reference to measure the subsidence of the land over time as the peat desiccates. The process is arduous and time-consuming, and thus limited in the areas it can cover.</p><p>Now, Hoyt explains, the team was able to use precise satellite elevation data gathered over a three-year period to get detailed measurements of the degree of subsidence over an area of 2.7 million hectares mostly in Malaysia and Indonesia — more than 10 percent of the total area covered by peatlands in the Southeast Asia region. 
Over 90 percent of the peatland area they studied was subsiding, at an average of almost an inch a year (over 1 foot every 15 years). This subsidence poses a threat to these ecosystems, as most coastal peatlands are at or just above sea level.</p><p>“Peatlands are really unique and carbon rich environments and wetland ecosystems,” Hoyt says. While most previous attempts to quantify their destruction have focused on a few locations or types of land use, by using the satellite data, she says this work represents “the first time that we can make measurements across many different types of land uses rather than just plantations, and across millions of hectares.” This makes it possible to show just how widespread the draining and subsidence of these lands has been.</p><p>“Thirty years ago, or even 20 years ago, this land was covered with pristine rainforest with enormous trees,” Harvey says, and that was still the case even when he began doing research in the area. “In 13 years, I’ve seen almost all of these rainforests just removed. There’s almost none at all anymore, in that short period of time.”</p><p>Because peat is composed almost entirely of organic carbon, measuring how much that land has subsided provides a direct measure of the amount of carbon that has been released into the atmosphere. Unlike other kinds of subsidence seen in drier ecosystems, which can result from compaction of soil, in this case the missing depth of peat reflects matter that has actually been decomposed and lost to the air. “It’s not just compaction. It’s actually mass loss. So measuring rates of subsidence is basically equivalent to measuring emissions of carbon dioxide,” says Harvey, who is also a principal investigator at the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore.&nbsp;</p><p>Some analysts had previously thought that the draining of peatland forests to make way for palm oil plantations was the major cause of peatland loss, but the new study shows that subsidence is widespread across peatlands under a diverse set of land uses. This subsidence is driven by the drainage of tropical peatlands, mostly for the expansion of agriculture, as well as by other causes, such as creating canals for floating timber out of the forests, and digging drainage ditches alongside roads, which can drain large surrounding areas. All of these factors, it turns out, have contributed significantly to the extreme loss of peatlands in the region.</p><p>One longstanding controversy that this new research could help to address is how long the peatland subsidence continues after the lands are drained. Plantation owners have said that this is temporary and the land quickly stabilizes, while some conservation advocates say the process continues, leaving large regions highly vulnerable to flooding as sea levels rise, since most of these lands are only slightly above sea level. The new data suggest that subsidence continues over time, though the rate does slow down.</p><p>The satellite measurements used for this study were gathered between 2007 and 2011 using a method called Interferometric Synthetic Aperture Radar (InSAR), which can detect changes in surface elevation with an accuracy of centimeters or even millimeters.
Though the satellites that produced these data sets are no longer in operation, newer Japanese satellites are now gathering similar data, and the team hopes to do follow-up studies using some of the newer data.</p><p>“This is definitely a proof of concept on how satellite data can help us understand environmental changes happening across the whole region,” Hoyt says. That could help in monitoring regional greenhouse gas output, but could also help in implementing and monitoring local regulations on land use. “This has really exciting management implications, because it could allow us to verify management practices and track hotspots of subsidence,” she says.</p><p>While there has been little interest in the region in curbing peatland drainage in order to curb greenhouse gas emissions, the serious risk of uncontrollable fires in these dried peatlands provides a strong motivation to try to preserve and restore these ecosystems, Harvey says. “These plumes of smoke that engulf the region are a problem that everyone there recognizes.”</p> <p>“This new approach … allows peat subsidence to be easily monitored over very large spatial extents and a diversity of settings that would be impossible using other approaches,” says David Wardle, the Smithsonian Professor of Forest Ecology at Nanyang Technological University in Singapore, who was not associated with this research. “It is in my opinion an important breakthrough that moves forward our understanding of the serious environmental problems that have emerged from peat forest clearing and its conversion and degradation, and, alarmingly, highlights that the problems are worse than we thought they were.”</p><p>The research team also included Estelle Chaussard of the University of Oregon and Sandra Seppalainen ’16. The work was supported by the National Research Foundation, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) program, the Singapore-MIT Alliance for Research and Technology (SMART), the National Science Foundation, and MIT’s Environmental Solutions Initiative.</p> In this photo, Alison Hoyt stands on top of a log during a research trip in a peat swamp forest in Borneo. Tropical peatlands are permanently flooded forest lands, where the debris of fallen leaves and branches is preserved by the wet environment and continues to accumulate for centuries, rather than continually decomposing as it does in dryland forests. Courtesy of Alison Hoyt
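Harvey's point that subsidence is effectively an emissions measurement can be checked with back-of-the-envelope arithmetic. The sketch below is ours, not the paper's: the carbon density of drained tropical peat is an assumed round number, and a careful estimate would also subtract the fraction of subsidence caused by compaction rather than by decomposed mass.

```python
# Back-of-the-envelope: convert regional peat subsidence into CO2 emissions.
# The area and subsidence rate come from the article; the peat carbon
# density is an illustrative assumption, not a value from the study.
AREA_HA = 2.7e6                # peatland area studied, hectares
SUBSIDENCE_M_PER_YR = 0.025    # average subsidence, ~1 inch per year
CARBON_KG_PER_M3 = 50.0        # carbon density of peat, kg C/m^3 (assumed)
CO2_PER_C = 44.0 / 12.0        # kg CO2 released per kg of carbon oxidized

area_m2 = AREA_HA * 1e4                            # 1 hectare = 10,000 m^2
peat_volume_lost = area_m2 * SUBSIDENCE_M_PER_YR   # m^3 decomposed per year
carbon_kg = peat_volume_lost * CARBON_KG_PER_M3
co2_mt = carbon_kg * CO2_PER_C / 1e9               # kilograms -> megatonnes

print(f"peat volume lost: {peat_volume_lost:.2e} m^3 per year")
print(f"implied emissions: roughly {co2_mt:.0f} Mt CO2 per year")
```

Under these assumptions, the studied area alone implies emissions on the order of a hundred megatonnes of CO2 per year, which is why the authors can treat elevation change as a proxy for carbon flux.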
https://news.mit.edu/2020/reflecting-sunlight-cool-planet-storm-0602 Solar geoengineering proposals will weaken extratropical storm tracks in both hemispheres, scientists find. Tue, 02 Jun 2020 09:10:35 -0400 https://news.mit.edu/2020/reflecting-sunlight-cool-planet-storm-0602 Jennifer Chu | MIT News Office <p>How can the world combat the continued rise in global temperatures? How about shading the Earth from a portion of the sun’s heat by injecting the stratosphere with reflective aerosols? After all, volcanoes do essentially the same thing, albeit in short, dramatic bursts: When a Vesuvius erupts, it blasts fine ash into the atmosphere, where the particles can linger as a kind of cloud cover, reflecting solar radiation back into space and temporarily cooling the planet.</p><p>Some researchers are exploring proposals to engineer similar effects, for example by launching reflective aerosols into the stratosphere — via planes, balloons, and even blimps — in order to block the sun’s heat and counteract global warming. But such solar geoengineering schemes, as they are known, could have other long-lasting effects on the climate.</p><p>Now scientists at MIT have found that solar geoengineering would significantly change extratropical storm tracks — the zones in the middle and high latitudes where storms form year-round and are steered by the jet stream across the oceans and land. Extratropical storm tracks give rise to extratropical cyclones, rather than to their tropical cousins, hurricanes. The strength of extratropical storm tracks determines the severity and frequency of storms such as nor’easters in the United States.</p><p>The team considered an idealized scenario in which solar radiation was reflected enough to offset the warming that would occur if carbon dioxide were to quadruple in concentration. In a number of global climate models under this scenario, the strength of storm tracks in both the northern and southern hemispheres weakened significantly in response.</p><p>Weakened storm tracks would mean less powerful winter storms, but the team cautions that weaker storm tracks also lead to stagnant conditions, particularly in summer, and less wind to clear away air pollution. Changes in winds could also affect the circulation of ocean waters and, in turn, the stability of ice sheets.</p><p>“About half the world’s population lives in the extratropical regions where storm tracks dominate weather,” says Charles Gertler, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Our results show that solar geoengineering will not simply reverse climate change. Instead, it has the potential itself to induce novel changes in climate.”</p><p>Gertler and his colleagues have published their results this week in the journal <em>Geophysical Research Letters</em>.
Co-authors include EAPS Professor Paul O’Gorman, along with Ben Kravitz of Indiana University, John Moore of Beijing Normal University, Steven Phipps of the University of Tasmania, and Shingo Watanabe of the Japan Agency for Marine-Earth Science and Technology.</p><p><strong>A not-so-sunny picture</strong></p><p>Scientists have previously modeled what Earth’s climate might look like if solar geoengineering scenarios were to play out on a global scale, with mixed results. On the one hand, spraying aerosols into the stratosphere would reduce incoming solar heat and, to a degree, counteract the warming caused by carbon dioxide emissions. On the other hand, such cooling of the planet would not prevent other greenhouse gas-induced effects such as regional reductions in rainfall and ocean acidification.</p><p>There have also been signs that intentionally reducing solar radiation would shrink the temperature difference between the Earth’s equator and poles or, in climate parlance, weaken the planet’s meridional temperature gradient, cooling the equator while the poles continue to warm. This last consequence was especially intriguing to Gertler and O’Gorman.</p><p>“Storm tracks feed off of meridional temperature gradients, and storm tracks are interesting because they help us to understand weather extremes,” Gertler says. “So we were interested in how geoengineering affects storm tracks.”</p><p>The team looked at how extratropical storm tracks might change under a scenario of solar geoengineering known to climate scientists as experiment G1 of the Geoengineering Model Intercomparison Project (GeoMIP), a project that provides various geoengineering scenarios for scientists to run on climate models to assess their climate effects.</p><p>The G1 experiment assumes an idealized scenario in which a solar geoengineering scheme blocks enough solar radiation to counterbalance the warming that would occur if carbon dioxide concentrations were to quadruple.</p><p>The researchers used results from various climate models run forward in time under the conditions of the G1 experiment. They also used results from a more sophisticated geoengineering scenario with a doubling of carbon dioxide concentrations and aerosols injected into the stratosphere at more than one latitude. In each model, they recorded the day-to-day change in air pressure at sea level at various locations along the storm tracks. These changes reflect the passage of storms and measure a storm track’s energy.</p><p>“If we look at the variance in sea level pressure, we have a sense of how often and how strongly cyclones pass over each area,” Gertler explains. “We then average the variance across the whole extratropical region, to get an average value of storm track strength for the northern and southern hemispheres.”</p><p><strong>An imperfect counterbalance</strong></p><p>Their results, across climate models, showed that solar geoengineering would weaken storm tracks in both the Northern and Southern hemispheres. Depending on the scenario they considered, the storm track in the Northern Hemisphere would be 5 to 17 percent weaker than it is today.</p><p>“A weakened storm track, in both hemispheres, would mean weaker winter storms but also lead to more stagnant weather, which could affect heat waves,” Gertler says. “Across all seasons, this could affect ventilation of air pollution. It also may contribute to a weakening of the hydrological cycle, with regional reductions in rainfall.
These are not good changes, compared to a baseline climate that we are used to.”</p><p>The researchers were curious to see how the same storm tracks would respond to global warming alone, without solar geoengineering, so they ran the climate models again under several warming-only scenarios. Surprisingly, they found that, in the Northern Hemisphere, global warming would also weaken storm tracks, by the same magnitude as with the addition of solar geoengineering. This suggests solar geoengineering, and efforts to cool the Earth by reducing incoming heat, would not do much to alter global warming’s effects, at least on storm tracks — a puzzling outcome that the researchers are unsure how to explain.</p><p>In the Southern Hemisphere, the story is slightly different. They found that global warming alone would strengthen storm tracks there, whereas the addition of solar geoengineering would not only prevent that strengthening but would weaken the storm tracks.</p><p>“In the Southern Hemisphere, winds drive ocean circulation, which in turn could affect uptake of carbon dioxide, and the stability of the Antarctic ice sheet,” O’Gorman adds. “So how storm tracks change over the Southern Hemisphere is quite important.”</p><p>The team also observed that the weakening of storm tracks was strongly correlated with changes in temperature and humidity. Specifically, the climate models showed that in response to reduced incoming solar radiation, the equator cooled significantly as the poles continued to warm. This reduced temperature gradient appears to be sufficient to explain the weakening storm tracks — a result that the group is the first to demonstrate.</p><p>“This work highlights that solar geoengineering is not reversing climate change, but is substituting one unprecedented climate state for another,” Gertler says. “Reflecting sunlight isn’t a perfect counterbalance to the greenhouse effect.”</p><p>Adds O’Gorman: “There are multiple reasons to avoid doing this, and instead to favor reducing emissions of CO<sub>2</sub> and other greenhouse gases.”</p><p>This research was funded, in part, by the National Science Foundation, NASA, and the Industry and Foundation sponsors of the MIT Joint Program on the Science and Policy of Global Change.</p> MIT researchers find that extratropical storm tracks — the blue regions of storminess in the Earth’s middle latitudes — would change significantly with solar geoengineering efforts. Image: Courtesy of the researchers
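<p>The storm-track metric Gertler describes (the variance of the day-to-day change in sea-level pressure, averaged over each hemisphere's extratropics) can be sketched in a few lines of Python. The array shapes, the 30-degree latitude cutoff, and the toy data below are assumptions for illustration, not the study's exact processing.</p>
<pre><code>
import numpy as np

def storm_track_strength(slp, lats):
    """Variance of day-to-day sea-level-pressure change, averaged over
    each hemisphere's extratropics (poleward of 30 degrees, assumed)."""
    d_slp = np.diff(slp, axis=0)      # day-to-day pressure change, shape (time-1, lat, lon)
    var = d_slp.var(axis=0)           # temporal variance at each grid point
    w = np.cos(np.deg2rad(lats))      # simple area weighting by latitude

    def hemisphere_mean(mask):
        return float(np.average(var[mask].mean(axis=-1), weights=w[mask]))

    nh = lats > 30.0
    sh = (-lats) > 30.0
    return {"NH": hemisphere_mean(nh), "SH": hemisphere_mean(sh)}

# Toy usage with random data standing in for daily model output (time, lat, lon)
rng = np.random.default_rng(0)
lats = np.linspace(-89.0, 89.0, 90)
slp = 101325.0 + rng.normal(0.0, 500.0, size=(365, lats.size, 180))
print(storm_track_strength(slp, lats))
</code></pre>
<p>Applied to actual model output, the same reduction yields one strength number per model and scenario, which is the kind of per-model summary that a comparison like the 5 to 17 percent weakening above rests on.</p>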
https://news.mit.edu/2020/machine-learning-map-ocean-0529 An MIT-developed technique could aid in tracking the ocean’s health and productivity. Fri, 29 May 2020 14:00:00 -0400 https://news.mit.edu/2020/machine-learning-map-ocean-0529 Jennifer Chu | MIT News Office <p>On land, it’s fairly obvious where one ecological region ends and another begins, for instance at the boundary between a desert and savanna. In the ocean, much of life is microscopic and far more mobile, making it challenging for scientists to map the boundaries between ecologically distinct marine regions.</p><p>One way scientists delineate marine communities is through satellite images of chlorophyll, the green pigment produced by phytoplankton. Chlorophyll concentrations can indicate how rich or productive the underlying ecosystem might be in one region versus another. But chlorophyll maps can only give an idea of the total amount of life that might be present in a given region. Two regions with the same concentration of chlorophyll may in fact host very different combinations of plant and animal life.</p><p>“It’s like if you were to look at all the regions on land that don’t have a lot of biomass, that would include Antarctica and the Sahara, even though they have completely different ecological assemblages,” says Maike Sonnewald, a former postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences.</p><p>Now Sonnewald and her colleagues at MIT have developed an unsupervised machine-learning technique that automatically combs through a highly complicated set of global ocean data to find commonalities between marine locations, based on the ratios and interactions of multiple phytoplankton species. With their technique, the researchers found that the ocean can be split into over 100 types of “provinces” that are distinct in their ecological makeup. Any given location in the ocean would conceivably fit into one of these 100 ecological provinces.</p><p>The researchers then looked for similarities between these 100 provinces, ultimately grouping them into 12 more general categories. From these “megaprovinces,” they were able to see that, while some had the same total amount of life within a region, they had very different community structures, or balances of animal and plant species. Sonnewald says capturing these ecological subtleties is essential to tracking the ocean’s health and productivity.</p><p>“Ecosystems are changing with climate change, and the community structure needs to be monitored to understand knock-on effects on fisheries and the ocean’s capacity to draw down carbon dioxide,” Sonnewald says. “We can’t fully understand these vital dynamics with conventional methods that to date don’t include the ecology that’s there.
But our method, combined with satellite data and other tools, could offer important progress.”</p> <p>Sonnewald, who is now an associate research scholar at Princeton University and a visitor at the University of Washington, has reported the results today in the journal <em>Science Advances</em>. Her coauthors at MIT are Senior Research Scientist Stephanie Dutkiewicz, Principal Research Engineer Christopher Hill, and Research Scientist Gael Forget.</p><p><strong>Rolling out a data ball</strong></p><p>The team’s new machine-learning technique, which they’ve named SAGE, for the Systematic AGgregated Eco-province method, is designed to take large, complicated datasets and probabilistically project those data down to a simpler, lower-dimensional dataset.</p><p>“It’s like making cookies,” Sonnewald says. “You take this horrifically complicated ball of data and roll it out to reveal its elements.”</p><p>In particular, the researchers used a clustering algorithm that Sonnewald says is designed to “crawl along a dataset” and home in on regions with a large density of points — a sign that these points share something in common.</p><p>Sonnewald and her colleagues set this algorithm loose on ocean data from MIT’s Darwin Project, a three-dimensional model of the global ocean that combines a model of the ocean’s climate, including wind, current, and temperature patterns, with an ocean ecology model. That model includes 51 species of phytoplankton and the ways in which each species grows and interacts with the others, as well as with the surrounding climate and available nutrients.</p><p>If one were to try to look through this very complicated, 51-layered space of data for every available point in the ocean, to see which points share common traits, Sonnewald says the task would be “humanly intractable.” With the team’s unsupervised machine-learning algorithm, such commonalities “begin to crystallize out a bit.”</p><p>This first “data cleaning” step in the team’s SAGE method was able to parse the global ocean into about 100 different ecological provinces, each with a distinct balance of species.</p><p>The researchers assigned each available location in the ocean model to one of the 100 provinces, and assigned a color to each province. They then generated a map of the global ocean, colorized by province type.</p><p>“In the Southern Ocean around Antarctica, there’s burgundy and orange colors that are shaped how we expect them, in these zonal streaks that encircle Antarctica,” Sonnewald says. “Together with other features, this gives us a lot of confidence that our method works and makes sense, at least in the model.”</p><p><strong>Ecologies unified</strong></p><p>The team then looked for ways to further simplify the more than 100 provinces they identified, to see whether they could pick out commonalities even among these ecologically distinct regions.</p><p>“We started thinking about things like, how are groups of people distinguished from each other? How do we see how connected to each other we are? And we used this type of intuition to see if we could quantify how ecologically similar different provinces are,” Sonnewald says.</p><p>To do this, the team applied techniques from graph theory to represent all 100 provinces in a single graph, according to biomass — a measure that’s analogous to the amount of chlorophyll produced in a region.
They chose to group the 100 provinces into 12 general categories, or “megaprovinces.” When they compared these megaprovinces, they found that those that had a similar biomass were composed of very different biological species.</p><p>“For instance, provinces D and K have almost the same amount of biomass, but when we look deeper, K has diatoms and hardly any prokaryotes, while D has hardly any diatoms, and a lot of prokaryotes. But from a satellite, they could look the same,” Sonnewald says. “So our method could start the process of adding the ecological information to bulk chlorophyll measures, and ultimately aid observations.”</p><p>The team has developed an online widget that researchers can use to find other similarities among the 100 provinces. In their paper, Sonnewald’s colleagues chose to group the provinces into 12 categories. But others may want to divide the provinces into more groups, and drill down into the data to see what traits are shared among these groups.</p><p>Sonnewald is sharing the tool with oceanographers who want to identify precisely where regions of a particular ecological makeup are located, so they could, for example, send ships to sample in those regions, and not in others where the balance of species might be slightly different.</p><p>“Instead of guiding sampling with tools based on bulk chlorophyll, and guessing where the interesting ecology could be found with this method, you can surgically go in and say, ‘this is what the model says you might find here,’” Sonnewald says. “Knowing what species assemblages are where, for things like ocean science and global fisheries, is really powerful.”</p><p>This research was funded, in part, by NASA and the Jet Propulsion Laboratory.</p> A machine-learning technique developed at MIT combs through global ocean data to find commonalities between marine locations, based on interactions between phytoplankton species. Using this approach, researchers have determined that the ocean can be split into over 100 types of “provinces,” and 12 “megaprovinces,” that are distinct in their ecological makeup. Image: Courtesy of the researchers, edited by MIT News.
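<p>The published SAGE pipeline involves more steps than the article can cover, including dimensionality reduction, but its core idea of density-based clustering of community-composition vectors into "provinces" can be sketched with scikit-learn. The synthetic data and parameters below are illustrative assumptions, not the study's configuration.</p>
<pre><code>
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

# Synthetic stand-in for the model output: one 51-dimensional phytoplankton
# community vector per ocean grid cell, drawn around a handful of "true"
# community types so the clustering has structure to find (assumed data).
rng = np.random.default_rng(0)
n_cells, n_species, n_types = 5000, 51, 8
centers = rng.normal(size=(n_types, n_species))
X = centers[rng.integers(0, n_types, n_cells)] + 0.1 * rng.normal(size=(n_cells, n_species))

# Density-based clustering: dense regions of similar community composition
# become "provinces"; points in sparse regions are labeled -1 (noise).
labels = DBSCAN(eps=2.0, min_samples=20).fit_predict(StandardScaler().fit_transform(X))
provinces = set(labels) - {-1}
print(f"{len(provinces)} provinces; {int(np.sum(labels == -1))} cells unassigned")
</code></pre>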
https://news.mit.edu/2020/making-nuclear-energy-cost-competitive-0527 Three MIT teams to explore novel ways to reduce operations and maintenance costs of advanced nuclear reactors. Wed, 27 May 2020 17:00:01 -0400 https://news.mit.edu/2020/making-nuclear-energy-cost-competitive-0527 Department of Nuclear Science and Engineering <p>Nuclear energy is a low-carbon energy source that is vital to decreasing carbon emissions. A critical factor in its continued viability as a future energy source is finding novel and innovative ways to reduce operations and maintenance (O&amp;M) costs in the next generation of advanced reactors. The U.S. Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E) established the Generating Electricity Managed by Intelligent Nuclear Assets (GEMINA) program to do exactly this. Through $27 million in funding, GEMINA is accelerating research, discovery, and development of new digital technologies that would produce effective and sustainable reductions in O&amp;M costs.</p> <p>Three MIT research teams have received ARPA-E GEMINA awards to generate critical data and strategies to reduce O&amp;M costs for the next generation of nuclear power plants to make them more economical, flexible, and efficient. The MIT teams include researchers from the Department of Nuclear Science and Engineering (NSE), the Department of Civil and Environmental Engineering, and the MIT Nuclear Reactor Laboratory. By leveraging the state of the art in high-fidelity simulations and MIT’s unique research reactor capabilities, the MIT-led teams will collaborate with leading industry partners with practical O&amp;M and automation experience to support the development of digital twins. Digital twins are virtual replicas of physical systems that are programmed to have the same properties, specifications, and behavioral characteristics as the actual systems. The goal is to apply artificial intelligence, advanced control systems, predictive maintenance, and model-based fault detection within the digital twins to inform the design of O&amp;M frameworks for advanced nuclear power plants.</p>
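<p>As a toy illustration of the model-based fault detection mentioned above, a digital twin can compare live sensor readings against its own physics-based prediction and flag a sustained residual. The Python sketch below is purely illustrative; the coolant model, signals, and thresholds are invented for the example, not drawn from the funded projects.</p>
<pre><code>
import numpy as np

def twin_predicted_temp(flow_kg_s, power_mw):
    """Stand-in for the twin's physics model: coolant outlet temperature (C)."""
    cp = 4.2e3  # J/(kg K), water-like coolant (assumed)
    return 280.0 + (power_mw * 1e6) / (flow_kg_s * cp)

def detect_fault(measured, predicted, threshold_c=5.0, run_length=10):
    """Flag a fault when the residual exceeds threshold_c for run_length steps."""
    residual = np.abs(measured - predicted)
    over = residual > threshold_c
    run = 0
    for i, flag in enumerate(over):
        run = run + 1 if flag else 0
        if run >= run_length:
            return i  # first time step at which the fault is confirmed
    return None

# Toy scenario: steady operation, then a slow drift beginning at t = 300
t = np.arange(500)
predicted = np.full_like(t, twin_predicted_temp(1000.0, 300.0), dtype=float)
measured = predicted + np.random.default_rng(1).normal(0.0, 1.0, t.size)
measured[300:] += 0.05 * (t[300:] - 300)  # sensor or plant drift
print("fault confirmed at t =", detect_fault(measured, predicted))
</code></pre>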
<p>In a project focused on developing high-fidelity digital twins for the critical systems in advanced nuclear reactors, NSE professors <a href=”http://web.mit.edu/nse/people/faculty/baglietto.html”>Emilio Baglietto</a> and <a href=”http://web.mit.edu/nse/people/faculty/shirvan.html”>Koroush Shirvan</a> will collaborate with researchers from <a href=”https://www.genewsroom.com/press-releases/us-department-energy-awards-two-advanced-reactor-projects-utilizing-bwrx-300-small”>GE Research and GE Hitachi</a>. The GE Hitachi BWRX-300, a small modular reactor designed to provide flexible energy generation, will serve as a reference design. The BWRX-300 is a promising small modular reactor concept that aims to be competitive with natural gas in order to achieve market penetration in the United States. The team will assemble, validate, and exercise high-fidelity digital twins of the BWRX-300 systems. The digital twins address mechanical and thermal fatigue failure modes that drive O&amp;M activities; these failure modes reach well beyond the selected BWRX-300 components and extend to all advanced reactors where a flowing fluid is present. The role of high-fidelity resolution is central to the approach, as it addresses the unique challenges of the nuclear industry.</p> <p>NSE will leverage the advances it has achieved in recent years to accelerate the nuclear industry’s transition toward high-fidelity simulations in the form of computational fluid dynamics. The high spatial and temporal resolution of the simulations, combined with the AI-enabled digital twins, offers the opportunity to deliver predictive maintenance approaches that can greatly reduce the operating cost of nuclear stations. GE Research is an ideal partner, given its extensive experience in developing digital twins and its close links to GE Hitachi and the BWRX-300 design team. This team is particularly well positioned to tackle the regulatory challenges of applying digital twins to safety-grade components through explicit characterization of uncertainties. This three-year MIT-led project is supported by an award of $1,787,065.</p> <p>MIT Principal Research Engineer and Interim Director of the Nuclear Reactor Lab Gordon Kohse will lead a collaboration with MPR Associates to generate critical irradiation data to be used in digital twinning of molten-salt reactors (MSRs). MSRs produce radioactive materials when nuclear fuel is dissolved in a molten salt at high temperature and undergoes fission as it flows through the reactor core. Understanding the behavior of these radioactive materials is important for MSR design and for predicting and reducing O&amp;M costs — a vital step in bringing safe, clean, next-generation nuclear power to market. The MIT-led team will use the MIT nuclear research reactor’s unique capability to provide data to determine how radioactive materials are generated and transported in MSR components. Digital twins of MSRs will require this critical data, which is currently unavailable. The MIT team will monitor radioactivity during and after irradiation of fuel-bearing molten salts held in materials that will be used in MSR construction. Along with Kohse, the MIT research team includes David Carpenter and Kaichao Sun from the MIT Nuclear Reactor Laboratory, and <a href=”http://web.mit.edu/nse/people/research/forsberg.html”>Charles Forsberg</a> and Professor <a href=”http://web.mit.edu/nse/people/faculty/mli.html”>Mingda Li</a> from NSE. Storm Kauffman and the MPR Associates team bring a wealth of nuclear industry experience to the project and will ensure that the data generated aligns with the needs of reactor developers. This two-year project is supported by an award of $899,825.</p> <p>In addition to these two MIT-led projects, a third MIT team will work closely with the Electric Power Research Institute (EPRI) on a new paradigm for reducing advanced reactor O&amp;M costs. This is a proof-of-concept study that will explore how to move away from the traditional maintenance and repair approach.
The EPRI-led project will examine a “replace and refurbish” model in which components are intentionally designed and tested for shorter and more predictable lifetimes, with the potential for game-changing O&amp;M cost savings. This approach is similar to that adopted by the commercial airline industry, in which multiple refurbishments — including engine replacement — can keep a jet aircraft flying economically over many decades. The study will evaluate several advanced reactor designs with respect to cost savings and other important economic benefits, such as increased sustainability for suppliers. The MIT team brings together <a href=”https://cshub.mit.edu/jeremy-gregory” target=”_blank”>Jeremy Gregory</a> from the Department of Civil and Environmental Engineering, Lance Snead from the Nuclear Reactor Laboratory, and professors <a href=”http://web.mit.edu/nse/people/faculty/buongiorno.html”>Jacopo Buongiorno</a> and Koroush Shirvan from NSE.</p> <p>“This collaborative project will take a fresh look at reducing the operation and maintenance cost by allowing nuclear technology to better adapt to the ever-changing energy market conditions. MIT’s role is to identify cost-reducing pathways that would be applicable across a range of promising advanced reactor technologies. Particularly, we need to incorporate the latest advancements in material science and engineering along with civil structures in our strategies,” says MIT project lead Shirvan.</p> <p>The advances by these three MIT teams, along with the six other awardees in the GEMINA program, will provide a framework for more streamlined O&amp;M costs for next-generation advanced nuclear reactors — a critical factor in being competitive with alternative energy sources.</p> MIT teams in the GEMINA program will provide a framework for more streamlined operations and maintenance costs for next-generation advanced nuclear reactors. Photo: Yakov Ostrovsky https://news.mit.edu/2020/solve-mit-builds-partnerships-tackle-complex-challenges-during-covid-19-crisis-0527 Event convened attendees from around the world to discuss impacts of the pandemic and advance solutions to pressing global problems. Wed, 27 May 2020 16:50:01 -0400 https://news.mit.edu/2020/solve-mit-builds-partnerships-tackle-complex-challenges-during-covid-19-crisis-0527 Andrea Snyder | MIT Solve <p>In response to the Covid-19 pandemic, MIT Solve, a marketplace for social impact innovation, transformed its annual flagship event, Solve at MIT, into an <a href=”http://www.youtube.com/playlist?list=PLale2WVQ6Y9aXVICkw_WnQo8_sajMYxx2″ target=”_blank”>interactive virtual gathering</a> to convene the Solve and MIT communities. On May 12, nearly 800 people tuned in from all around the world to take part in <a href=”https://solve.mit.edu/events/solve-at-mit-2020″ target=”_blank”>Virtual Solve at MIT</a>.</p><p>The event connected innovators and social impact leaders through breakout sessions to discuss Solve’s <a href=”https://solve.mit.edu/challenges”>Global Challenges</a> and other timely topics, brain trusts to advise 2019 <a href=”https://solve.mit.edu/solver_spotlight”>Solver teams</a>, and plenary sessions to feature partnership stories and world-class speakers such as Mary Barra, chairman and CEO of General Motors, in conversation with MIT President L.
Rafael Reif; Yo-Yo Ma, world-renowned cellist; and Cady Coleman ’83, former NASA astronaut.</p><p>Throughout the day, many conversations touched on the broad impacts of Covid-19 — and the importance of building partnerships to scale solutions to pressing global problems. Here are some highlights.</p> <p><strong>“We’re all one crew”</strong></p> <p>The opening plenary kicked off with former NASA astronaut Cady Coleman, as she reflected on her time living in space. Looking down at the Earth, she pondered societal divisions and thought about how “we’re all one crew.”</p><p>In the face of Covid-19, Coleman reminded us that despite tragedy, people are finding each other, working together, and getting things done. “There’s a chain to help you go and find those people — you just have to be open to finding [them],” she said. “The solutions are bigger and better together.”</p> <p><strong>Solve partnership stories</strong></p> <p>The theme of partnership resurfaced many times throughout the day, as Solver teams and Solve members shared stories about their work together. “The way that Solve measures its success is in the value and number of partnerships we are able to broker among our community members,” said Hala Hanna, Solve’s managing director of community.</p><p>For example, Merck for Mothers, which works to end preventable maternal death, has, alongside other partners, committed $5 million toward The MOMs Initiative, which supports promising health innovators in sub-Saharan Africa and South Asia. The first centerpiece of the initiative is <a href=”https://solve.mit.edu/challenges/frontlines-of-health/solutions/4898″ target=”_blank”>LifeBank</a>, a Frontlines of Health Solver that combines data, smart logistics, and technology to deliver life-saving medical products. LifeBank’s Founder and CEO Temie Giwa-Tubosun and Merck for Mothers’ Executive Director Mary-Ann Etiebet first met in 2018 at <a href=”https://solve.mit.edu/events/solve-challenge-finals-2020″>Solve Challenge Finals</a>.</p><p>Etiebet was impressed by the simplicity — and effectiveness — of LifeBank’s solution. “So often, when we think about the private sector and global health, we think about big global corporations,” she said. “But really, we should be thinking about local, private providers — people who live and work in these communities — and how we can invest in them to drive progress.”</p><p>Later in the day, Abhilasha Purwar, founder and CEO of <a href=”https://solve.mit.edu/challenges/healthy-cities/solutions/8056″>Blue Sky Analytics</a>, and Melinda Marble, executive director of the Patrick J. McGovern Foundation, spoke about their own partnership. In 2019, the McGovern Foundation awarded the $200,000 AI Innovations Prize to four Solver teams. Blue Sky Analytics, a Healthy Cities Solver whose AI-powered platform provides key air quality data and source emissions parameters, was one recipient.</p><p>“The AI Innovations Prize is the first seed that an entrepreneur needs to get out there and start to accomplish their dream,” said Purwar. Blue Sky Analytics was selected to receive $300,000 in follow-on funding for the prize as well.
Purwar was thrilled to receive this additional funding, which will enable Blue Sky Analytics to hire critical engineering talent to further develop its platform.</p> <p><strong>A stretch break with an Olympian</strong></p> <p>To provide a break from the discussions — and from sitting at home desks — Pita Taufatofua, two-time Tongan Olympian, led the audience in a brief stretching session. “There’s been so much talk about what the world’s going to look like after Covid-19,” he said.&nbsp;</p><p>“But one thing that hasn’t changed is that as human beings, we have to look after each other, and we also have to look after ourselves — physically and mentally.” His simple routine reminded attendees to take a moment for themselves.</p> <p><strong>Leadership during Covid-19</strong></p> <p>Mary Barra, chair and CEO of General Motors (GM), and L. Rafael Reif, president of MIT, then discussed leadership and action in the face of Covid-19. Barra shared the story of how GM partnered with Ventec Life Systems to produce masks and critical care ventilators; Ventec had been making 200-300 ventilators a month, but GM wanted to scale production to well over 10,000 a month.&nbsp;</p><p>To make this happen, GM brought its whole team together. “It was inspiring to see how people volunteered, working 24 hours a day to get the work done,” Barra said.&nbsp;</p><p>“Things that in a normal corporate time speed would have taken months seemed to be getting done in days and weeks,” said Barra. “What can we do to support people so they can move that quickly [all the time]?”</p> <p><strong>Inspiring innovation</strong></p> <p>World-renowned cellist Yo-Yo Ma closed out the event with a moving performance, and described the parallels between music and innovation: developing meaning, managing transitions, and building trust. Ultimately, “the whole point … is service,” he said. “You use your technique to transcend it in order to serve.”</p> <p>You can watch many of these conversations on Solve’s <a href=”https://www.youtube.com/playlist?list=PLale2WVQ6Y9aXVICkw_WnQo8_sajMYxx2″>YouTube channel</a>.</p> Cellist Yo-Yo Ma performed and spoke at Virtual Solve at MIT. Photo courtesy of MIT Solve. https://news.mit.edu/2020/solar-energy-farms-electric-vehicle-batteries-life-0522 Modeling study shows battery reuse systems could be profitable for both electric vehicle companies and grid-scale solar operations. Fri, 22 May 2020 00:00:01 -0400 https://news.mit.edu/2020/solar-energy-farms-electric-vehicle-batteries-life-0522 David L. Chandler | MIT News Office <p>As electric vehicles rapidly grow in popularity worldwide, there will soon be a wave of used batteries whose performance is no longer sufficient for vehicles that need reliable acceleration and range. But a new study shows that these batteries could still have a useful and profitable second life as backup storage for grid-scale solar photovoltaic installations, where they could perform for more than a decade in this less demanding role.</p><p>The study, published in the journal <em>Applied Energy</em>, was carried out by six current and former MIT researchers, including postdoc Ian Mathews and professor of mechanical engineering Tonio Buonassisi, who is head of the Photovoltaics Research Laboratory.</p><p>As a test case, the researchers examined in detail a hypothetical grid-scale solar farm in California. 
They studied the economics of several scenarios: building a 2.5-megawatt solar farm alone; building the same array along with a new lithium-ion battery storage system; and building it with a battery array made of repurposed EV batteries that had declined to 80 percent of their original capacity, the point at which they would be considered too weak for continued vehicle use.</p> <p>They found that the new battery installation would not provide a reasonable net return on investment, but that a properly managed system of used EV batteries could be a good, profitable investment as long as the batteries cost less than 60 percent of their original price.</p> <p><strong>Not so easy</strong></p><p>The process might sound straightforward, and it has occasionally been implemented in smaller-scale projects, but expanding that to grid scale is not simple, Mathews explains. “There are many issues on a technical level. How do you screen batteries when you take them out of the car to make sure they’re good enough to reuse? How do you pack together batteries from different cars in a way that you know that they’ll work well together, and you won’t have one battery that’s much poorer than the others and will drag the performance of the system down?”</p><p>On the economic side, he says, there are also questions: “Are we sure that there’s enough value left in these batteries to justify the cost of taking them from cars, collecting them, checking them over, and repackaging them into a new application?” For the modeled case under California’s local conditions, the answer seems to be a solid yes, the team found.</p><p>The study used a semiempirical model of battery degradation, trained using measured data, to predict capacity fade in these lithium-ion batteries under different operating conditions, and found that the batteries could achieve maximum lifetimes and value by operating under relatively gentle charging and discharging cycles — never going above 65 percent of full charge or below 15 percent. This finding challenges some earlier assumptions that running the batteries at maximum capacity initially would provide the most value.</p><p>“I’ve talked to people who’ve said the best thing to do is just work your battery really hard, and front-load all your revenue,” Mathews says. “When we looked at that, it just didn’t make sense at all.” It was clear from the analysis that maximizing the lifetime of the batteries would provide the best returns.</p><p><strong>How long will they last?</strong></p><p>One unknown factor is just how long the batteries can continue to operate usefully in this second application. The study made a conservative assumption: that the batteries would be retired from their solar-farm backup service after they had declined to 70 percent of their rated capacity, from their initial 80 percent (the point when they were retired from EV use). But it may well be, Mathews says, that continuing to operate down to 60 percent of capacity or even lower might prove to be safe and worthwhile. Longer-term pilot studies will be required to determine that, he says. Many electric vehicle manufacturers are already beginning to do such pilot studies.</p><p>“That’s a whole area of research in itself,” he says, “because the typical battery has multiple degradation pathways. Trying to figure out what happens when you move into this more rapid degradation phase, it’s an active area of research.” In part, the degradation is determined by the way the batteries are controlled.
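<p>The "gentle cycling" rule the study converged on, keeping the state of charge between 15 and 65 percent, is easy to express as a dispatch constraint. The window comes from the article; the storage parameters and dispatch loop in this Python sketch are illustrative assumptions, not the study's model.</p>
<pre><code>
# Minimal sketch of the gentle-cycling rule described above: clip every
# charge/discharge request so the battery stays between 15% and 65% of its
# original capacity. The window is from the article; everything else here
# (capacity, requests) is an assumed toy example.
SOC_MIN, SOC_MAX = 0.15, 0.65   # bounds as fractions of original capacity

def step_soc(soc, request_kwh, capacity_kwh):
    """Apply a charge (+) or discharge (-) request under the SoC window."""
    lo = SOC_MIN * capacity_kwh
    hi = SOC_MAX * capacity_kwh
    energy = soc * capacity_kwh + request_kwh
    energy = max(lo, min(hi, energy))   # clip to the gentle-cycling window
    return energy / capacity_kwh

soc = 0.40
for request in [+50.0, +80.0, -120.0, -60.0]:   # kWh, toy daily dispatch
    soc = step_soc(soc, request, capacity_kwh=200.0)
    print(f"SoC = {soc:.2f}")
</code></pre>
<p>Keeping the battery inside this window sacrifices some usable energy on each cycle in exchange for slower capacity fade, which is the trade-off the study found to pay off over the lifetime of the project.</p>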
“So, you might actually adapt your control algorithms over the lifetime of the project, to just really push that out as far as possible,” he says. This is one direction the team will pursue in their ongoing research, he says. “We think this could be a great application for machine-learning methods, trying to figure out the kind of intelligent methods and predictive analytics that adjust those control policies over the life of the project.”</p><p>The actual economics of such a project could vary widely depending on the local regulatory and rate-setting structures, he explains. For example, some local rules allow the cost of storage systems to be included in the overall cost of a new renewable energy supply, for rate-setting purposes, and others do not. The economics of such systems will be very site-specific, but the California case study is intended to be an illustrative U.S. example.</p><p>“A lot of states are really starting to see the benefit that storage can provide,” Mathews says. “And this just shows that they should have an allowance that somehow incorporates second-life batteries in those regulations. That could be favorable for them.”</p><p>A recent report from McKinsey Corp. shows that as demand for backup storage for renewable energy projects grows between now and 2030, second-use EV batteries could potentially meet half of that demand, Mathews says. Some EV companies, he says, including <a href=”http://news.mit.edu/2018/rivian-electric-vehicles-1130″>Rivian</a>, founded by an MIT alumnus, are already designing their battery packs specifically to make this end-of-life repurposing as easy as possible.</p><p>Mathews says that “the point that I made in the paper was that technically, economically, … this could work.” For the next step, he says, “There’s a lot of stakeholders who would need to be involved in this: You need to have your EV manufacturer, your lithium-ion battery manufacturer, your solar project developer, the power electronics guys.” The intent, he says, “was to say, ‘Hey, you guys should actually sit down and really look at this, because we think it could really work.’”</p><p>The study team included postdocs Bolun Xu and Wei He, MBA student Vanessa Barreto, and research scientist Ian Marius Peters. The work was supported by the European Union’s Horizon 2020 research program, the DOE-NSF Engineering Research Center for Quantum Energy and Sustainable Solar Technologies (QESST), and the Singapore National Research Foundation through the Singapore-MIT Alliance for Research and Technology (SMART).</p> An MIT study shows that electric vehicle batteries could have a useful and profitable second life as backup storage for grid-scale solar photovoltaic installations, where they could perform for more than a decade in this less demanding role. This image shows a ‘cut-away’ view of a lithium-ion battery over a background of cars and solar panels. Image: MIT News
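<p>To make the shape of that economic argument concrete, here is a minimal cash-flow sketch in Python. It is an illustration only: the pack cost, storage revenue, discount rate, and service lifetimes below are invented placeholder values, not figures from the study, whose actual analysis rests on a far more detailed semiempirical degradation model.</p>
<pre><code># Illustrative net-present-value comparison for a solar farm's storage options.
# All numbers are assumed placeholders, not values from the MIT study.

def npv(cash_flows, rate=0.08):
    """Discount a list of annual cash flows (year 0 first) at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

NEW_PACK_COST = 200_000          # assumed cost of a new battery system ($)
ANNUAL_STORAGE_REVENUE = 20_000  # assumed revenue storage adds per year ($)

def storage_npv(pack_cost, years_of_service):
    # Year 0: purchase; years 1..n: revenue from backup/arbitrage services.
    flows = [-pack_cost] + [ANNUAL_STORAGE_REVENUE] * years_of_service
    return npv(flows)

# A used EV pack arrives at 80% capacity and, per the study's conservative
# assumption, retires at 70% -- taken here to allow about 10 years of service.
for price_fraction in (0.8, 0.6, 0.4):
    value = storage_npv(NEW_PACK_COST * price_fraction, years_of_service=10)
    print(f"used pack at {price_fraction:.0%} of new price: NPV = ${value:,.0f}")

# A new pack is assumed to serve 15 years before retirement.
print(f"new pack over 15 years: NPV = ${storage_npv(NEW_PACK_COST, 15):,.0f}")
</code></pre>
<p>Under these toy inputs the new pack’s net present value comes out negative, while the used pack turns profitable once its price falls to roughly two-thirds of new: qualitatively the same pattern the study reports, though the break-even point here is an artifact of the invented numbers.</p>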
https://news.mit.edu/2020/transportation-policymaking-chinese-cities-0521 A new framework for learning from each other. Thu, 21 May 2020 14:50:01 -0400 https://news.mit.edu/2020/transportation-policymaking-chinese-cities-0521 Nancy W. Stauffer | MIT Energy Initiative <p>In recent decades, urban populations in China’s cities have grown substantially, and rising incomes have led to a rapid expansion of car ownership. Indeed, China is now the world’s largest market for automobiles. The combination of urbanization and motorization has led to an urgent need for transportation policies to address urban problems such as congestion, air pollution, and greenhouse gas emissions.</p> <p>For the past three years, an MIT team led by <a href=”http://energy.mit.edu/profile/joanna-moody/”>Joanna Moody</a>, research program manager of the MIT Energy Initiative’s <a href=”http://energy.mit.edu/msc/”>Mobility Systems Center</a>, and <a href=”http://energy.mit.edu/profile/jinhua-zhao/”>Jinhua Zhao</a>, the Edward H. and Joyce Linde Associate Professor in the Department of Urban Studies and Planning (DUSP) and director of MIT’s <a href=”https://mobility.mit.edu/”>JTL Urban Mobility Lab</a>, has been examining transportation policy and policymaking in China. “It’s often assumed that transportation policy in China is dictated by the national government,” says Zhao. “But we’ve seen that the national government sets targets and then allows individual cities to decide what policies to implement to meet those targets.”</p> <p>Many studies have investigated transportation policymaking in China’s megacities like Beijing and Shanghai, but few have focused on the hundreds of small- and medium-sized cities located throughout the country. So Moody, Zhao, and their team wanted to consider the process in these overlooked cities. In particular, they asked: How do municipal leaders decide what transportation policies to implement, and can they be better enabled to learn from one another’s experiences? The answers to those questions might provide guidance to municipal decision-makers trying to address the different transportation-related challenges faced by their cities.</p> <p>The answers could also help fill a gap in the research literature. The number and diversity of cities across China have made performing a systematic study of urban transportation policy challenging, yet that topic is of increasing importance. In response to local air pollution and traffic congestion, some Chinese cities are now enacting policies to restrict car ownership and use, and those local policies may ultimately determine whether the unprecedented growth in nationwide private vehicle sales will persist in the coming decades.</p> <p><strong>Policy learning</strong></p> <p>Transportation policymakers worldwide benefit from a practice called policy-learning: Decision-makers in one city look to other cities to see what policies have and haven’t been effective. In China, Beijing and Shanghai are usually viewed as trendsetters in innovative transportation policymaking, and municipal leaders in other Chinese cities turn to those megacities as role models.</p> <p>But is that an effective approach for them? After all, their urban settings and transportation challenges are almost certainly quite different. Wouldn’t it be better if they looked to “peer” cities with which they have more in common?</p> <p>Moody, Zhao, and their DUSP colleagues — postdoc Shenhao Wang and graduate students Jungwoo Chun and Xuenan Ni, all in the JTL Urban Mobility Lab — hypothesized an alternative framework for policy-learning in which cities that share common urbanization and motorization histories would share their policy knowledge. Similar development of city spaces and travel patterns could lead to the same transportation challenges, and therefore to similar needs for transportation policies.</p> <p>To test their hypothesis, the researchers needed to address two questions.
To start, they needed to know whether Chinese cities have a limited number of common urbanization and motorization histories. If they grouped the 287 cities in China based on those histories, would they end up with a moderately small number of meaningful groups of peer cities? And second, would the cities in each group have similar transportation policies and priorities?</p> <p><strong>Grouping the cities</strong></p> <p>Cities in China are often grouped into three “tiers” based on political administration, or the types of jurisdictional roles the cities play. Tier 1 includes Beijing, Shanghai, and two other cities that have the same political powers as provinces. Tier 2 includes about 20 provincial capitals. The remaining cities — some 260 of them — all fall into Tier 3. These groupings are not necessarily relevant to the cities’ local urban and transportation conditions.</p> <p>Moody, Zhao, and their colleagues instead wanted to sort the 287 cities based on their urbanization and motorization histories. Fortunately, they had relatively easy access to the data they needed. Every year, the Chinese government requires each city to report well-defined statistics on a variety of measures and to make them public.</p> <p>Among those measures, the researchers chose four indicators of urbanization — gross domestic product per capita, total urban population, urban population density, and road area per capita — and four indicators of motorization — the number of automobiles, taxis, buses, and subway lines per capita. They compiled those data from 2001 to 2014 for each of the 287 cities.</p> <p>The next step was to sort the cities into groups based on those historical datasets — a task they accomplished using a clustering algorithm. For the algorithm to work well, they needed to select parameters that would summarize trends in the time series data for each indicator in each city. They found that they could summarize the 14-year change in each indicator using the mean value and two additional variables: the slope of change over time and the rate at which the slope changes (the acceleration).</p> <p>Based on those data, the clustering algorithm examined different possible numbers of groupings, and four gave the best outcome in terms of the cities’ urbanization and motorization histories. “With four groups, the cities were most similar within each cluster and most different across the clusters,” says Moody. “Adding more groups gave no additional benefit.”</p> <p>The four groups of similar cities are as follows.</p> <p>Cluster 1: 23 large, dense, wealthy megacities that have urban rail systems and high overall mobility levels over all modes, including buses, taxis, and private cars. This cluster encompasses most of the government’s Tier 1 and Tier 2 cities, while the Tier 3 cities are distributed among Clusters 2, 3, and 4.</p> <p>Cluster 2: 41 wealthy cities that don’t have urban rail and therefore are more sprawling, have lower population density, and have auto-oriented travel patterns.</p> <p>Cluster 3: 134 medium-wealth cities that have a low-density urban form and moderate mobility fairly spread across different modes, with limited but emerging car use.</p> <p>Cluster 4:<strong> </strong>89 low-income cities that have generally lower levels of mobility, with some public transit buses but not many roads. 
Because people usually walk, these cities remain compact, with density and development concentrated in the urban core.</p> <p><strong>City clusters and policy priorities</strong></p> <p>The researchers’ next task was to determine whether the cities within a given cluster have transportation policy priorities that are similar to each other — and also different from those of cities in the other clusters. With no quantitative data to analyze, the researchers needed to look for such patterns using a different approach.</p> <p>First, they selected 44 cities at random (with the stipulation that at least 10 percent of the cities in each cluster had to be represented). They then downloaded the 2017 mayoral report from each of the 44 cities.</p> <p>Those reports highlight the main policy initiatives and directions of the city in the past year, so they include all types of policymaking. To identify the transportation-oriented sections of the reports, the researchers performed keyword searches on terms such as transportation, road, car, bus, and public transit. They extracted any sections highlighting transportation initiatives and manually labeled each of the text segments with one of 21 policy types. They then created a spreadsheet organizing the cities into the four clusters. Finally, they examined the outcome to see whether there were clear patterns within and across clusters in terms of the types of policies they prioritize.</p> <p>“We found strikingly clear patterns in the types of transportation policies adopted within city clusters and clear differences across clusters,” says Moody. “That reinforced our hypothesis that different motorization and urbanization trajectories would be reflected in very different policy priorities.”</p> <p>Here are some highlights of the policy priorities within the clusters.</p> <p>The cities in Cluster 1 have urban rail systems and are starting to consider policies around them. For example, how can they better connect their rail systems with other transportation modes — for instance, by taking steps to integrate them with buses or with walking infrastructure? How can they plan their land use and urban development to be more transit-oriented, such as by providing mixed-use development around the existing rail network?</p> <p>Cluster 2 cities are building urban rail systems, but they’re generally not yet thinking about other policies that can come with rail development. They could learn from Cluster 1 cities about other factors to take into account at the outset. For example, they could develop their urban rail with issues of multi-modality and of transit-oriented development in mind.</p> <p>In Cluster 3 cities, policies tend to emphasize electrifying buses and providing improved and expanded bus service. In these cities with no rail networks, the focus is on making buses work better.</p> <p>Cluster 4 cities are still focused on road development, even within their urban areas. Policy priorities often emphasize connecting the urban core to rural areas and to adjacent cities — steps that will give their populations access to the region as a whole, expanding the opportunities available to them.</p> <p><strong>Benefits of a “mixed method” approach</strong></p> <p>Results of the researchers’ analysis thus support their initial hypothesis. “Different urbanization and motorization trends that we captured in the clustering analysis are reflective of very different transportation priorities,” says Moody.
“That match means we can use this approach for further policymaking analysis.”</p> <p>At the outset, she viewed their study as a “proof of concept” for performing transportation policy studies using a mixed-method approach. Mixed-method research involves a blending of quantitative and qualitative approaches. In their case, the former was the mathematical analysis of time series data, and the latter was the in-depth review of city government reports to identify transportation policy priorities. “Mixed-method research is a growing area of interest, and it’s a powerful and valuable tool,” says Moody.</p> <p>She did, however, find the experience of combining the quantitative and qualitative work challenging. “There weren’t many examples of people doing something similar, and that meant that we had to make sure that our quantitative work was defensible, that our qualitative work was defensible, and that the combination of them was defensible and meaningful,” she says.</p> <p>The results of their work confirm that their novel analytical framework could be used in other large, rapidly developing countries with heterogeneous urban areas. “It’s probable that if you were to do this type of analysis for cities in, say, India, you might get a different number of city types, and those city types could be very different from what we got in China,” says Moody. Regardless of the setting, the capabilities provided by this kind of mixed method framework should prove increasingly important as more and more cities around the world begin innovating and learning from one another how to shape sustainable urban transportation systems.</p> <p>This research was supported by the MIT Energy Initiative’s Mobility of the Future study. Information about the study, its participants and supporters, and its publications is available at <a href=”http://energy.mit.edu/research/mobilityofthefuture/”>energy.mit.edu/research/mobilityofthefuture</a>.</p> Using a novel methodology, MITEI researcher Joanna Moody and Associate Professor Jinhua Zhao uncovered patterns in the development trends and transportation policies of China’s 287 cities — including Fengcheng, shown here — that may help decision-makers learn from one another. Photo: blake.thornberry/Flickr
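<p>The quantitative half of the team’s method is straightforward to prototype. The sketch below is a minimal illustration, not the researchers’ actual pipeline: it builds the three summary features described above (mean, slope, and acceleration) for each indicator’s time series, then clusters the cities. The synthetic data, array layout, and choice of scikit-learn’s KMeans are all assumptions.</p>
<pre><code># Minimal sketch of the clustering idea: summarize each city's 2001-2014
# indicator time series by mean, slope, and acceleration, then cluster.
# Synthetic data and the KMeans choice are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def summarize(series):
    """Return (mean, slope, acceleration) for one indicator's time series."""
    t = np.arange(len(series))
    slope = np.polyfit(t, series, 1)[0]      # linear trend over the 14 years
    accel = 2 * np.polyfit(t, series, 2)[0]  # 2nd derivative of a quadratic fit
    return np.mean(series), slope, accel

# Stand-in data: 287 cities x 14 years x 8 indicators
# (4 urbanization + 4 motorization measures per city).
rng = np.random.default_rng(0)
cities = rng.random((287, 14, 8)).cumsum(axis=1)

features = np.array([
    np.concatenate([summarize(city[:, j]) for j in range(city.shape[1])])
    for city in cities
])  # shape: (287, 8 indicators * 3 summary statistics)

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print("cities per cluster:", np.bincount(labels))
</code></pre>
<p>In the study’s terms, one would rerun this for several values of n_clusters and keep the grouping in which cities are most similar within clusters and most different across them; the article reports that four groups won out.</p>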
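<p>The qualitative half, locating transportation passages in the mayoral reports for hand-labeling, can be sketched just as briefly. The keyword list and sample text below are placeholders, and the actual assignment of the 21 policy types was done manually by the researchers, not by code.</p>
<pre><code># Toy version of the report screen: surface sentences that mention
# transportation keywords so a human coder can label them by policy type.
# The keyword list and sample report are illustrative placeholders.
import re

KEYWORDS = ["transportation", "road", "roads", "car", "cars",
            "bus", "buses", "public transit", "rail"]
pattern = re.compile(r"\b(?:" + "|".join(re.escape(k) for k in KEYWORDS) + r")\b",
                     re.IGNORECASE)

def transport_sections(report_text):
    """Return the sentences of a report that mention any transport keyword."""
    sentences = re.split(r"(?<=[.!?])\s+", report_text)
    return [s for s in sentences if pattern.search(s)]

sample_report = (
    "This year the city completed 40 kilometers of new roads. "
    "We expanded bus service to three new districts. "
    "Health care coverage also improved across the city."
)
for sentence in transport_sections(sample_report):
    print("to label:", sentence)
</code></pre>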
https://news.mit.edu/2020/quest-practical-fusion-energy-sources-erica-salazar-0521 Graduate student Erica Salazar tackles a magnetic engineering challenge. Thu, 21 May 2020 14:35:01 -0400 https://news.mit.edu/2020/quest-practical-fusion-energy-sources-erica-salazar-0521 Peter Dunn | Department of Nuclear Science and Engineering <p>The promise of fusion energy has grown substantially in recent years, in large part because of novel high-temperature superconducting (HTS) materials that can shrink the size and boost the performance of the extremely powerful magnets needed in fusion reactors. Realizing that potential is a complex engineering challenge, which nuclear science and engineering student&nbsp;<a href=”http://www.psfc.mit.edu/people/graduate-students/erica-salazar”>Erica Salazar</a>&nbsp;is taking up in her doctoral studies.</p> <p>Salazar works at MIT’s Plasma Science and Fusion Center (PSFC) on the&nbsp;<a href=”http://www.psfc.mit.edu/sparc”>SPARC</a>&nbsp;project, an ambitious fast-track program being conducted in collaboration with MIT spinout Commonwealth Fusion Systems (CFS). The goal is development of a fusion energy experiment to demonstrate net energy gain at unprecedentedly small size and to validate the new magnet technology in a high-field fusion device. Success would be a major accomplishment in the effort to make safe, carbon-free fusion power ready for the world’s electrical grid by the 2030s, as part of the broader push to control climate change.</p> <p>A fundamental challenge is that fusion of nuclei takes place only at extreme temperatures, like those found in the cores of stars. No physical vessel can contain such conditions, so one approach to harnessing fusion involves creating a “bottle” of magnetic fields within a reactor chamber.
To succeed, this magnetic-confinement approach must be capable of containing and controlling a super-heated plasma for extended periods, and that in turn requires steady, stable, predictable operation from the magnets involved, even as they deliver unprecedented levels of performance.</p> <p>In pursuit of that goal, Salazar is drawing on knowledge gained during a five-year stint at General Atomics, where she worked on magnet manufacturing for the ITER international fusion reactor project. It, like SPARC, uses a magnetic-confinement approach, and Salazar commissioned and managed the reaction heat treatment process for ITER’s 120-ton superconducting modules and helped design and operate a cryogenic full-current test station.</p> <p>“That experience is very helpful,” she notes. “Even though the ITER magnets utilize low-temperature superconductors and SPARC is using HTS, there are a lot of similarities in manufacturing, and it gives a sense of which questions to ask. It’s a situation where you know enough to understand what you don’t know, and that’s really exciting. It definitely gives me motivation to work hard, go deep, and expand my efforts.”</p> <p>A central focus of Salazar’s work is a phenomenon called quench. It’s a common abnormality that occurs when part of a magnet’s coil shifts out of a superconducting state, where it has almost no electrical resistance, and into a normal resistive state. The resistance causes the massive current flowing through the coil, and the energy stored in the magnet, to quickly convert to heat in the affected region. That can result in the entire magnet dropping out of its superconducting state and also cause significant physical damage.</p> <p>Many factors can cause quench, and it is seen as unavoidable, so real-time management is essential in a practical fusion reactor. “My PhD thesis work is on understanding quench dynamics, especially in new HTS magnet designs,” explains Salazar, who is advised by Department of Nuclear Science and Engineering Professor&nbsp;<a href=”http://web.mit.edu/nse/people/faculty/hartwig.html”>Zach Hartwig</a>&nbsp;and started engaging with the CFS team before the company’s 2018 formation. “Those new materials are so good, and they have more temperature margin, but that makes it harder to detect when there’s a localized loss of superconductivity — so it’s a good position for me as a grad student.</p> <p>“I hope to answer questions like, what does the quench look like? How does it propagate, and how fast?&nbsp;How large of a disturbance will cause a thermal runaway?&nbsp;With more knowledge of what a quench looks like, I can then use that information to help design novel quench-detection systems.”</p> <p>Addressing this type of issue is part of the SPARC program’s strategic transition away from “big plasma physics problems,” says Salazar, and toward a greater focus on the engineering challenges involved in practical implementation. While there is more to be learned from a scientific perspective, a broad consensus has emerged in the U.S. fusion community that construction of a pilot fusion power plant should be a&nbsp;<a href=”https://news.mit.edu/2020/fusion-researchers-endorse-push-pilot-power-plant-us-0318″>national priority</a>.</p> <p>To this end, the SPARC program takes a systemic approach to ensure broad coordination. As Salazar notes, “to devise an effective detection system, you need to be aware of the implications within the overall systems engineering approach of the project. 
I really like the way the project teams are designed to be fluid. Everyone knows who’s working on what, and you can sit in on meetings if you want to. We all have a limited amount of time, but the resources are there.”</p> <p>Salazar has helped the process by starting a popular email list that bridges the CFS and MIT social worlds, linking people who would not otherwise be connected and creating opportunities for off-hours activities together. “Working is easy; sometimes the hard part is making sure you have time for personal stuff,” she observes.</p> <p>She’s also active in developing and encouraging a more-inclusive MIT community culture, via involvement with a women’s group at PSFC and the launch of an Institute-wide organization, Hermanas Unidas, for Latina-identifying women students, staff, faculty, and postdocs.</p> <p>“It’s important to find a community with others that share or value similar cultural backgrounds. But it’s also important to see how those with similar backgrounds have done amazing things professionally or academically. Hermanas Unidas is a great community of people from all walks of life at MIT who provide mutual support and encouragement as we navigate our careers at MIT and beyond,” explains Salazar.</p> <p>“It’s wonderful to learn from other Latina faculty and staff at MIT — about the hardships they faced when they were in my position as a student or how, as staff members, they work to support students and connect us with other great initiatives. On the flip side, I can share with undergraduates my work experience and my decision to go to graduate school.”</p> <p>Looking ahead, Salazar is encouraged by the growing momentum toward fusion energy. “I had the opportunity to go to the Congressional Fusion Day event in 2016, talk to House and Senate representatives about what fusion does for the economy and technologies, and meet researchers from outside of the ITER program,” she recalls. “I hadn’t realized how big and expansive the fusion community is, and it was interesting to hear how much was going on, and exciting to know that there’s private-sector interest in investing in fusion.”</p> <p>And because fusion energy has such game-changing potential for the world’s electrical grid, says Salazar, “it’s cool to talk to people about it and present it in a way that shows how it will impact them. Throughout my life, I’ve always enjoyed going deep and expending my efforts, and this is such a great area for that. There’s always something new, it’s very interdisciplinary, and it benefits society.”</p> “I really like the way the project teams are designed to be fluid. Everyone knows who’s working on what, and you can sit in on meetings if you want to. We all have a limited amount of time, but the resources are there,” says Erica Salazar. Photo: Eric Younge
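<p>The quench dynamics at the center of Salazar’s thesis can be bounded with a one-line heat balance. The Python sketch below is a toy, constant-property estimate of how quickly a normal zone heats; every value in it is an assumed order of magnitude chosen for illustration, not a SPARC or ITER parameter, and real conductor properties vary strongly with temperature.</p>
<pre><code># Toy estimate of why quench detection must be fast: once a spot of the
# coil goes normal (resistive), the transport current heats it adiabatically.
# All values are assumed orders of magnitude, not project parameters.

J = 4e8            # A/m^2 -- assumed current density in the copper stabilizer
rho = 2e-9         # ohm*m -- assumed copper resistivity in the normal zone
density = 8960.0   # kg/m^3 -- copper
c_p = 150.0        # J/(kg*K) -- rough low-temperature specific heat

# Adiabatic heat balance: density * c_p * dT/dt = rho * J**2
heating_rate = rho * J**2 / (density * c_p)  # kelvin per second
print(f"adiabatic hot-spot heating rate: ~{heating_rate:.0f} K/s")
</code></pre>
<p>At hundreds of kelvin per second, a detection-and-protection system has on the order of a second or less to notice the hot spot and safely dump the magnet’s stored energy, which is why the smaller resistive signals of HTS conductors make detection harder.</p>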
https://news.mit.edu/2020/towable-sensor-vertical-ocean-conditions-0520 Instrument may help scientists assess the ocean’s response to climate change. Wed, 20 May 2020 11:26:59 -0400 https://news.mit.edu/2020/towable-sensor-vertical-ocean-conditions-0520 Jennifer Chu | MIT News Office <p>The motion of the ocean is often thought of in horizontal terms, for instance in the powerful currents that sweep around the planet, or the waves that ride in and out along a coastline. But there is also plenty of vertical motion, particularly in the open seas, where water from the deep can rise up, bringing nutrients to the upper ocean, while surface waters sink, sending dead organisms, along with oxygen and carbon, to the deep interior.</p><p>Oceanographers use instruments to characterize the vertical mixing of the ocean’s waters and the biological communities that live there. But these tools are limited in their ability to capture small-scale features, such as the up- and down-welling of water and organisms over a small, kilometer-wide ocean region. Such features are essential for understanding the makeup of marine life that exists in a given volume of the ocean (such as in a fishery), as well as the amount of carbon that the ocean can absorb and sequester away.</p><p>Now researchers at MIT and the Woods Hole Oceanographic Institution (WHOI) have engineered a lightweight instrument that measures both physical and biological features of the vertical ocean over small, kilometer-wide patches. The “ocean profiler,” named EcoCTD, is about the size of a waist-high model rocket and can be dropped off the back of a moving ship. As it free-falls through the water, its sensors measure physical features, such as temperature and salinity, as well as biological properties, such as the optical scattering of chlorophyll, the green pigment of phytoplankton.</p><p>“With EcoCTD, we can see small-scale areas of fast vertical motion, where nutrients could be supplied to the surface, and where chlorophyll is carried downward, which tells you this could also be a carbon pathway. That’s something you would otherwise miss with existing technology,” says Mara Freilich, a graduate student in MIT’s Department of Earth, Atmospheric, and Planetary Sciences and the MIT-WHOI Joint Program in Oceanography/Applied Ocean Sciences and Engineering.</p><p>Freilich and her colleagues have published their results today in the <em>Journal of Atmospheric and Oceanic Technology</em>. The paper’s co-authors are J. Thomas Farrar, Benjamin Hodges, Tom Lanagan, and Amala Mahadevan of WHOI, and Andrew Baron of Dynamic System Analysis, in Nova Scotia. The lead author is Mathieu Dever of WHOI and RBR, a developer of ocean sensors based in Ottawa.</p><p><strong>Ocean synergy</strong></p><p>Oceanographers use a number of methods to measure the physical properties of the ocean. Some of the more powerful, high-resolution instruments used are known as CTDs, for their ability to measure the ocean’s conductivity, temperature, and depth. CTDs are typically bulky, as they contain multiple sensors as well as components that collect water and biological samples. Conventional CTDs require a ship to stop as scientists lower the instrument into the water, sometimes via a crane system. The ship has to stay put as the instrument collects measurements and water samples, and can only get back underway after the instrument is hauled back onboard.</p><p>Physical oceanographers who do not study ocean biology, and therefore do not need to collect water samples, can sometimes use “UCTDs” — underway versions of CTDs, without the bulky water sampling components, that can be towed as a ship is underway.
These instruments can sample quickly since they do not require a crane or a ship to stop as they are dropped.</p><p>Freilich and her team looked to design a version of a UCTD that could also incorporate biological sensors, all in a small, lightweight, towable package that would also keep the ship moving on course as it gathered its vertical measurements.</p><p>“It seemed there could be straightforward synergy between these existing instruments, to design an instrument that captures physical and biological information, and could do this underway as well,” Freilich says.</p><p><strong>“Reaching the dark ocean”</strong></p><p>The core of the EcoCTD is the RBR Concerto Logger, a sensor that measures the temperature of the water, as well as the conductivity, which is a proxy for the ocean’s salinity. The profiler also includes a lead collar that provides enough weight to enable the instrument to free-fall through the water at about 3 meters per second — a rate that takes the instrument down to about 500 meters below the surface in about two minutes.</p><p>“At 500 meters, we’re reaching the upper twilight zone,” Freilich says. “The euphotic zone is where there’s enough light in the ocean for photosynthesis, and that’s at about 100 to 200 meters in most places. So we’re reaching the dark ocean.”</p><p>Another sensor, the EcoPuck, distinguishes the EcoCTD from other UCTDs in that it measures the ocean’s biological properties. Specifically, it is a small, puck-shaped bio-optical sensor that emits two wavelengths of light — red and blue. The sensor captures any change in these wavelengths as they scatter back and as chlorophyll-containing phytoplankton fluoresce in response to the light. If the red light received resembles a certain wavelength characteristic of chlorophyll, scientists can deduce the presence of phytoplankton at a given depth. Variations in red and blue light scattered back to the sensor can indicate other matter in the water, such as sediments or dead cells — a measure of the amount of carbon at various depths.</p><p>The EcoCTD includes another sensor not found on other UCTDs — the Rinko III Do, which measures the oxygen concentration in water, giving scientists an estimate of how much oxygen is being taken up by any microbial communities living in a given parcel of water at depth.</p><p>Finally, the entire instrument is encased in a tube of aluminum and designed to attach via a long line to a winch at the back of a ship. As the ship is moving, a team can drop the instrument overboard and use the winch to pay the line out at a rate such that the instrument drops straight down, even as the ship moves away. After about two minutes, once it has reached a depth of about 500 meters, the team cranks the winch to pull the instrument back up, at a rate at which the instrument catches up to the ship within 12 minutes.
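<p>Those figures fix the basic geometry of a cast. As a rough check, the arithmetic below combines the quoted descent rate, target depth, and recovery time with an assumed ship speed (the ship speed is the only number here not taken from the article) to estimate how far apart successive profiles land.</p>
<pre><code># Back-of-the-envelope timing for an underway EcoCTD cast. Descent rate,
# depth, and recovery time come from the article; ship speed is assumed.
DESCENT_RATE_M_S = 3.0   # free-fall rate (m/s)
TARGET_DEPTH_M = 500.0   # depth reached per cast (m)
RECOVERY_MIN = 12.0      # winch-in time (minutes)
SHIP_SPEED_KN = 2.0      # assumption: a slow survey speed (knots)

descent_min = TARGET_DEPTH_M / DESCENT_RATE_M_S / 60
cycle_min = descent_min + RECOVERY_MIN
ship_m_per_min = SHIP_SPEED_KN * 1852 / 60   # knots -> meters per minute
spacing_km = ship_m_per_min * cycle_min / 1000

print(f"descent time: {descent_min:.1f} min")
print(f"full cast cycle: {cycle_min:.1f} min")
print(f"spacing between casts: {spacing_km:.2f} km")
</code></pre>
<p>At a couple of knots, casts land roughly a kilometer apart, consistent with the kilometer-scale features the instrument is meant to resolve; a faster transit simply spreads the profiles farther apart.</p>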
The crew can then drop the instrument again, this time at some distance from their last dropoff point.</p><p>“The nice thing is, by the time we go to the next cast, we’re 500 meters away from where we were the first time, so we’re exactly where we want to sample next,” Freilich says.</p><p>They tested the EcoCTD on two cruises in 2018 and 2019, one to the Mediterranean and the other in the Atlantic, and in both cases were able to collect both physical and biological data at a higher resolution than existing CTDs.</p><p>“The EcoCTD is capturing these ocean characteristics at a gold-standard quality with much more convenience and versatility,” Freilich says.</p><p>The team will further refine the design, and hopes that this high-resolution, easily deployable, and more efficient alternative may be adopted both by scientists monitoring the ocean’s small-scale responses to climate change and by fisheries that want to keep track of a certain region’s biological productivity.</p><p>This research was funded in part by the U.S. Office of Naval Research.</p> Scientists prepare to deploy an underway CTD from the back deck of a research vessel. Image: Amala Mahadevan https://news.mit.edu/2020/mit-scientist-turns-to-entrepreneurship-pablo-ducru-0520 After delivering novel computational methods for nuclear problems, nuclear science and engineering PhD candidate Pablo Ducru plunges into startup life. Wed, 20 May 2020 00:00:01 -0400 https://news.mit.edu/2020/mit-scientist-turns-to-entrepreneurship-pablo-ducru-0520 Leda Zimmerman | Department of Nuclear Science and Engineering <p>Like the atomic particles he studies, Pablo Ducru seems constantly on the move, vibrating with energy. But if he sometimes appears to be headed in an unexpected direction, Ducru, a doctoral candidate in nuclear science and computational engineering, knows exactly where he is going: “My goal is to address climate change as an innovator and creator, whether by pushing the boundaries of science” through research, says Ducru, or pursuing a zero-carbon future as an entrepreneur.</p> <p>It can be hard catching up with Ducru. In January, he returned to Cambridge, Massachusetts, from Beijing, where he was spending a year earning a master’s degree in global affairs as a <a href=”https://www.schwarzmanscholars.org/”>Schwarzman Scholar</a> at Tsinghua University. He flew out just days before a travel crackdown in response to Covid-19.</p> <p>“This year has been intense, juggling my PhD work and the master’s overseas,” he says. “But I needed to do it, to get a 360-degree understanding of the problem of climate change, which isn’t just a technological problem, but also one involving economics, trade, policy, and finance.”</p> <p>Schwarzman Scholars, an international cohort selected on the basis of academic excellence and leadership potential, among other criteria, focus on critical challenges of the 21st century. While all the students must learn the basics of international relations and China’s role in the world economy, they can tailor their studies according to their interests.</p> <p>Ducru is incorporating nuclear science into his master’s program. “It is at the core of many of the world’s key problems, from climate change to arms control, and it also impacts artificial intelligence by advancing high-performance computing,” he says.</p> <p>A Franco-Mexican raised in Paris, Ducru arrived at nuclear science by way of France’s selective academic system. He excelled in math, history, and English during his high school years.
“I realized technology is what drives history,” he says. “I thought that if I wanted to make history, I needed to make technology.” He graduated from Ecole Polytechnique specializing in physics and applied mathematics, and with a major in energies of the 21st century.</p> <p><strong>Creating computational shortcuts</strong></p> <p>Today, as a member of MIT’s Computational Reactor Physics Group (CRPG), Ducru is deploying his expertise in singular ways to help solve some of the toughest problems in nuclear science.</p> <p>Nuclear engineers, hoping to optimize efficiency and safety in current and next-generation reactor designs, are on a quest for high-fidelity nuclear simulations. At such fine-grained levels of modeling, the behavior of subatomic particles is sensitive to minute uncertainties in temperature change, or differences in reactor core geometry, for instance. To quantify such uncertainties, researchers currently need countless costly hours of supercomputer time to simulate the behaviors of billions of neutrons under varying conditions, estimating and then averaging outcomes.</p> <p>“But with some problems, more computing won’t make a difference,” notes Ducru. “We have to help computers do the work in smarter ways.” To accomplish this task, he has developed new formulations for characterizing basic nuclear physics that make it much easier for a computer to solve problems: “I dig into the fundamental properties of physics to give nuclear engineers new mathematical algorithms that outperform thousands of times over the old ways of computing.”</p> <p>With his novel statistical methods and algorithms, developed with CRPG colleagues and during summer stints at Los Alamos and Oak Ridge National Laboratories, Ducru offers “new ways of looking at problems that allow us to infer trends from uncertain inputs, such as physics, geometries, or temperatures,” he says. &nbsp;</p> <p>These innovative tools accommodate other kinds of problems that involve computing average behaviors from billions of individual occurrences, such as bubbles forming in a turbulent flow of reactor coolant. “My solutions are quite fundamental and problem-agnostic — applicable to the design of new reactors, to nuclear imaging systems for tumor detection, or to the plutonium battery of a Mars rover,” he says. “They will be useful anywhere scientists need to lower costs of high-fidelity nuclear simulations.”</p> <p>But Ducru won’t be among the scientists deploying these computational advances. “I think we’ve done a good job, and others will continue in this area of research,” he says. “After six years of delving deep into quantum physics and statistics, I felt my next step should be a startup.”</p> <p><strong>Scaling up with shrimp</strong></p> <p>As he pivots away from academia and nuclear science, Ducru remains constant to his mission of addressing the climate problem. The result is Torana, a company Ducru and a partner started in 2018 to develop the financial products and services aquaculture needs to sustainably feed the world.</p> <p>“I thought we could develop a scalable zero-carbon food,” he says. “The world needs high-nutrition proteins to feed growing populations in a climate-friendly way, especially in developing nations.”&nbsp;</p> <p>Land-based protein sources such as livestock can take a heavy toll on the environment. 
But shrimp, on the other hand, are “very efficient machines, scavenging crud at the bottom of the ocean and converting it into high-quality protein,” notes Ducru, who received the 2018 <a href=”http://www.mitwaterinnovation.org/”>MIT Water Innovation Prize</a> and the 2019 <a href=”http://food-ag.squarespace.com/innovation-prize”>Rabobank-MIT Food and Agribusiness Prize</a>, and support from <a href=”http://sandbox.mit.edu/” target=”_blank”>MIT Sandbox</a> to help develop his aquaculture startup (then called Velaron).</p> <p>Torana is still in its early stages, and Ducru hopes to apply his modeling expertise to build a global system of sustainable shrimp farming. His Schwarzman master’s thesis studies the role of aquaculture in our future global food system, with a focus on the shrimp supply chain.</p> <p>In response to the Covid-19 pandemic, Ducru relocated to the family farm in southern France, which he helps run while continuing to follow the Tsinghua master’s program online and work on his MIT PhD. He is tweaking his business plans, and putting the final touches on his PhD research, including submitting several articles for publication. While it’s been challenging keeping all these balls in the air, he has supportive mentors — “<a href=”http://web.mit.edu/nse/people/faculty/forget.html”>Benoit Forget</a> [CRPG director] has backed almost all my crazy ideas,” says Ducru. “People like him make MIT the best university on Earth.”</p> <p>Ducru is already mapping out his next decade or so: grow his startup, and perhaps create a green fund that could underwrite zero-carbon projects, including nuclear ones. “I don’t have Facebook and don’t watch online series or TV, because I prefer being an actor, creating things through my work,” he says. “I’m a scientific entrepreneur, and will continue to innovate across different realms.”</p> “My goal is to address climate change as an innovator and creator, whether by pushing the boundaries of science” through research or pursuing a zero-carbon future as an entrepreneur, says MIT PhD candidate Pablo Ducru. Photo: Gretchen Ertl https://news.mit.edu/2020/3-questions-energy-studies-mit-next-generation-energy-leaders-0518 Abigail Ostriker ’16 and Addison Stark SM ’10, PhD ’15 share how their experiences with MIT’s energy programs connect them to the global energy community. Mon, 18 May 2020 14:20:01 -0400 https://news.mit.edu/2020/3-questions-energy-studies-mit-next-generation-energy-leaders-0518 Turner Jackson | MIT Energy Initiative <p><em>Students who engage in energy studies at MIT develop an integrative understanding of energy as well as skills required of tomorrow’s energy professionals, leaders, and innovators in research, industry, policy, management, and governance. Two energy alumni recently shared their experiences as part of MIT’s energy community, and how their work connects to energy today.</em></p> <p><em>Abigail Ostriker ’16, who majored in applied mathematics, is now pursuing a PhD in economics at MIT, where she is conducting research into whether subsidized flood insurance causes overdevelopment. Prior to her graduate studies, she conducted two years of research into health economics with Amy Finkelstein, the John and Jennie S. MacDonald Professor of Economics at MIT. Addison Stark SM ’10, PhD ’15, whose degrees are in mechanical engineering and technology and policy, is the associate director for energy innovation at the Bipartisan Policy Center in Washington, which focuses on implementing effective policy on important topics for American citizens.
He also serves as an adjunct professor at Georgetown University, where he teaches a course on clean energy innovation. Prior to these roles, he was a fellow and acting program director at the U.S. Department of Energy’s Advanced Research Projects Agency-Energy.</em></p> <p><strong>Q: </strong>What experiences did you have that inspired you to pursue energy studies?</p> <p><strong>Stark:</strong> I grew up on a farm in rural Iowa, surrounded by a growing biofuels industry and bearing witness to the potential impacts of climate change on agriculture. I then went to the University of Iowa as an undergrad. While there, I was lucky enough to serve as one of the student representatives on a committee that put together a large decarbonization plan for the university. I recognized at the time that the university not only needed to put together a policy, but also to think about what technologies they had to procure to implement their goals. That experience increased my awareness of the big challenges surrounding climate change. I was fortunate to have attended the University of Iowa because a large percentage of the students had an environmental outlook, and many faculty members were involved with the Intergovernmental Panel on Climate Change (IPCC) and engaged with climate and sustainability issues at a time when many other science and engineering schools hadn’t to the same degree.</p> <p><strong>Q:</strong> How did your time at MIT inform your eventual work in the energy space?</p> <p><strong>Ostriker:</strong> I took my first economics class in my freshman fall, but I didn’t really understand what economics could do until I took Energy Economics and Policy [14.44J/15.037] with Professor Christopher Knittel at the Sloan School the following year. That class turned the field from a collection of unrealistic maximizing equations into a framework that could make sense of real people’s decisions and predict how incentives affect outcomes. That experience led me to take a class on econometrics. The combination made me feel like economics was a powerful set of tools for understanding the world — and maybe tweaking it to get a slightly better outcome.</p> <p><strong>Stark:</strong> Completing my master’s in the Technology and Policy Program (TPP) and in mechanical engineering at MIT was invaluable. The focus on systems thinking that was being employed in TPP and at the MIT Energy Initiative (MITEI) has been very important in shaping my thinking around the biggest challenges in climate and energy.</p> <p>While pursuing my master’s degree, I worked with Daniel Cohn, a research scientist at MITEI, and Ahmed Ghoniem, a professor of mechanical engineering, who later became my PhD advisor. We looked at a lot of big questions about how to integrate advanced biofuels into today’s transportation and distribution infrastructures: Can you ship it in a pipeline? Can you transport it? Are people able to put it into infrastructure that we’ve already spent billions of dollars building out? One of the critical lessons that I learned while at MITEI — and it’s led to a lot of my thinking today — is that in order for us to have an effective energy transition, there need to be ways that we can utilize current infrastructure.</p> <p>Being involved with and becoming a co-president of the MIT Energy Club in 2010 truly helped to shape my experience at MIT. When I came to MIT, one of the first things that I did was attend the MIT Energy Conference. 
In the early days of the club and of MITEI — in ’07 — there was a certain “energy” around energy at MIT that really got a lot of us thinking about careers in the field.</p> <p><strong>Q:</strong> How does your current research connect to energy, and in what ways do the fields of economics and energy connect?</p> <p><strong>Ostriker: </strong>Along with my classmate Anna Russo, I am currently studying whether subsidized flood insurance causes overdevelopment. In the U.S., many flood maps are out of date and backward-looking: Flood risk is rising due to climate change, so in many locations, insurance premiums now cost less than expected damages. This creates an implicit subsidy for risky areas that distorts price signals and may cause too many homes to be built. We want to estimate the size of the subsidies and the effect they have on development. It’s a challenging question because it’s hard to compare areas that are exactly the same except for their insurance premiums. We are hoping to get there by looking at boundaries in the flood insurance maps — areas where true flood risk is the same but premiums are different. We hope that by improving our understanding of how insurance prices affect land use, we can help governments create more efficient policies for climate resilience.</p> <p>Many economists are studying issues related to both energy and the environment. One definition of economics is the study of trade-offs — how best to allocate scarce resources. In energy, there are questions such as: How should we design electricity markets so that they automatically meet demand with the lowest-cost mix of generation? As the generation mix moves from almost all fossil fuels to a higher penetration of renewables, will that market design still work, or will it need to be adapted so that renewable energy companies still find it attractive to participate?</p> <p>In addition to theoretical questions about how markets work, economists also study how real people and companies respond to policies. For example, if retail electricity prices started to change by the hour or by the minute, how would people’s energy use respond? To answer this question convincingly, you need a situation in which everything is almost identical between two groups, except that one group faces different prices. You can’t always run a randomized experiment, so you must find something almost like an experiment in the real world. This kind of toolkit is also used a lot in environmental economics. For instance, we might study the effect of pollution on students’ test scores. In that setting, economists’ tools of causal inference make it possible to move beyond an observed correlation to a statement that pollution had a causal effect.</p> <p><strong>Q: </strong>How do you think we can make the shift toward a clean-energy economy a more pressing issue for people across the political spectrum?</p> <p><strong>Stark: </strong>If we are serious about addressing climate change as a country, we need to recognize that any policy has to be bipartisan; it will need to hit 60 votes in the Senate. Very quickly — within the next few years — we need to develop a set of robust bipartisan policies that can move us toward decarbonization by mid-century. If the IPCC recommendations are to be followed, our ultimate goal is to hit net-zero carbon emissions by 2050. What that means to me is that we need to frame up all of the benefits of a large clean energy program to address climate change. 
When we address climate change, one of the valuable things that’s going to happen is major investment in technology development and deployment, which creates jobs — and job creation is a bipartisan issue.</p> <p>As we look to build out a decarbonized future, one thing that needs to happen is reinvestment in our national infrastructure, an issue recognized on both sides of the aisle. It’s going to require more nuance than the pure Green New Deal approach. To get Republicans on board, we need to realize that investment can’t be based only on renewables. There are a lot of people whose economies depend on the continued and smart use of fossil resources. We have to think about how we develop and deploy carbon capture technologies, as these technologies are going to be integral in garnering support from rural and conservative communities for the energy transition.</p> <p>The Republican Party is embracing the role of nuclear energy more than some Democrats are. The key thing is that today, nuclear is far and away the most prevalent source of zero-carbon electricity that we have. So expanding nuclear power is a critically important piece of decarbonizing energy, and Republicans have identified it as a place where they would like to invest, along with carbon capture, utilization, and storage — another technology that draws less enthusiasm from the environmental left. Finding ways to bridge party lines on these critical technologies will, I think, be one of the most important steps in bringing about a low-carbon future.</p> Addison Stark (left) and Abigail Ostriker Stark photo: Greg Gibson/Bipartisan Policy Center; Ostriker photo: Thomas Dattilo 
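<p>The boundary comparison Ostriker describes can be made concrete with a toy calculation. The Python sketch below is purely illustrative: the data are synthetic, and every number in it (the risk gradient, the 0.10 subsidy effect, the 0.5 km comparison bandwidth) is an invented stand-in rather than a figure from her study. It captures the core logic, though: parcels just inside and just outside a mapped flood zone face nearly identical true risk, so a difference in development rates across the boundary can be attributed to the premium difference.</p> <pre><code>
# A minimal synthetic sketch of the flood-map boundary design.
# All quantities here are invented stand-ins, not study estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Signed distance (km) of each parcel from the flood-map boundary;
# negative values are inside the mapped (subsidized) zone.
dist = rng.uniform(-5.0, 5.0, n)
in_zone = dist < 0

# True flood risk varies smoothly with distance, so parcels just on
# either side of the boundary face nearly identical risk.
true_risk = 0.3 - 0.02 * dist

# Development probability falls with true risk and rises with the
# premium subsidy; 0.10 is the effect we hope to recover.
subsidy_effect = 0.10
p_develop = 0.5 - 0.4 * true_risk + subsidy_effect * in_zone
developed = rng.random(n) < p_develop

# Compare development rates in a narrow band around the boundary,
# where true risk is effectively held constant.
band = np.abs(dist) < 0.5
inside_rate = developed[band & in_zone].mean()
outside_rate = developed[band & ~in_zone].mean()
print(f"estimated subsidy effect: {inside_rate - outside_rate:.3f}")
</code></pre> <p>In practice, researchers would refine this raw band comparison with regressions that control for distance to the boundary, but the identification idea is the same: hold true risk fixed and let only the premium vary.</p>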
https://news.mit.edu/2020/melting-glaciers-cool-southern-ocean-0517 Research suggests glacial melting might explain the recent decadal cooling and sea ice expansion across Antarctica's Southern Ocean. Sun, 17 May 2020 00:00:00 -0400 https://news.mit.edu/2020/melting-glaciers-cool-southern-ocean-0517 Fernanda Ferreira | School of Science <p>Tucked away at the very bottom of the globe surrounding Antarctica, the Southern Ocean has never been easy to study. Its challenging conditions have placed it out of reach to all but the most intrepid explorers. For climate modelers, however, the surface waters of the Southern Ocean pose a different kind of challenge: they don’t behave the way models predict they should. “It is colder and fresher than the models expected,” says Craig Rye, a postdoc in the group of Cecil and Ida Green Professor of Oceanography John Marshall within MIT’s Department of Earth, Atmospheric and Planetary Sciences (<a href=”https://eapsweb.mit.edu/” target=”_blank”>EAPS</a>).</p> <p>In recent decades, as the world warms, the Southern Ocean’s surface temperature has cooled, allowing the amount of ice that crystallizes on the surface each winter to grow. This is not what climate models anticipated, and a recent study <a href=”https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2019GL086892″>accepted in <em>Geophysical Research Letters</em></a> attempts to disentangle that discrepancy. “This paper is motivated by a disagreement between what should be happening according to simulations and what we observe,” says Rye, the lead author of the paper, who is currently working remotely from NASA’s Goddard Institute for Space Studies, or GISS, in New York City.</p> <p>“This is a big conundrum in the climate community,” says Marshall, a co-author on the paper along with Maxwell Kelley, Gary Russell, Gavin A. 
Schmidt, and Larissa S. Nazarenko of GISS; James Hansen of Columbia University’s Earth Institute; and Yavor Kostov of the University of Exeter. There are 30 or so climate models used to project what the world might look like as the climate changes. According to Marshall, the models don’t match recent observations of surface temperature in the Southern Ocean, leaving scientists with a question that Rye, Marshall, and their colleagues intend to answer: How can the Southern Ocean cool when the rest of the Earth is warming?</p> <p>This isn’t the first time Marshall has investigated the Southern Ocean and its climate trends. In 2016, Marshall and Yavor Kostov PhD ’16 <a href=”http://news.mit.edu/2016/southern-ocean-cooling-in-a-warming-world-0624″>published a paper</a> exploring two possible influences driving the observed ocean trends: greenhouse gas emissions and westerly winds — strengthened by expansion of the Antarctic ozone hole — blowing cold water northward from the continent. Both explained some of the cooling in the Southern Ocean, but not all of it. “We ended that paper saying there must be something else,” says Marshall.</p> <p>That something else could be meltwater released from thawing glaciers. Rye has probed the influence of glacial melt in the Southern Ocean before, looking at its effect on sea surface height during his PhD at the University of Southampton in the UK. “Since then, I’ve been interested in the potential for glacial melt playing a role in Southern Ocean climate trends,” says Rye.</p> <p>The group’s recent paper uses a series of “perturbation” experiments carried out with the GISS global climate model, in which they abruptly introduce a fixed increase in meltwater around Antarctica and then record how the model responds. The researchers then apply the model’s response to a previous climate state to estimate how the climate should react to the observed forcing. The results are then compared to the observational record to see if a factor is missing. This method is called hindcasting.</p> <p>Marshall likens perturbation experiments to walking into a room and being confronted with an object you don’t recognize. “You might give it a gentle whack to see what it’s made of,” says Marshall. Perturbation experiments, he explains, are like whacking the model with inputs, such as glacial melt, greenhouse gas emissions, and wind, to uncover the relative importance of these factors in driving observed climate trends.</p> <p>In their hindcasting, they estimate what would have happened to a pre-industrial Southern Ocean (before anthropogenic climate change) if up to 750 gigatons of meltwater were added each year. That quantity of 750 gigatons is estimated from observations of both floating ice shelves and the ice sheet that lies over land above sea level. A single gigaton of water is very large — it can fill 400,000 Olympic swimming pools, meaning 750 gigatons of meltwater is equivalent to pouring water from 300 million Olympic swimming pools into the ocean every year.</p> <p>When this increase in glacial melt was added to the model, it led to sea surface cooling, decreases in salinity, and expansion of sea ice coverage that are consistent with observed trends in the Southern Ocean during the last few decades. 
Their model results suggest that meltwater may account for the majority of the previously unexplained Southern Ocean cooling.</p><p>The model shows that a warming climate may, counterintuitively, be driving more sea ice by increasing the rate of melting of Antarctica’s glaciers. According to Marshall, the paper may resolve the disconnect between what was expected and what was observed in the Southern Ocean, and it answers the conundrum he and Kostov pointed to in 2016. “The missing process could be glacial melt.”</p> <p>Research like Rye’s and Marshall’s helps project the future state of Earth’s climate and guide society’s decisions on how to prepare for that future. By hindcasting the Southern Ocean’s climate trends, they and their colleagues have identified another process that must be incorporated into climate models. “What we’ve tried to do is ground this model in the historical record,” says Marshall. Now the group can probe the GISS model’s response with further “what if?” glacial melt scenarios to explore what might be in store for the Southern Ocean.</p> MIT scientists suggest sea ice extent in the Southern Ocean may increase with glacial melting in Antarctica. This image shows a view of the Earth on Sept. 21, 2005 with the full Antarctic region visible. Photo: NASA/Goddard Space Flight Center
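<p>The perturbation-and-hindcast logic the article describes can be illustrated with a toy linear response model. The Python sketch below is a loose caricature, not the GISS model: the one-box ocean, the relaxation rate, the meltwater sensitivity, and the ramp-shaped meltwater history are all invented for illustration. It walks through the method's three steps: run a step-forcing perturbation experiment, extract an impulse response, and convolve that response with a historical forcing record to hindcast the induced change.</p> <pre><code>
# A toy caricature of the perturbation/hindcast method; the "climate
# model" here is a one-box energy balance with made-up constants.
import numpy as np

LAMBDA = 0.1   # relaxation rate (1/yr); hypothetical
SENS = -1e-4   # surface cooling per Gt/yr of meltwater (degC); hypothetical

def one_box_response(forcing):
    """Integrate dT/dt = -LAMBDA * T + SENS * forcing with 1-year steps."""
    temp = np.zeros(len(forcing))
    for t in range(1, len(forcing)):
        temp[t] = temp[t - 1] + (-LAMBDA * temp[t - 1] + SENS * forcing[t - 1])
    return temp

years = 100

# Perturbation experiment: abruptly switch on a fixed 750 Gt/yr
# meltwater flux (1 Gt of water is about 1e9 cubic meters, roughly
# 400,000 Olympic pools) and record the model's step response.
step_forcing = np.full(years, 750.0)
step_response = one_box_response(step_forcing)

# For a linear model, the impulse response is the year-to-year change
# in the step response, normalized by the step size.
impulse = np.diff(step_response, prepend=0.0) / 750.0

# Hindcast: convolve the impulse response with an invented historical
# meltwater record that ramps up over the simulated century.
history = np.linspace(0.0, 750.0, years)
hindcast = np.convolve(history, impulse)[:years]

print(f"hindcast surface cooling after {years} years: {hindcast[-1]:.2f} degC")
</code></pre> <p>In spirit, this response-function shortcut is what lets a handful of expensive perturbation runs be reused to test many "what if?" forcing histories against the observational record, rather than rerunning the full model for each one.</p>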

