More stories

  • AI pilot programs look to reduce energy use and emissions on MIT campus

    Smart thermostats have changed the way many people heat and cool their homes by using machine learning to respond to occupancy patterns and preferences, resulting in a lower energy draw. This technology — which can collect and synthesize data — generally focuses on single-dwelling use, but what if this type of artificial intelligence could dynamically manage the heating and cooling of an entire campus? That’s the idea behind a cross-departmental effort working to reduce campus energy use through AI building controls that respond in real time to internal and external factors.

    Understanding the challenge

    Heating and cooling can be an energy challenge for campuses like MIT, where existing building management systems (BMS) can’t respond quickly to internal factors like occupancy fluctuations or external factors such as forecast weather or the carbon intensity of the grid. This results in using more energy than needed to heat and cool spaces, often to sub-optimal levels. By engaging AI, researchers have begun to establish a framework to understand and predict optimal temperature set points (the temperature a thermostat is set to maintain) at the individual room level, while taking into consideration a host of factors, allowing the existing systems to heat and cool more efficiently, all without manual intervention.

    “It’s not that different from what folks are doing in houses,” explains Les Norford, a professor of architecture at MIT, whose work in energy studies, controls, and ventilation connected him with the effort. “Except we have to think about things like how long a classroom may be used in a day, weather predictions, time needed to heat and cool a room, the effect of the heat from the sun coming in the window, and how the classroom next door might impact all of this.” These factors are at the crux of the research and pilots that Norford and a team are focused on. That team includes Jeremy Gregory, executive director of the MIT Climate and Sustainability Consortium; Audun Botterud, principal research scientist for the Laboratory for Information and Decision Systems; Steve Lanou, project manager in the MIT Office of Sustainability (MITOS); Fran Selvaggio, Department of Facilities Senior Building Management Systems engineer; and Daisy Green and You Lin, both postdocs.

    The group is organized around the call to action to “explore possibilities to employ artificial intelligence to reduce on-campus energy consumption” outlined in Fast Forward: MIT’s Climate Action Plan for the Decade, but efforts extend back to 2019. “As we work to decarbonize our campus, we’re exploring all avenues,” says Vice President for Campus Services and Stewardship Joe Higgins, who originally pitched the idea to students at the 2019 MIT Energy Hack. “To me, it was a great opportunity to utilize MIT expertise and see how we can apply it to our campus and share what we learn with the building industry.” Research into the concept kicked off at the event and continued with undergraduate and graduate student researchers running differential equations and managing pilots to test the bounds of the idea. Soon, Gregory, who is also a MITOS faculty fellow, joined the project and helped identify other individuals to join the team. “My role as a faculty fellow is to find opportunities to connect the research community at MIT with challenges MIT itself is facing — so this was a perfect fit for that,” Gregory says. 

    Early pilots of the project focused on testing thermostat set points in NW23, home to the Department of Facilities and Office of Campus Planning, but Norford quickly realized that classrooms provide many more variables to test, and the pilot was expanded to Building 66, a mixed-use building that is home to classrooms, offices, and lab spaces. “We shifted our attention to study classrooms in part because of their complexity, but also the sheer scale — there are hundreds of them on campus, so [they offer] more opportunities to gather data and determine parameters of what we are testing,” says Norford. 

    Developing the technology

    The work to develop smarter building controls starts with a physics-based model using differential equations to understand how objects can heat up or cool down, store heat, and how the heat may flow across a building façade. External data like weather, carbon intensity of the power grid, and classroom schedules are also inputs, with the AI responding to these conditions to deliver an optimal thermostat set point each hour — one that provides the best trade-off between the two objectives of thermal comfort of occupants and energy use. That set point then tells the existing BMS how much to heat up or cool down a space. Real-life testing follows, surveying building occupants about their comfort. Botterud, whose research focuses on the interactions between engineering, economics, and policy in electricity markets, works to ensure that the AI algorithms can then translate this learning into energy and carbon emission savings. 
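
    The kind of hourly set-point optimization described above can be sketched in a few lines. This is a toy illustration, not the team's actual model: the first-order thermal model, the candidate set-point range, and every parameter (`r`, `c`, the 500 W-per-degree heating term, the objective weight) are invented for demonstration.

```python
# Toy sketch: a first-order RC thermal model of one room, plus an hourly
# search for the set point that best trades off occupant comfort against
# energy use. All parameters are illustrative, not the MIT team's model.

def room_temp_next(t_in, t_out, heat_w, solar_w, dt_h=1.0, r=0.05, c=2000.0):
    """One Euler step of dT/dt = ((t_out - t_in)/R + Q) / C."""
    q = heat_w + solar_w                        # total heat input
    return t_in + dt_h * ((t_out - t_in) / r + q) / c

def best_setpoint(t_in, t_out, occupied, comfort_c=21.0, weight=0.5):
    """Pick the hourly set point minimizing comfort penalty + energy cost."""
    best, best_cost = None, float("inf")
    for sp in [x / 2 for x in range(36, 49)]:       # candidates 18.0 .. 24.0 C
        heat_w = max(0.0, sp - t_in) * 500.0         # crude heating power
        comfort_pen = abs(sp - comfort_c) if occupied else 0.0
        cost = comfort_pen + weight * heat_w / 1000.0  # two objectives
        if cost < best_cost:
            best, best_cost = sp, cost
    return best

# One hour of the physics model: a heated room warms toward the set point.
print(room_temp_next(t_in=19.0, t_out=5.0, heat_w=1000.0, solar_w=200.0))

# An empty room drifts toward a low-energy set point; an occupied one
# is pulled toward the comfort temperature.
print(best_setpoint(t_in=19.0, t_out=5.0, occupied=False))
print(best_setpoint(t_in=19.0, t_out=5.0, occupied=True))
```

    The same structure scales to the real problem by swapping the brute-force search for a learned predictor and feeding in weather forecasts, schedules, and grid carbon intensity as additional inputs.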

    Currently the pilots are focused on six classrooms within Building 66, with the intent to move on to lab spaces before expanding to the entire building. “The goal here is energy savings, but that’s not something we can fully assess until we complete a whole building,” explains Norford. “We have to work classroom by classroom to gather the data, but are looking at a much bigger picture.” The research team used its data-driven simulations to estimate significant energy savings while maintaining thermal comfort in the six classrooms over two days, but further work is needed to implement the controls and measure savings across an entire year.

    With significant savings estimated across individual classrooms, the energy savings derived from an entire building could be substantial, and AI can help meet that goal, explains Botterud: “This whole concept of scalability is really at the heart of what we are doing. We’re spending a lot of time in Building 66 to figure out how it works and hoping that these algorithms can be scaled up with much less effort to other rooms and buildings so solutions we are developing can make a big impact at MIT,” he says.

    Part of that big impact involves operational staff, like Selvaggio, who are essential in connecting the research to current operations and putting it into practice across campus. “Much of the BMS team’s work is done in the pilot stage for a project like this,” he says. “We were able to get these AI systems up and running with our existing BMS within a matter of weeks, allowing the pilots to get off the ground quickly.” Selvaggio says in preparation for the completion of the pilots, the BMS team has identified an additional 50 buildings on campus where the technology can easily be installed in the future to begin delivering energy savings. The BMS team also collaborates with building automation company Schneider Electric, which has implemented the new control algorithms in Building 66 classrooms and is ready to expand to new pilot locations.

    Expanding impact

    The successful completion of these programs will also open the possibility for even greater energy savings — bringing MIT closer to its decarbonization goals. “Beyond just energy savings, we can eventually turn our campus buildings into a virtual energy network, where thousands of thermostats are aggregated and coordinated to function as a unified virtual entity,” explains Higgins. These types of energy networks can accelerate power sector decarbonization by decreasing the need for carbon-intensive power plants at peak times and allowing for more efficient power grid energy use.

    As pilots continue, they fulfill another call to action in Fast Forward — for campus to be a “test bed for change.” Says Gregory: “This project is a great example of using our campus as a test bed — it brings in cutting-edge research to apply to decarbonizing our own campus. It’s a great project for its specific focus, but also for serving as a model for how to utilize the campus as a living lab.”

  • The curse of variety in transportation systems

    Cathy Wu has always delighted in systems that run smoothly. In high school, she designed a project to find the optimal route for getting to class on time. Her research interests and career track are evidence of a propensity for organizing and optimizing, coupled with a strong sense of responsibility to contribute to society instilled by her parents at a young age.

    As an undergraduate at MIT, Wu explored domains like agriculture, energy, and education, eventually homing in on transportation. “Transportation touches each of our lives,” she says. “Every day, we experience the inefficiencies and safety issues as well as the environmental harms associated with our transportation systems. I believe we can and should do better.”

    But doing so is complicated. Consider the long-standing issue of traffic systems control. Wu explains that it is not one problem, but more accurately a family of control problems impacted by variables like time of day, weather, and vehicle type — not to mention the types of sensing and communication technologies used to measure roadway information. Every differentiating factor introduces an exponentially larger set of control problems. There are thousands of control-problem variations and hundreds, if not thousands, of studies and papers dedicated to each problem. Wu refers to the sheer number of variations as the curse of variety — and it is hindering innovation.


    “To prove that a new control strategy can be safely deployed on our streets can take years. As time lags, we lose opportunities to improve safety and equity while mitigating environmental impacts. Accelerating this process has huge potential,” says Wu.  

    Which is why she and her group in the MIT Laboratory for Information and Decision Systems are devising machine learning-based methods to solve not just a single control problem or a single optimization problem, but families of control and optimization problems at scale. “In our case, we’re examining emerging transportation problems that people have spent decades trying to solve with classical approaches. It seems to me that we need a different approach.”

    Optimizing intersections

    Currently, Wu’s largest research endeavor is called Project Greenwave. There are many sectors that directly contribute to climate change, but transportation is responsible for the largest share of U.S. greenhouse gas emissions — 29 percent, of which 81 percent is due to land transportation. And while much of the conversation around mitigating environmental impacts related to mobility is focused on electric vehicles (EVs), electrification has its drawbacks. EV fleet turnover is time-consuming (“on the order of decades,” says Wu), and limited global access to the technology presents a significant barrier to widespread adoption.

    Wu’s research, on the other hand, addresses traffic control problems by leveraging deep reinforcement learning. Specifically, she is looking at traffic intersections — and for good reason. In the United States alone, there are more than 300,000 signalized intersections where vehicles must stop or slow down before re-accelerating. And every re-acceleration burns fossil fuels and contributes to greenhouse gas emissions.

    Highlighting the magnitude of the issue, Wu says, “We have done preliminary analysis indicating that up to 15 percent of land transportation CO2 is wasted through energy spent idling and re-accelerating at intersections.”

    To date, she and her group have modeled 30,000 different intersections across 10 major metropolitan areas in the United States. That is 30,000 different configurations, roadway topologies (e.g., grade of road or elevation), different weather conditions, and variations in travel demand and fuel mix. Each intersection, with its corresponding scenarios, represents a unique multi-agent control problem.

    Wu and her team are devising techniques that can solve not just one, but a whole family of problems comprising tens of thousands of scenarios. Put simply, the idea is to coordinate the timing of vehicles so they arrive at intersections when traffic lights are green, thereby eliminating the start, stop, re-accelerate conundrum. Along the way, they are building an ecosystem of tools, datasets, and methods to enable roadway interventions and impact assessments of strategies to significantly reduce carbon-intense urban driving.
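
    The core timing idea, arriving at each light just as it turns green, can be sketched with the classic fixed-offset "green wave" calculation. This is a simplified stand-in for the team's reinforcement-learning approach, not their method; the corridor geometry, design speed, and cycle length below are made up.

```python
# A minimal "green wave" sketch: choose signal offsets along a corridor so a
# vehicle travelling at the design speed reaches each light as it turns
# green, avoiding the stop-and-re-accelerate cycle the article describes.

def green_wave_offsets(distances_m, speed_mps, cycle_s):
    """Offset of each signal (seconds) = travel time from the first signal,
    wrapped by the shared cycle length."""
    offsets = []
    cum = 0.0
    for d in distances_m:
        cum += d
        offsets.append(round(cum / speed_mps) % cycle_s)
    return offsets

# Three signals spaced 300 m apart, 15 m/s design speed, 90 s cycle:
# travel times are 20 s, 40 s, and 60 s, so greens are staggered accordingly.
print(green_wave_offsets([300, 300, 300], speed_mps=15.0, cycle_s=90))
```

    Real intersections break this clean picture with cross traffic, variable demand, and weather, which is why each one becomes its own control problem in Wu's framing.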


    Their collaborator on the project is the Utah Department of Transportation, which Wu says has played an essential role, in part by sharing data and practical knowledge that she and her group otherwise would not have been able to access publicly.

    “I appreciate industry and public sector collaborations,” says Wu. “When it comes to important societal problems, one really needs grounding with practitioners. One needs to be able to hear the perspectives in the field. My interactions with practitioners expand my horizons and help ground my research. You never know when you’ll hear the perspective that is the key to the solution, or perhaps the key to understanding the problem.”

    Finding the best routes

    In a similar vein, she and her research group are tackling large coordination problems. For example, vehicle routing. “Every day, delivery trucks route more than a hundred thousand packages for the city of Boston alone,” says Wu. Accomplishing the task requires, among other things, figuring out which trucks to use, which packages to deliver, and the order in which to deliver them as efficiently as possible. If and when the trucks are electrified, they will need to be charged, adding another wrinkle to the process and further complicating route optimization.

    The vehicle routing problem, and therefore the scope of Wu’s work, extends beyond truck routing for package delivery. Ride-hailing cars may need to pick up objects as well as drop them off; and what if delivery is done by bicycle or drone? In partnership with Amazon, for example, Wu and her team addressed routing and path planning for hundreds of robots (up to 800) in their warehouses.

    Every variation requires custom heuristics that are expensive and time-consuming to develop. Again, this is really a family of problems — each one complicated, time-consuming, and currently unsolved by classical techniques — and they are all variations of a central routing problem. The curse of variety meets operations and logistics.

    By combining classical approaches with modern deep-learning methods, Wu is looking for a way to automatically identify heuristics that can effectively solve all of these vehicle routing problems. So far, her approach has proved successful.

    “We’ve contributed hybrid learning approaches that take existing solution methods for small problems and incorporate them into our learning framework to scale and accelerate that existing solver for large problems. And we’re able to do this in a way that can automatically identify heuristics for specialized variations of the vehicle routing problem.” The next step, says Wu, is applying a similar approach to multi-agent robotics problems in automated warehouses.
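
    One way to picture the "scale up a small-problem solver" idea: solve each small cluster of stops exactly, then stitch the cluster routes together. This sketch is not Wu's method; in her work a learned model guides the decomposition, whereas here a naive coordinate sort stands in for it, and all coordinates are invented.

```python
# Illustrative hybrid routing: an exact brute-force solver handles small
# clusters of stops, and a crude decomposition scales it to a larger set.

from itertools import permutations

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def route_len(depot, order):
    pts = [depot] + list(order) + [depot]
    return sum(dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

def solve_small(depot, stops):
    """Exact solver, feasible only for small instances (brute force)."""
    return min(permutations(stops), key=lambda o: route_len(depot, o))

def solve_large(depot, stops, cluster_size=4):
    """Split into small clusters, solve each exactly, concatenate routes.
    A learned clustering would replace the sort in the real approach."""
    stops = sorted(stops)
    route = []
    for i in range(0, len(stops), cluster_size):
        route.extend(solve_small(depot, stops[i:i + cluster_size]))
    return route

depot = (0.0, 0.0)
stops = [(1, 2), (2, 1), (5, 5), (6, 4), (9, 1), (8, 3), (3, 3), (7, 7)]
print(route_len(depot, solve_large(depot, stops)))
```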

    Wu and her group are making big strides, in part due to their dedication to use-inspired basic research. Rather than applying known methods or science to a problem, they develop new methods, new science, to address problems. The methods she and her team employ are necessitated by societal problems with practical implications. The inspiration for the approach? None other than Louis Pasteur, whose research style later gave the now-famous concept of “Pasteur’s Quadrant” its name. Anthrax was decimating the sheep population, and Pasteur wanted to better understand why and what could be done about it. The tools of the time could not solve the problem, so he invented a new field, microbiology, not out of curiosity but out of necessity.

  • The answer may be blowing in the wind

    Capturing energy from the winds gusting off the coasts of the United States could more than double the nation’s electricity generation. It’s no wonder the Biden administration views this immense, clean-energy resource as central to its ambitious climate goals of 100 percent carbon-emissions-free electricity by 2035 and a net-zero emissions economy by 2050. The White House is aiming for 30 gigawatts of offshore wind by 2030 — enough to power 10 million homes.

    At the MIT Energy Initiative’s Spring Symposium, academic experts, energy analysts, wind developers, government officials, and utility representatives explored the immense opportunities and formidable challenges of tapping this titanic resource, both in the United States and elsewhere in the world.

    “There’s a lot of work to do to figure out how to use this resource economically — and sooner rather than later,” said Robert C. Armstrong, MITEI director and the Chevron Professor of Chemical Engineering, in his introduction to the event. 

    In sessions devoted to technology, deployment and integration, policy, and regulation, participants framed the issues critical to the development of offshore wind, described threats to its rapid rollout, and offered potential paths for breaking through gridlock.

    R&D advances

    Moderating a panel on MIT research that is moving the industry forward, Robert Stoner, MITEI’s deputy director for science and technology, provided context for the audience about the industry.

    “We have a high degree of geographic coincidence between where that wind capacity is and where most of us are, and it’s complementary to solar,” he said. Turbines sited in deeper, offshore waters gain the advantage of higher-velocity winds. “You can make these machines huge, creating substantial economies of scale,” said Stoner. An onshore turbine generates approximately 3 megawatts; offshore structures can each produce 15 to 17 megawatts, with blades the length of a football field and towers taller than the Washington Monument.
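
    The economies of scale Stoner describes follow from the standard rotor-power relation, P = 0.5 ρ A v³ Cp: output grows with the square of blade length (through swept area) and the cube of wind speed. The sketch below uses that textbook formula with illustrative dimensions, not vendor specifications:

```python
# Why bigger offshore machines in faster winds far outproduce onshore ones:
# power scales with swept area (blade length squared) and wind speed cubed.

import math

def rotor_power_mw(blade_len_m, wind_mps, rho=1.225, cp=0.45):
    """P = 0.5 * rho * A * v^3 * Cp, converted to megawatts."""
    area = math.pi * blade_len_m ** 2          # swept area, m^2
    watts = 0.5 * rho * area * wind_mps ** 3 * cp
    return watts / 1e6

onshore = rotor_power_mw(blade_len_m=60, wind_mps=10)    # illustrative onshore
offshore = rotor_power_mw(blade_len_m=115, wind_mps=11)  # illustrative offshore
print(f"onshore ~{onshore:.1f} MW, offshore ~{offshore:.1f} MW")
```

    With these illustrative numbers the onshore machine lands near the 3-megawatt figure and the offshore one near the 15-megawatt figure quoted above.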

    To harness the power of wind farms spread over hundreds of nautical miles in deep water, Stoner said, researchers must first address some serious issues, including building and maintaining these massive rigs in harsh environments, laying out wind farms to optimize generation, and creating reliable and socially acceptable connections to the onshore grid. MIT scientists described how they are tackling a number of these problems.

    “When you design a floating structure, you have to prepare for the worst possible conditions,” said Paul Sclavounos, a professor of mechanical engineering and naval architecture who is developing turbines that can withstand severe storms that batter turbine blades and towers with thousands of tons of wind force. Sclavounos described systems used in the oil industry for tethering giant, buoyant rigs to the ocean floor that could be adapted for wind platforms. Relatively inexpensive components such as polyester mooring lines and composite materials “can mitigate the impact of high waves and big, big wind loads.”

    To extract the maximum power from individual turbines, developers must take into account the aerodynamics among turbines in a single wind farm and between adjacent wind farms, according to Michael Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering. Howland’s work modeling turbulence in the atmosphere and wind speeds has demonstrated that angling turbines by just a small amount relative to each other can increase power production significantly for offshore installations, dramatically improving their efficiencies. Howland hopes his research will promote “changing the design of wind farms from the beginning of the process.”

    There’s a staggering complexity to integrating electricity from offshore wind into regional grids such as the one operated by ISO New England, whether converting voltages or monitoring utility load. Steven B. Leeb, a professor of electrical engineering and computer science and of mechanical engineering, is developing sensors that can indicate electronic failures in a microgrid connected to a wind farm. And Marija Ilić, a joint adjunct professor in the Department of Electrical Engineering and Computer Science and a senior research scientist at the Laboratory for Information and Decision Systems, is developing software that would enable real-time scheduling of controllable equipment to compensate for the variable power generated by wind and other variable renewable resources. She is also working on adaptive distributed automation of this equipment to ensure a stable electric power grid.

    “How do we get from here to there?”

    Symposium speakers provided snapshots of the emerging offshore industry, sharing their sense of urgency as well as some frustrations.

    Climate poses “an existential crisis” that calls for “a massive war-footing undertaking,” said Melissa Hoffer, who occupies the newly created cabinet position of climate chief for the Commonwealth of Massachusetts. She views wind power “as the backbone of electric sector decarbonization.” With the Vineyard Wind project, the state will be one of the first in the nation to add offshore wind to the grid. “We are actually going to see the first 400 megawatts … likely interconnected and coming online by the end of this year, which is a fantastic milestone for us,” said Hoffer.

    The journey to completing Vineyard Wind involved a plethora of painstaking environmental reviews, lawsuits over lease siting, negotiations over the price of the electricity it will produce, and buy-in from towns where its underground cable comes ashore on its way to an Eversource substation. It’s a familiar story to Alla Weinstein, founder and CEO of Trident Winds, Inc. On the West Coast, where deep waters (greater than 60 meters) begin closer to shore, Weinstein is trying to launch floating offshore wind projects. “I’ve been in marine renewables for 20 years, and when people ask why I do what I do, I tell them it’s because it matters,” she said. “Because if we don’t do it, we may not have a planet that’s suitable for humans.”

    Weinstein’s “picture of reality” describes a multiyear process during which Trident Winds must address the concerns of such stakeholders as tribal communities and the fishing industry and ensure compliance with, among other regulations, the Marine Mammal Protection Act and the Migratory Bird Treaty Act. Construction of these massive floating platforms, when it finally happens, will require as-yet unbuilt specialized port infrastructure and boats, and a large skilled labor force for assembly and transmission. “This is a once-in-a-lifetime opportunity to create a new industry,” she said, but “how do we get from here to there?”

    Danielle Jensen, technical manager for Shell’s Offshore Wind Americas, is working on a project off Rhode Island. The blueprint calls for high-voltage, direct-current cable snaking to landfall in Massachusetts, where direct-current lines switch to alternating current to connect to the grid. “None of this exists, so we have to find a space, the lands, and the right types of cables, tie into the interconnection point, and work with interconnection operators to do that safely and reliably,” she said.

    Utilities are partnering with developers to begin clearing some of these obstacles. Julia Bovey, director of offshore wind for Eversource, described her firm’s redevelopment or improvement of five ports, and new transport vessels for offshore assembly of wind farm components in Atlantic waters. The utility is also digging under roads to lay cables for new power lines. Bovey notes that snags in supply chains and inflation have been driving up costs. This makes determining future electricity rates more complex, especially since utility contracts and markets work differently in each state.

    Just seven up

    Other nations hold a commanding lead in offshore wind: To date, the United States claims just seven operating turbines, while Denmark boasts 6,200 and the U.K. 2,600. Europe’s combined offshore power capacity stands at 30 gigawatts — which, as MITEI Research Scientist Tim Schittekatte notes, is the U.S. goal for 2030.

    The European Union wants 400 gigawatts of offshore wind by 2050, a target made all the more urgent by threats to Europe’s energy security from the war in Ukraine. “The idea is to connect all those windmills, creating a mesh offshore grid,” Schittekatte said, aided by E.U. regulations that establish “a harmonized process to build cross-border infrastructure.”

    Morten Pindstrup, the international chief engineer at Energinet, Denmark’s state-owned energy enterprise, described one component of this pan-European plan: a hybrid Danish-German offshore wind network. Energinet is also constructing energy islands in the North Sea and the Baltic to pool power from offshore wind farms and feed power to different countries.

    The European wind industry benefits from centralized planning, regulation, and markets, said Johannes P. Pfeifenberger, a principal of The Brattle Group. “The grid planning process in the U.S. is not suitable today to find cost-effective solutions to get us to a clean energy grid in time,” he said. Pfeifenberger recommended that the United States immediately pursue a series of moves including a multistate agreement for cooperating on offshore wind and establishment by grid operators of compatible transmission technologies.

    Symposium speakers expressed sharp concerns that complicated and prolonged approvals, as well as partisan politics, could hobble the nation’s nascent offshore industry. “You can develop whatever you want and agree on what you’re doing, and then the people in charge change, and everything falls apart,” said Weinstein. “We can’t slow down, and we actually need to accelerate.”

    Larry Susskind, the Ford Professor of Urban and Environmental Planning, had ideas for breaking through permitting and political gridlock. A negotiations expert, he suggested convening confidential meetings for stakeholders with competing interests for collaborative problem-solving sessions. He announced the creation of a Renewable Energy Facility Siting Clinic at MIT. “We get people to agree that there is a problem, and to accept that without a solution, the system won’t work in the future, and we have to start fixing it now.”

    Other symposium participants were more sanguine about the success of offshore wind. “Trust me, floating wind is not a pie-in-the-sky, exotic technology that is difficult to implement,” said Sclavounos. “There will be companies investing in this technology because it produces huge amounts of energy, and even though the process may not be streamlined, the economics will work itself out.”

  • An interdisciplinary approach to fighting climate change through clean energy solutions

    In early 2021, the U.S. government set an ambitious goal: to decarbonize its power grid, the system that generates and transmits electricity throughout the country, by 2035. It’s an important goal in the fight against climate change, and will require a switch from current greenhouse-gas-producing energy sources (such as coal and natural gas) to predominantly renewable ones (such as wind and solar).

    Getting the power grid to zero carbon will be a challenging undertaking, as Audun Botterud, a principal research scientist at the MIT Laboratory for Information and Decision Systems (LIDS) who has long been interested in the problem, knows well. It will require building lots of renewable energy generators and new infrastructure; designing better technology to capture, store, and carry electricity; creating the right regulatory and economic incentives; and more. Decarbonizing the grid also presents many computational challenges, which is where Botterud’s focus lies. Botterud has modeled different aspects of the grid — the mechanics of energy supply, demand, and storage, and electricity markets — where economic factors can have a huge effect on how quickly renewable solutions get adopted.

    On again, off again

    A major challenge of decarbonization is that the grid must be designed and operated to reliably meet demand. Using renewable energy sources complicates this, as wind and solar power depend on an infamously volatile system: the weather. A sunny day becomes gray and blustery, and wind turbines get a boost but solar farms go idle. This will make the grid’s energy supply variable and hard to predict. Additional resources, including batteries and backup power generators, will need to be incorporated to regulate supply. Extreme weather events, which are becoming more common with climate change, can further strain both supply and demand. Managing a renewables-driven grid will require algorithms that can minimize uncertainty in the face of constant, sometimes random fluctuations to make better predictions of supply and demand, guide how resources are added to the grid, and inform how those resources are committed and dispatched across the entire United States.

    “The problem of managing supply and demand in the grid has to happen every second throughout the year, and given how much we rely on electricity in society, we need to get this right,” Botterud says. “You cannot let the reliability drop as you increase the amount of renewables, especially because I think that will lead to resistance towards adopting renewables.”

    That is why Botterud feels fortunate to be working on the decarbonization problem at LIDS — even though a career here is not something he had originally planned. Botterud’s first experience with MIT came during his time as a graduate student in his home country of Norway, when he spent a year as a visiting student with what is now called the MIT Energy Initiative. He might never have returned, except that while at MIT, Botterud met his future wife, Bilge Yildiz. The pair both ended up working at the Argonne National Laboratory outside of Chicago, with Botterud focusing on challenges related to power systems and electricity markets. Then Yildiz got a faculty position at MIT, where she is a professor of nuclear and materials science and engineering. Botterud moved back to the Cambridge area with her and continued to work for Argonne remotely, but he also kept an eye on local opportunities. Eventually, a position at LIDS became available, and Botterud took it, while maintaining his connections to Argonne.

    “At first glance, it may not be an obvious fit,” Botterud says. “My work is very focused on a specific application, power system challenges, and LIDS tends to be more focused on fundamental methods to use across many different application areas. However, being at LIDS, my lab [the Energy Analytics Group] has access to the most recent advances in these fundamental methods, and we can apply them to power and energy problems. Other people at LIDS are working on energy too, so there is growing momentum to address these important problems.”

    Weather, space, and time

    Much of Botterud’s research involves optimization, using mathematical programming to compare alternatives and find the best solution. Common computational challenges include dealing with large geographical areas that contain regions with different weather, different types and quantities of renewable energy available, and different infrastructure and consumer needs — such as the entire United States. Another challenge is the need for granular time resolution, sometimes even down to the sub-second level, to account for changes in energy supply and demand.

    Often, Botterud’s group will use decomposition to solve such large problems piecemeal and then stitch together solutions. However, it’s also important to consider systems as a whole. For example, in a recent paper, Botterud’s lab looked at the effect of building new transmission lines as part of national decarbonization. They modeled solutions assuming coordination at the state, regional, or national level, and found that the more regions coordinate to build transmission infrastructure and distribute electricity, the less they will need to spend to reach zero carbon.
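
    The coordination finding can be illustrated with a deliberately tiny two-region dispatch example. The function, demands, and costs below are all invented; the point is only the qualitative result from the paper, that sharing cheap generation over transmission lowers total cost:

```python
# Toy two-region dispatch: with no transmission, each region must serve its
# own demand at its own cost; with a shared line, the cheap (wind-rich)
# region exports up to the line limit, reducing total system cost.

def dispatch_cost(demands, costs, transfer_limit_mw):
    """Serve both demands, letting the cheaper region export up to the limit."""
    (d1, d2), (c1, c2) = demands, costs
    if c1 > c2:                              # make region 1 the cheaper one
        d1, d2, c1, c2 = d2, d1, c2, c1
    exported = min(transfer_limit_mw, d2)    # cheap region covers what it can
    return (d1 + exported) * c1 + (d2 - exported) * c2

demands = (100.0, 100.0)                     # MW demand in each region
costs = (20.0, 60.0)                         # $/MWh: wind-rich vs fossil-heavy
isolated = dispatch_cost(demands, costs, transfer_limit_mw=0.0)
connected = dispatch_cost(demands, costs, transfer_limit_mw=50.0)
print(isolated, connected)                   # the shared line reduces cost
```

    The lab's actual models are vastly larger, spanning many regions, hours, and technologies, but the mechanism scales the same way: more coordination unlocks more of the cheap generation.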

    In other projects, Botterud uses game theory approaches to study strategic interactions in electricity markets. For example, he has designed agent-based models to analyze electricity markets. These assume each actor will make strategic decisions in their own best interest and then simulate interactions between them. Interested parties can use the models to see what would happen under different conditions and market rules, which may lead companies to make different investment decisions, or governing bodies to issue different regulations and incentives. These choices can shape how quickly the grid gets decarbonized.
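    The flavor of those strategic interactions can be sketched with the textbook Cournot duopoly, in which two generating firms repeatedly best-respond to each other's output until they reach a Nash equilibrium. This is a generic stand-in, not one of Botterud's actual agent-based models, and all numbers are invented:

    ```python
    # Two firms choose output quantities in a market with inverse demand
    # p = A - (q1 + q2); each repeatedly best-responds to the other.
    A, MC = 100.0, 20.0   # demand intercept and marginal cost ($/MWh)

    def best_response(q_other):
        """Profit-maximizing quantity given the rival's output."""
        # maximize (A - q - q_other - MC) * q  =>  q = (A - MC - q_other) / 2
        return max(0.0, (A - MC - q_other) / 2)

    q1 = q2 = 0.0
    for _ in range(100):                # iterate to the Nash equilibrium
        q1, q2 = best_response(q2), best_response(q1)

    price = A - q1 - q2
    print(f"output per firm: {q1:.2f}, market price: {price:.2f}")  # ~26.67, ~46.67
    ```

    Changing the market rules — a price cap, a carbon price added to `MC` — shifts the equilibrium, which is exactly the kind of what-if question such models let regulators and investors explore.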

    Botterud is also collaborating with researchers in MIT’s chemical engineering department who are working on improving battery storage technologies. Batteries will help manage variable renewable energy supply by capturing surplus energy during periods of high generation to release during periods of insufficient generation. Botterud’s group models the sort of charge cycles that batteries are likely to experience in the power grid, so that chemical engineers in the lab can test their batteries’ abilities in more realistic scenarios. In turn, this also leads to a more realistic representation of batteries in power system optimization models.

    These are only some of the problems that Botterud works on. He enjoys the challenge of tackling a spectrum of different projects, collaborating with everyone from engineers to architects to economists. He also believes that such collaboration leads to better solutions. The problems created by climate change are myriad and complex, and solving them will require researchers to cooperate and explore.

    “In order to have a real impact on interdisciplinary problems like energy and climate,” Botterud says, “you need to get outside of your research sweet spot and broaden your approach.”

    Computers that power self-driving cars could be a huge driver of global carbon emissions

    In the future, the energy needed to run the powerful computers on board a global fleet of autonomous vehicles could generate as many greenhouse gas emissions as all the data centers in the world today.

    That is one key finding of a new study from MIT researchers that explored the potential energy consumption and related carbon emissions if autonomous vehicles are widely adopted.

    The data centers that house the physical computing infrastructure used for running applications are widely known for their large carbon footprint: They currently account for about 0.3 percent of global greenhouse gas emissions, or about as much carbon as the country of Argentina produces annually, according to the International Energy Agency. Realizing that less attention has been paid to the potential footprint of autonomous vehicles, the MIT researchers built a statistical model to study the problem. They determined that 1 billion autonomous vehicles, each driving for one hour per day with a computer consuming 840 watts, would consume enough energy to generate about the same amount of emissions as data centers currently do.

    The researchers also found that in over 90 percent of modeled scenarios, to keep autonomous vehicle emissions from zooming past current data center emissions, each vehicle must use less than 1.2 kilowatts of power for computing, which would require more efficient hardware. In one scenario — where 95 percent of the global fleet of vehicles is autonomous in 2050, computational workloads double every three years, and the world continues to decarbonize at the current rate — they found that hardware efficiency would need to double faster than every 1.1 years to keep emissions under those levels.

    “If we just keep the business-as-usual trends in decarbonization and the current rate of hardware efficiency improvements, it doesn’t seem like it is going to be enough to constrain the emissions from computing onboard autonomous vehicles. This has the potential to become an enormous problem. But if we get ahead of it, we could design more efficient autonomous vehicles that have a smaller carbon footprint from the start,” says first author Soumya Sudhakar, a graduate student in aeronautics and astronautics.

    Sudhakar wrote the paper with her co-advisors Vivienne Sze, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE); and Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems (LIDS). The research appears today in the January-February issue of IEEE Micro.

    Modeling emissions

    The researchers built a framework to explore the operational emissions from computers on board a global fleet of electric vehicles that are fully autonomous, meaning they don’t require a back-up human driver.

    The model is a function of the number of vehicles in the global fleet, the power of each computer on each vehicle, the hours driven by each vehicle, and the carbon intensity of the electricity powering each computer.

    “On its own, that looks like a deceptively simple equation. But each of those variables contains a lot of uncertainty because we are considering an emerging application that is not here yet,” Sudhakar says.
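    Written out, the equation is a product of the four factors. The fleet size, driving hours, and 840-watt computer load below are the article's figures; the grid carbon intensity is an assumed placeholder — exactly the kind of uncertain variable Sudhakar describes:

    ```python
    # Back-of-the-envelope version of the four-factor emissions model.
    vehicles = 1_000_000_000          # global autonomous fleet size
    power_kw = 0.840                  # onboard computer draw, kW
    hours_per_day = 1.0               # daily driving time per vehicle
    carbon_kg_per_kwh = 0.5           # assumed grid carbon intensity

    energy_twh_per_year = vehicles * power_kw * hours_per_day * 365 / 1e9
    emissions_mt_per_year = (vehicles * power_kw * hours_per_day * 365
                             * carbon_kg_per_kwh / 1e9)  # megatonnes CO2

    print(f"{energy_twh_per_year:.0f} TWh/yr, ~{emissions_mt_per_year:.0f} Mt CO2/yr")
    ```

    Each input spans a wide range of plausible futures, which is why the researchers treat them probabilistically rather than as point estimates.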

    For instance, some research suggests that the amount of time driven in autonomous vehicles might increase because people can multitask while driving and the young and the elderly could drive more. But other research suggests that time spent driving might decrease because algorithms could find optimal routes that get people to their destinations faster.

    In addition to considering these uncertainties, the researchers also needed to model advanced computing hardware and software that doesn’t exist yet.

    To accomplish that, they modeled the workload of a popular algorithm for autonomous vehicles, known as a multitask deep neural network because it can perform many tasks at once. They explored how much energy this deep neural network would consume if it were processing many high-resolution inputs from many cameras with high frame rates, simultaneously.

    When they used the probabilistic model to explore different scenarios, Sudhakar was surprised by how quickly the algorithms’ workload added up.

    For example, if an autonomous vehicle has 10 deep neural networks processing images from 10 cameras, and that vehicle drives for one hour a day, it will make 21.6 million inferences each day. One billion vehicles would make 21.6 quadrillion inferences. To put that into perspective, all of Facebook’s data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion).
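    The arithmetic behind those counts is easy to reproduce. One breakdown consistent with the stated totals is 10 networks each processing all 10 camera feeds at 60 frames per second — the frame rate is an assumption chosen to match, not a figure from the paper:

    ```python
    # Reproducing the article's inference counts.
    dnns = 10                 # deep neural networks per vehicle
    cameras = 10              # each network processes every camera feed
    fps = 60                  # assumed frames per second per camera
    seconds_driven = 3600     # one hour of driving per day

    per_vehicle_per_day = dnns * cameras * fps * seconds_driven
    fleet_per_day = per_vehicle_per_day * 1_000_000_000

    print(f"{per_vehicle_per_day:,} inferences/vehicle/day")   # 21,600,000
    print(f"{fleet_per_day:.2e} inferences/day fleet-wide")    # 2.16e+16
    ```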

    “After seeing the results, this makes a lot of sense, but it is not something that is on a lot of people’s radar. These vehicles could actually be using a ton of computer power. They have a 360-degree view of the world, so while we have two eyes, they may have 20 eyes, looking all over the place and trying to understand all the things that are happening at the same time,” Karaman says.

    Autonomous vehicles would be used for moving goods, as well as people, so there could be a massive amount of computing power distributed along global supply chains, he says. And their model only considers computing — it doesn’t take into account the energy consumed by vehicle sensors or the emissions generated during manufacturing.

    Keeping emissions in check

    To keep emissions from spiraling out of control, the researchers found that each autonomous vehicle needs to use less than 1.2 kilowatts of power for computing. For that to be possible, computing hardware must become more efficient at a significantly faster pace, doubling in efficiency about every 1.1 years.
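    The race between workload growth and hardware efficiency can be sketched as two competing exponentials. The 3-year workload doubling and 1.1-year efficiency doubling come from the study's scenario; the 27-year horizon and the 4-year "lagging hardware" case are illustrative assumptions:

    ```python
    # Per-vehicle compute power scales with workload growth and inversely
    # with hardware-efficiency improvement.
    def power_watts(years, base_w=840, workload_double_yrs=3.0, eff_double_yrs=1.1):
        """Compute power after `years` of exponential workload growth
        and exponential hardware-efficiency gains."""
        workload_growth = 2 ** (years / workload_double_yrs)
        efficiency_gain = 2 ** (years / eff_double_yrs)
        return base_w * workload_growth / efficiency_gain

    fast = power_watts(27, eff_double_yrs=1.1)   # efficiency outpaces workload
    slow = power_watts(27, eff_double_yrs=4.0)   # efficiency lags workload

    print(f"fast hardware progress: {fast:.2f} W per vehicle")
    print(f"slow hardware progress: {slow:.0f} W per vehicle")
    ```

    With efficiency doubling every 1.1 years the compute budget collapses to a fraction of a watt, while with a 4-year doubling time it balloons to several kilowatts per vehicle — well past the 1.2-kilowatt threshold.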

    One way to boost that efficiency could be to use more specialized hardware, which is designed to run specific driving algorithms. Because researchers know the navigation and perception tasks required for autonomous driving, it could be easier to design specialized hardware for those tasks, Sudhakar says. But vehicles tend to have 10- or 20-year lifespans, so one challenge in developing specialized hardware would be to “future-proof” it so it can run new algorithms.

    In the future, researchers could also make the algorithms more efficient, so they would need less computing power. However, this is also challenging because trading off some accuracy for more efficiency could hamper vehicle safety.

    Now that they have demonstrated this framework, the researchers want to continue exploring hardware efficiency and algorithm improvements. In addition, they say their model can be enhanced by characterizing embodied carbon from autonomous vehicles — the carbon emissions generated when a car is manufactured — and emissions from a vehicle’s sensors.

    While there are still many scenarios to explore, the researchers hope that this work sheds light on a potential problem people may not have considered.

    “We are hoping that people will think of emissions and carbon efficiency as important metrics to consider in their designs. The energy consumption of an autonomous vehicle is really critical, not just for extending the battery life, but also for sustainability,” says Sze.

    This research was funded, in part, by the National Science Foundation and the MIT-Accenture Fellowship.

    New maps show airplane contrails over the U.S. dropped steeply in 2020

    As Covid-19’s initial wave crested around the world, travel restrictions and a drop in passengers led to a record number of grounded flights in 2020. The air travel reduction cleared the skies of not just jets but also the fluffy white contrails they produce high in the atmosphere.

    MIT engineers have mapped the contrails that were generated over the United States in 2020, and compared the results to prepandemic years. They found that on any given day in 2018, and again in 2019, contrails covered a total area equal to Massachusetts and Connecticut combined. In 2020, this contrail coverage shrank by about 20 percent, mirroring a similar drop in U.S. flights.  

    While 2020’s contrail dip may not be surprising, the findings are proof that the team’s mapping technique works. Their study marks the first time researchers have captured the fine and ephemeral details of contrails over a large continental scale.

    Now, the researchers are applying the technique to predict where in the atmosphere contrails are likely to form. The cloud-like formations are known to play a significant role in aviation-related global warming. The team is working with major airlines to forecast regions in the atmosphere where contrails may form, and to reroute planes around these regions to minimize contrail production.

    “This kind of technology can help divert planes to prevent contrails, in real time,” says Steven Barrett, professor and associate head of MIT’s Department of Aeronautics and Astronautics. “There’s an unusual opportunity to halve aviation’s climate impact by eliminating most of the contrails produced today.”

    Barrett and his colleagues have published their results today in the journal Environmental Research Letters. His co-authors at MIT include graduate student Vincent Meijer, former graduate student Luke Kulik, research scientists Sebastian Eastham, Florian Allroggen, and Raymond Speth, and LIDS Director and professor Sertac Karaman.

    Trail training

    About half of the aviation industry’s contribution to global warming comes directly from planes’ carbon dioxide emissions. The other half is thought to be a consequence of their contrails. The signature white tails are produced when a plane’s hot, humid exhaust mixes with cool humid air high in the atmosphere. Emitted in thin lines, contrails quickly spread out and can act as blankets that trap the Earth’s outgoing heat.

    While a single contrail may not have much of a warming effect, taken together contrails have a significant impact. But the estimates of this effect are uncertain and based on computer modeling as well as limited satellite data. What’s more, traditional computer vision algorithms that analyze contrail data have a hard time discerning the wispy tails from natural clouds.

    To precisely pick out and track contrails over a large scale, the MIT team looked to images taken by NASA’s GOES-16, a geostationary satellite that hovers over the same swath of the Earth, including the United States, taking continuous, high-resolution images.

    The team first obtained about 100 images taken by the satellite and trained a group of people to interpret the remote sensing data, labeling each pixel of each image as either part of a contrail or not. They used this labeled dataset to train a computer-vision algorithm to discern a contrail from a cloud or other image feature.

    The researchers then ran the algorithm on about 100,000 satellite images, amounting to nearly 6 trillion pixels, each pixel representing an area of about 2 square kilometers. The images covered the contiguous U.S., along with parts of Canada and Mexico, and were taken about every 15 minutes, between Jan. 1, 2018, and Dec. 31, 2020.

    The algorithm automatically classified each pixel as either a contrail or not a contrail, and generated daily maps of contrails over the United States. These maps mirrored the major flight paths of most U.S. airlines, with some notable differences. For instance, contrail “holes” appeared around major airports, which reflects the fact that planes landing and taking off around airports are generally not high enough in the atmosphere for contrails to form.
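    Turning those per-pixel classifications into daily coverage figures is straightforward: count contrail pixels and multiply by the roughly 2 square kilometers each one covers. The mask here is a tiny stand-in for the algorithm's real output:

    ```python
    # Converting a binary contrail mask into coverage area.
    KM2_PER_PIXEL = 2.0   # approximate ground area per satellite pixel

    def contrail_coverage_km2(mask):
        """Total contrail area for one classified image (nested 0/1 lists)."""
        return sum(sum(row) for row in mask) * KM2_PER_PIXEL

    # Tiny fake 4x4 classification: 1 = contrail pixel, 0 = background.
    mask = [[0, 1, 1, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 0],
            [1, 1, 0, 0]]

    print(contrail_coverage_km2(mask))  # 5 contrail pixels -> 10.0 km^2
    ```

    Summing this over every image in a day yields the daily coverage maps described above.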

    “The algorithm knows nothing about where planes fly, and yet when processing the satellite imagery, it resulted in recognizable flight routes,” Barrett says. “That’s one piece of evidence that says this method really does capture contrails over a large scale.”

    Cloudy patterns

    Based on the algorithm’s maps, the researchers calculated the total area covered each day by contrails in the U.S. On an average day in 2018 and in 2019, U.S. contrails took up about 43,000 square kilometers. This coverage dropped by 20 percent in March of 2020 as the pandemic set in. From then on, contrails slowly reappeared as air travel resumed through the year.

    The team also observed daily and seasonal patterns. In general, contrails appeared to peak in the morning and decline in the afternoon. This may be a training artifact: As natural cirrus clouds are more likely to form in the afternoon, the algorithm may have trouble discerning contrails amid the clouds later in the day. But it might also be an important indication about when contrails form most. Contrails also peaked in late winter and early spring, when more of the air is naturally colder and more conducive for contrail formation.

    The team has now adapted the technique to predict where contrails are likely to form in real time. Avoiding these regions, Barrett says, could take a significant, almost immediate chunk out of aviation’s global warming contribution.  

    “Most measures to make aviation sustainable take a long time,” Barrett says. “(Contrail avoidance) could be accomplished in a few years, because it requires small changes to how aircraft are flown, with existing airplanes and observational technology. It’s a near-term way of reducing aviation’s warming by about half.”

    The team is now working toward this objective of large-scale contrail avoidance using real-time satellite observations.

    This research was supported in part by NASA and the MIT Environmental Solutions Initiative.

    Preparing global online learners for the clean energy transition

    After a career devoted to making the electric power system more efficient and resilient, Marija Ilic came to MIT in 2018 eager not just to extend her research in new directions, but to prepare a new generation for the challenges of the clean-energy transition.

    To that end, Ilic, a senior research scientist in MIT’s Laboratory for Information and Decision Systems (LIDS) and a senior staff member in Lincoln Laboratory’s Energy Systems Group, designed an edX course that captures her methods and vision: Principles of Modeling, Simulation, and Control for Electric Energy Systems.

    EdX is a provider of massive open online courses produced in partnership with MIT, Harvard University, and other leading universities. Ilic’s class made its online debut in June 2021, running for 12 weeks, and it is one of an expanding set of online courses funded by the MIT Energy Initiative (MITEI) to provide global learners with a view of the shifting energy landscape.

    Ilic first taught a version of the class while a professor at Carnegie Mellon University, rolled out a second iteration at MIT just as the pandemic struck, and then revamped the class for its current online presentation. But no matter the course location, Ilic focuses on a central theme: “With the need for decarbonization, which will mean accommodating new energy sources such as solar and wind, we must rethink how we operate power systems,” she says. “This class is about how to pose and solve the kinds of problems we will face during this transformation.”

    Hot global topic

    The edX class has been designed to welcome a broad mix of students. In summer 2021, more than 2,000 signed up from 109 countries, ranging from high school students to retirees. In surveys, some said they were drawn to the class by the opportunity to advance their knowledge of modeling. Many others hoped to learn about the move to decarbonize energy systems.

    “The energy transition is a hot topic everywhere in the world, not just in the U.S.,” says teaching assistant Miroslav Kosanic. “In the class, there were veterans of the oil industry and others working in investment and finance jobs related to energy who wanted to understand the potential impacts of changes in energy systems, as well as students from different fields and professors seeking to update their curricula — all gathered into a community.”

    Kosanic, who is currently a PhD student at MIT in electrical engineering and computer science, had taken this class remotely in the spring semester of 2021, while he was still in college in Serbia. “I knew I was interested in power systems, but this course was eye-opening for me, showing how to apply control theory and to model different components of these systems,” he says. “I finished the course and thought, this is just the beginning, and I’d like to learn a lot more.” Kosanic performed so well online that Ilic recruited him to MIT, as a LIDS researcher and edX course teaching assistant, where he grades homework assignments and moderates a lively learner community forum.

    A platform for problem-solving

    The course starts with fundamental concepts in electric power systems operations and management, and it steadily adds layers of complexity, posing real-world problems along the way. Ilic explains how voltage travels from point to point across transmission lines and how grid managers modulate systems to ensure that enough, but not too much, electricity flows. “To deliver power from one location to the next one, operators must constantly make adjustments to ensure that the receiving end can handle the voltage transmitted, optimizing voltage to avoid overheating the wires,” she says.

    In her early lectures, Ilic notes the fundamental constraints of current grid operations, organized around a hierarchy of regional managers dealing with a handful of very large oil, gas, coal, and nuclear power plants, and occupied primarily with the steady delivery of megawatt-hours to far-flung customers. Historically, this top-down structure has not done a good job of preventing energy losses caused by sub-optimal transmission conditions or outages related to extreme weather events.

    These issues promise to grow for grid operators as distributed resources such as solar and wind enter the picture, Ilic tells students. In the United States, under new rules dictated by the Federal Energy Regulatory Commission, utilities must begin to integrate the distributed, intermittent electricity produced by wind farms, solar complexes, and even by homes and cars, which flows at voltages much lower than electricity produced by large power plants.

    Finding ways to optimize existing energy systems and to accommodate low- and zero-carbon energy sources requires powerful new modes of analysis and problem-solving. This is where Ilic’s toolbox comes in: a mathematical modeling strategy and companion software that simplifies the input and output of electrical systems, no matter how large or how small. “In the last part of the course, we take up modeling different solutions to electric service in a way that is technology-agnostic, where it only matters how much a black-box energy source produces, and the rates of production and consumption,” says Ilic.

    This black-box modeling approach, which Ilic pioneered in her research, enables students to see, for instance, “what is happening with their own household consumption, and how it affects the larger system,” says Rupamathi Jaddivada PhD ’20, a co-instructor of the edX class and a postdoc in electrical engineering and computer science. “Without getting lost in details of current or voltage, or how different components work, we think about electric energy systems as dynamical components interacting with each other, at different spatial scales.” This means that with just a basic knowledge of physical laws, high school and undergraduate students can take advantage of the course “and get excited about cleaner and more reliable energy,” adds Ilic.
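    In code, the spirit of that black-box abstraction might look like the sketch below: a component is just a named quantity of production or consumption plus a limit on how fast it can change, with no internal electrical detail. The class and field names are invented for illustration, not taken from Ilic's software:

    ```python
    # Technology-agnostic "black box" energy components: only power
    # exchanged and rate of change matter, not internal physics.
    from dataclasses import dataclass

    @dataclass
    class BlackBoxComponent:
        name: str
        output_kw: float      # positive = producing, negative = consuming
        max_ramp_kw: float    # largest allowed change per time step

        def step_toward(self, target_kw):
            """Move output toward a target, limited by the ramp rate."""
            delta = max(-self.max_ramp_kw,
                        min(self.max_ramp_kw, target_kw - self.output_kw))
            self.output_kw += delta

    solar = BlackBoxComponent("rooftop solar", output_kw=3.0, max_ramp_kw=1.0)
    house = BlackBoxComponent("household load", output_kw=-5.0, max_ramp_kw=0.5)

    net = solar.output_kw + house.output_kw   # kW drawn from the larger grid
    solar.step_toward(5.0)                    # clouds clear; ramp limit applies
    print(solar.output_kw, net)               # 4.0 -2.0
    ```

    The same interface could describe a wind farm, a battery, or a whole neighborhood, which is what lets students "zoom in, zoom out" across spatial scales.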

    What Jaddivada and Ilic describe as “zoom in, zoom out” systems thinking leverages the ubiquity of digital communications and the so-called “internet of things.” Energy devices of all scales can link directly to other devices in a network instead of just to a central operations hub, allowing for real-time adjustments in voltage, for instance, vastly improving the potential for optimizing energy flows.

    “In the course, we discuss how information exchange will be key to integrating new end-to-end energy resources and, because of this interactivity, how we can model better ways of controlling entire energy networks,” says Ilic. “It’s a big lesson of the course to show the value of information and software in enabling us to decarbonize the system and build resilience, rather than just building hardware.”

    By the end of the course, students are invited to pursue independent research projects. Some might model the impact of a new energy source on a local grid or investigate different options for reducing energy loss in transmission lines.

    “It would be nice if they see that we don’t have to rely on hardware or large-scale solutions to bring about improved electric service and a clean and resilient grid, but instead on information technologies such as smart components exchanging data in real time, or microgrids in neighborhoods that sustain themselves even when they lose power,” says Ilic. “I hope students walk away convinced that it does make sense to rethink how we operate our basic power systems and that with systematic, physics-based modeling and IT methods we can enable better, more flexible operation in the future.”

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.