More stories

  • AI pilot programs look to reduce energy use and emissions on MIT campus

    Smart thermostats have changed the way many people heat and cool their homes by using machine learning to respond to occupancy patterns and preferences, resulting in a lower energy draw. This technology — which can collect and synthesize data — generally focuses on single-dwelling use, but what if this type of artificial intelligence could dynamically manage the heating and cooling of an entire campus? That’s the idea behind a cross-departmental effort working to reduce campus energy use through AI building controls that respond in real time to internal and external factors.

    Understanding the challenge

    Heating and cooling can be an energy challenge for campuses like MIT, where existing building management systems (BMS) can’t respond quickly to internal factors like occupancy fluctuations or external factors such as forecast weather or the carbon intensity of the grid. The result is that more energy than needed is used to heat and cool spaces, often to sub-optimal levels. By engaging AI, researchers have begun to establish a framework to understand and predict optimal temperature set points (the temperature a thermostat is set to maintain) at the individual room level, taking into consideration a host of factors and allowing the existing systems to heat and cool more efficiently, all without manual intervention.

    “It’s not that different from what folks are doing in houses,” explains Les Norford, a professor of architecture at MIT, whose work in energy studies, controls, and ventilation connected him with the effort. “Except we have to think about things like how long a classroom may be used in a day, weather predictions, time needed to heat and cool a room, the effect of the heat from the sun coming in the window, and how the classroom next door might impact all of this.” These factors are at the crux of the research and pilots that Norford and a team are focused on. That team includes Jeremy Gregory, executive director of the MIT Climate and Sustainability Consortium; Audun Botterud, principal research scientist for the Laboratory for Information and Decision Systems; Steve Lanou, project manager in the MIT Office of Sustainability (MITOS); Fran Selvaggio, Department of Facilities Senior Building Management Systems engineer; and Daisy Green and You Lin, both postdocs.

    The group is organized around the call to action to “explore possibilities to employ artificial intelligence to reduce on-campus energy consumption” outlined in Fast Forward: MIT’s Climate Action Plan for the Decade, but efforts extend back to 2019. “As we work to decarbonize our campus, we’re exploring all avenues,” says Vice President for Campus Services and Stewardship Joe Higgins, who originally pitched the idea to students at the 2019 MIT Energy Hack. “To me, it was a great opportunity to utilize MIT expertise and see how we can apply it to our campus and share what we learn with the building industry.” Research into the concept kicked off at the event and continued with undergraduate and graduate student researchers solving differential equations and managing pilots to test the bounds of the idea. Soon, Gregory, who is also a MITOS faculty fellow, joined the project and helped identify other individuals to join the team. “My role as a faculty fellow is to find opportunities to connect the research community at MIT with challenges MIT itself is facing — so this was a perfect fit for that,” Gregory says.

    Early pilots of the project focused on testing thermostat set points in NW23, home to the Department of Facilities and Office of Campus Planning, but Norford quickly realized that classrooms provide many more variables to test, and the pilot was expanded to Building 66, a mixed-use building that is home to classrooms, offices, and lab spaces. “We shifted our attention to study classrooms in part because of their complexity, but also the sheer scale — there are hundreds of them on campus, so [they offer] more opportunities to gather data and determine parameters of what we are testing,” says Norford. 

    Developing the technology

    The work to develop smarter building controls starts with a physics-based model that uses differential equations to capture how objects heat up, cool down, and store heat, and how heat flows across a building façade. External data like weather, carbon intensity of the power grid, and classroom schedules are also inputs, with the AI responding to these conditions to deliver an optimal thermostat set point each hour — one that provides the best trade-off between the two objectives of thermal comfort of occupants and energy use. That set point then tells the existing BMS how much to heat up or cool down a space. Real-life testing follows, surveying building occupants about their comfort. Botterud, whose research focuses on the interactions between engineering, economics, and policy in electricity markets, works to ensure that the AI algorithms can then translate this learning into energy and carbon emission savings.
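
    To make the idea concrete, here is a minimal sketch of an hourly set point search in the spirit described above. It is our illustration, not the team’s model: the linear energy surrogate, the comfort band, and the cost weights are placeholder assumptions standing in for the physics-based model and occupant surveys.

    ```python
    # Minimal sketch (illustrative assumptions throughout): pick the hourly set
    # point that best trades carbon-weighted energy use against occupant comfort.
    import numpy as np

    def predicted_energy_kwh(setpoint_c, outdoor_c, occupancy):
        # Toy surrogate for the physics model: energy grows with the gap between
        # the set point and outdoor temperature, plus a per-occupant ventilation load.
        return 0.8 * abs(setpoint_c - outdoor_c) + 0.1 * occupancy

    def discomfort(setpoint_c, occupancy, comfort_c=21.0):
        # Discomfort only matters while the room is occupied.
        return occupancy * (setpoint_c - comfort_c) ** 2

    def best_setpoint(outdoor_c, occupancy, grid_kg_co2_per_kwh, alpha=1.0):
        # Search candidate set points for the lowest combined cost.
        candidates = np.arange(16.0, 26.5, 0.5)
        cost = [grid_kg_co2_per_kwh * predicted_energy_kwh(s, outdoor_c, occupancy)
                + alpha * discomfort(s, occupancy) for s in candidates]
        return candidates[int(np.argmin(cost))]

    # A cold morning, an occupied classroom, a carbon-heavy grid hour.
    print(best_setpoint(outdoor_c=-2.0, occupancy=30, grid_kg_co2_per_kwh=0.5))
    ```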

    Currently the pilots are focused on six classrooms within Building 66, with the intent to move on to lab spaces before expanding to the entire building. “The goal here is energy savings, but that’s not something we can fully assess until we complete a whole building,” explains Norford. “We have to work classroom by classroom to gather the data, but are looking at a much bigger picture.” The research team used its data-driven simulations to estimate significant energy savings while maintaining thermal comfort in the six classrooms over two days, but further work is needed to implement the controls and measure savings across an entire year.

    With significant savings estimated across individual classrooms, the energy savings derived from an entire building could be substantial, and AI can help meet that goal, explains Botterud: “This whole concept of scalability is really at the heart of what we are doing. We’re spending a lot of time in Building 66 to figure out how it works and hoping that these algorithms can be scaled up with much less effort to other rooms and buildings so solutions we are developing can make a big impact at MIT,” he says.

    Part of that big impact involves operational staff, like Selvaggio, who are essential in connecting the research to current operations and putting it into practice across campus. “Much of the BMS team’s work is done in the pilot stage for a project like this,” he says. “We were able to get these AI systems up and running with our existing BMS within a matter of weeks, allowing the pilots to get off the ground quickly.” Selvaggio says that in preparation for the completion of the pilots, the BMS team has identified an additional 50 buildings on campus where the technology can easily be installed in the future to begin generating energy savings. The BMS team also collaborates with Schneider Electric, the building automation company that has implemented the new control algorithms in Building 66 classrooms and is ready to expand to new pilot locations.

    Expanding impact

    The successful completion of these programs will also open the possibility for even greater energy savings — bringing MIT closer to its decarbonization goals. “Beyond just energy savings, we can eventually turn our campus buildings into a virtual energy network, where thousands of thermostats are aggregated and coordinated to function as a unified virtual entity,” explains Higgins. These types of energy networks can accelerate power sector decarbonization by decreasing the need for carbon-intensive power plants at peak times and allowing for more efficient power grid energy use.

    As pilots continue, they fulfill another call to action in Fast Forward — for campus to be a “test bed for change.” Says Gregory: “This project is a great example of using our campus as a test bed — it brings in cutting-edge research to apply to decarbonizing our own campus. It’s a great project for its specific focus, but also for serving as a model for how to utilize the campus as a living lab.”

  • An interdisciplinary approach to fighting climate change through clean energy solutions

    In early 2021, the U.S. government set an ambitious goal: to decarbonize its power grid, the system that generates and transmits electricity throughout the country, by 2035. It’s an important goal in the fight against climate change, and it will require a switch from current, greenhouse-gas-producing energy sources (such as coal and natural gas) to predominantly renewable ones (such as wind and solar).

    Getting the power grid to zero carbon will be a challenging undertaking, as Audun Botterud, a principal research scientist at the MIT Laboratory for Information and Decision Systems (LIDS) who has long been interested in the problem, knows well. It will require building lots of renewable energy generators and new infrastructure; designing better technology to capture, store, and carry electricity; creating the right regulatory and economic incentives; and more. Decarbonizing the grid also presents many computational challenges, which is where Botterud’s focus lies. Botterud has modeled different aspects of the grid — the mechanics of energy supply, demand, and storage, and electricity markets — where economic factors can have a huge effect on how quickly renewable solutions get adopted.

    On again, off again

    A major challenge of decarbonization is that the grid must be designed and operated to reliably meet demand. Using renewable energy sources complicates this, as wind and solar power depend on an infamously volatile system: the weather. A sunny day becomes gray and blustery, and wind turbines get a boost but solar farms go idle. This will make the grid’s energy supply variable and hard to predict. Additional resources, including batteries and backup power generators, will need to be incorporated to regulate supply. Extreme weather events, which are becoming more common with climate change, can further strain both supply and demand. Managing a renewables-driven grid will require algorithms that can minimize uncertainty in the face of constant, sometimes random fluctuations to make better predictions of supply and demand, guide how resources are added to the grid, and inform how those resources are committed and dispatched across the entire United States.

    “The problem of managing supply and demand in the grid has to happen every second throughout the year, and given how much we rely on electricity in society, we need to get this right,” Botterud says. “You cannot let the reliability drop as you increase the amount of renewables, especially because I think that will lead to resistance towards adopting renewables.”

    That is why Botterud feels fortunate to be working on the decarbonization problem at LIDS — even though a career here is not something he had originally planned. Botterud’s first experience with MIT came during his time as a graduate student in his home country of Norway, when he spent a year as a visiting student with what is now called the MIT Energy Initiative. He might never have returned, except that while at MIT, Botterud met his future wife, Bilge Yildiz. The pair both ended up working at the Argonne National Laboratory outside of Chicago, with Botterud focusing on challenges related to power systems and electricity markets. Then Yildiz got a faculty position at MIT, where she is a professor of nuclear and materials science and engineering. Botterud moved back to the Cambridge area with her and continued to work for Argonne remotely, but he also kept an eye on local opportunities. Eventually, a position at LIDS became available, and Botterud took it, while maintaining his connections to Argonne.

    “At first glance, it may not be an obvious fit,” Botterud says. “My work is very focused on a specific application, power system challenges, and LIDS tends to be more focused on fundamental methods to use across many different application areas. However, being at LIDS, my lab [the Energy Analytics Group] has access to the most recent advances in these fundamental methods, and we can apply them to power and energy problems. Other people at LIDS are working on energy too, so there is growing momentum to address these important problems.”

    Weather, space, and time

    Much of Botterud’s research involves optimization, using mathematical programming to compare alternatives and find the best solution. Common computational challenges include dealing with large geographical areas that contain regions with different weather, different types and quantities of renewable energy available, and different infrastructure and consumer needs — such as the entire United States. Another challenge is the need for granular time resolution, sometimes even down to the sub-second level, to account for changes in energy supply and demand.

    Often, Botterud’s group will use decomposition to solve such large problems piecemeal and then stitch together solutions. However, it’s also important to consider systems as a whole. For example, in a recent paper, Botterud’s lab looked at the effect of building new transmission lines as part of national decarbonization. They modeled solutions assuming coordination at the state, regional, or national level, and found that the more regions coordinate to build transmission infrastructure and distribute electricity, the less they will need to spend to reach zero carbon.
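
    As a stylized illustration of that result, consider two regions with different wind resources. The numbers below are invented, not from the paper, but they show why pooling cheap remote generation over a new transmission line can undercut building expensive generation locally.

    ```python
    # Toy comparison (invented numbers): region B needs 10 GW of zero-carbon power.
    demand_gw = 10.0
    cost_wind_a = 40.0   # $/MWh for wind built in windy region A (assumed)
    cost_wind_b = 70.0   # $/MWh for wind built in region B (assumed)
    line_cost = 15.0     # $/MWh-equivalent cost of a new A-to-B line (assumed)

    independent = demand_gw * 1000 * cost_wind_b                # B builds alone
    coordinated = demand_gw * 1000 * (cost_wind_a + line_cost)  # A builds + line
    print(f"B builds locally:     ${independent:,.0f} per hour")
    print(f"coordinated buildout: ${coordinated:,.0f} per hour")
    ```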

    In other projects, Botterud uses game theory approaches to study strategic interactions in electricity markets. For example, he has designed agent-based models to analyze electricity markets. These assume each actor will make strategic decisions in their own best interest and then simulate interactions between them. Interested parties can use the models to see what would happen under different conditions and market rules, which may lead companies to make different investment decisions, or governing bodies to issue different regulations and incentives. These choices can shape how quickly the grid gets decarbonized.
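
    A minimal agent-based sketch of such a market is shown below, assuming a toy uniform-price auction in which two strategic generators repeatedly best-respond to each other’s bids. The costs, capacities, demand, and markup grid are invented for illustration and are far simpler than the lab’s models.

    ```python
    # Toy agent-based market (invented numbers): two generators pick bid markups
    # in a uniform-price auction; iterate best responses to a fixed point.
    DEMAND = 120.0                              # MW that must clear each round
    AGENTS = [{"cost": 20.0, "cap": 100.0},     # $/MWh marginal cost, MW capacity
              {"cost": 35.0, "cap": 100.0}]
    MARKUPS = [0.0, 5.0, 10.0, 20.0, 40.0]      # candidate markups ($/MWh)

    def clear(bids):
        # Dispatch cheapest bids first; the last accepted bid sets the price.
        order = sorted(range(len(bids)), key=lambda i: bids[i])
        dispatched, left, price = {}, DEMAND, 0.0
        for i in order:
            q = min(AGENTS[i]["cap"], left)
            dispatched[i], left = q, left - q
            if q > 0:
                price = bids[i]
            if left <= 0:
                break
        return price, dispatched

    def profit(i, bids):
        price, q = clear(bids)
        return (price - AGENTS[i]["cost"]) * q.get(i, 0.0)

    bids = [a["cost"] for a in AGENTS]          # start at truthful cost bids
    for _ in range(20):
        new = list(bids)
        for i in range(len(AGENTS)):            # each agent best-responds in turn
            new[i] = max((AGENTS[i]["cost"] + m for m in MARKUPS),
                         key=lambda b: profit(i, new[:i] + [b] + new[i + 1:]))
        if new == bids:                         # no one wants to deviate: done
            break
        bids = new
    print("equilibrium bids:", bids, "clearing price:", clear(bids)[0])
    ```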

    Botterud is also collaborating with researchers in MIT’s chemical engineering department who are working on improving battery storage technologies. Batteries will help manage variable renewable energy supply by capturing surplus energy during periods of high generation to release during periods of insufficient generation. Botterud’s group models the sort of charge cycles that batteries are likely to experience in the power grid, so that chemical engineers in the lab can test their batteries’ abilities in more realistic scenarios. In turn, this also leads to a more realistic representation of batteries in power system optimization models.
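
    For instance, a simple price-driven dispatch heuristic like the sketch below (with assumed prices and battery specs, not the group’s actual model) already yields a daily state-of-charge trace that battery researchers could replay on real cells.

    ```python
    # Sketch of a grid battery's daily duty cycle (assumed prices and specs):
    # charge in the cheapest hours, discharge in the priciest, record the trace.
    import numpy as np

    prices = np.array([30, 25, 20, 18, 22, 35, 60, 80,   # $/MWh over 24 hours
                       70, 55, 45, 40, 38, 42, 50, 65,
                       90, 110, 95, 75, 60, 50, 40, 35])
    capacity_mwh, power_mw, efficiency = 4.0, 1.0, 0.9

    charge_hours = np.argsort(prices)[:4]       # the 4 cheapest hours
    discharge_hours = np.argsort(prices)[-4:]   # the 4 most expensive hours

    soc, trace = 0.0, []
    for h in range(24):
        if h in charge_hours and soc < capacity_mwh:
            soc = min(capacity_mwh, soc + power_mw * efficiency)
        elif h in discharge_hours and soc > 0:
            soc = max(0.0, soc - power_mw)
        trace.append(soc)
    print(trace)   # a state-of-charge profile a test cell could replay in the lab
    ```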

    These are only some of the problems that Botterud works on. He enjoys the challenge of tackling a spectrum of different projects, collaborating with everyone from engineers to architects to economists. He also believes that such collaboration leads to better solutions. The problems created by climate change are myriad and complex, and solving them will require researchers to cooperate and explore.

    “In order to have a real impact on interdisciplinary problems like energy and climate,” Botterud says, “you need to get outside of your research sweet spot and broaden your approach.”

  • Integrating humans with AI in structural design

    Modern fabrication tools such as 3D printers can make structural materials in shapes that would have been difficult or impossible using conventional tools. Meanwhile, new generative design systems can take great advantage of this flexibility to create innovative designs for parts of a new building, car, or virtually any other device.

    But such “black box” automated systems often fall short of producing designs that are fully optimized for their purpose, such as providing the greatest strength in proportion to weight or minimizing the amount of material needed to support a given load. Fully manual design, on the other hand, is time-consuming and labor-intensive.

    Now, researchers at MIT have found a way to achieve some of the best of both of these approaches. They used an automated design system but stopped the process periodically to allow human engineers to evaluate the work in progress and make tweaks or adjustments before letting the computer resume its design process. Introducing a few of these iterations produced results that performed better than those designed by the automated system alone, and the process was completed more quickly than with the fully manual approach.

    The results are reported this week in the journal Structural and Multidisciplinary Optimization, in a paper by MIT doctoral student Dat Ha and assistant professor of civil and environmental engineering Josephine Carstensen.

    The basic approach can be applied to a broad range of scales and applications, Carstensen explains, for the design of everything from biomedical devices to nanoscale materials to structural support members of a skyscraper. Already, automated design systems have found many applications. “If we can make things in a better way, if we can make whatever we want, why not make it better?” she asks.

    “It’s a way to take advantage of how we can make things in much more complex ways than we could in the past,” says Ha, adding that automated design systems have already begun to be widely used over the last decade in automotive and aerospace industries, where reducing weight while maintaining structural strength is a key need.

    “You can take a lot of weight out of components, and in these two industries, everything is driven by weight,” he says. In some cases, such as internal components that aren’t visible, appearance is irrelevant, but for other structures aesthetics may be important as well. The new system makes it possible to optimize designs for visual as well as mechanical properties, and in such decisions the human touch is essential.

    As a demonstration of their process in action, the researchers designed a number of structural load-bearing beams, such as might be used in a building or a bridge. In their iterations, they saw that the design had an area that could fail prematurely, so they selected that feature and required the program to address it. The computer system then revised the design accordingly, removing the highlighted strut and strengthening other struts to compensate, leading to an improved final design.

    The process, which they call Human-Informed Topology Optimization, begins by setting out the needed specifications — for example, a beam needs to be this length, supported on two points at its ends, and must support this much of a load. “As we’re seeing the structure evolve on the computer screen in response to initial specification,” Carstensen says, “we interrupt the design and ask the user to judge it. The user can select, say, ‘I’m not a fan of this region, I’d like you to beef up or beef down this feature size requirement.’ And then the algorithm takes into account the user input.”
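
    Structurally, the workflow can be pictured as an inner automated optimization loop punctuated by human checkpoints, as in the sketch below. This is our mock-up of the loop described above, not the authors’ code: the density update is a placeholder for a real topology-optimization kernel, and the hard-coded feedback stands in for the interactive selection.

    ```python
    # Mock-up of a human-in-the-loop topology-optimization driver (illustrative).
    import numpy as np

    def topology_step(rho, target_frac=0.4):
        # Stand-in for one iteration of a real update (e.g. SIMP): shift material
        # toward cells with a (faked) high stress measure, then rescale so the
        # design keeps to the same material budget.
        stress = np.linspace(1.0, 2.0, rho.size).reshape(rho.shape)  # placeholder
        rho = rho * stress ** 0.1
        rho *= target_frac * rho.size / rho.sum()
        return np.clip(rho, 0.0, 1.0)

    def human_informed_topopt(shape=(10, 10), n_outer=3, n_inner=20):
        rho = np.full(shape, 0.4)                # uniform starting density field
        for _ in range(n_outer):
            for _ in range(n_inner):             # let the optimizer run a batch
                rho = topology_step(rho)
            # Pause for the engineer. In the real workflow this is interactive;
            # here we hard-code one piece of feedback: "beef up the left edge".
            region, scale = (slice(None), slice(0, 2)), 1.3
            rho[region] = np.clip(rho[region] * scale, 0.0, 1.0)
        return rho

    print(human_informed_topopt().round(2))
    ```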

    While the result is not as ideal as what might be produced by a fully rigorous yet significantly slower design algorithm that considers the underlying physics, she says it can be much better than a result generated by a rapid automated design system alone. “You don’t get something that’s quite as good, but that was not necessarily the goal. What we can show is that instead of using several hours to get something, we can use 10 minutes and get something much better than where we started off.”

    The system can be used to optimize a design based on any desired properties, not just strength and weight. For example, it can be used to minimize fracture or buckling, or to reduce stresses in the material by softening corners.

    Carstensen says, “We’re not looking to replace the seven-hour solution. If you have all the time and all the resources in the world, obviously you can run these and it’s going to give you the best solution.” But for many situations, such as designing replacement parts for equipment in a war zone or a disaster-relief area with limited computational power available, “then this kind of solution that catered directly to your needs would prevail.”

    Similarly, for smaller companies manufacturing equipment in essentially “mom and pop” businesses, such a simplified system might be just the ticket. The new system they developed is not only simple and efficient to run on smaller computers, but it also requires far less training to produce useful results, Carstensen says. A basic two-dimensional version of the software, suitable for designing basic beams and structural parts, is freely available now online, she says, as the team continues to develop a full 3D version.

    “The potential applications of Prof Carstensen’s research and tools are quite extraordinary,” says Christian Málaga-Chuquitaype, a professor of civil and environmental engineering at Imperial College London, who was not associated with this work. “With this work, her group is paving the way toward a truly synergistic human-machine design interaction.”

    “By integrating engineering ‘intuition’ (or engineering ‘judgement’) into a rigorous yet computationally efficient topology optimization process, the human engineer is offered the possibility of guiding the creation of optimal structural configurations in a way that was not available to us before,” he adds. “Her findings have the potential to change the way engineers tackle ‘day-to-day’ design tasks.”

  • Computers that power self-driving cars could be a huge driver of global carbon emissions

    In the future, the energy needed to run the powerful computers on board a global fleet of autonomous vehicles could generate as many greenhouse gas emissions as all the data centers in the world today.

    That is one key finding of a new study from MIT researchers that explored the potential energy consumption and related carbon emissions if autonomous vehicles are widely adopted.

    The data centers that house the physical computing infrastructure used for running applications are widely known for their large carbon footprint: They currently account for about 0.3 percent of global greenhouse gas emissions, or about as much carbon as the country of Argentina produces annually, according to the International Energy Agency. Realizing that less attention has been paid to the potential footprint of autonomous vehicles, the MIT researchers built a statistical model to study the problem. They determined that 1 billion autonomous vehicles, each driving for one hour per day with a computer consuming 840 watts, would consume enough energy to generate about the same amount of emissions as data centers currently do.

    The researchers also found that in over 90 percent of modeled scenarios, to keep autonomous vehicle emissions from zooming past current data center emissions, each vehicle must use less than 1.2 kilowatts of power for computing, which would require more efficient hardware. In one scenario — where 95 percent of the global fleet of vehicles is autonomous in 2050, computational workloads double every three years, and the world continues to decarbonize at the current rate — they found that hardware efficiency would need to double faster than every 1.1 years to keep emissions under those levels.

    “If we just keep the business-as-usual trends in decarbonization and the current rate of hardware efficiency improvements, it doesn’t seem like it is going to be enough to constrain the emissions from computing onboard autonomous vehicles. This has the potential to become an enormous problem. But if we get ahead of it, we could design more efficient autonomous vehicles that have a smaller carbon footprint from the start,” says first author Soumya Sudhakar, a graduate student in aeronautics and astronautics.

    Sudhakar wrote the paper with her co-advisors Vivienne Sze, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE); and Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems (LIDS). The research appears today in the January-February issue of IEEE Micro.

    Modeling emissions

    The researchers built a framework to explore the operational emissions from computers on board a global fleet of electric vehicles that are fully autonomous, meaning they don’t require a back-up human driver.

    The model is a function of the number of vehicles in the global fleet, the power of each computer on each vehicle, the hours driven by each vehicle, and the carbon intensity of the electricity powering each computer.

    “On its own, that looks like a deceptively simple equation. But each of those variables contains a lot of uncertainty because we are considering an emerging application that is not here yet,” Sudhakar says.
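
    In back-of-the-envelope form (our arithmetic, not the paper’s probabilistic model), the equation and the headline scenario look like this; the 840-watt computer and one hour of driving per day come from the study, while the average grid carbon intensity is an assumed round number.

    ```python
    # emissions = fleet size x computer power x hours driven x grid carbon intensity
    vehicles = 1e9             # global fleet (study scenario)
    power_kw = 0.840           # onboard computing load (study scenario)
    hours_per_day = 1.0        # driving time per vehicle (study scenario)
    carbon_kg_per_kwh = 0.47   # assumed global-average grid intensity

    energy_twh = vehicles * power_kw * hours_per_day * 365 / 1e9
    emissions_mt = energy_twh * 1e9 * carbon_kg_per_kwh / 1e9   # kWh x kg/kWh -> Mt
    print(f"~{energy_twh:.0f} TWh/yr, ~{emissions_mt:.0f} Mt CO2/yr")
    # ~307 TWh/yr and ~144 Mt CO2/yr: the same order as data centers' share today.
    ```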

    For instance, some research suggests that the amount of time driven in autonomous vehicles might increase because people can multitask while driving and the young and the elderly could drive more. But other research suggests that time spent driving might decrease because algorithms could find optimal routes that get people to their destinations faster.

    In addition to considering these uncertainties, the researchers also needed to model advanced computing hardware and software that doesn’t exist yet.

    To accomplish that, they modeled the workload of a popular algorithm for autonomous vehicles, known as a multitask deep neural network because it can perform many tasks at once. They explored how much energy this deep neural network would consume if it were simultaneously processing many high-resolution inputs from many cameras with high frame rates.

    When they used the probabilistic model to explore different scenarios, Sudhakar was surprised by how quickly the algorithms’ workload added up.

    For example, if an autonomous vehicle has 10 deep neural networks processing images from 10 cameras, and that vehicle drives for one hour a day, it will make 21.6 million inferences each day. One billion vehicles would make 21.6 quadrillion inferences. To put that into perspective, all of Facebook’s data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion).
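
    The arithmetic behind those figures is simple to reproduce; the camera, network, and driving-time counts are from the example above, while the 60-frames-per-second rate is our assumption that makes the totals come out to the quoted numbers.

    ```python
    # 10 cameras x 10 networks x 60 fps (assumed) x 3,600 s = 21.6M inferences/day
    cameras, networks, fps, seconds = 10, 10, 60, 3600
    per_vehicle = cameras * networks * fps * seconds
    print(f"{per_vehicle:,} inferences per vehicle per day")            # 21,600,000
    print(f"{per_vehicle * 1_000_000_000:.2e} for a 1-billion-car fleet")  # 2.16e+16
    ```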

    “After seeing the results, this makes a lot of sense, but it is not something that is on a lot of people’s radar. These vehicles could actually be using a ton of computer power. They have a 360-degree view of the world, so while we have two eyes, they may have 20 eyes, looking all over the place and trying to understand all the things that are happening at the same time,” Karaman says.

    Autonomous vehicles would be used for moving goods, as well as people, so there could be a massive amount of computing power distributed along global supply chains, he says. And their model only considers computing — it doesn’t take into account the energy consumed by vehicle sensors or the emissions generated during manufacturing.

    Keeping emissions in check

    To keep emissions from spiraling out of control, the researchers found that each autonomous vehicle needs to consume less than 1.2 kilowatts of power for computing. For that to be possible, computing hardware must become more efficient at a significantly faster pace, doubling in efficiency about every 1.1 years.

    One way to boost that efficiency could be to use more specialized hardware, which is designed to run specific driving algorithms. Because researchers know the navigation and perception tasks required for autonomous driving, it could be easier to design specialized hardware for those tasks, Sudhakar says. But vehicles tend to have 10- or 20-year lifespans, so one challenge in developing specialized hardware would be to “future-proof” it so it can run new algorithms.

    In the future, researchers could also make the algorithms more efficient, so they would need less computing power. However, this is also challenging because trading off some accuracy for more efficiency could hamper vehicle safety.

    Now that they have demonstrated this framework, the researchers want to continue exploring hardware efficiency and algorithm improvements. In addition, they say their model can be enhanced by characterizing embodied carbon from autonomous vehicles — the carbon emissions generated when a car is manufactured — and emissions from a vehicle’s sensors.

    While there are still many scenarios to explore, the researchers hope that this work sheds light on a potential problem people may not have considered.

    “We are hoping that people will think of emissions and carbon efficiency as important metrics to consider in their designs. The energy consumption of an autonomous vehicle is really critical, not just for extending the battery life, but also for sustainability,” says Sze.

    This research was funded, in part, by the National Science Foundation and the MIT-Accenture Fellowship.

  • Manufacturing a cleaner future

    Manufacturing had a big summer. The CHIPS and Science Act, signed into law in August, represents a massive investment in U.S. domestic manufacturing. The act aims to drastically expand the U.S. semiconductor industry, strengthen supply chains, and invest in R&D for new technological breakthroughs. According to John Hart, professor of mechanical engineering and director of the Laboratory for Manufacturing and Productivity at MIT, the CHIPS Act is just the latest example of significantly increased interest in manufacturing in recent years.

    “You have multiple forces working together: reflections from the pandemic’s impact on supply chains, the geopolitical situation around the world, and the urgency and importance of sustainability,” says Hart. “This has now aligned incentives among government, industry, and the investment community to accelerate innovation in manufacturing and industrial technology.”

    Hand-in-hand with this increased focus on manufacturing is a need to prioritize sustainability.

    Roughly one-quarter of greenhouse gas emissions came from industry and manufacturing in 2020. Factories and plants can also deplete local water reserves and generate vast amounts of waste, some of which can be toxic.

    To address these issues and drive the transition to a low-carbon economy, new products and industrial processes must be developed alongside sustainable manufacturing technologies. Hart sees mechanical engineers as playing a crucial role in this transition.

    “Mechanical engineers can uniquely solve critical problems that require next-generation hardware technologies, and know how to bring their solutions to scale,” says Hart.

    Several fast-growing companies founded by faculty and alumni from MIT’s Department of Mechanical Engineering offer solutions for manufacturing’s environmental problem, paving the path for a more sustainable future.

    Gradiant: Cleantech water solutions

    Manufacturing requires water, and lots of it. A medium-sized semiconductor fabrication plant uses upward of 10 million gallons of water a day. In a world increasingly plagued by droughts, this dependence on water poses a major challenge.

    Gradiant offers a solution to this water problem. Co-founded by Anurag Bajpayee SM ’08, PhD ’12 and Prakash Govindan PhD ’12, the company is a pioneer in sustainable — or “cleantech” — water projects.

    As doctoral students in the Rohsenow Kendall Heat Transfer Laboratory, Bajpayee and Govindan shared a pragmatism and penchant for action. They both worked on desalination research — Bajpayee with Professor Gang Chen and Govindan with Professor John Lienhard.

    Inspired by a childhood spent during a severe drought in Chennai, India, Govindan developed for his PhD a humidification-dehumidification technology that mimicked natural rainfall cycles. It was with this piece of technology, which they named Carrier Gas Extraction (CGE), that the duo founded Gradiant in 2013.

    The key to CGE lies in a proprietary algorithm that accounts for variability in the quality and quantity of the wastewater feed. At the heart of the algorithm is a nondimensional number, which Govindan proposes one day be called the “Lienhard Number,” after his doctoral advisor.

    “When the water quality varies in the system, our technology automatically sends a signal to motors within the plant to adjust the flow rates to bring back the nondimensional number to a value of one. Once it’s brought back to a value of one, you’re running in optimal condition,” explains Govindan, who serves as chief operating officer of Gradiant.
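
    In spirit, this is a feedback loop like the toy sketch below. Gradiant’s actual algorithm and the definition of the nondimensional number are proprietary, so the proportional correction and its sign convention here are purely illustrative assumptions.

    ```python
    # Illustrative feedback loop (not Gradiant's algorithm): nudge a flow rate
    # until a measured nondimensional number returns to its optimal value of 1.
    def control_step(flow_rate, measured_number, gain=0.5):
        # Proportional correction: above 1, slow the flow; below 1, speed it up.
        # The sign convention is an assumption of this toy.
        error = 1.0 - measured_number
        return max(0.0, flow_rate * (1.0 + gain * error))

    flow = 100.0                               # L/min, arbitrary starting point
    for measured in [1.3, 1.15, 1.05, 1.0]:    # readings drifting back toward 1
        flow = control_step(flow, measured)
        print(f"adjusted flow: {flow:.1f} L/min")
    ```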

    This system can treat and clean the wastewater produced by a manufacturing plant for reuse, ultimately conserving millions of gallons of water each year.

    As the company has grown, the Gradiant team has added new technologies to their arsenal, including Selective Contaminant Extraction, a cost-efficient method that removes only specific contaminants, and a brine-concentration method called Counter-Flow Reverse Osmosis. They now offer a full technology stack of water and wastewater treatment solutions to clients in industries including pharmaceuticals, energy, mining, food and beverage, and the ever-growing semiconductor industry.

    “We are an end-to-end water solutions provider. We have a portfolio of proprietary technologies and will pick and choose from our ‘quiver’ depending on a customer’s needs,” says Bajpayee, who serves as CEO of Gradiant. “Customers look at us as their water partner. We can take care of their water problem end-to-end so they can focus on their core business.”

    Gradiant has seen explosive growth over the past decade. With 450 water and wastewater treatment plants built to date, they treat the equivalent of 5 million households’ worth of water each day. Recent acquisitions saw their total employees rise to above 500.

    The diversity of Gradiant’s solutions is reflected in their clients, who include Pfizer, AB InBev, and Coca-Cola. They also count semiconductor giants like Micron Technology, GlobalFoundries, Intel, and TSMC among their customers.

    “Over the last few years, we have really developed our capabilities and reputation serving semiconductor wastewater and semiconductor ultrapure water,” says Bajpayee.

    Semiconductor manufacturers require ultrapure water for fabrication. Unlike drinking water, which has a total dissolved solids range in the parts per million, water used to manufacture microchips has a range in the parts per billion or quadrillion.

    Currently, the average recycling rate at semiconductor fabrication plants — or fabs — in Singapore is only 43 percent. Using Gradiant’s technologies, these fabs can recycle 98-99 percent of the 10 million gallons of water they require daily. This reused water is pure enough to be put back into the manufacturing process.

    “What we’ve done is eliminated the discharge of this contaminated water and nearly eliminated the dependence of the semiconductor fab on the public water supply,” adds Bajpayee.

    With new regulations being introduced, pressure is increasing for fabs to improve their water use, making sustainability even more important to brand owners and their stakeholders.

    As the domestic semiconductor industry expands in light of the CHIPS and Science Act, Gradiant sees an opportunity to bring their semiconductor water treatment technologies to more factories in the United States.

    Via Separations: Efficient chemical filtration

    Like Bajpayee and Govindan, Shreya Dave ’09, SM ’12, PhD ’16 focused on desalination for her doctoral thesis. Under the guidance of her advisor Jeffrey Grossman, professor of materials science and engineering, Dave built a membrane that could enable more efficient and cheaper desalination.

    A thorough cost and market analysis brought Dave to the conclusion that the desalination membrane she developed would not make it to commercialization.

    “The current technologies are just really good at what they do. They’re low-cost, mass produced, and they worked. There was no room in the market for our technology,” says Dave.

    Shortly after defending her thesis, she read a commentary article in the journal Nature that changed everything. The article outlined a problem. Chemical separations that are central to many manufacturing processes require a huge amount of energy. Industry needed more efficient and cheaper membranes. Dave thought she might have a solution.

    After determining there was an economic opportunity, Dave, Grossman, and Brent Keller PhD ’16 founded Via Separations in 2017. Shortly thereafter, they were chosen as one of the first companies to receive funding from MIT’s venture firm, The Engine.

    Currently, industrial filtration is done by heating chemicals at very high temperatures to separate compounds. Dave likens it to making pasta by boiling all of the water off until it evaporates and all you are left with is the pasta noodles. In manufacturing, this method of chemical separation is extremely energy-intensive and inefficient.

    Via Separations has created the chemical equivalent of a “pasta strainer.” Rather than using heat to separate, their membranes “strain” chemical compounds. This method of chemical filtration uses 90 percent less energy than standard methods.

    While most membranes are made of polymers, Via Separations’ membranes are made with graphene oxide, which can withstand high temperatures and harsh conditions. The membrane is calibrated to the customer’s needs by altering the pore size and tuning the surface chemistry.

    Currently, Dave and her team are focusing on the pulp and paper industry as their beachhead market. They have developed a system that makes the recovery of a substance known as “black liquor” more energy efficient.

    “When a tree becomes paper, only one-third of the biomass is used for the paper. Currently the most valuable use for the remaining two-thirds not needed for paper is to take it from a pretty dilute stream to a pretty concentrated stream using evaporators by boiling off the water,” says Dave.

    This black liquor is then burned. Most of the resulting energy is used to power the filtration process.

    “This closed-loop system accounts for an enormous amount of energy consumption in the U.S. We can make that process 84 percent more efficient by putting the ‘pasta strainer’ in front of the boiler,” adds Dave.

    VulcanForms: Additive manufacturing at industrial scale

    The first semester John Hart taught at MIT was a fruitful one. He taught a course on 3D printing, broadly known as additive manufacturing (AM). While it wasn’t his main research focus at the time, he found the topic fascinating. So did many of the students in the class, including Martin Feldmann MEng ’14.

    After graduating with his MEng in advanced manufacturing, Feldmann joined Hart’s research group full time. There, they bonded over their shared interest in AM. They saw an opportunity to innovate with an established metal AM technology, known as laser powder bed fusion, and came up with a concept to realize metal AM at an industrial scale.

    The pair co-founded VulcanForms in 2015.

    “We have developed a machine architecture for metal AM that can build parts with exceptional quality and productivity,” says Hart. “And, we have integrated our machines in a fully digital production system, combining AM, postprocessing, and precision machining.”

    Unlike other companies that sell 3D printers for others to produce parts, VulcanForms makes and sells parts for their customers using their fleet of industrial machines. VulcanForms has grown to nearly 400 employees. Last year, the team opened their first production factory, known as “VulcanOne,” in Devens, Massachusetts.

    The quality and precision with which VulcanForms produces parts is critical for products like medical implants, heat exchangers, and aircraft engines. Their machines can print layers of metal thinner than a human hair.

    “We’re producing components that are difficult, or in some cases impossible to manufacture otherwise,” adds Hart, who sits on the company’s board of directors.

    The technologies developed at VulcanForms may help lead to a more sustainable way to manufacture parts and products, both directly through the additive process and indirectly through more efficient, agile supply chains.

    One way that VulcanForms, and AM in general, promotes sustainability is through material savings.

    Many of the materials VulcanForms uses, such as titanium alloys, require a great deal of energy to produce. When titanium parts are 3D-printed, substantially less of the material is used than in a traditional machining process. This material efficiency is where Hart sees AM making a large impact in terms of energy savings.

    Hart also points out that AM can accelerate innovation in clean energy technologies, ranging from more efficient jet engines to future fusion reactors.

    “Companies seeking to de-risk and scale clean energy technologies require know-how and access to advanced manufacturing capability, and industrial additive manufacturing is transformative in this regard,” Hart adds.

    LiquiGlide: Reducing waste by removing friction

    There is an unlikely culprit when it comes to waste in manufacturing and consumer products: friction. Kripa Varanasi, professor of mechanical engineering, and the team at LiquiGlide are on a mission to create a frictionless future, and substantially reduce waste in the process.

    Founded in 2012 by Varanasi and alum David Smith SM ’11, LiquiGlide designs custom coatings that enable liquids to “glide” on surfaces. Every last drop of a product can be used, whether it’s being squeezed out of a tube of toothpaste or drained from a 500-liter tank at a manufacturing plant. Making containers frictionless substantially minimizes wasted product, and eliminates the need to clean a container before recycling or reusing.

    Since launching, the company has found great success in consumer products. Customer Colgate utilized LiquiGlide’s technologies in the design of the Colgate Elixir toothpaste bottle, which has been honored with several industry awards for design. In a collaboration with world-renowned designer Yves Béhar, LiquiGlide is applying their technology to beauty and personal care product packaging. Meanwhile, the U.S. Food and Drug Administration has granted them a Device Master Filing, opening up opportunities for the technology to be used in medical devices, drug delivery, and biopharmaceuticals.

    In 2016, the company developed a system to make manufacturing containers frictionless. Called CleanTanX, the technology is used to treat the surfaces of tanks, funnels, and hoppers, preventing materials from sticking to the side. The system can reduce material waste by up to 99 percent.

    “This could really change the game. It saves wasted product, reduces wastewater generated from cleaning tanks, and can help make the manufacturing process zero-waste,” says Varanasi, who serves as chair at LiquiGlide.

    LiquiGlide works by creating a coating made of a textured solid and liquid lubricant on the container surface. When applied to a container, the lubricant remains infused within the texture. Capillary forces stabilize the liquid and allow it to spread on the surface, creating a continuously lubricated surface that any viscous material can slide right down. The company uses a thermodynamic algorithm to determine the combinations of safe solids and liquids depending on the product, whether it’s toothpaste or paint.

    The company has built a robotic spraying system that can treat large vats and tanks at manufacturing plants on site. In addition to saving companies millions of dollars in wasted product, LiquiGlide drastically reduces the amount of water needed to regularly clean these containers, which normally have product stuck to the sides.

    “Normally when you empty everything out of a tank, you still have residue that needs to be cleaned with a tremendous amount of water. In agrochemicals, for example, there are strict regulations about how to deal with the resulting wastewater, which is toxic. All of that can be eliminated with LiquiGlide,” says Varanasi.

    While the closure of many manufacturing facilities early in the pandemic slowed down the rollout of CleanTanX pilots at plants, things have picked up in recent months. As manufacturing ramps up both globally and domestically, Varanasi sees a growing need for LiquiGlide’s technologies, especially for liquids like semiconductor slurry.

    Companies like Gradiant, Via Separations, VulcanForms, and LiquiGlide demonstrate that an expansion in manufacturing industries does not need to come at a steep environmental cost. It is possible for manufacturing to be scaled up in a sustainable way.

    “Manufacturing has always been the backbone of what we do as mechanical engineers. At MIT in particular, there is always a drive to make manufacturing sustainable,” says Evelyn Wang, Ford Professor of Engineering and former head of the Department of Mechanical Engineering. “It’s amazing to see how startups that have an origin in our department are looking at every aspect of the manufacturing process and figuring out how to improve it for the health of our planet.”

    As legislation like the CHIPS and Science Act fuels growth in manufacturing, there will be an increased need for startups and companies that develop solutions to mitigate the environmental impact, bringing us closer to a more sustainable future.

  • New nanosatellite tests autonomy in space

    In May 2022, a SpaceX Falcon 9 rocket launched the Transporter-5 mission into orbit. The mission contained a collection of micro and nanosatellites from both industry and government, including one from MIT Lincoln Laboratory called the Agile MicroSat (AMS).

    AMS’s primary mission is to test automated maneuvering capabilities in the tumultuous very low-Earth orbit (VLEO) environment, starting at 525 kilometers above the surface and descending from there. VLEO is a challenging location for satellites because the higher air density, coupled with variable space weather, causes increased and unpredictable drag that requires frequent maneuvers to maintain position. Using a commercial off-the-shelf electric-ion propulsion system and custom algorithms, AMS is testing how well it can execute automated navigation and control over an initial mission period of six months.

    “AMS integrates electric propulsion and autonomous navigation and guidance control algorithms that push a lot of the operation of the thruster onto the spacecraft — somewhat like a self-driving car,” says Andrew Stimac, who is the principal investigator for the AMS program and the leader of the laboratory’s Integrated Systems and Concepts Group.

    Stimac sees AMS as a kind of pathfinder mission for the field of small satellite autonomy. Autonomy is essential to support the growing number of small satellite launches for industry and science because it can reduce the cost and labor needed to maintain them, enable missions that call for quick and impromptu responses, and help to avoid collisions in an already-crowded sky.

    AMS is the first-ever test of a nanosatellite with this type of automated maneuvering capability.

    AMS uses an electric propulsion thruster that was selected to meet the size and power constraints of a nanosatellite while providing enough thrust and endurance to enable multiyear missions that operate in VLEO. The flight software, called the Bus Hosted Onboard Software Suite, was designed to autonomously operate the thruster to change the spacecraft’s orbit. Operators on the ground can give AMS a high-level command, such as to descend to and maintain a 300-kilometer orbit, and the software will schedule thruster burns to achieve that command autonomously, using measurements from the onboard GPS receiver as feedback. This experimental software is separate from the bus flight software, which allows AMS to safely test its novel algorithms without endangering the spacecraft.
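
    Conceptually, the loop resembles the following sketch, which is our illustration rather than Lincoln Laboratory’s flight software: compare the GPS-derived altitude against the commanded target and schedule a burn only when the error leaves an assumed deadband.

    ```python
    # Conceptual station-keeping sketch (illustrative, not the AMS flight code).
    TARGET_KM = 300.0      # commanded orbit altitude
    DEADBAND_KM = 5.0      # tolerance before thrusting (assumed)

    def schedule_burn(gps_altitude_km):
        error = gps_altitude_km - TARGET_KM
        if abs(error) <= DEADBAND_KM:
            return None                        # within tolerance: coast
        # Drag lowers the orbit over time, so most burns fight decay; a negative
        # error means the orbit has sunk too low and needs a prograde burn.
        direction = "prograde" if error < 0 else "retrograde"
        return {"direction": direction, "duration_s": min(600, 60 * abs(error))}

    for alt in [312.0, 304.0, 293.0]:          # simulated GPS readings over time
        print(alt, "->", schedule_burn(alt))
    ```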

    “One of the enablers for AMS is the way in which we’ve created this software sandbox onboard the spacecraft,” says Robert Legge, who is another member of the AMS team. “We have our own hosted software that’s running on the primary flight computer, but it’s separate from the critical health and safety avionics software. Basically, you can view this as being a little development environment on the spacecraft where we can test out different algorithms.”

    AMS has two secondary missions called Camera and Beacon. Camera’s mission is to take photos and short video clips of the Earth’s surface while AMS is in different low-Earth orbit positions.

    “One of the things we’re hoping to demonstrate is the ability to respond to current events,” says Rebecca Keenan, who helped to prepare the Camera payload. “We could hear about something that happened, like a fire or flood, and then respond pretty quickly to maneuver the satellite to image it.”

    Keenan and the rest of the AMS team are collaborating with the laboratory’s DisasterSat program, which aims to improve satellite image processing pipelines to help relief agencies respond to disasters more quickly. Small satellites that could schedule operations on-demand, rather than planning them months in advance before launch, could be a great asset to disaster response efforts.

    The other payload, Beacon, is testing new adaptive optics capabilities for tracking fast-moving targets by sending laser light from the moving satellite to a ground station at the laboratory’s Haystack Observatory in Westford, Massachusetts. Enabling precise laser pointing from an agile satellite could aid many different types of space missions, such as communications and tracking space debris. It could also be used for emerging programs such as Breakthrough Starshot, which is developing a satellite that can accelerate to high speeds using a laser-propelled lightsail.

    “As far as we know, this is the first on-orbit artificial guide star that has launched for a dedicated adaptive optics purpose,” says Lulu Liu, who worked on the Beacon payload. “Theoretically, the laser it carries can be maneuvered into position on other spacecraft to support a large number of science missions in different regions of the sky.”

    The team developed Beacon with a strict budget and timeline and hope that its success will shorten the design and test loop of next-generation laser transmitter systems. “The idea is that we could have a number of these flying in the sky at once, and a ground system can point to one of them and get near-real-time feedback on its performance,” says Liu.

    AMS weighs under 12 kilograms with 6U dimensions (23 x 11 x 36 centimeters). The bus was designed by Blue Canyon Technologies and the thruster was designed by Enpulsion GmbH.

    Legge says that the AMS program was approached as an opportunity for Lincoln Laboratory to showcase its ability to conduct work in the space domain quickly and flexibly. Some major roadblocks to rapid development of new space technology have been long timelines, high costs, and the extremely low risk tolerance associated with traditional space programs. “We wanted to show that we can really do rapid prototyping and testing of space hardware and software on orbit at an affordable cost,” Legge says.

    “AMS shows the value and fast time-to-orbit afforded by teaming with rapid space commercial partners for spacecraft core bus technologies and launch and ground segment operations, while allowing the laboratory to focus on innovative mission concepts, advanced components and payloads, and algorithms and processing software,” says Dan Cousins, who is the program manager for AMS. “The AMS team appreciates the support from the laboratory’s Technology Office for allowing us to showcase an effective operating model for rapid space programs.”

    AMS took its first image on June 1, completed its thruster commissioning in July, and has begun to descend toward its target VLEO position.

  • A new method boosts wind farms’ energy output, without new equipment

    Virtually all wind turbines, which produce more than 5 percent of the world’s electricity, are controlled as if they were individual, free-standing units. In fact, the vast majority are part of larger wind farm installations involving dozens or even hundreds of turbines, whose wakes can affect each other.

    Now, engineers at MIT and elsewhere have found that, with no need for any new investment in equipment, the energy output of such wind farm installations can be increased by modeling the wind flow of the entire collection of turbines and optimizing the control of individual units accordingly.

    The increase in energy output from a given installation may seem modest — it’s about 1.2 percent overall, and 3 percent for optimal wind speeds. But the algorithm can be deployed at any wind farm, and the number of wind farms is rapidly growing to meet accelerated climate goals. If that 1.2 percent energy increase were applied to all the world’s existing wind farms, it would be the equivalent of adding more than 3,600 new wind turbines, or enough to power about 3 million homes, and a total gain to power producers of almost a billion dollars per year, the researchers say. And all of this for essentially no cost.

    The research is published today in the journal Nature Energy, in a study led by Michael F. Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering at MIT.

    “Essentially all existing utility-scale turbines are controlled ‘greedily’ and independently,” says Howland. The term “greedily,” he explains, refers to the fact that they are controlled to maximize only their own power production, as if they were isolated units with no detrimental impact on neighboring turbines.

    But in the real world, turbines are deliberately spaced close together in wind farms to achieve economic benefits related to land use (on- or offshore) and to infrastructure such as access roads and transmission lines. This proximity means that turbines are often strongly affected by the turbulent wakes produced by others that are upwind from them — a factor that individual turbine-control systems do not currently take into account.

    “From a flow-physics standpoint, putting wind turbines close together in wind farms is often the worst thing you could do,” Howland says. “The ideal approach to maximize total energy production would be to put them as far apart as possible,” but that would increase the associated costs.

    That’s where the work of Howland and his collaborators comes in. They developed a new flow model that predicts the power production of each turbine in the farm depending on the incident winds in the atmosphere and the control strategy of each turbine. While based on flow physics, the model learns from operational wind farm data to reduce predictive error and uncertainty. Without changing anything about the physical turbine locations and hardware systems of existing wind farms, they have used this physics-based, data-assisted modeling of the flow within the wind farm, and the resulting power production of each turbine under different wind conditions, to find the optimal orientation for each turbine at a given moment. This allows them to maximize the output from the whole farm, not just the individual turbines.

    Today, each turbine constantly senses the incoming wind direction and speed and uses its internal control software to adjust its yaw angle (its orientation about the vertical axis) to align as closely as possible with the wind. But in the new system, for example, the team has found that by turning one turbine just slightly away from its own maximum output position — perhaps 20 degrees away from its individual peak output angle — the resulting increase in power output from one or more downwind units will more than make up for the slight reduction in output from the first unit. By using a centralized control system that takes all of these interactions into account, the collection of turbines was operated at power output levels that were as much as 32 percent higher under some conditions.
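
    A toy version of this trade-off is easy to write down, as in the sketch below: the cosine-cubed yaw-loss term is a model commonly used in the wake-steering literature, while the downstream recovery curve is a made-up stand-in for the team’s validated flow model.

    ```python
    # Toy wake-steering trade-off for two turbines in a row (illustrative model).
    import math

    def upwind_power(yaw_deg):
        # Yawing away from the wind costs the upwind turbine power (~cos^3).
        return math.cos(math.radians(yaw_deg)) ** 3

    def downwind_power(yaw_deg):
        # Assumed recovery: fully waked at 0 degrees (60% power), approaching
        # free stream as the wake is steered aside. Made-up curve.
        return 0.6 + 0.4 * min(1.0, abs(yaw_deg) / 25.0)

    best = max(range(0, 31), key=lambda y: upwind_power(y) + downwind_power(y))
    for y in (0, best):
        total = upwind_power(y) + downwind_power(y)
        print(f"yaw {y:2d} deg: farm power {total:.3f} (per-turbine units)")
    # The optimum lands near 20 degrees, echoing the example in the text.
    ```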

    In a months-long experiment in a real utility-scale wind farm in India, the predictive model was first validated by testing a wide range of yaw orientation strategies, most of which were intentionally suboptimal. By testing many control strategies, including suboptimal ones, in both the real farm and the model, the researchers could identify the true optimal strategy. Importantly, the model was able to predict the farm power production and the optimal control strategy for most wind conditions tested, giving confidence that the predictions of the model would track the true optimal operational strategy for the farm. This enables the use of the model to design the optimal control strategies for new wind conditions and new wind farms without needing to perform fresh calculations from scratch.

    Then, a second months-long experiment at the same farm, which implemented only the optimal control predictions from the model, proved that the algorithm’s real-world effects could match the overall energy improvements seen in simulations. Averaged over the entire test period, the system achieved a 1.2 percent increase in energy output at all wind speeds, and a 3 percent increase at speeds between 6 and 8 meters per second (about 13 to 18 miles per hour).

    While the test was run at one wind farm, the researchers say the model and cooperative control strategy can be implemented at any existing or future wind farm. Howland estimates that, translated to the world’s existing fleet of wind turbines, a 1.2 percent overall energy improvement would produce more than 31 terawatt-hours of additional electricity per year, approximately equivalent to installing an extra 3,600 wind turbines at no cost. This would translate into some $950 million per year in extra revenue for wind farm operators, he says.
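    Those fleet-level figures hold up to a simple consistency check; the turbine rating and capacity factor below are assumptions introduced only for the arithmetic, not numbers from the study.

```python
# Back-of-envelope check on the fleet-wide figures quoted above.
extra_energy_twh = 31.0          # reported gain from a 1.2 percent improvement
equivalent_turbines = 3_600      # reported equivalent number of turbines
extra_revenue_usd = 950e6        # reported extra annual revenue

per_turbine_gwh = extra_energy_twh * 1_000 / equivalent_turbines
# ~8.6 GWh/yr each; an assumed ~2.5 MW turbine at an assumed ~40 percent
# capacity factor yields 2.5 * 8760 * 0.4 / 1000 ~ 8.8 GWh/yr. Consistent.

implied_price = extra_revenue_usd / (extra_energy_twh * 1e6)   # $ per MWh
# ~$31/MWh, a plausible wholesale electricity price.
print(round(per_turbine_gwh, 1), round(implied_price, 1))
```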

    The amount of energy to be gained will vary widely from one wind farm to another, depending on an array of factors including the spacing of the units, the geometry of their arrangement, and the variations in wind patterns at that location over the course of a year. But in all cases, the model developed by this team can provide a clear prediction of exactly what the potential gains are for a given site, Howland says. “The optimal control strategy and the potential gain in energy will be different at every wind farm, which motivated us to develop a predictive wind farm model which can be used widely, for optimization across the wind energy fleet,” he adds.

    But the new system can potentially be adopted quickly and easily, he says. “We don’t require any additional hardware installation. We’re really just making a software change, and there’s a significant potential energy increase associated with it.” Even a 1 percent improvement, he points out, means that in a typical wind farm of about 100 units, operators could get the same output with one fewer turbine, thus saving the costs, usually millions of dollars, associated with purchasing, building, and installing that unit.

    Further, he notes, by reducing wake losses the algorithm could make it possible to place turbines more closely together within future wind farms, increasing the power density of wind energy and saving on land (or sea) footprints. This increase in power density and reduction in footprint could help achieve pressing greenhouse gas emission reduction goals, which call for a substantial expansion of wind energy deployment, both on- and offshore.

    What’s more, he says, the biggest new area of wind farm development is offshore, and “the impact of wake losses is often much higher in offshore wind farms.” That means the impact of this new approach to controlling those wind farms could be significantly greater.

    The Howland Lab and the international team are continuing to refine the models and to improve the operational instructions they derive from them, moving toward autonomous, cooperative control and striving for the greatest possible power output from a given set of conditions, Howland says.

    The research team includes Jesús Bas Quesada, Juan José Pena Martinez, and Felipe Palou Larrañaga of Siemens Gamesa Renewable Energy Innovation and Technology in Navarra, Spain; Neeraj Yadav and Jasvipul Chawla of ReNew Power Private Limited in Haryana, India; Varun Sivaram, formerly of ReNew Power and now at the Office of the U.S. Special Presidential Envoy for Climate, U.S. Department of State; and John Dabiri of the California Institute of Technology. The work was supported by the MIT Energy Initiative and Siemens Gamesa Renewable Energy.

  • in

    How can we reduce the carbon footprint of global computing?

    The voracious appetite for energy from the world’s computers and communications technology presents a clear threat to the global climate. That was the blunt assessment from presenters at the intensive two-day Climate Implications of Computing and Communications workshop held on March 3 and 4, hosted by MIT’s Climate and Sustainability Consortium (MCSC), MIT-IBM Watson AI Lab, and the Schwarzman College of Computing.

    The virtual event featured rich discussions and highlighted opportunities for collaboration among an interdisciplinary group of MIT faculty and researchers and industry leaders across multiple sectors — underscoring the power of academia and industry coming together.

    “If we continue with the existing trajectory of compute energy, by 2040 we are supposed to hit the world’s energy production capacity. The increase in compute energy and demand has been increasing at a much faster rate than the world energy production capacity increase,” said Bilge Yildiz, the Breene M. Kerr Professor in the MIT departments of Nuclear Science and Engineering and Materials Science and Engineering, one of the workshop’s 18 presenters. This computing energy projection draws from the Semiconductor Research Corporation’s decadal report.

    To cite just one example: information and communications technology already accounts for more than 2 percent of global energy demand, on a par with the aviation industry’s emissions from fuel.

    “We are at the very beginning of this data-driven world. We really need to start thinking about this and act now,” said presenter Evgeni Gousev, senior director at Qualcomm.

    Innovative energy-efficiency options

    To that end, the workshop presentations explored a host of energy-efficiency options, including specialized chip design, data center architecture, better algorithms, hardware modifications, and changes in consumer behavior. Industry leaders from AMD, Ericsson, Google, IBM, iRobot, NVIDIA, Qualcomm, Tertill, Texas Instruments, and Verizon outlined their companies’ energy-saving programs, while experts from across MIT provided insight into current research that could yield more efficient computing.

    Panel topics ranged from “Custom hardware for efficient computing” to “Hardware for new architectures” to “Algorithms for efficient computing,” among others.

    Visual representation of the conversation during the workshop session entitled “Energy Efficient Systems.”

    Image: Haley McDevitt


    The goal, said Yildiz, is to improve the energy efficiency of computing by more than a million-fold.

    “I think part of the answer of how we make computing much more sustainable has to do with specialized architectures that have very high level of utilization,” said Darío Gil, IBM senior vice president and director of research, who stressed that solutions should be as “elegant” as possible.

    For example, Gil illustrated an innovative chip design that uses vertical stacking to reduce the distance data has to travel, and thus reduces energy consumption. Surprisingly, more effective use of tape — a traditional medium for primary data storage — combined with specialized hard disk drives (HDDs) can yield dramatic savings in carbon dioxide emissions.

    Gil and presenters Bill Dally, chief scientist and senior vice president of research at NVIDIA; Ahmad Bahai, CTO of Texas Instruments; and others zeroed in on storage. Gil compared data to a floating iceberg: we can have fast access to the “hot data” of the smaller, visible part, while the “cold data,” the large underwater mass, represents data that tolerates higher latency. Think about digital photo storage, Gil said. “Honestly, are you really retrieving all of those photographs on a continuous basis?” Storage systems should provide an optimized mix of HDDs for hot data and tape for cold data, based on data access patterns.

    Bahai stressed the significant energy savings gained from segmenting standby and full processing. “We need to learn how to do nothing better,” he said. Dally spoke of mimicking the way our brain wakes up from a deep sleep: “We can wake [computers] up much faster, so we don’t need to keep them running at full speed.”

    Several workshop presenters spoke of a focus on “sparsity,” a matrix in which most of the elements are zero, as a way to improve efficiency in neural networks. Or, as Dally said, “Never put off till tomorrow what you could put off forever,” explaining that efficiency is not “getting the most information with the fewest bits. It’s doing the most with the least energy.”

    Holistic and multidisciplinary approaches

    “We need both efficient algorithms and efficient hardware, and sometimes we need to co-design both the algorithm and the hardware for efficient computing,” said Song Han, a panel moderator and assistant professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT.

    Some presenters were optimistic about innovations already underway. According to Ericsson’s research, as much as 15 percent of global carbon emissions can be reduced through the use of existing solutions, noted Mats Pellbäck Scharp, head of sustainability at Ericsson. For example, GPUs are more efficient than CPUs for AI, and the progression from 3G to 5G networks boosts energy savings.

    “5G is the most energy-efficient standard ever,” said Scharp. “We can build 5G without increasing energy consumption.”

    Companies such as Google are optimizing energy use at their data centers through improved design, technology, and renewable energy. “Five of our data centers around the globe are operating near or above 90 percent carbon-free energy,” said Jeff Dean, Google’s senior fellow and senior vice president of Google Research.

    Yet, pointing to the possible slowdown in the doubling of transistors in an integrated circuit — or Moore’s Law — “We need new approaches to meet this compute demand,” said Sam Naffziger, AMD senior vice president, corporate fellow, and product technology architect.
    Naffziger spoke of addressing performance “overkill.” For example, “we’re finding in the gaming and machine learning space we can make use of lower-precision math to deliver an image that looks just as good with 16-bit computations as with 32-bit computations, and instead of legacy 32b math to train AI networks, we can use lower-energy 8b or 16b computations.”
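    Two of the levers raised in these sessions, lower-precision arithmetic and sparsity, can be illustrated directly in terms of the data that must be stored and moved. The rough NumPy/SciPy sketch below is not any presenter’s implementation; the matrix size and the 99 percent sparsity level are arbitrary assumptions.

```python
# Illustration of memory footprints under lower precision and sparsity.
import numpy as np
from scipy import sparse

# Lower precision: the same 1000 x 1000 matrix in 32-bit vs 16-bit floats.
dense32 = np.random.rand(1000, 1000).astype(np.float32)
dense16 = dense32.astype(np.float16)
print(dense32.nbytes, dense16.nbytes)     # 4,000,000 vs 2,000,000 bytes

# Sparsity: if ~99 percent of entries are zero, a compressed format stores
# (and moves) only the nonzeros, which is where energy savings come from.
mask = np.random.rand(1000, 1000) < 0.01
sparse_mat = sparse.csr_matrix(dense32 * mask)
print(sparse_mat.data.nbytes + sparse_mat.indices.nbytes
      + sparse_mat.indptr.nbytes)         # roughly 2 percent of the dense size
```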

    Visual representation of the conversation during the workshop session entitled “Wireless, networked, and distributed systems.”

    Image: Haley McDevitt


    Other presenters singled out compute at the edge as a prime energy hog.

    “We also have to change the devices that are put in our customers’ hands,” said Heidi Hemmer, senior vice president of engineering at Verizon. As we think about how we use energy, it is common to jump to data centers, but it really starts at the device itself and the energy the devices use. Then we can think about home web routers, distributed networks, the data centers, and the hubs. “The devices are actually the least energy-efficient out of that,” concluded Hemmer.

    Some presenters had different perspectives. Several called for developing dedicated silicon chipsets for efficiency. However, panel moderator Muriel Medard, the Cecil H. Green Professor in EECS, described research at MIT, Boston University, and Maynooth University on the GRAND (Guessing Random Additive Noise Decoding) chip, saying, “rather than having obsolescence of chips as the new codes come in and in different standards, you can use one chip for all codes.” (A toy sketch of the guessing idea behind GRAND appears at the end of this story.)

    Whatever the chip or new algorithm, Helen Greiner, CEO of Tertill (a weeding robot) and co-founder of iRobot, emphasized that to get products to market, “we have to learn to go away from wanting to get the absolute latest and greatest, the most advanced processor that usually is more expensive.” She added, “I like to say robot demos are a dime a dozen, but robot products are very infrequent.”

    Greiner emphasized that consumers can play a role in pushing for more energy-efficient products, just as drivers began to demand electric cars.

    Dean also sees an environmental role for the end user. “We have enabled our cloud customers to select which cloud region they want to run their computation in, and they can decide how important it is that they have a low carbon footprint,” he said, also citing other interfaces that might allow consumers to decide which air flights are more efficient or what impact installing a solar panel on their home would have.

    However, Scharp said, “Prolonging the life of your smartphone or tablet is really the best climate action you can do if you want to reduce your digital carbon footprint.”

    Facing increasing demands

    Despite their optimism, the presenters acknowledged that the world faces increasing compute demand from machine learning, AI, gaming, and, especially, blockchain. Panel moderator Vivienne Sze, associate professor in EECS, noted the conundrum.

    “We can do a great job in making computing and communication really efficient. But there is this tendency that once things are very efficient, people use more of it, and this might result in an overall increase in the usage of these technologies, which will then increase our overall carbon footprint,” Sze said.

    Presenters saw great potential in academic/industry partnerships, particularly from research efforts on the academic side. “By combining these two forces together, you can really amplify the impact,” concluded Gousev.

    Presenters at the Climate Implications of Computing and Communications workshop also included: Joel Emer, professor of the practice in EECS at MIT; David Perreault, the Joseph F. and Nancy P. Keithley Professor of EECS at MIT; Jesús del Alamo, MIT Donner Professor and professor of electrical engineering in EECS at MIT; Heike Riel, IBM Fellow and head of science and technology at IBM; and Takashi Ando, principal research staff member at IBM Research. The recorded workshop sessions are available on YouTube.
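    For readers curious about the GRAND approach Medard described, here is a toy sketch of the core idea: instead of a code-specific decoder, guess noise patterns from most to least likely and stop at the first guess that turns the received word into a valid codeword. The tiny (7,4) Hamming code and the weight-ordered guessing (appropriate for a channel where lighter error patterns are more likely) are illustrative assumptions; the actual chip and algorithms are more sophisticated.

```python
# Toy GRAND-style decoder: guess likely noise patterns, test membership.
import itertools
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],   # a parity-check matrix of the
              [1, 0, 1, 1, 0, 1, 0],   # (7,4) Hamming code
              [0, 1, 1, 1, 0, 0, 1]])

def is_codeword(word):
    """Codebook membership test: all parity checks must be satisfied."""
    return not np.any(H @ word % 2)

def grand_decode(received, max_weight=3):
    n = len(received)
    for w in range(max_weight + 1):                 # lightest noise first
        for flips in itertools.combinations(range(n), w):
            error = np.zeros(n, dtype=int)
            error[list(flips)] = 1
            guess = (received + error) % 2          # remove guessed noise
            if is_codeword(guess):
                return guess, error
    return None, None

# Example: corrupt one bit of the all-zeros codeword and recover it.
received = np.array([0, 0, 0, 0, 1, 0, 0])
decoded, noise = grand_decode(received)
print(decoded, noise)   # all-zeros codeword, single-bit noise pattern
```

    The same guessing loop works for any code that exposes a membership test, which is the sense in which “one chip for all codes” becomes possible.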