More stories

  • The curse of variety in transportation systems

    Cathy Wu has always delighted in systems that run smoothly. In high school, she designed a project to optimize the route for getting to class on time. Her research interests and career track are evidence of a propensity for organizing and optimizing, coupled with a strong sense of responsibility to contribute to society, instilled by her parents at a young age.

    As an undergraduate at MIT, Wu explored domains like agriculture, energy, and education, eventually homing in on transportation. “Transportation touches each of our lives,” she says. “Every day, we experience the inefficiencies and safety issues as well as the environmental harms associated with our transportation systems. I believe we can and should do better.”

    But doing so is complicated. Consider the long-standing issue of traffic systems control. Wu explains that it is not one problem, but more accurately a family of control problems impacted by variables like time of day, weather, and vehicle type — not to mention the types of sensing and communication technologies used to measure roadway information. Every differentiating factor introduces an exponentially larger set of control problems. There are thousands of control-problem variations and hundreds, if not thousands, of studies and papers dedicated to each problem. Wu refers to the sheer number of variations as the curse of variety — and it is hindering innovation.

    “To prove that a new control strategy can be safely deployed on our streets can take years. As time lags, we lose opportunities to improve safety and equity while mitigating environmental impacts. Accelerating this process has huge potential,” says Wu.  

    Which is why she and her group in the MIT Laboratory for Information and Decision Systems are devising machine learning-based methods to solve not just a single control problem or a single optimization problem, but families of control and optimization problems at scale. “In our case, we’re examining emerging transportation problems that people have spent decades trying to solve with classical approaches. It seems to me that we need a different approach.”

    Optimizing intersections

    Currently, Wu’s largest research endeavor is called Project Greenwave. There are many sectors that directly contribute to climate change, but transportation is responsible for the largest share of greenhouse gas emissions — 29 percent, of which 81 percent is due to land transportation. And while much of the conversation around mitigating environmental impacts related to mobility is focused on electric vehicles (EVs), electrification has its drawbacks. EV fleet turnover is time-consuming (“on the order of decades,” says Wu), and limited global access to the technology presents a significant barrier to widespread adoption.

    Wu’s research, on the other hand, addresses traffic control problems by leveraging deep reinforcement learning. Specifically, she is looking at traffic intersections — and for good reason. In the United States alone, there are more than 300,000 signalized intersections where vehicles must stop or slow down before re-accelerating. And every re-acceleration burns fossil fuels and contributes to greenhouse gas emissions.

    Highlighting the magnitude of the issue, Wu says, “We have done preliminary analysis indicating that up to 15 percent of land transportation CO2 is wasted through energy spent idling and re-accelerating at intersections.”
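
    Those percentages can be chained together in a rough back-of-the-envelope check (the 29, 81, and 15 percent figures are from the article; the arithmetic below is only illustrative):

```python
# Back-of-the-envelope: how much of total greenhouse gas emissions could
# intersection idling and re-acceleration account for?
# Figures are taken from the article and treated as rough point estimates.

transport_share = 0.29           # transportation's share of GHG emissions
land_share = 0.81                # land transportation's share within transportation
wasted_at_intersections = 0.15   # upper-bound share of land-transport CO2 wasted

land_transport_total = transport_share * land_share  # share of all emissions
intersection_waste_total = land_transport_total * wasted_at_intersections

print(f"Land transportation: {land_transport_total:.1%} of total emissions")
print(f"Intersection waste (upper bound): {intersection_waste_total:.1%} of total")
```

    Under these figures, intersection idling and re-acceleration would account for roughly 3.5 percent of total greenhouse gas emissions at the upper bound.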

    To date, she and her group have modeled 30,000 different intersections across 10 major metropolitan areas in the United States. That means 30,000 different configurations and roadway topologies (e.g., road grade or elevation), along with differing weather conditions and variations in travel demand and fuel mix. Each intersection, together with its corresponding scenarios, represents a unique multi-agent control problem.

    Wu and her team are devising techniques that can solve not just one, but a whole family of problems comprising tens of thousands of scenarios. Put simply, the idea is to coordinate the timing of vehicles so they arrive at intersections when traffic lights are green, thereby eliminating the start, stop, re-accelerate conundrum. Along the way, they are building an ecosystem of tools, datasets, and methods to enable roadway interventions and impact assessments of strategies to significantly reduce carbon-intense urban driving.
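
    The coordination idea can be illustrated with a toy speed-advisory calculation. This is a deliberately simplified sketch, not the team's method: the function, speed bounds, and signal timings below are hypothetical.

```python
# Toy "green wave" speed advisory: choose a speed that puts a vehicle's
# arrival at the intersection inside a green window, avoiding a stop and
# re-acceleration. Illustrative only; real controllers handle queues,
# many vehicles, and uncertainty.

def advisory_speed(distance_m, green_start_s, green_end_s,
                   v_min=8.0, v_max=15.0):
    """Return a speed (m/s) whose arrival time distance/v lands inside
    [green_start_s, green_end_s], or None if no speed in [v_min, v_max]
    works. Assumes green_start_s > 0."""
    fastest_needed = distance_m / green_end_s    # speed for latest arrival
    slowest_needed = distance_m / green_start_s  # speed for earliest arrival
    lo = max(v_min, fastest_needed)
    hi = min(v_max, slowest_needed)
    if lo > hi:
        return None          # no allowed speed hits the green window
    return (lo + hi) / 2.0   # midpoint of the feasible speed band

# Example: 300 m from a signal that is green between t = 20 s and t = 35 s.
v = advisory_speed(300.0, 20.0, 35.0)
```
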

    Their collaborator on the project is the Utah Department of Transportation, which Wu says has played an essential role, in part by sharing data and practical knowledge that she and her group otherwise would not have been able to access publicly.

    “I appreciate industry and public sector collaborations,” says Wu. “When it comes to important societal problems, one really needs grounding with practitioners. One needs to be able to hear the perspectives in the field. My interactions with practitioners expand my horizons and help ground my research. You never know when you’ll hear the perspective that is the key to the solution, or perhaps the key to understanding the problem.”

    Finding the best routes

    In a similar vein, she and her research group are tackling large coordination problems. For example, vehicle routing. “Every day, delivery trucks route more than a hundred thousand packages for the city of Boston alone,” says Wu. Accomplishing the task requires, among other things, figuring out which trucks to use, which packages to deliver, and the order in which to deliver them as efficiently as possible. If and when the trucks are electrified, they will need to be charged, adding another wrinkle to the process and further complicating route optimization.

    The vehicle routing problem, and therefore the scope of Wu’s work, extends beyond truck routing for package delivery. Ride-hailing cars may need to pick up objects as well as drop them off; and what if delivery is done by bicycle or drone? In partnership with Amazon, for example, Wu and her team addressed routing and path planning for hundreds of robots (up to 800) in their warehouses.

    Every variation requires custom heuristics that are expensive and time-consuming to develop. Again, this is really a family of problems — each one complicated, time-consuming, and currently unsolved by classical techniques — and they are all variations of a central routing problem. The curse of variety meets operations and logistics.
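
    As a concrete example of the kind of classical, hand-crafted heuristic that must be re-tailored for every variation, here is a minimal nearest-neighbor construction sketch. It is illustrative only; real delivery problems add vehicle capacities, time windows, and the other wrinkles described above.

```python
# Classical nearest-neighbor heuristic for a single-vehicle routing problem:
# from the depot, always visit the closest unvisited stop, then return.
# A simple baseline, not the hybrid learning approach described in the article.
import math

def nearest_neighbor_route(depot, stops):
    """Greedy route through all stops, starting and ending at the depot."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    route = [depot]
    remaining = list(stops)
    while remaining:
        nxt = min(remaining, key=lambda s: dist(route[-1], s))
        remaining.remove(nxt)
        route.append(nxt)
    route.append(depot)  # return to depot
    return route

# Four delivery stops around a depot at the origin (hypothetical coordinates).
route = nearest_neighbor_route((0, 0), [(2, 1), (5, 0), (1, 4), (6, 3)])
```

    Greedy construction like this is fast but myopic, which is why each problem variation has historically demanded its own specialized refinements.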

    By combining classical approaches with modern deep-learning methods, Wu is looking for a way to automatically identify heuristics that can effectively solve all of these vehicle routing problems. So far, her approach has proved successful.

    “We’ve contributed hybrid learning approaches that take existing solution methods for small problems and incorporate them into our learning framework to scale and accelerate that existing solver for large problems. And we’re able to do this in a way that can automatically identify heuristics for specialized variations of the vehicle routing problem.” The next step, says Wu, is applying a similar approach to multi-agent robotics problems in automated warehouses.

    Wu and her group are making big strides, in part due to their dedication to use-inspired basic research. Rather than applying known methods or science to a problem, they develop new methods, new science, to address problems. The methods she and her team employ are necessitated by societal problems with practical implications. The inspiration for the approach? None other than Louis Pasteur, whose style of inquiry gave its name to “Pasteur’s quadrant,” the term political scientist Donald Stokes later coined for use-inspired basic research. Anthrax was decimating the sheep population, and Pasteur wanted to better understand why and what could be done about it. The tools of the time could not solve the problem, so he invented a new field, microbiology, not out of curiosity but out of necessity.

  • Cutting urban carbon emissions by retrofitting buildings

    To support the worldwide struggle to reduce carbon emissions, many cities have publicly pledged to cut their carbon emissions in half by 2030, and some have promised to be carbon neutral by 2050. Buildings can be responsible for more than half of a municipality’s carbon emissions. Today, new buildings are typically designed in ways that minimize energy use and carbon emissions, so attention is focused on cleaning up existing buildings.

    A decade ago, leaders in some cities took the first step in that process: They quantified their problem. Based on data from their utilities on natural gas and electricity consumption and standard pollutant-emission rates, they calculated how much carbon came from their buildings. They then adopted policies to encourage retrofits, such as adding insulation, switching to double-glazed windows, or installing rooftop solar panels. But will those steps be enough to meet their pledges?
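
    That first quantification step can be sketched in a few lines. The emission factors below are illustrative assumptions, not official values; real inventories use utility-specific and grid-specific factors.

```python
# Sketch of the accounting described above: estimating a building's annual
# carbon emissions from metered utility consumption and standard
# pollutant-emission rates. Factor values are illustrative assumptions.

EMISSION_FACTORS = {
    "electricity_kg_per_kwh": 0.4,    # depends heavily on the local grid mix
    "natural_gas_kg_per_therm": 5.3,  # roughly typical for gas combustion
}

def building_emissions_kg(electricity_kwh, natural_gas_therms,
                          factors=EMISSION_FACTORS):
    """Annual CO2-equivalent emissions (kg) from metered consumption."""
    return (electricity_kwh * factors["electricity_kg_per_kwh"]
            + natural_gas_therms * factors["natural_gas_kg_per_therm"])

# A hypothetical building using 120,000 kWh of electricity and
# 8,000 therms of natural gas per year.
total = building_emissions_kg(120_000, 8_000)
```
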

    “In nearly all cases, cities have no clear plan for how they’re going to reach their goal,” says Christoph Reinhart, a professor in the Department of Architecture and director of the Building Technology Program. “That’s where our work comes in. We aim to help them perform analyses so they can say, ‘If we, as a community, do A, B, and C to buildings of a certain type within our jurisdiction, then we are going to get there.’”

    To support those analyses, Reinhart and a team in the MIT Sustainable Design Lab (SDL) — PhD candidate Zachary M. Berzolla SM ’21; former doctoral student Yu Qian Ang PhD ’22, now a research collaborator at the SDL; and former postdoc Samuel Letellier-Duchesne, now a senior building performance analyst at the international building engineering and consulting firm Introba — launched a publicly accessible website providing a series of simulation tools and a process for using them to determine the impacts of planned steps on a specific building stock. Says Reinhart: “The takeaway can be a clear technology pathway — a combination of building upgrades, renewable energy deployments, and other measures that will enable a community to reach its carbon-reduction goals for their built environment.”

    Analyses performed in collaboration with policymakers from selected cities around the world yielded insights demonstrating that reaching current goals will require more effort than city representatives and — in a few cases — even the research team had anticipated.

    Exploring carbon-reduction pathways

    The researchers’ approach builds on a physics-based “building energy model,” or BEM, akin to those that architects use to design high-performance green buildings. In 2013, Reinhart and his team developed a method of extending that concept to analyze a cluster of buildings. Based on publicly available geographic information system (GIS) data, including each building’s type, footprint, and year of construction, the method defines the neighborhood, including trees, parks, and so on, and then uses meteorological data to model how the buildings interact, the airflows among them, and their energy use. The result is an “urban building energy model,” or UBEM, for a neighborhood or a whole city.

    The website developed by the MIT team enables neighborhoods and cities to develop their own UBEM and to use it to calculate their current building energy use and resulting carbon emissions, and then how those outcomes would change assuming different retrofit programs or other measures being implemented or considered. “The website — UBEM.io — provides step-by-step instructions and all the simulation tools that a team will need to perform an analysis,” says Reinhart.

    The website starts by describing three roles required to perform an analysis: a local sustainability champion who is familiar with the municipality’s carbon-reduction efforts; a GIS manager who has access to the municipality’s urban datasets and maintains a digital model of the built environment; and an energy modeler — typically a hired consultant — who has a background in green building consulting and individual building energy modeling.

    The team begins by defining “shallow” and “deep” building retrofit scenarios. To explain, Reinhart offers some examples: “‘Shallow’ refers to things that just happen, like when you replace your old, failing appliances with new, energy-efficient ones, or you install LED light bulbs and weatherstripping everywhere,” he says. “‘Deep’ adds to that list things you might do only every 20 years, such as ripping out walls and putting in insulation or replacing your gas furnace with an electric heat pump.”

    Once those scenarios are defined, the GIS manager uploads to UBEM.io a dataset of information about the city’s buildings, including their locations and attributes such as geometry, height, age, and use (e.g., commercial, retail, residential). The energy modeler then builds a UBEM to calculate the energy use and carbon emissions of the existing building stock. Once that baseline is established, the energy modeler can calculate how specific retrofit measures will change the outcomes.
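
    A minimal sketch of the scenario bookkeeping involved, assuming hypothetical building archetypes and savings fractions (UBEM.io's actual physics-based simulation is far more detailed):

```python
# Toy building-stock aggregation: compute baseline energy use, then apply
# per-use-type savings fractions for "shallow" and "deep" retrofit scenarios.
# Buildings and savings fractions are hypothetical, for illustration only.

BUILDINGS = [
    {"id": "B1", "use": "residential", "floor_m2": 900,  "kwh_per_m2": 180},
    {"id": "B2", "use": "commercial",  "floor_m2": 2400, "kwh_per_m2": 250},
    {"id": "B3", "use": "residential", "floor_m2": 1100, "kwh_per_m2": 210},
]

# Fractional energy savings by use type under each scenario (assumed values).
SCENARIOS = {
    "baseline": {"residential": 0.00, "commercial": 0.00},
    "shallow":  {"residential": 0.10, "commercial": 0.08},
    "deep":     {"residential": 0.35, "commercial": 0.30},
}

def stock_energy_kwh(buildings, scenario):
    """Total annual energy use of the stock under a given retrofit scenario."""
    savings = SCENARIOS[scenario]
    return sum(b["floor_m2"] * b["kwh_per_m2"] * (1 - savings[b["use"]])
               for b in buildings)

baseline = stock_energy_kwh(BUILDINGS, "baseline")
deep = stock_energy_kwh(BUILDINGS, "deep")
```
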

    Workshop to test-drive the method

    Two years ago, the MIT team set up a three-day workshop to test the website with sample users. Participants included policymakers from eight cities and municipalities around the world: namely, Braga (Portugal), Cairo (Egypt), Dublin (Ireland), Florianopolis (Brazil), Kiel (Germany), Middlebury (Vermont, United States), Montreal (Canada), and Singapore. Taken together, the cities represent a wide range of climates, socioeconomic demographics, cultures, governing structures, and sizes.

    Working with the MIT team, the participants presented their goals, defined shallow- and deep-retrofit scenarios for their city, and selected a limited but representative area for analysis — an approach that would speed up analyses of different options while also generating results valid for the city as a whole.

    They then performed analyses to quantify the impacts of their retrofit scenarios. Finally, they learned how best to present their findings — a critical part of the exercise. “When you do this analysis and bring it back to the people, you can say, ‘This is our homework over the next 30 years. If we do this, we’re going to get there,’” says Reinhart. “That makes you part of the community, so it’s a joint goal.”

    Sample results

    After the close of the workshop, Reinhart and his team confirmed their findings for each city and then added one more factor to the analyses: the state of the city’s electric grid. Several cities in the study had pledged to make their grid carbon-neutral by 2050. Including the grid in the analysis was therefore critical: If a building becomes all-electric and purchases its electricity from a carbon-free grid, then that building will be carbon neutral — even with no on-site energy-saving retrofits.

    The final analysis for each city therefore calculated the total kilograms of carbon dioxide equivalent emitted per square meter of floor space under six scenarios: the baseline; shallow retrofit only; shallow retrofit plus a clean electricity grid; deep retrofit only; deep retrofit plus rooftop photovoltaic solar panels; and deep retrofit plus a clean electricity grid. (Note that the “clean electricity grid” scenario is based on the area’s most ambitious decarbonization target for its power grid.)

    The following paragraphs provide highlights of the analyses for three of the eight cities. Included are the city’s setting, emission-reduction goals, current and proposed measures, and calculations of how implementation of those measures would affect their energy use and carbon emissions.

    Singapore

    Singapore is generally hot and humid, and its building energy use is largely in the form of electricity for cooling. The city is dominated by high-rise buildings, so there’s not much space for rooftop solar installations to generate the needed electricity. Therefore, plans for decarbonizing the current building stock must involve retrofits. The shallow-retrofit scenario focuses on installing energy-efficient lighting and appliances. To those steps, the deep-retrofit scenario adds adopting a district cooling system. Singapore’s stated goals are to cut baseline carbon emissions by about a third by 2030 and to cut them in half by 2050.

    The analysis shows that, with just the shallow retrofits, Singapore won’t achieve its 2030 goal. But with the deep retrofits, it should come close. Notably, decarbonizing the electric grid would enable Singapore to meet and substantially exceed its 2050 target assuming either retrofit scenario.

    Dublin

    Dublin has a mild climate with relatively comfortable summers but cold, humid winters. As a result, the city’s energy use is dominated by fossil fuels, in particular, natural gas for space heating and domestic hot water. The city presented just one target — a 40 percent reduction by 2030.

    Dublin has many neighborhoods made up of Georgian row houses, and, at the time of the workshop, the city already had a program in place encouraging groups of owners to insulate their walls. The shallow-retrofit scenario therefore focuses on weatherization upgrades (adding weatherstripping to windows and doors, insulating crawlspaces, and so on). To that list, the deep-retrofit scenario adds insulating walls and installing upgraded windows. The participants didn’t include electric heat pumps, as the city was then assessing the feasibility of expanding the existing district heating system.

    Results of the analyses show that implementing the shallow-retrofit scenario won’t enable Dublin to meet its 2030 target. But the deep-retrofit scenario will. However, like Singapore, Dublin could make major gains by decarbonizing its electric grid. The analysis shows that a decarbonized grid — with or without the addition of rooftop solar panels where possible — could more than halve the carbon emissions that remain in the deep-retrofit scenario. Indeed, a decarbonized grid plus electrification of the heating system by incorporating heat pumps could enable Dublin to meet a future net-zero target.

    Middlebury

    Middlebury, Vermont, has warm, wet summers and frigid winters. Like Dublin, its energy demand is dominated by natural gas for heating. But unlike Dublin, it already has a largely decarbonized electric grid with a high penetration of renewables.

    For the analysis, the Middlebury team chose to focus on an aging residential neighborhood similar to many that surround the city core. The shallow-retrofit scenario calls for installing heat pumps for space heating, and the deep-retrofit scenario adds improvements in building envelopes (the façade, roof, and windows). The town’s targets are a 40 percent reduction from the baseline by 2030 and net-zero carbon by 2050.

    Results of the analyses showed that implementing the shallow-retrofit scenario won’t achieve the 2030 target. The deep-retrofit scenario would get the city to the 2030 target but not to the 2050 target. Indeed, even with the deep retrofits, fossil fuel use remains high. The explanation? While both retrofit scenarios call for installing heat pumps for space heating, the city would continue to use natural gas to heat its hot water.

    Lessons learned

    For several policymakers, seeing the results of their analyses was a wake-up call. They learned that the strategies they had planned might not be sufficient to meet their stated goals — an outcome that could prove publicly embarrassing for them in the future.

    Like the policymakers, the researchers learned from the experience. Reinhart notes three main takeaways.

    First, he and his team were surprised to find how much of a building’s energy use and carbon emissions can be traced to domestic hot water. With Middlebury, for example, even switching from natural gas to heat pumps for space heating didn’t yield the expected effect: On the bar graphs generated by their analyses, the gray bars indicating carbon from fossil fuel use remained. As Reinhart recalls, “I kept saying, ‘What’s all this gray?’” While the policymakers talked about using heat pumps, they were still going to use natural gas to heat their hot water. “It’s just stunning that hot water is such a big-ticket item. It’s huge,” says Reinhart.

    Second, the results demonstrate the importance of including the state of the local electric grid in this type of analysis. “Looking at the results, it’s clear that if we want to have a successful energy transition, the building sector and the electric grid sector both have to do their homework,” notes Reinhart. Moreover, in many cases, reaching carbon neutrality by 2050 would require not only a carbon-free grid but also all-electric buildings.

    Third, Reinhart was struck by how different the bar graphs presenting results for the eight cities look. “This really celebrates the uniqueness of different parts of the world,” he says. “The physics used in the analysis is the same everywhere, but differences in the climate, the building stock, construction practices, electric grids, and other factors make the consequences of making the same change vary widely.”

    In addition, says Reinhart, “there are sometimes deeply ingrained conflicts of interest and cultural norms, which is why you cannot just say everybody should do this and do this.” For instance, in one case, the city owned both the utility and the natural gas it burned. As a result, the policymakers didn’t consider putting in heat pumps because “the natural gas was a significant source of municipal income, and they didn’t want to give that up,” explains Reinhart.

    Finally, the analyses quantified two other important measures: energy use and “peak load,” which is the maximum electricity demanded from the grid over a specific time period. Reinhart says that energy use “is probably mostly a plausibility check. Does this make sense?” And peak load is important because the utilities need to keep a stable grid.

    Middlebury’s analysis provides an interesting look at how certain measures could influence peak electricity demand. There, the introduction of electric heat pumps for space heating more than doubles the peak demand from buildings, suggesting that substantial additional capacity would have to be added to the grid in that region. But when heat pumps are combined with other retrofitting measures, the peak demand drops to levels lower than the starting baseline.

    The aftermath: An update

    Reinhart stresses that the specific results from the workshop provide just a snapshot in time; that is, where the cities were at the time of the workshop. “This is not the fate of the city,” he says. “If we were to do the same exercise today, we’d no doubt see a change in thinking, and the outcomes would be different.”

    For example, heat pumps are now familiar technology and have demonstrated their ability to handle even bitterly cold climates. And in some regions, they’ve become economically attractive, as the war in Ukraine has made natural gas both scarce and expensive. Also, there’s now awareness of the need to deal with hot water production.

    Reinhart notes that performing the analyses at the workshop did have the intended impact: It brought about change. Two years after the project had ended, most of the cities reported that they had implemented new policy measures or had expanded their analysis across their entire building stock. “That’s exactly what we want,” comments Reinhart. “This is not an academic exercise. It’s meant to change what people focus on and what they do.”

    Designing policies with socioeconomics in mind

    Reinhart notes a key limitation of the UBEM.io approach: It looks only at technical feasibility. But will the building owners be willing and able to make the energy-saving retrofits? Data show that — even with today’s incentive programs and subsidies — current adoption rates are only about 1 percent. “That’s way too low to enable a city to achieve its emission-reduction goals in 30 years,” says Reinhart. “We need to take into account the socioeconomic realities of the residents to design policies that are both effective and equitable.”

    To that end, the MIT team extended their UBEM.io approach to create a socio-techno-economic analysis framework that can predict the rate of retrofit adoption throughout a city. Based on census data, the framework creates a UBEM that includes demographics for the specific types of buildings in a city. Accounting for the cost of making a specific retrofit plus financial benefits from policy incentives and future energy savings, the model determines the economic viability of the retrofit package for representative households.
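
    The viability test described above can be sketched as a simple net-present-value comparison. All dollar amounts, the discount rate, and the decision rule are illustrative assumptions, not the team's model:

```python
# Toy economic-viability test: a retrofit is considered viable for a
# household if the incentive plus discounted energy savings over the
# measure's lifetime cover its upfront cost. All numbers are hypothetical.

def retrofit_is_viable(upfront_cost, incentive, annual_savings,
                       lifetime_years, discount_rate=0.05):
    """True if incentive + NPV of energy savings covers the upfront cost."""
    npv_savings = sum(annual_savings / (1 + discount_rate) ** t
                      for t in range(1, lifetime_years + 1))
    return incentive + npv_savings >= upfront_cost

# Hypothetical heat-pump package: $18,000 cost, $6,000 incentive,
# $900/year in energy savings over a 20-year lifetime.
viable = retrofit_is_viable(18_000, 6_000, 900, 20)
```

    With these assumed numbers, the incentive plus discounted savings fall just short of the upfront cost, echoing the article's finding that current incentives are often insufficient to prompt action.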

    Sample analyses for two Boston neighborhoods suggest that high-income households are largely ineligible for need-based incentives or the incentives are insufficient to prompt action. Lower-income households are eligible and could benefit financially over time, but they don’t act, perhaps due to limited access to information, a lack of time or capital, or a variety of other reasons.

    Reinhart notes that their work thus far “is mainly looking at technical feasibility. Next steps are to better understand occupants’ willingness to pay, and then to determine what set of federal and local incentive programs will trigger households across the demographic spectrum to retrofit their apartments and houses, helping the worldwide effort to reduce carbon emissions.”

    This work was supported by Shell through the MIT Energy Initiative. Zachary Berzolla was supported by the U.S. National Science Foundation Graduate Research Fellowship. Samuel Letellier-Duchesne was supported by the postdoctoral fellowship of the Natural Sciences and Engineering Research Council of Canada.

    This article appears in the Spring 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Tackling the MIT campus’s top energy consumers, building by building

    When staff in MIT’s Department of Facilities visualized energy use and carbon-associated emissions by campus buildings, Building 46 always stood out because of its energy intensity: it accounted for 8 percent of MIT’s total campus energy use. This high energy draw was not surprising, as the building is home to the Brain and Cognitive Sciences Complex and a large amount of lab space, but it also made the building a perfect candidate for an energy performance audit to seek out potential energy-saving opportunities.

    This audit revealed that several energy efficiency updates to the building’s mechanical systems infrastructure, including optimization of room-by-room ventilation rates, could yield an estimated 35 percent reduction in energy use, which would in turn lower MIT’s total greenhouse gas emissions by an estimated 2 percent, driving toward the Institute’s 2026 goal of net-zero emissions and 2050 goal of eliminating direct campus emissions.
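
    The headline figures are consistent with simple arithmetic (the percentages are from the article; the closing caveat about fuel mix is an assumption):

```python
# Rough sanity check of the Building 46 figures reported above.
building_share_of_campus_energy = 0.08  # Building 46's share of campus energy
projected_energy_reduction = 0.35       # estimated savings from the retrofit

campus_energy_saved = building_share_of_campus_energy * projected_energy_reduction
print(f"Campus-wide energy saved: {campus_energy_saved:.1%}")
# About 2.8% of campus energy use. The article's ~2% emissions estimate is in
# the same range; the exact figure depends on which fuels the saved energy
# displaces.
```
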

    Building energy efficiency projects are not new for MIT. Since 2010, MIT has partnered with utility company Eversource through the Efficiency Forward program, investing in more than 300 energy conservation projects to date and lowering campus energy consumption by a calculated total of approximately 70 million kilowatt-hours and 4.2 million therms. But at 418,000 gross square feet, Building 46 is the first energy efficiency project of its size on the campus.

    “We’ve never tackled a whole building like this — it’s the first capital project that is technically an energy project,” explains Siobhan Carr, energy efficiency program manager, who was part of the team overseeing the energy audit and lab ventilation performance assessment in the building. “That gives you an idea of the magnitude and complexity of this.”

    The project started with the full building energy assessment and lab ventilation risk audit. “We had a team go through every corner of the building and look at every possible opportunity to save energy,” explains Jessica Parks, senior project manager for systems performance and turnover in campus construction. “One of the biggest issues we saw was that there’s a lot of dry lab spaces which are basically offices, but they’re all getting the same ventilation as if they were a high-intensity lab.” Higher ventilation and more frequent air exchange rates draw more energy. By optimizing for the required ventilation rates, there was an opportunity to save energy in nearly every space in the building.

    In addition to the optimized ventilation, the project team will convert fume hoods from constant volume to variable volume and install equipment to help the building systems run more efficiently. The team also identified opportunities to work with labs to implement programs such as fume hood hibernation and unoccupied setbacks for temperature and ventilation. As different spaces in the building have varying needs, the energy retrofit will touch all 1,254 spaces in the building — one by one — to implement the different energy measures to reach that estimated 35 percent reduction in energy use.

    Although time-consuming and complex, this room-by-room approach has a big benefit: it has allowed research in the building to continue largely uninterrupted. With a few exceptions, the occupants of Building 46, which include the Department of Brain and Cognitive Sciences, the McGovern Institute for Brain Research, and the Picower Institute for Learning and Memory, have remained in place for the duration of the project. Partners in the MIT Environment, Health and Safety Office have been instrumental in balancing the renovations with keeping the building operational during the optimization efforts, and are one of several teams across MIT contributing to building efficiency efforts.

    The completion date of the building efficiency project is set for 2024, but Carr says that some of the impact of this ongoing work may soon be seen. “We should start to see savings as we move through the building, and we expect to fully realize all of our projected savings a year after completion,” she says, noting that the length of time is required for a year-over-year perspective to see the full reduction in energy use.

    The impact of the project goes far beyond the footprint of Building 46: it has offered insights and spurred actions for future projects, including buildings 76 and 68, the second- and third-largest energy users on campus. Both buildings recently underwent their own energy audits and lab ventilation performance assessments, and the energy efficiency team is now crafting a plan for full-building approaches much like Building 46’s. “To date, 46 has presented many learning opportunities, such as how to touch every space in a building while research continues, as well as how to overcome challenges encountered when working on existing systems,” explains Parks. “The good news is that we have developed solutions for those challenges and the teams have been proactively implementing those lessons in our other projects.”

    Communication has proven to be another key for these large projects where occupants see the work happening and often play a role in answering questions about their unique space. “People are really engaged, they ask questions about the work, and we ask them about the space they’re in every day,” says Parks. “The Building 46 occupants have been wonderful partners as we worked in all of their spaces, which is paving the way for a successful project.”

    The release of Fast Forward in 2021 has also made communications easier, notes Carr, who says the plan helps to frame these projects as part of the big picture — not just a construction interruption. “Fast Forward has brought a visibility into what we’re doing within [MIT] Facilities on these buildings,” she says. “It brings more eyes and ears, and people understand that these projects are happening throughout campus and not just in their own space — we’re all working to reduce energy and to reduce greenhouse gas across campus.”

    The Energy Efficiency team will continue to apply that big-picture approach as it assesses ongoing building efficiency projects across campus, aiming for a 10 to 15 percent reduction in energy use and corresponding emissions over the next several years.

  • in

    Finding “hot spots” where compounding environmental and economic risks converge

    A computational tool developed by researchers at the MIT Joint Program on the Science and Policy of Global Change pinpoints specific counties within the United States that are particularly vulnerable to economic distress resulting from a transition from fossil fuels to low-carbon energy sources. By combining county-level data on employment in fossil fuel (oil, natural gas, and coal) industries with data on populations below the poverty level, the tool identifies locations with high risks for transition-driven economic hardship. It turns out that many of these high-risk counties are in the south-central U.S., with a heavy concentration in the lower portions of the Mississippi River.

    The computational tool, which the researchers call the System for the Triage of Risks from Environmental and Socio-economic Stressors (STRESS) platform, almost instantly displays these risk combinations on an easy-to-read visual map, revealing those counties that stand to gain the most from targeted green jobs retraining programs.  

    Drawing on data that characterize land, water, and energy systems; biodiversity; demographics; environmental equity; and transportation networks, the STRESS platform enables users to assess multiple, co-evolving, compounding hazards within a U.S. geographical region from the national to the county level. Because of its comprehensiveness and precision, this screening-level visualization tool can pinpoint risk “hot spots” that can be subsequently investigated in greater detail. Decision-makers can then plan targeted interventions to boost resilience to location-specific physical and economic risks.
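The core triage idea — overlaying normalized county-level indicators to flag compounding risk — can be illustrated with a minimal sketch. This is not the actual STRESS platform (which combines over 100 metrics); the scoring rule, threshold, and county data below are all hypothetical.

```python
# Illustrative sketch (not the actual STRESS platform): flag county-level
# "hot spots" where two normalized risk indicators are jointly high.
# County names and values below are hypothetical.

def normalize(values):
    """Min-max scale a dict of county -> raw value onto [0, 1]."""
    lo, hi = min(values.values()), max(values.values())
    return {k: (v - lo) / (hi - lo) for k, v in values.items()}

def hot_spots(indicator_a, indicator_b, threshold=0.7):
    """Return counties whose averaged normalized risk exceeds threshold."""
    a, b = normalize(indicator_a), normalize(indicator_b)
    combined = {k: (a[k] + b[k]) / 2 for k in a}
    return sorted(k for k, v in combined.items() if v >= threshold)

# Hypothetical inputs: share of jobs in fossil-fuel industries and poverty rate.
fossil_jobs = {"County A": 0.02, "County B": 0.18, "County C": 0.25, "County D": 0.05}
poverty     = {"County A": 0.10, "County B": 0.22, "County C": 0.28, "County D": 0.12}

print(hot_spots(fossil_jobs, poverty))  # counties with compounding risk
```

In this toy example only the county scoring high on both indicators is flagged, which is the screening behavior the platform performs at far greater scale and resolution.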

    The platform and its applications are highlighted in a new study in the journal Frontiers in Climate.

    “As risks to natural and managed resources — and to the economies that depend upon them — become more complex, interdependent, and compounding amid rapid environmental and societal changes, they require more and more human and computational resources to understand and act upon,” says MIT Joint Program Deputy Director C. Adam Schlosser, the lead author of the study. “The STRESS platform provides decision-makers with an efficient way to combine and analyze data on those risks that matter most to them, identify ‘hot spots’ of compounding risk, and design interventions to minimize that risk.”

    In one demonstration of the STRESS platform’s capabilities, the study shows that national and global actions to reduce greenhouse gas emissions could simultaneously reduce risks to land, water, and air quality in the upper Mississippi River basin while increasing economic risks in the lower basin, where poverty and unemployment are already disproportionate. In another demonstration, the platform finds concerning “hot spots” where flood risk, poverty, and nonwhite populations coincide.

    The risk triage platform is based on an emerging discipline called multi-sector dynamics (MSD), which seeks to understand and model compounding risks and potential tipping points across interconnected natural and human systems. Tipping points occur when these systems can no longer sustain multiple, co-evolving stresses, such as extreme events, population growth, land degradation, drinkable water shortages, air pollution, aging infrastructure, and increased human demands. MSD researchers use observations and computer models to identify key precursory indicators of such tipping points, providing decision-makers with critical information that can be applied to mitigate risks and boost resilience in natural and managed resources. With funding from the U.S. Department of Energy, the MIT Joint Program has since 2018 been developing MSD expertise and modeling tools and using them to explore compounding risks and potential tipping points in selected regions of the United States.

    Current STRESS platform data includes more than 100 risk metrics at the county-level scale, but data collection is ongoing. MIT Joint Program researchers are continuing to develop the STRESS platform as an “open-science tool” that welcomes input from academics, researchers, industry, and the general public.

  • in

    Inaugural J-WAFS Grand Challenge aims to develop enhanced crop variants and move them from lab to land

    According to MIT’s charter, established in 1861, part of the Institute’s mission is to advance the “development and practical application of science in connection with arts, agriculture, manufactures, and commerce.” Today, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) is one of the driving forces behind water and food-related research on campus, much of which relates to agriculture. In 2022, J-WAFS established the Water and Food Grand Challenge Grant to inspire MIT researchers to work toward a water-secure and food-secure future for our changing planet. Not unlike MIT’s Climate Grand Challenges, the J-WAFS Grand Challenge seeks to leverage multiple areas of expertise, programs, and Institute resources. The initial call for statements of interest returned 23 letters from MIT researchers spanning 18 departments, labs, and centers. J-WAFS hosted workshops for the proposers to present and discuss their initial ideas. These were winnowed down to a smaller set of invited concept papers, followed by the final proposal stage.

    Today, J-WAFS is delighted to report that the inaugural J-WAFS Grand Challenge Grant has been awarded to a team of researchers led by Professor Matt Shoulders and research scientist Robert Wilson of the Department of Chemistry. A panel of expert, external reviewers highly endorsed their proposal, which tackles a longstanding problem in crop biology — how to make photosynthesis more efficient. The team will receive $1.5 million over three years to facilitate a multistage research project that combines cutting-edge innovations in synthetic and computational biology. If successful, this project could create major benefits for agriculture and food systems worldwide.

    “Food systems are a major source of global greenhouse gas emissions, and they are also increasingly vulnerable to the impacts of climate change. That’s why when we talk about climate change, we have to talk about food systems, and vice versa,” says Maria T. Zuber, MIT’s vice president for research. “J-WAFS is central to MIT’s efforts to address the interlocking challenges of climate, water, and food. This new grant program aims to catalyze innovative projects that will have real and meaningful impacts on water and food. I congratulate Professor Shoulders and the rest of the research team on being the inaugural recipients of this grant.”

    Shoulders will work with Bryan Bryson, associate professor of biological engineering, as well as Bin Zhang, associate professor of chemistry, and Mary Gehring, a professor in the Department of Biology and the Whitehead Institute for Biomedical Research. Robert Wilson from the Shoulders lab will be coordinating the research effort. The team at MIT will work with outside collaborators Spencer Whitney, a professor from the Australian National University, and Ahmed Badran, an assistant professor at the Scripps Research Institute. A milestone-based collaboration will also take place with Stephen Long, a professor from the University of Illinois at Urbana-Champaign. The group consists of experts in continuous directed evolution, machine learning, molecular dynamics simulations, translational plant biochemistry, and field trials.

    “This project seeks to fundamentally improve the RuBisCO enzyme that plants use to convert carbon dioxide into the energy-rich molecules that constitute our food,” says J-WAFS Director John H. Lienhard V. “This difficult problem is a true grand challenge, calling for extensive resources. With J-WAFS’ support, this long-sought goal may finally be achieved through MIT’s leading-edge research,” he adds.

    RuBisCO: No, it’s not a new breakfast cereal; it just might be the key to an agricultural revolution

    A growing global population, the effects of climate change, and social and political conflicts like the war in Ukraine are all threatening food supplies, particularly grain crops. Current projections estimate that crop production must increase by at least 50 percent over the next 30 years to meet food demands. One key barrier to increased crop yields is a photosynthetic enzyme called Ribulose-1,5-Bisphosphate Carboxylase/Oxygenase (RuBisCO). During photosynthesis, crops use energy gathered from light to draw carbon dioxide (CO2) from the atmosphere and transform it into sugars and cellulose for growth, a process known as carbon fixation. RuBisCO is essential for capturing the CO2 from the air to initiate conversion of CO2 into energy-rich molecules like glucose. This reaction occurs during the second stage of photosynthesis, also known as the Calvin cycle. Without RuBisCO, the chemical reactions that account for virtually all carbon acquisition in life could not occur.

    Unfortunately, RuBisCO has biochemical shortcomings. Notably, the enzyme acts slowly. Many other enzymes can process a thousand molecules per second, but RuBisCO in chloroplasts fixes fewer than six carbon dioxide molecules per second, often limiting the rate of plant photosynthesis. Another problem is that oxygen (O2) molecules and carbon dioxide molecules are relatively similar in shape and chemical properties, and RuBisCO is unable to fully discriminate between the two. The inadvertent fixation of oxygen by RuBisCO leads to energy and carbon loss. What’s more, at higher temperatures RuBisCO reacts even more frequently with oxygen, which will contribute to decreased photosynthetic efficiency in many staple crops as our climate warms.

    The scientific consensus is that genetic engineering and synthetic biology approaches could revolutionize photosynthesis and offer protection against crop losses. To date, crop RuBisCO engineering has been impaired by technological obstacles that have limited any success in significantly enhancing crop production. Excitingly, genetic engineering and synthetic biology tools are now at a point where they can be applied and tested with the aim of creating crops with new or improved biological pathways for producing more food for the growing population.

    An epic plan for fighting food insecurity

    The 2023 J-WAFS Grand Challenge project will use state-of-the-art, transformative protein engineering techniques drawn from biomedicine to improve the biochemistry of photosynthesis, specifically focusing on RuBisCO. Shoulders and his team are planning to build what they call the Enhanced Photosynthesis in Crops (EPiC) platform. The project will evolve and design better crop RuBisCO in the laboratory, followed by validation of the improved enzymes in plants, ultimately resulting in the deployment of enhanced RuBisCO in field trials to evaluate the impact on crop yield. 

    Several recent developments make high-throughput engineering of crop RuBisCO possible. RuBisCO requires a complex chaperone network for proper assembly and function in plants. Chaperones are like helpers that guide proteins during their maturation process, shielding them from aggregation while coordinating their correct assembly. Wilson and his collaborators previously unlocked the ability to recombinantly produce plant RuBisCO outside of plant chloroplasts by reconstructing this chaperone network in Escherichia coli (E. coli). Whitney has now established that the RuBisCO enzymes from a range of agriculturally relevant crops, including potato, carrot, strawberry, and tobacco, can also be expressed using this technology. Whitney and Wilson have further developed a range of RuBisCO-dependent E. coli screens that can identify improved RuBisCO from complex gene libraries. Moreover, Shoulders and his lab have developed sophisticated in vivo mutagenesis technologies that enable efficient continuous directed evolution campaigns. Continuous directed evolution refers to a protein engineering process that can accelerate the steps of natural evolution simultaneously in an uninterrupted cycle in the lab, allowing for rapid testing of protein sequences. While Shoulders and Badran both have prior experience with cutting-edge directed evolution platforms, this will be the first time directed evolution is applied to RuBisCO from plants.

    Artificial intelligence is changing the way enzyme engineering is undertaken by researchers. Principal investigators Zhang and Bryson will leverage modern computational methods to simulate the dynamics of RuBisCO structure and explore its evolutionary landscape. Specifically, Zhang will use molecular dynamics simulations to simulate and monitor the conformational dynamics of the atoms in a protein and its programmed environment over time. This approach will help the team evaluate the effect of mutations and new chemical functionalities on the properties of RuBisCO. Bryson will employ artificial intelligence and machine learning to search the RuBisCO activity landscape for optimal sequences. The computational and biological arms of the EPiC platform will work together to both validate and inform each other’s approaches to accelerate the overall engineering effort.

    Shoulders and the group will deploy their designed enzymes in tobacco plants to evaluate their effects on growth and yield relative to natural RuBisCO. Gehring, a plant biologist, will assist with screening improved RuBisCO variants using the tobacco variety Nicotiana benthamiana, where transient expression can be deployed. Transient expression is a speedy approach to test whether novel engineered RuBisCO variants can be correctly synthesized in leaf chloroplasts. Variants that pass this quality-control checkpoint at MIT will be passed to the Whitney Lab at the Australian National University for stable transformation into Nicotiana tabacum (tobacco), enabling robust measurements of photosynthetic improvement. In a final step, Professor Long at the University of Illinois at Urbana-Champaign will perform field trials of the most promising variants.

    Even small improvements could have a big impact

    A common criticism of efforts to improve RuBisCO is that natural evolution has not already identified a better enzyme, possibly implying that none will be found. Traditional views have speculated a catalytic trade-off between RuBisCO’s specificity factor for CO2 / O2 versus its CO2 fixation efficiency, leading to the belief that specificity factor improvements might be offset by even slower carbon fixation or vice versa. This trade-off has been suggested to explain why natural evolution has been slow to achieve a better RuBisCO. But Shoulders and the team are convinced that the EPiC platform can unlock significant overall improvements to plant RuBisCO. This view is supported by the fact that Wilson and Whitney have previously used directed evolution to improve CO2 fixation efficiency by 50 percent in RuBisCO from cyanobacteria (the ancient progenitors of plant chloroplasts) while simultaneously increasing the specificity factor. 

    The EPiC researchers anticipate that their initial variants could yield 20 percent increases in RuBisCO’s specificity factor without impairing other aspects of catalysis. More sophisticated variants could lift RuBisCO out of its evolutionary trap and display attributes not currently observed in nature. “If we achieve anywhere close to such an improvement and it translates to crops, the results could help transform agriculture,” Shoulders says. “If our accomplishments are more modest, it will still recruit massive new investments to this essential field.”

    Successful engineering of RuBisCO would be a scientific feat of its own and ignite renewed enthusiasm for improving plant CO2 fixation. Combined with other advances in photosynthetic engineering, such as improved light usage, a new green revolution in agriculture could be achieved. Long-term impacts of the technology’s success will be measured in improvements to crop yield and grain availability, as well as resilience against yield losses under higher field temperatures. Moreover, improved land productivity together with policy initiatives would assist in reducing the environmental footprint of agriculture. With more “crop per drop,” reductions in water consumption from agriculture would be a major boost to sustainable farming practices.

    “Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists and field trial experts, yielding a robust feedback loop for enzyme engineering,” Shoulders adds. “Together, this team will be able to make a concerted effort using the most modern, state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.”

  • in

    Study: Carbon-neutral pavements are possible by 2050, but rapid policy and industry action are needed

    Almost 2.8 million lane-miles (about 4.6 million lane-kilometers) of roads in the United States are paved.

    Roads and streets form the backbone of our built environment. They take us to work or school, take goods to their destinations, and much more.

    However, a new study by MIT Concrete Sustainability Hub (CSHub) researchers shows that the annual greenhouse gas (GHG) emissions of all construction materials used in the U.S. pavement network are 11.9 to 13.3 megatons. This is equivalent to the emissions of a gasoline-powered passenger vehicle driving about 30 billion miles in a year.
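The vehicle-miles equivalence quoted above can be sanity-checked with back-of-envelope arithmetic, assuming a typical gasoline passenger car emits roughly 400 grams of CO2 per mile (an assumed figure, close to the EPA's standard estimate; the study's own conversion may differ slightly).

```python
# Back-of-envelope check of the equivalence stated above. The per-mile
# emission factor is an assumption (~400 g CO2/mile for a typical
# gasoline passenger car), not a figure from the study itself.
G_PER_MILE = 400
annual_emissions_mt = (11.9, 13.3)   # megatons CO2e, from the study

for mt in annual_emissions_mt:
    grams = mt * 1e6 * 1e6           # megatons -> metric tons -> grams
    miles = grams / G_PER_MILE
    print(f"{mt} Mt ~ {miles / 1e9:.0f} billion vehicle-miles")
```

Both ends of the study's range land near 30 billion vehicle-miles, consistent with the comparison in the text.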

    As roads are built, repaved, and expanded, new approaches and thoughtful material choices are necessary to dampen their carbon footprint. 

    The CSHub researchers found that, by 2050, mixtures for pavements can be made carbon-neutral if industry and governmental actors help to apply a range of solutions — like carbon capture — to reduce, avoid, and neutralize embodied impacts. (A neutralization solution is any compensation mechanism in a product’s value chain that permanently removes the global warming impact remaining after emissions have been avoided and reduced.) Furthermore, nearly half of pavement-related greenhouse gas (GHG) savings can be achieved in the short term at negative or nearly net-zero cost.

    The research team, led by Hessam AzariJafari, MIT CSHub’s deputy director, closed gaps in our understanding of the impacts of pavements decisions by developing a dynamic model quantifying the embodied impact of future pavements materials demand for the U.S. road network. 

    The team first split the U.S. road network into 10-mile (about 16 kilometer) segments, forecasting the condition and performance of each. They then developed a pavement management system model to create benchmarks helping to understand the current level of emissions and the efficacy of different decarbonization strategies. 

    This model considered factors such as annual traffic volume and surface conditions, budget constraints, regional variation in pavement treatment choices, and pavement deterioration. The researchers also used a life-cycle assessment to calculate annual state-level emissions from acquiring pavement construction materials, considering future energy supply and materials procurement.
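The accounting structure described above — segment the network, trigger treatments based on condition, and sum the embodied emissions of the materials used — can be sketched in miniature. The actual CSHub model is far richer (traffic, budgets, regional treatment choices, deterioration curves); all segment data and emission factors here are hypothetical.

```python
# Minimal sketch of segment-based pavement emissions accounting (not the
# CSHub model itself): treat road segments whose condition falls below a
# trigger, and sum the embodied emissions of the materials applied.
# Segment data and emission factors are hypothetical.

segments = [
    # (length_miles, condition on a 0-1 scale, treatment_emissions_t_per_mile)
    (10, 0.45, 120.0),
    (10, 0.80, 120.0),
    (10, 0.30, 95.0),
]

TRIGGER = 0.5  # treat segments whose condition is below this threshold

def annual_material_emissions(segments, trigger=TRIGGER):
    """Sum material emissions (metric tons CO2e) for segments treated this year."""
    total = 0.0
    for length, condition, factor in segments:
        if condition < trigger:
            total += length * factor
    return total

print(annual_material_emissions(segments))  # tons from this year's treatments
```

Running this yearly over a forecast of segment conditions, with emission factors that evolve under each technology scenario, is the basic shape of the benchmarking exercise the team performed.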

    The team considered three scenarios for the U.S. pavement network: A business-as-usual scenario in which technology remains static, a projected improvement scenario aligned with stated industry and national goals, and an ambitious improvement scenario that intensifies or accelerates projected strategies to achieve carbon neutrality. 

    If no steps are taken to decarbonize pavement mixtures, the team projected that GHG emissions of construction materials used in the U.S. pavement network would increase by 19.5 percent by 2050. Under the projected scenario, there was an estimated 38 percent embodied impact reduction for concrete and 14 percent embodied impact reduction for asphalt by 2050.

    The keys to making the pavement network carbon neutral by 2050 lie in multiple places. Fully renewable energy sources should be used for pavement materials production, transportation, and other processes. The federal government must contribute to the development of these low-carbon energy sources and carbon capture technologies, as it would be nearly impossible to achieve carbon neutrality for pavements without them. 

    Additionally, increasing pavements’ recycled content and improving their design and production efficiency can lower GHG emissions to an extent. Still, neutralization is needed to achieve carbon neutrality.

    Making the right pavement construction and repair choices would also contribute to the carbon neutrality of the network. For instance, concrete pavements can offer GHG savings across the whole life cycle as they are stiffer and stay smoother for longer, meaning they require less maintenance and have a lesser impact on the fuel efficiency of vehicles. 

    Concrete pavements have other use-phase benefits, including a cooling effect through an intrinsically high albedo, meaning they reflect more sunlight than regular pavements. Therefore, they can help combat extreme heat and benefit the earth’s energy balance through negative radiative forcing, making albedo a potential neutralization mechanism.

    At the same time, a mix of fixes, including using concrete and asphalt in different contexts and proportions, could produce significant GHG savings for the pavement network; decision-makers must consider scenarios on a case-by-case basis to identify optimal solutions. 

    In addition, it may appear as though the GHG emissions of materials used in local roads are dwarfed by the emissions of interstate highway materials. However, the study found that the two road types have a similar impact. In fact, all road types contribute heavily to the total GHG emissions of pavement materials in general. Therefore, stakeholders at the federal, state, and local levels must be involved if our roads are to become carbon neutral. 

    The path to pavement network carbon-neutrality is, therefore, somewhat of a winding road. It demands regionally specific policies and widespread investment to help implement decarbonization solutions, just as renewable energy initiatives have been supported. Providing subsidies and covering the costs of premiums, too, are vital to avoid shifts in the market that would derail environmental savings.

    When planning for these shifts, we must recall that pavements have impacts not just in their production, but across their entire life cycle. As pavements are used, maintained, and eventually decommissioned, they have significant impacts on the surrounding environment.

    If we are to meet climate goals such as the Paris Agreement, which demands that we reach carbon-neutrality by 2050 to avoid the worst impacts of climate change, we — as well as industry and governmental stakeholders — must come together to take a hard look at the roads we use every day and work to reduce their life cycle emissions. 

    The study was published in the International Journal of Life Cycle Assessment. In addition to AzariJafari, the authors include Fengdi Guo of the MIT Department of Civil and Environmental Engineering; Jeremy Gregory, executive director of the MIT Climate and Sustainability Consortium; and Randolph Kirchain, director of the MIT CSHub.

  • in

    Manufacturing a cleaner future

    Manufacturing had a big summer. The CHIPS and Science Act, signed into law in August, represents a massive investment in U.S. domestic manufacturing. The act aims to drastically expand the U.S. semiconductor industry, strengthen supply chains, and invest in R&D for new technological breakthroughs. According to John Hart, professor of mechanical engineering and director of the Laboratory for Manufacturing and Productivity at MIT, the CHIPS Act is just the latest example of significantly increased interest in manufacturing in recent years.

    “You have multiple forces working together: reflections from the pandemic’s impact on supply chains, the geopolitical situation around the world, and the urgency and importance of sustainability,” says Hart. “This has now aligned incentives among government, industry, and the investment community to accelerate innovation in manufacturing and industrial technology.”

    Hand-in-hand with this increased focus on manufacturing is a need to prioritize sustainability.

    Roughly one-quarter of greenhouse gas emissions came from industry and manufacturing in 2020. Factories and plants can also deplete local water reserves and generate vast amounts of waste, some of which can be toxic.

    To address these issues and drive the transition to a low-carbon economy, new products and industrial processes must be developed alongside sustainable manufacturing technologies. Hart sees mechanical engineers as playing a crucial role in this transition.

    “Mechanical engineers can uniquely solve critical problems that require next-generation hardware technologies, and know how to bring their solutions to scale,” says Hart.

    Several fast-growing companies founded by faculty and alumni from MIT’s Department of Mechanical Engineering offer solutions for manufacturing’s environmental problem, paving the path for a more sustainable future.

    Gradiant: Cleantech water solutions

    Manufacturing requires water, and lots of it. A medium-sized semiconductor fabrication plant uses upward of 10 million gallons of water a day. In a world increasingly plagued by droughts, this dependence on water poses a major challenge.

    Gradiant offers a solution to this water problem. Co-founded by Anurag Bajpayee SM ’08, PhD ’12 and Prakash Govindan PhD ’12, the company is a pioneer in sustainable — or “cleantech” — water projects.

    As doctoral students in the Rohsenow Kendall Heat Transfer Laboratory, Bajpayee and Govindan shared a pragmatism and penchant for action. They both worked on desalination research — Bajpayee with Professor Gang Chen and Govindan with Professor John Lienhard.

    Inspired by a childhood spent during a severe drought in Chennai, India, Govindan developed for his PhD a humidification-dehumidification technology that mimicked natural rainfall cycles. It was with this piece of technology, which they named Carrier Gas Extraction (CGE), that the duo founded Gradiant in 2013.

    The key to CGE lies in a proprietary algorithm that accounts for variability in the quality and quantity in wastewater feed. At the heart of the algorithm is a nondimensional number, which Govindan proposes one day be called the “Lienhard Number,” after his doctoral advisor.

    “When the water quality varies in the system, our technology automatically sends a signal to motors within the plant to adjust the flow rates to bring back the nondimensional number to a value of one. Once it’s brought back to a value of one, you’re running in optimal condition,” explains Govindan, who serves as chief operating officer of Gradiant.

    This system can treat and clean the wastewater produced by a manufacturing plant for reuse, ultimately conserving millions of gallons of water each year.
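The feedback idea Govindan describes — nudge flow rates until a nondimensional operating number returns to one — can be sketched as a simple proportional control loop. Gradiant's actual algorithm is proprietary; the operating-number formula, gain, and values below are all illustrative assumptions.

```python
# Hedged sketch of the control idea described above: adjust flow rate until
# a nondimensional operating number returns to 1. The formula and gain are
# illustrative; Gradiant's actual CGE algorithm is proprietary.

def operating_number(flow_rate, feed_quality):
    """Hypothetical nondimensional number; equals 1 at the optimal condition."""
    return feed_quality / flow_rate

def regulate(flow_rate, feed_quality, gain=0.5, tol=1e-3, max_steps=100):
    """Proportionally correct flow_rate until the operating number is ~1."""
    for _ in range(max_steps):
        n = operating_number(flow_rate, feed_quality)
        if abs(n - 1.0) < tol:
            break
        flow_rate += gain * (n - 1.0) * flow_rate  # proportional correction
    return flow_rate

# Feed quality shifts (e.g., wastewater composition changes); the loop retunes flow.
print(round(regulate(flow_rate=1.0, feed_quality=1.3), 3))
```

The essential behavior — a disturbance in feed quality drives the operating number away from one, and the controller steers it back — mirrors the plant-level automation Govindan describes, independent of the specific formula.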

    As the company has grown, the Gradiant team has added new technologies to their arsenal, including Selective Contaminant Extraction, a cost-efficient method that removes only specific contaminants, and a brine-concentration method called Counter-Flow Reverse Osmosis. They now offer a full technology stack of water and wastewater treatment solutions to clients in industries including pharmaceuticals, energy, mining, food and beverage, and the ever-growing semiconductor industry.

    “We are an end-to-end water solutions provider. We have a portfolio of proprietary technologies and will pick and choose from our ‘quiver’ depending on a customer’s needs,” says Bajpayee, who serves as CEO of Gradiant. “Customers look at us as their water partner. We can take care of their water problem end-to-end so they can focus on their core business.”

    Gradiant has seen explosive growth over the past decade. With 450 water and wastewater treatment plants built to date, they treat the equivalent of 5 million households’ worth of water each day. Recent acquisitions have pushed their total workforce above 500 employees.

    The diversity of Gradiant’s solutions is reflected in their clients, who include Pfizer, AB InBev, and Coca-Cola. They also count semiconductor giants like Micron Technology, GlobalFoundries, Intel, and TSMC among their customers.

    “Over the last few years, we have really developed our capabilities and reputation serving semiconductor wastewater and semiconductor ultrapure water,” says Bajpayee.

    Semiconductor manufacturers require ultrapure water for fabrication. Unlike drinking water, which has a total dissolved solids range in the parts per million, water used to manufacture microchips has a range in the parts per billion or quadrillion.

    Currently, the average recycling rate at semiconductor fabrication plants — or fabs — in Singapore is only 43 percent. Using Gradiant’s technologies, these fabs can recycle 98-99 percent of the 10 million gallons of water they require daily. This reused water is pure enough to be put back into the manufacturing process.

    “What we’ve done is eliminated the discharge of this contaminated water and nearly eliminated the dependence of the semiconductor fab on the public water supply,” adds Bajpayee.
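The figures above imply a large drop in freshwater demand. A quick calculation, using the stated 10-million-gallon daily requirement and the move from a 43 percent to a 98 percent recycling rate:

```python
# Quick arithmetic on the figures in the text: daily freshwater demand
# avoided when a fab's recycling rate rises from 43% to 98% of a
# 10-million-gallon daily requirement.

DAILY_USE_GALLONS = 10_000_000

def fresh_water_needed(recycle_rate):
    """Gallons of new water a fab must draw per day at a given recycle rate."""
    return DAILY_USE_GALLONS * (1 - recycle_rate)

before = fresh_water_needed(0.43)   # ~5.7 million gallons/day
after = fresh_water_needed(0.98)    # ~0.2 million gallons/day
print(f"Freshwater demand avoided: {before - after:,.0f} gallons/day")
```

That is roughly 5.5 million gallons of municipal water per day that a single fab no longer needs to draw, which is the near-elimination of public-supply dependence Bajpayee describes.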

    With new regulations being introduced, pressure is increasing for fabs to improve their water use, making sustainability even more important to brand owners and their stakeholders.

    As the domestic semiconductor industry expands in light of the CHIPS and Science Act, Gradiant sees an opportunity to bring their semiconductor water treatment technologies to more factories in the United States.

    Via Separations: Efficient chemical filtration

    Like Bajpayee and Govindan, Shreya Dave ’09, SM ’12, PhD ’16 focused on desalination for her doctoral thesis. Under the guidance of her advisor Jeffrey Grossman, professor of materials science and engineering, Dave built a membrane that could enable more efficient and cheaper desalination.

    A thorough cost and market analysis brought Dave to the conclusion that the desalination membrane she developed would not make it to commercialization.

    “The current technologies are just really good at what they do. They’re low-cost, mass produced, and they worked. There was no room in the market for our technology,” says Dave.

    Shortly after defending her thesis, she read a commentary article in the journal Nature that changed everything. The article outlined a problem. Chemical separations that are central to many manufacturing processes require a huge amount of energy. Industry needed more efficient and cheaper membranes. Dave thought she might have a solution.

    After determining there was an economic opportunity, Dave, Grossman, and Brent Keller PhD ’16 founded Via Separations in 2017. Shortly thereafter, they were chosen as one of the first companies to receive funding from MIT’s venture firm, The Engine.

    Currently, industrial filtration is done by heating chemicals at very high temperatures to separate compounds. Dave likens it to making pasta by boiling off all of the water until only the noodles are left. In manufacturing, this method of chemical separation is extremely energy-intensive and inefficient.

    Via Separations has created the chemical equivalent of a “pasta strainer.” Rather than using heat to separate, their membranes “strain” chemical compounds. This method of chemical filtration uses 90 percent less energy than standard methods.
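The scale of that savings has a simple physical intuition: boiling off water costs its latent heat of vaporization, while pushing it through a membrane costs only pump work against the transmembrane pressure. The back-of-envelope sketch below makes the comparison concrete; the 30-bar pressure is an illustrative assumption, not Via Separations' figure, and real systems lose efficiency to pumps, recirculation, and fouling, which is consistent with the roughly 90 percent savings cited above.

```python
# Back-of-envelope comparison of thermal vs. pressure-driven separation
# for 1 kg of water (illustrative numbers, not Via Separations' data).

LATENT_HEAT_KJ_PER_KG = 2260.0  # latent heat of vaporization of water near 100 C

def thermal_separation_energy(mass_kg: float) -> float:
    """Minimum energy (kJ) to boil off `mass_kg` of water."""
    return mass_kg * LATENT_HEAT_KJ_PER_KG

def membrane_separation_energy(mass_kg: float, pressure_pa: float = 3.0e6) -> float:
    """Ideal pump work (kJ) to push `mass_kg` of water through a membrane
    at the given transmembrane pressure (default 30 bar, an assumed value)."""
    volume_m3 = mass_kg / 1000.0             # water density ~1000 kg/m^3
    return pressure_pa * volume_m3 / 1000.0  # J -> kJ

thermal = thermal_separation_energy(1.0)    # 2260 kJ
membrane = membrane_separation_energy(1.0)  # 3 kJ
savings = 1.0 - membrane / thermal
print(f"thermal: {thermal:.0f} kJ, membrane: {membrane:.0f} kJ, savings: {savings:.1%}")
```

Even after real-world losses erode this ideal-case gap, the pressure-driven route stays far below the thermal one, which is why "straining" instead of boiling pays off.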

    While most membranes are made of polymers, Via Separations’ membranes are made with graphene oxide, which can withstand high temperatures and harsh conditions. The membrane is calibrated to the customer’s needs by altering the pore size and tuning the surface chemistry.

    Currently, Dave and her team are focusing on the pulp and paper industry as their beachhead market. They have developed a system that makes the recovery of a substance known as “black liquor” more energy efficient.

    “When a tree becomes paper, only one-third of the biomass is used for the paper. Currently the most valuable use for the remaining two-thirds is to take it from a pretty dilute stream to a pretty concentrated stream using evaporators to boil off the water,” says Dave.

    This black liquor is then burned. Most of the resulting energy is used to power the filtration process.

    “This closed-loop system accounts for an enormous amount of energy consumption in the U.S. We can make that process 84 percent more efficient by putting the ‘pasta strainer’ in front of the boiler,” adds Dave.

    VulcanForms: Additive manufacturing at industrial scale

    The first semester John Hart taught at MIT was a fruitful one. He taught a course on 3D printing, broadly known as additive manufacturing (AM). While it wasn’t his main research focus at the time, he found the topic fascinating. So did many of the students in the class, including Martin Feldmann MEng ’14.

    After graduating with his MEng in advanced manufacturing, Feldmann joined Hart’s research group full time. There, they bonded over their shared interest in AM. They saw an opportunity to innovate with an established metal AM technology, known as laser powder bed fusion, and came up with a concept to realize metal AM at an industrial scale.

    The pair co-founded VulcanForms in 2015.

    “We have developed a machine architecture for metal AM that can build parts with exceptional quality and productivity,” says Hart. “And, we have integrated our machines in a fully digital production system, combining AM, postprocessing, and precision machining.”

    Unlike other companies that sell 3D printers for others to produce parts, VulcanForms makes and sells parts for their customers using their fleet of industrial machines. VulcanForms has grown to nearly 400 employees. Last year, the team opened their first production factory, known as “VulcanOne,” in Devens, Massachusetts.

    The quality and precision with which VulcanForms produces parts is critical for products like medical implants, heat exchangers, and aircraft engines. Their machines can print layers of metal thinner than a human hair.

    “We’re producing components that are difficult, or in some cases impossible to manufacture otherwise,” adds Hart, who sits on the company’s board of directors.

    The technologies developed at VulcanForms may help lead to a more sustainable way to manufacture parts and products, both directly through the additive process and indirectly through more efficient, agile supply chains.

    One way that VulcanForms, and AM in general, promotes sustainability is through material savings.

    Many of the materials VulcanForms uses, such as titanium alloys, require a great deal of energy to produce. When titanium parts are 3D-printed, substantially less of the material is used than in a traditional machining process. This material efficiency is where Hart sees AM making a large impact in terms of energy savings.
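The material-efficiency argument is often quantified with the "buy-to-fly" ratio used in aerospace manufacturing: the mass of raw stock purchased per unit mass that ends up in the finished part. The sketch below uses illustrative ratios (not VulcanForms' figures) to show how a near-net-shape additive process cuts titanium consumption.

```python
# Illustrative "buy-to-fly" arithmetic (hypothetical ratios, not VulcanForms' data):
# raw titanium consumed per finished part under machining vs. additive manufacturing.

def raw_material_needed(part_mass_kg: float, buy_to_fly: float) -> float:
    """Raw stock mass (kg) required for a part, given its buy-to-fly ratio."""
    return part_mass_kg * buy_to_fly

PART_MASS_KG = 10.0
machined = raw_material_needed(PART_MASS_KG, buy_to_fly=8.0)  # subtractive: most becomes chips
printed = raw_material_needed(PART_MASS_KG, buy_to_fly=1.5)   # additive: near-net shape
saved_fraction = 1.0 - printed / machined
print(f"machined stock: {machined} kg, printed stock: {printed} kg, "
      f"material saved: {saved_fraction:.0%}")
```

Because energy-intensive alloys like titanium carry a high embodied energy per kilogram, every kilogram of stock not machined away translates directly into the upstream energy savings Hart describes.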

    Hart also points out that AM can accelerate innovation in clean energy technologies, ranging from more efficient jet engines to future fusion reactors.

    “Companies seeking to de-risk and scale clean energy technologies require know-how and access to advanced manufacturing capability, and industrial additive manufacturing is transformative in this regard,” Hart adds.

    LiquiGlide: Reducing waste by removing friction

    There is an unlikely culprit when it comes to waste in manufacturing and consumer products: friction. Kripa Varanasi, professor of mechanical engineering, and the team at LiquiGlide are on a mission to create a frictionless future, and substantially reduce waste in the process.

    Founded in 2012 by Varanasi and alum David Smith SM ’11, LiquiGlide designs custom coatings that enable liquids to “glide” on surfaces. Every last drop of a product can be used, whether it’s being squeezed out of a tube of toothpaste or drained from a 500-liter tank at a manufacturing plant. Making containers frictionless substantially minimizes wasted product, and eliminates the need to clean a container before recycling or reusing.

    Since launching, the company has found great success in consumer products. Customer Colgate utilized LiquiGlide’s technologies in the design of the Colgate Elixir toothpaste bottle, which has been honored with several industry awards for design. In a collaboration with world-renowned designer Yves Béhar, LiquiGlide is applying their technology to beauty and personal care product packaging. Meanwhile, the U.S. Food and Drug Administration has granted them a Device Master Filing, opening up opportunities for the technology to be used in medical devices, drug delivery, and biopharmaceuticals.

    In 2016, the company developed a system to make manufacturing containers frictionless. Called CleanTanX, the technology is used to treat the surfaces of tanks, funnels, and hoppers, preventing materials from sticking to the side. The system can reduce material waste by up to 99 percent.

    “This could really change the game. It saves wasted product, reduces wastewater generated from cleaning tanks, and can help make the manufacturing process zero-waste,” says Varanasi, who serves as chair at LiquiGlide.

    LiquiGlide works by creating a coating made of a textured solid and liquid lubricant on the container surface. When applied to a container, the lubricant remains infused within the texture. Capillary forces stabilize the lubricant and allow it to spread across the surface, creating a continuously lubricated surface that any viscous material can slide right down. The company uses a thermodynamic algorithm to determine the combinations of safe solids and liquids depending on the product, whether it’s toothpaste or paint.

    The company has built a robotic spraying system that can treat large vats and tanks at manufacturing plants on site. In addition to saving companies millions of dollars in wasted product, LiquiGlide drastically reduces the amount of water needed to regularly clean these containers, which normally have product stuck to the sides.

    “Normally when you empty everything out of a tank, you still have residue that needs to be cleaned with a tremendous amount of water. In agrochemicals, for example, there are strict regulations about how to deal with the resulting wastewater, which is toxic. All of that can be eliminated with LiquiGlide,” says Varanasi.

    While the closure of many manufacturing facilities early in the pandemic slowed down the rollout of CleanTanX pilots at plants, things have picked up in recent months. As manufacturing ramps up both globally and domestically, Varanasi sees a growing need for LiquiGlide’s technologies, especially for liquids like semiconductor slurry.

    Companies like Gradiant, Via Separations, VulcanForms, and LiquiGlide demonstrate that an expansion in manufacturing industries does not need to come at a steep environmental cost. It is possible for manufacturing to be scaled up in a sustainable way.

    “Manufacturing has always been the backbone of what we do as mechanical engineers. At MIT in particular, there is always a drive to make manufacturing sustainable,” says Evelyn Wang, Ford Professor of Engineering and former head of the Department of Mechanical Engineering. “It’s amazing to see how startups that have an origin in our department are looking at every aspect of the manufacturing process and figuring out how to improve it for the health of our planet.”

    As legislation like the CHIPS and Science Act fuels growth in manufacturing, there will be an increased need for startups and companies that develop solutions to mitigate the environmental impact, bringing us closer to a more sustainable future.

    On batteries, teaching, and world peace

    Over his long career as an electrochemist and professor, Donald Sadoway has earned an impressive variety of honors, from being named one of Time magazine’s 100 most influential people in 2012 to appearing on “The Colbert Report,” where he talked about “renewable energy and world peace,” according to Comedy Central.

    What does he personally consider to be his top achievements?

    “That’s easy,” he says immediately. “For teaching, it’s 3.091,” the MIT course on solid-state chemistry he led for some 18 years. An MIT core requirement, 3.091 is also one of the largest classes at the Institute. In 2003 it was the largest, with 630 students. Sadoway, who retires this year after 45 years in the Department of Materials Science and Engineering, estimates that over the years he’s taught the course to some 10,000 undergraduates.

    A passion for teaching

    Along the way he turned the class into an MIT favorite, complete with music, art, and literature. “I brought in all that enrichment because I knew that 95 percent of the students in that room weren’t going to major in anything chemical and this might be the last class they’d take in the subject. But it’s a requirement. So they’re 18 years old, they’re very smart, and many of them are very bored. You have to find a hook [to reach them]. And I did.”

    In 1995, Sadoway was named a Margaret MacVicar Faculty Fellow, an honor that recognizes outstanding classroom teaching at the Institute. Among the communications in support of his nomination:

    “His contributions are enormous and the class is in rapt attention from beginning to end. His lectures are highly articulate yet animated and he has uncommon grace and style. I was awed by his ability to introduce playful and creative elements into a core lecture…”

    Bill Gates would agree. In the early 2000s Sadoway’s lectures were shared with the world through OpenCourseWare, the web-based publication of MIT course materials. Gates was so inspired by the lectures that he asked to meet with Sadoway to learn more about his research. (Sadoway initially ignored Gates’ email because he thought his account had been hacked by MIT pranksters.)

    Research breakthroughs

    Teaching is not Sadoway’s only passion. He’s also proud of his accomplishments in electrochemistry. The discipline, which involves electron-transfer reactions, is key to everything from batteries to the primary extraction of metals like aluminum and magnesium. “It’s quite wide-ranging,” says the John F. Elliott Professor Emeritus of Materials Chemistry.

    Sadoway’s contributions include two battery breakthroughs. First came the liquid metal battery, which could enable the large-scale storage of renewable energy. “That represents a huge step forward in the transition to green energy,” said António Campinos, president of the European Patent Office, earlier this year when Sadoway won the 2022 European Inventor Award for the invention in the category for Non-European Patent Office Countries.

    On “The Colbert Report,” Sadoway alluded to that work when he told Stephen Colbert that electrochemistry is the key to world peace. Why? Because it could lead to a battery capable of storing energy from the sun when the sun doesn’t shine and otherwise make renewables an important part of the clean energy mix. And that in turn could “plummet the price of petroleum and depose dictators all over the world without one shot being fired,” he recently recalled.

    The liquid metal battery is the focus of Ambri, one of six companies based on Sadoway’s inventions. Bill Gates was the first funder of the company, which formed in 2010 and aims to install its first battery soon. That battery will store energy from a reported 500 megawatts of on-site renewable generation, the same output as a natural gas power plant.

    Then, in August of this year, Sadoway and colleagues published a paper in Nature about “one of the first new battery chemistries in 30 years,” Sadoway says. “I wanted to invent something that was better, much better,” than the expensive lithium-ion batteries used in, for example, today’s electric cars.

    That battery is the focus of Avanti, one of three Sadoway companies formed just last year. The other two are Pure Lithium, to commercialize his inventions related to that element, and Sadoway Labs. The latter, a nonprofit, is essentially “a space to try radical innovations. We’re gonna start working on wild ideas.”

    Another focus of Sadoway’s research: green steel. Steelmaking produces huge amounts of greenhouse gases. Enter Boston Metal, another Sadoway company. This one is developing a new approach to producing steel based on research begun some 25 years ago. Unlike the current technology for producing steel, the Boston Metal approach — molten oxide electrolysis — does not use the element at the root of steel’s problems: carbon. The principal byproduct of the new system? Oxygen.

    In 2012, Sadoway gave a TED talk to 2,000 people on the liquid metal battery. He believes that that talk, which has now been seen by almost 2.5 million people, led to the wider publicity of his work — and science overall — on “The Colbert Report” and elsewhere. “The moral here is that if you step out of your comfort zone, you might be surprised at what can happen,” he concludes.

    Colleagues’ reflections

    “I met Don in 2006 when I was working for the iron and steel industry in Europe on ways to reduce greenhouse gas emissions from the production of those materials,” says Antoine Allanore, professor of metallurgy, Department of Materials Science and Engineering. “He was the same Don Sadoway that you see in recordings of his lectures: very elegant, very charismatic, and passionate about the technical solutions and underlying science of the process we were all investigating: electrolysis. A few years later, when I decided to pursue an academic career, I contacted Don and became a postdoctoral associate in his lab. That ultimately led to my becoming an MIT professor. People don’t believe me, but before I came to MIT the only thing I knew about the Institute was that Noam Chomsky was there … and Don Sadoway. And I felt, that’s a great place to be. And I stayed because I saw the exceptional things that can be accomplished at MIT and Don is the perfect example of that.”

    “I had the joy of meeting Don when I first arrived on the MIT campus in 1994,” recalls Felice Frankel, research scientist in the MIT departments of Chemical Engineering and Mechanical Engineering. “I didn’t have to talk him into the idea that researchers needed to take their images and graphics more seriously.  He got it — that it wasn’t just about pretty pictures. He was an important part of our five-year National Science Foundation project — Picturing to Learn — to bring that concept into the classroom. How lucky that was for me!”

    “Don has been a friend and mentor since we met in 1995 when I was an MIT senior,” says Luis Ortiz, co-founder and chief executive officer, Avanti Battery Co. “One story that is emblematic of Don’s insistence on excellence is from when he and I met with Bill Gates about the challenges in addressing climate change and how batteries could be the linchpin in solving them. I suggested that we create our presentation in PowerPoint [Microsoft software]. Don balked. He insisted that we present using Keynote on his MacBook Air, because ‘it looks so much better.’ I was incredulous that he wanted to walk into that venue exclusively using Apple products. Of course, he won the argument, but not without my admonition that there had better not be even a blip of an issue. In the meeting room, Microsoft’s former chief technology officer asked Don if he needed anything to hook up to the screen, ‘we have all those dongles.’ Don declined, but gave me that knowing look and whispered, ‘You see, they know, too.’ I ate my crow and we had a great long conversation without any issues.”

    “I remember when I first started working with Don on the liquid metal battery project at MIT, after I had chosen it as the topic for my master’s of engineering thesis,” adds David Bradwell, co-founder and chief technology officer, Ambri. “I was a wide-eyed graduate student, sitting in his office, amongst his art deco decorations, unique furniture, and historical and stylistic infographics, and from our first meeting, I could see Don’s passion for coming up with new and creative, yet practical scientific ideas, and for working on hard problems, in service of society. Don’s approaches always appear to be unconventional — wanting to stand out in a crowd, take the path less trodden, both based on his ideas, and his sense of style. It’s been an amazing journey working with him over the past decade-and-a-half, and I remain excited to see what other new, unconventional ideas, he can bring to this world.”