More stories

  • MIT Climate “Plug-In” highlights first year of progress on MIT’s climate plan

    In a combined in-person and virtual event on Monday, members of the three working groups established last year under MIT’s “Fast Forward” climate action plan reported on the work they’ve been doing to meet the plan’s goals, including reaching net zero carbon emissions by 2026.

    Introducing the session, Vice President for Research Maria Zuber said that “many universities have climate plans that are inward facing, mostly focused on the direct impacts of their operations on greenhouse gas emissions. And that is really important, but ‘Fast Forward’ is different in that it’s also outward facing — it recognizes climate change as a global crisis.”

    That, she said, “commits us to an all-of-MIT effort to help the world solve the super wicked problem in practice.” That means “helping the world to go as far as it can, as fast as it can, to deploy currently available technologies and policies to reduce greenhouse gas emissions,” while also quickly developing new tools and approaches to deal with the most difficult areas of decarbonization, she said.

    Significant strides have been made in this first year, according to Zuber. The Climate Grand Challenges competition, announced last year as part of the plan, has just announced five flagship projects. “Each of these projects is potentially important in its own right, and is also exemplary of the kinds of bold thinking about climate solutions that the world needs,” she said.

    “We’ve also created new climate-focused institutions within MIT to improve accountability and transparency and to drive action,” Zuber said, including the Climate Nucleus, which comprises heads of labs and departments involved in climate-change work and is led by professors Noelle Selin and Anne White. The “Fast Forward” plan also established three working groups that report to the Climate Nucleus — on climate education, climate policy, and MIT’s carbon footprint — whose members spoke at Monday’s event.

    David McGee, a professor of earth, atmospheric and planetary sciences, co-director of MIT’s Terrascope program for first-year students, and co-chair of the education working group, said that over the last few years of Terrascope, “we’ve begun focusing much more explicitly on the experiences of, and the knowledge contained within, impacted communities … both for mitigation efforts and how they play out, and also adaptation.” Figuring out how to access the expertise of local communities “in a way that’s not extractive is a challenge that we face,” he added.

    Eduardo Rivera, managing director for MIT International Science and Technology Initiatives (MISTI) programs in several countries and a member of the education team, noted that about 1,000 undergraduates travel each year to work on climate and sustainability challenges. These include, for example, working with a lab in Peru assessing pollution in the Amazon, developing new insulation materials in Germany, developing affordable solar panels in China, working on carbon-capture technology in France or Israel, and many others, Rivera said. These are “unique opportunities to learn about the discipline, where the students can do hands-on work along with the professionals and the scientists in the front lines.” He added that MISTI has just launched a pilot project to help these students “to calculate their carbon footprint, to give them resources, and to understand individual responsibilities and collective responsibilities in this area.”

    Yujie Wang, a graduate student in architecture and an education working group member, said that during her studies she worked on a project focused on protecting biodiversity in Colombia, and also worked with a startup to reduce pesticide use in farming through digital monitoring. In Colombia, she said, she came to appreciate the value of interactions between researchers using satellite data and local organizations, institutions, and officials in fostering collaboration on solving common problems.

    The second panel addressed policy issues, as reflected by the climate policy working group. David Goldston, director of MIT’s Washington office, said “I think policy is totally central, in that for each part of the climate problem, you really can’t make progress without policy.” Part of that, he said, “involves government activities to help communities, and … to make sure the transition [involving the adoption of new technologies] is as equitable as possible.”

    Goldston said “a lot of the progress that’s been made already, whether it’s movement toward solar and wind energy and many other things, has been really prompted by government policy. I think sometimes people see it as a contest, should we be focusing on technology or policy, but I see them as two sides of the same coin. … You can’t get the technology you need into operation without policy tools, and the policy tools won’t have anything to work with unless technology is developed.”

    As for MIT, he said, “I think everybody at MIT who works on any aspect of climate change should be thinking about what’s the policy aspect of it, how could policy help them? How could they help policymakers? I think we need to coordinate better.” The Institute needs to be more strategic, he said, but “that doesn’t mean MIT advocating for specific policies. It means advocating for climate action and injecting a wide range of ideas into the policy arena.”

    Anushree Chaudhari, a student in economics and in urban studies and planning, said she has been learning about the power of negotiations in her work with Professor Larry Susskind. “What we’re currently working on is understanding why there are so many sources of local opposition to scaling renewable energy projects in the U.S.,” she explained. “Even though over 77 percent of the U.S. population actually is in support of renewables, and renewables are actually economically pretty feasible as their costs have come down in the last two decades, there’s still a huge social barrier to having them become the new norm,” she said. She emphasized that a fair and just energy transition will require listening to community stakeholders, including indigenous groups and low-income communities, and understanding why they may oppose utility-scale solar farms and wind farms.

    Joy Jackson, a graduate student in the Technology and Policy Program, said that the implementation of research findings into policy at state, local, and national levels is a “very messy, nonlinear, sort of chaotic process.” One avenue for research to make its way into policy, she said, is through formal processes, such as congressional testimony. But a lot is also informal, as she learned while working as an intern in government offices, where she and her colleagues reached out to professors, researchers, and technical experts of various kinds while in the very early stages of policy development.

    “The good news,” she said, “is there’s a lot of touch points.”

    The third panel featured members of the working group studying ways to reduce MIT’s own carbon footprint. Julie Newman, head of MIT’s Office of Sustainability and co-chair of that group, summed up MIT’s progress toward its stated goal of achieving net zero carbon emissions by 2026. “I can cautiously say we’re on track for that one,” she said. Despite headwinds in the solar industry due to supply chain issues, she said, “we’re well positioned” to meet that near-term target.

    As for working toward the 2050 target of eliminating all direct emissions, she said, it is “quite a challenge.” But under the leadership of Joe Higgins, the vice president for campus services and stewardship, MIT is implementing a number of measures, including deep energy retrofits, investments in high-performance buildings, an extremely efficient central utilities plant, and more.

    She added that MIT is particularly well-positioned in its thinking about scaling its solutions up. “A couple of years ago we approached a handful of local organizations, and over a couple of years have built a consortium to look at large-scale carbon reduction in the world. And it’s a brilliant partnership,” she said, noting that details are still being worked out and will be reported later.

    The work is challenging, because “MIT was built on coal, this campus was not built to get to zero carbon emissions.” Nevertheless, “we think we’re on track” to meet the ambitious goals of the Fast Forward plan, she said. “We’re going to have to have multiple pathways, because we may come to a pathway that may turn out not to be feasible.”

    Jay Dolan, head of facilities development at MIT’s Lincoln Laboratory, said the laboratory faces extra hurdles compared to the main MIT campus because it occupies buildings that are owned and maintained by the U.S. Air Force, not MIT. The laboratory is still at the data-gathering stage, he said, assessing what it can do to reduce its emissions, and a website set up to solicit ideas received 70 suggestions within a few days, which are still being evaluated. “All that enthusiasm, along with the intelligence at the laboratory, is very promising,” he said.

    Peter Jacobson, a graduate student in MIT’s Leaders for Global Operations program, said that in his experience, the most successful projects start not from a focus on the technology, but from collaborative efforts working with multiple stakeholders. “I think this is exactly why the Climate Nucleus and our working groups are so important here at MIT,” he said. “We need people tasked with thinking at this campus scale, figuring out what the needs and priorities of all the departments are and looking for those synergies, and aligning those needs across both internal and external stakeholders.”

    But, he added, “MIT’s complexity and scale of operations definitely poses unique challenges. Advanced research is energy hungry, and in many cases we don’t have the technology to decarbonize those research processes yet. And we have buildings of varying ages with varying stages of investment.” In addition, MIT has “a lot of people that it needs to feed, and that need to travel and commute, so that poses additional and different challenges.”

    Asked what individuals can do to help MIT in this process, Newman said, “Begin to leverage and figure out how you connect your research to informing our thinking on campus. We have channels for that.”

    Noelle Selin, co-chair of MIT’s Climate Nucleus and moderator of the third panel, said in conclusion, “We’re really looking for your input into all of these working groups and all of these efforts. This is a whole-of-campus effort. It’s a whole-of-world effort to address the climate challenge. So, please get in touch and use this as a call to action.”

  • MIT expands research collaboration with Commonwealth Fusion Systems to build net energy fusion machine, SPARC

    MIT’s Plasma Science and Fusion Center (PSFC) will substantially expand its fusion energy research and education activities under a new five-year agreement with Institute spinout Commonwealth Fusion Systems (CFS).

    “This expanded relationship puts MIT and PSFC in a prime position to be an even stronger academic leader that can help deliver the research and education needs of the burgeoning fusion energy industry, in part by utilizing the world’s first burning plasma and net energy fusion machine, SPARC,” says PSFC director Dennis Whyte. “CFS will build SPARC and develop a commercial fusion product, while MIT PSFC will focus on its core mission of cutting-edge research and education.”

    Commercial fusion energy has the potential to play a significant role in combating climate change, and there is a concurrent increase in interest from the energy sector, governments, and foundations. The new agreement, administered by the MIT Energy Initiative (MITEI), where CFS is a startup member, will help PSFC expand its fusion technology efforts with a wider variety of sponsors. The collaboration enables rapid execution at scale and technology transfer into the commercial sector as soon as possible.

    This new agreement doubles CFS’ financial commitment to PSFC, enabling greater recruitment and support of students, staff, and faculty. “We’ll significantly increase the number of graduate students and postdocs, and, just as important, they will be working on a more diverse set of fusion science and technology topics,” notes Whyte. The agreement extends the collaboration between PSFC and CFS that has already produced numerous advances toward fusion power plants, including last fall’s demonstration of a high-temperature superconducting (HTS) fusion electromagnet with a record-setting field strength of 20 tesla.

    The combined magnetic fusion efforts at PSFC will surpass those in place during the operation of the pioneering Alcator C-Mod tokamak, which ran from 1993 to 2016. This increase in activity reflects a moment when multiple fusion energy technologies are seeing rapidly accelerating development worldwide, and when an emerging fusion energy industry will require thousands of trained people.

    MITEI director Robert Armstrong adds, “Our goal from the beginning was to create a membership model that would allow startups who have specific research challenges to leverage the MITEI ecosystem, including MIT faculty, students, and other MITEI members. The team at the PSFC and MITEI have worked seamlessly to support CFS, and we are excited for this next phase of the relationship.”

    PSFC is supporting CFS’ efforts toward realizing the SPARC fusion platform, which facilitates rapid development and refinement of elements (including HTS magnets) needed to build ARC, a compact, modular, high-field fusion power plant that would set the stage for commercial fusion energy production. The concepts originated in Whyte’s nuclear science and engineering class 22.63 (Principles of Fusion Engineering) and have been carried forward by students and PSFC staff, many of whom helped found CFS; the new activity will expand research into advanced technologies for the envisioned pilot plant.

    “This has been an incredibly effective collaboration that has resulted in a major breakthrough for commercial fusion with the successful demonstration of revolutionary fusion magnet technology that will enable the world’s first commercially relevant net energy fusion device, SPARC, currently under construction,” says Bob Mumgaard SM ’15, PhD ’15, CEO of Commonwealth Fusion Systems. “We look forward to this next phase in the collaboration with MIT as we tackle the critical research challenges ahead for the next steps toward fusion power plant development.”

    In the push for commercial fusion energy, the next five years are critical, requiring intensive work on materials longevity, heat transfer, fuel recycling, maintenance, and other crucial aspects of power plant development. It will need innovation from almost every engineering discipline. “Having great teams working now, it will cut the time needed to move from SPARC to ARC, and really unleash the creativity. And the thing MIT does so well is cut across disciplines,” says Whyte.

    “To address the climate crisis, the world needs to deploy existing clean energy solutions as widely and as quickly as possible, while at the same time developing new technologies — and our goal is that those new technologies will include fusion power,” says Maria T. Zuber, MIT’s vice president for research. “To make new climate solutions a reality, we need focused, sustained collaborations like the one between MIT and Commonwealth Fusion Systems. Delivering fusion power onto the grid is a monumental challenge, and the combined capabilities of these two organizations are what the challenge demands.”

    On a strategic level, climate change and the imperative need for widely implementable carbon-free energy have helped orient the PSFC team toward scalability. “Building one or 10 fusion plants doesn’t make a difference — we have to build thousands,” says Whyte. “The design decisions we make will impact the ability to do that down the road. The real enemy here is time, and we want to remove as many impediments as possible and commit to funding a new generation of scientific leaders. Those are critically important in a field with as much interdisciplinary integration as fusion.”

  • Team creates map for production of eco-friendly metals

    In work that could usher in more efficient, eco-friendly processes for producing important metals like lithium, iron, and cobalt, researchers from MIT and the SLAC National Accelerator Laboratory have mapped what is happening at the atomic level behind a particularly promising approach called metal electrolysis.

    By creating maps for a wide range of metals, they not only determined which metals should be easiest to produce using this approach, but also identified fundamental barriers behind the efficient production of others. As a result, the researchers’ map could become an important design tool for optimizing the production of all these metals.

    The work could also aid the development of metal-air batteries, cousins of the lithium-ion batteries used in today’s electric vehicles.

    Most of the metals key to society today are produced using fossil fuels. These fuels generate the high temperatures necessary to convert the original ore into its purified metal. But that process is a significant source of greenhouse gases — steel alone accounts for some 7 percent of global carbon dioxide emissions. As a result, researchers around the world are working to identify more eco-friendly ways to produce metals.

    One promising approach is metal electrolysis, in which a metal oxide, the ore, is zapped with electricity to create pure metal with oxygen as the byproduct. That is the reaction explored at the atomic level in new research reported in the April 8 issue of the journal Chemistry of Materials.

    Donald Siegel, department chair and professor of mechanical engineering at the University of Texas at Austin, who was not involved in the Chemistry of Materials study, says: “This work is an important contribution to improving the efficiency of metal production from metal oxides. It clarifies our understanding of low-carbon electrolysis processes by tracing the underlying thermodynamics back to elementary metal-oxygen interactions. I expect that this work will aid in the creation of design rules that will make these industrially important processes less reliant on fossil fuels.”

    Yang Shao-Horn, the JR East Professor of Engineering in MIT’s Department of Materials Science and Engineering (DMSE) and Department of Mechanical Engineering, is a leader of the current work, with Michal Bajdich of SLAC.

    “Here we aim to establish some basic understanding to predict the efficiency of electrochemical metal production and metal-air batteries from examining computed thermodynamic barriers for the conversion between metal and metal oxides,” says Shao-Horn, who is on the research team for MIT’s new Center for Electrification and Decarbonization of Industry, a winner of the Institute’s first-ever Climate Grand Challenges competition. Shao-Horn is also affiliated with MIT’s Materials Research Laboratory and Research Laboratory of Electronics.
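    The quantity behind those “computed thermodynamic barriers” can be written down compactly. As a hedged aside (a textbook electrochemistry identity, not notation taken from the paper), the minimum voltage that must be applied to electrolyze a generic metal oxide is set by the Gibbs free energy of the conversion:

    ```latex
    % Generic metal-oxide electrolysis; n = 2y electrons per formula unit.
    \mathrm{M}_x\mathrm{O}_y \;\longrightarrow\; x\,\mathrm{M} \;+\; \tfrac{y}{2}\,\mathrm{O}_2,
    \qquad
    E_{\mathrm{applied}} \;\ge\; \frac{\Delta G^{\circ}_{\mathrm{rxn}}}{nF}, \quad n = 2y
    ```

    Oxides with a smaller free-energy cost per electron transferred need a lower driving voltage, which is one way a thermodynamic map can rank metals by how readily they should be produced electrolytically.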

    In addition to Shao-Horn and Bajdich, other authors of the Chemistry of Materials paper are Jaclyn R. Lunger, first author and a DMSE graduate student; mechanical engineering senior Naomi Lutz; and DMSE graduate student Jiayu Peng.

    Other applications

    The work could also aid in developing metal-air batteries such as lithium-air, aluminum-air, and zinc-air batteries. These cousins of the lithium-ion batteries used in today’s electric vehicles have the potential to electrify aviation because their energy densities are much higher. However, they are not yet on the market due to a variety of problems including inefficiency.

    Charging metal-air batteries also involves electrolysis. As a result, the new atomic-level understanding of these reactions could not only help engineers develop efficient electrochemical routes for metal production, but also design more efficient metal-air batteries.

    Learning from water splitting

    Electrolysis is also used to split water into hydrogen and oxygen, storing energy in the hydrogen, which could in turn become an eco-friendly alternative to fossil fuels. Since much more is known about water electrolysis — the focus of Bajdich’s work at SLAC — than about the electrolysis of metal oxides, the team compared the two processes for the first time.

    The result: “Slowly, we uncovered the elementary steps involved in metal electrolysis,” says Bajdich. The work was challenging, says Lunger, because “it was unclear to us what those steps are. We had to figure out how to get from A to B,” or from a metal oxide to metal and oxygen.

    All of the work was conducted with supercomputer simulations. “It’s like a sandbox of atoms, and then we play with them. It’s a little like Legos,” says Bajdich. More specifically, the team explored different scenarios for the electrolysis of several metals. Each involved different catalysts, molecules that boost the speed of a reaction.

    Says Lunger, “To optimize the reaction, you want to find the catalyst that makes it most efficient.” The team’s map is essentially a guide for designing the best catalysts for each different metal.

    What’s next? Lunger noted that the current work focused on the electrolysis of pure metals. “I’m interested in seeing what happens in more complex systems involving multiple metals. Can you make the reaction more efficient if there’s sodium and lithium present, or cadmium and cesium?”

    This work was supported by a U.S. Department of Energy Office of Science Graduate Student Research award. It was also supported by an MIT Energy Initiative fellowship, the Toyota Research Institute through the Accelerated Materials Design and Discovery program, the Catalysis Science Program of the Department of Energy’s Office of Basic Energy Sciences, and the DIFFERENTIATE program of the U.S. Advanced Research Projects Agency — Energy.

  • Absent legislative victory, the president can still meet US climate goals

    The most recent United Nations climate change report indicates that without significant action to mitigate global warming, the extent and magnitude of climate impacts — from floods to droughts to the spread of disease — could outpace the world’s ability to adapt to them. The latest effort to introduce meaningful climate legislation in the United States Congress, the Build Back Better bill, has stalled. The climate package in that bill — $555 billion in funding for climate resilience and clean energy — aims to reduce U.S. greenhouse gas emissions by about 50 percent below 2005 levels by 2030, the nation’s current Paris Agreement pledge. With prospects of passing a standalone climate package in the Senate far from assured, is there another pathway to fulfilling that pledge?

    Recent detailed legal analysis shows that there is at least one viable option for the United States to achieve the 2030 target without legislative action. Under Section 115 (International Air Pollution) of the Clean Air Act, the U.S. Environmental Protection Agency (EPA) could assign emissions targets to the states that collectively meet the national goal. The president could simply issue an executive order empowering the EPA to do just that. But would that be prudent?

    A new study led by researchers at the MIT Joint Program on the Science and Policy of Global Change explores how, under a federally coordinated carbon dioxide emissions cap-and-trade program aligned with the U.S. Paris Agreement pledge and implemented through Section 115 of the Clean Air Act, the EPA might allocate emissions cuts among states. Recognizing that the Biden or any future administration considering this strategy would need to carefully weigh its benefits against its potential political risks, the study highlights the policy’s net economic benefits to the nation.

    The researchers calculate those net benefits by weighing the estimated total cost of reducing carbon dioxide emissions under the policy against the estimated expenditures the policy would avoid — spending on health care due to particulate air pollution, and costs borne by society at large due to climate impacts.

    Assessing three carbon dioxide emissions allocation strategies (each with legal precedent) for implementing Section 115, and assuming that cap-and-trade program revenue is returned to the states and distributed to their residents on an equal per-capita basis, the study finds that the economic net benefits at the national level are substantial, ranging from $70 billion to $150 billion in 2030. The results appear in the journal Environmental Research Letters.
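    As a minimal sketch of the equal per-capita revenue return the study assumes (illustrative numbers and names only; the study’s actual allocation strategies and modeling are far more detailed):

    ```python
    # Sketch of equal per-capita return of cap-and-trade allowance revenue.
    # All prices, caps, and populations below are made-up illustrations.

    def rebate_per_resident(allowance_price, state_caps, state_populations):
        """Auction each state's allowances at one national carbon price and
        return the per-capita rebate owed to that state's residents."""
        rebates = {}
        for state, cap_tons in state_caps.items():
            revenue = allowance_price * cap_tons            # dollars raised
            rebates[state] = revenue / state_populations[state]
        return rebates

    caps = {"StateA": 200e6, "StateB": 50e6}   # hypothetical caps, tons CO2
    pops = {"StateA": 30e6, "StateB": 5e6}     # hypothetical populations
    print(rebate_per_resident(50.0, caps, pops))
    # {'StateA': 333.33..., 'StateB': 500.0}  -> dollars per resident per year
    ```

    The per-capita design is what makes the policy progressive in the study’s findings: every resident of a state receives the same lump sum, which is worth relatively more to lower-income households.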

    “Our findings not only show significant net gains to the U.S. economy under a national emissions policy implemented through the Clean Air Act’s Section 115,” says Mei Yuan, a research scientist at the MIT Joint Program and lead author of the study. “They also show the policy impact on consumer costs may differ across states depending on the choice of allocation strategy.”

    The national price on carbon needed to achieve the policy’s emissions target, as well as the policy’s ultimate cost to consumers, are substantially lower than those found in studies a decade earlier, although in line with other recent studies. The researchers speculate that this is largely due to ongoing expansion of ambitious state policies in the electricity sector and declining renewable energy costs. The policy is also progressive, consistent with earlier studies, in that equal lump-sum distribution of allowance revenue to state residents generally leads to net benefits to lower-income households. Regional disparities in consumer costs can be moderated by the allocation of allowances among states.

    State-by-state emissions estimates for the study are derived from MIT’s U.S. Regional Energy Policy model, with electricity sector detail of the Renewable Energy Development System model developed by the U.S. National Renewable Energy Laboratory; air quality benefits are estimated using U.S. EPA and other models; and the climate benefits estimate is based on the social cost of carbon, the U.S. federal government’s assessment of the economic damages that would result from emitting one additional ton of carbon dioxide into the atmosphere (currently $51/ton, adjusted for inflation). 

    “In addition to illustrating the economic, health, and climate benefits of a Section 115 implementation, our study underscores the advantages of a policy that imposes a uniform carbon price across all economic sectors,” says John Reilly, former co-director of the MIT Joint Program and a study co-author. “A national carbon price would serve as a major incentive for all sectors to decarbonize.” More

  • Material designed to improve power plant efficiency wins 2022 Water Innovation Prize

    The winner of this year’s Water Innovation Prize is a company commercializing a material that could dramatically improve the efficiency of power plants.

    The company, Mesophase, is developing a more efficient power plant steam condenser that leverages a surface coating developed in the lab of Evelyn Wang, MIT’s Ford Professor of Engineering and the head of the Department of Mechanical Engineering. Such condensers, which convert steam into water, sit at the heart of the energy extraction process in most of the world’s power plants.

    In the winning pitch, company founders said they believe their low-cost, durable coating will improve the heat transfer performance of such condensers.

    “What makes us excited about this technology is that in the condenser field, this is the first time we’ve seen a coating that can last long enough for industrial applications and be made with a high potential to scale up,” said Yajing Zhao SM ’18, who is currently a PhD candidate in mechanical engineering at MIT. “When compared to what’s available in academia and industry, we believe you’ll see record performance in terms of both heat transfer and lifetime.”

    In most power plants, condensers cool steam to turn it into water. The pressure change caused by that conversion creates a vacuum that pulls steam through a turbine. Mesophase’s patent-pending surface coating improves condensers’ ability to transfer heat, thus allowing operators to extract power more efficiently.

    Based on lab tests, the company predicts it can increase power plant output by up to 7 percent using existing infrastructure. Because steam condensers are used around the world, this advance could help increase global electricity production by 500 terawatt hours per year, which is equivalent to the electricity supply for about 1 billion people.
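    As a rough plausibility check on those two figures (my arithmetic, not the company’s):

    ```python
    # Back-of-envelope check: is 500 TWh/year consistent with the
    # electricity supply of roughly 1 billion people?
    extra_generation_twh = 500          # claimed potential increase, TWh/year
    people = 1e9                        # claimed population equivalent

    kwh_per_person_year = extra_generation_twh * 1e9 / people  # 1 TWh = 1e9 kWh
    print(f"{kwh_per_person_year:.0f} kWh per person per year")  # -> 500
    # ~500 kWh/person/year is in the range of per-capita electricity use in
    # many lower-income countries, so the two claims are mutually consistent.
    ```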

    The efficiency gains will also lead to less water use. Water sent from cooling towers is a common means of keeping condensers cool. The company estimates its system could reduce fresh water withdrawals by the equivalent of what is used by 50 million people per year.

    After running pilots, the company believes the new material could be installed in power plants during the regularly scheduled maintenance that occurs every two to five years. The company is also planning to work with existing condenser manufacturers to get to market faster.

    “This all works because a condenser with our technology in it has significantly more attractive economics than what you find in the market today,” says Mesophase’s Michael Gangemi, an MBA candidate at MIT’s Sloan School of Management.

    The company plans to start in the U.S. geothermal space, where Mesophase estimates the market for its technology at about $800 million a year.

    “Much of the geothermal capacity in the U.S. was built in the ’50s and ’60s,” Gangemi said. “That means most of these plants are operating way below capacity, and they invest frequently in technology like ours just to maintain their power output.”

    The company will use the prize money, in part, to begin testing in a real power plant environment.

    “We are excited about these developments, but we know that they are only first steps as we move toward broader energy applications,” Gangemi said.

    MIT’s Water Innovation Prize helps translate water-related research and ideas into businesses and impact. Each year, student-led finalist teams pitch their innovations to students, faculty, investors, and people working in various water-related industries.

    This year’s event, held in a hybrid in-person and virtual format from MIT’s Media Lab, included five finalist teams. The second-place $15,000 award was given to Livingwater Systems, which provides portable rainwater collection and filtration systems to displaced and off-grid communities.

    The company’s product consists of a low-cost mesh that goes on roofs to collect rainwater and a collapsible storage unit that incorporates a sediment filter. The water becomes drinkable after chlorine tablets are added to the storage unit.

    “Perhaps the single greatest attraction of our units is their elegance and simplicity,” Livingwater CEO Joshua Kao said in the company’s pitch. “Anyone can take advantage of their easy, do-it-yourself setup without any preexisting knowhow.”

    The company says the system works on the pitched roofs used in many off-grid settlements, refugee camps, and slums. The entire unit fits inside a backpack.

    The team also notes existing collection systems cost thousands of dollars, require expert installation, and can’t be attached to surfaces like tents. Livingwater is aiming to partner with nongovernmental organizations and nonprofit entities to sell its systems for $60 each, which would represent significant cost savings when compared to alternatives like busing water into settlements.

    The company will be running a paid pilot with the World Food Program this fall.

    “Support from MIT will be crucial for building the core team on the ground,” said Livingwater’s Gabriela Saade, a master’s student in public policy at the University of Chicago. “Let’s begin to realize a new era of water security in Latin America and across the globe.”

    The third-place $10,000 prize went to Algeon Materials, which is creating sustainable and environmentally friendly bioplastics from kelp. Algeon also won the $5,000 audience choice award for its process, which requires no water, fertilizer, or land.

    The other finalists were:

    Flowless, which uses artificial intelligence and an internet of things (IoT) platform to detect leaks and optimize water-related processes to reduce waste;
    Hydrologistics Africa Ltd, a platform to help consumers and utilities manage their water consumption; and
    Watabot, which is developing autonomous, artificial intelligence-powered systems to monitor harmful algae in real time and predict algae activity.

    Each year the Water Innovation Prize, hosted by the MIT Water Club, awards up to $50,000 in grants to teams from around the world. This year’s program received over 50 applications. A group of 20 semifinalist teams spent one month working with mentors to refine their pitches and business plans, and the final field of finalists received another month of mentorship.

    The Water Innovation Prize started in 2015 and has awarded more than $275,000 to 24 different teams to date.

  • How can we reduce the carbon footprint of global computing?

    The voracious appetite for energy of the world’s computers and communications technology presents a clear threat to the globe’s warming climate. That was the blunt assessment from presenters at the intensive two-day Climate Implications of Computing and Communications workshop held on March 3 and 4, hosted by MIT’s Climate and Sustainability Consortium (MCSC), MIT-IBM Watson AI Lab, and the Schwarzman College of Computing.

    The virtual event featured rich discussions and highlighted opportunities for collaboration among an interdisciplinary group of MIT faculty and researchers and industry leaders across multiple sectors — underscoring the power of academia and industry coming together.

    “If we continue with the existing trajectory of compute energy, by 2040, we are supposed to hit the world’s energy production capacity. The increase in compute energy and demand has been increasing at a much faster rate than the world energy production capacity increase,” said Bilge Yildiz, the Breene M. Kerr Professor in the MIT departments of Nuclear Science and Engineering and Materials Science and Engineering, one of the workshop’s 18 presenters. This computing energy projection draws from the Semiconductor Research Corporation’s decadal report.

    To cite just one example: Information and communications technology already accounts for more than 2 percent of global energy demand, on a par with the aviation industry’s emissions from fuel.

    “We are at the very beginning of this data-driven world. We really need to start thinking about this and act now,” said presenter Evgeni Gousev, senior director at Qualcomm.

    Innovative energy-efficiency options

    To that end, the workshop presentations explored a host of energy-efficiency options, including specialized chip design, data center architecture, better algorithms, hardware modifications, and changes in consumer behavior. Industry leaders from AMD, Ericsson, Google, IBM, iRobot, NVIDIA, Qualcomm, Tertill, Texas Instruments, and Verizon outlined their companies’ energy-saving programs, while experts from across MIT provided insight into current research that could yield more efficient computing.

    Panel topics ranged from “Custom hardware for efficient computing” to “Hardware for new architectures” to “Algorithms for efficient computing,” among others.

    Visual representation of the conversation during the workshop session entitled “Energy Efficient Systems.”

    Image: Haley McDevitt


    The goal, said Yildiz, is to improve the energy efficiency associated with computing by more than a million-fold.

    “I think part of the answer of how we make computing much more sustainable has to do with specialized architectures that have a very high level of utilization,” said Darío Gil, IBM senior vice president and director of research, who stressed that solutions should be as “elegant” as possible.

    For example, Gil illustrated an innovative chip design that uses vertical stacking to reduce the distance data has to travel, and thus reduces energy consumption. Surprisingly, more effective use of tape — a traditional medium for primary data storage — combined with specialized hard disk drives (HDDs) can yield dramatic savings in carbon dioxide emissions.

    Gil and presenters Bill Dally, chief scientist and senior vice president of research at NVIDIA; Ahmad Bahai, CTO of Texas Instruments; and others zeroed in on storage. Gil compared data to a floating iceberg: we can have fast access to the “hot data” of the smaller visible part, while the “cold data,” the large underwater mass, represents data that tolerates higher latency. Think about digital photo storage, Gil said. “Honestly, are you really retrieving all of those photographs on a continuous basis?” Storage systems should provide an optimized mix of HDD for hot data and tape for cold data, based on data access patterns.

    Bahai stressed the significant energy savings gained from segmenting standby and full processing. “We need to learn how to do nothing better,” he said. Dally spoke of mimicking the way our brain wakes up from a deep sleep: “We can wake [computers] up much faster, so we don’t need to keep them running at full speed.”

    Several workshop presenters spoke of a focus on “sparsity” — a matrix in which most of the elements are zero — as a way to improve efficiency in neural networks. Or as Dally said, “Never put off till tomorrow, where you could put off forever,” explaining that efficiency is not “getting the most information with the fewest bits. It’s doing the most with the least energy.”

    Holistic and multidisciplinary approaches

    “We need both efficient algorithms and efficient hardware, and sometimes we need to co-design both the algorithm and the hardware for efficient computing,” said Song Han, a panel moderator and assistant professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT.

    Some presenters were optimistic about innovations already underway. According to Ericsson’s research, as much as 15 percent of global carbon emissions can be reduced through the use of existing solutions, noted Mats Pellbäck Scharp, head of sustainability at Ericsson. For example, GPUs are more efficient than CPUs for AI, and the progression from 3G to 5G networks boosts energy savings.

    “5G is the most energy-efficient standard ever,” said Scharp. “We can build 5G without increasing energy consumption.”

    Companies such as Google are optimizing energy use at their data centers through improved design, technology, and renewable energy. “Five of our data centers around the globe are operating near or above 90 percent carbon-free energy,” said Jeff Dean, Google’s senior fellow and senior vice president of Google Research.

    Yet, pointing to a possible slowdown in the doubling of transistors in an integrated circuit — Moore’s Law — Sam Naffziger, AMD senior vice president, corporate fellow, and product technology architect, said, “We need new approaches to meet this compute demand.”

    Naffziger spoke of addressing performance “overkill.” For example, “we’re finding in the gaming and machine learning space we can make use of lower-precision math to deliver an image that looks just as good with 16-bit computations as with 32-bit computations, and instead of legacy 32b math to train AI networks, we can use lower-energy 8b or 16b computations.”
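    The lower-precision idea is easy to demonstrate. Below is a minimal sketch (mine, not from the workshop) that runs the same matrix multiplication in 16-bit and 32-bit floating point and compares the results and memory footprint:

    ```python
    # Compare a matrix product computed in fp16 vs. fp32: the fp16 result
    # stays close to the fp32 reference while using half the bytes per value.
    import numpy as np

    rng = np.random.default_rng(0)
    a32 = rng.standard_normal((512, 512), dtype=np.float32)
    b32 = rng.standard_normal((512, 512), dtype=np.float32)

    c32 = a32 @ b32                                        # 32-bit reference
    c16 = (a32.astype(np.float16) @ b32.astype(np.float16)).astype(np.float32)

    rel_err = np.abs(c32 - c16).max() / np.abs(c32).max()
    print(f"max relative error: {rel_err:.3%}")            # small for this input
    print(f"bytes per matrix: {a32.nbytes} (fp32) vs "
          f"{a32.astype(np.float16).nbytes} (fp16)")       # 2x reduction
    ```

    On hardware with native fp16 or int8 units, the energy saved per operation is the real payoff; the sketch only shows the accuracy and storage side of the trade.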

    Visual representation of the conversation during the workshop session entitled “Wireless, networked, and distributed systems.”

    Image: Haley McDevitt


    Other presenters singled out computing at the edge as a prime energy hog.

    “We also have to change the devices that are put in our customers’ hands,” said Heidi Hemmer, senior vice president of engineering at Verizon. When we think about how we use energy, it is common to jump to data centers — but it really starts at the device itself and the energy the devices use. From there, we can think about home web routers, distributed networks, the data centers, and the hubs. “The devices are actually the least energy-efficient out of that,” concluded Hemmer.

    Some presenters had different perspectives. Several called for developing dedicated silicon chipsets for efficiency. However, panel moderator Muriel Médard, the Cecil H. Green Professor in EECS, described research at MIT, Boston University, and Maynooth University on the GRAND (Guessing Random Additive Noise Decoding) chip, saying, “rather than having obsolescence of chips as the new codes come in and in different standards, you can use one chip for all codes.”

    Whatever the chip or new algorithm, Helen Greiner, CEO of Tertill (a weeding robot) and co-founder of iRobot, emphasized that to get products to market, “we have to learn to go away from wanting to get the absolute latest and greatest, the most advanced processor that usually is more expensive.” She added, “I like to say robot demos are a dime a dozen, but robot products are very infrequent.”

    Greiner emphasized that consumers can play a role in pushing for more energy-efficient products — just as drivers began to demand electric cars.

    Dean also sees an environmental role for the end user. “We have enabled our cloud customers to select which cloud region they want to run their computation in, and they can decide how important it is that they have a low carbon footprint,” he said, also citing other interfaces that might allow consumers to decide which flights are more efficient or what impact installing a solar panel on their home would have.

    However, Scharp said, “Prolonging the life of your smartphone or tablet is really the best climate action you can do if you want to reduce your digital carbon footprint.”

    Facing increasing demands

    Despite their optimism, the presenters acknowledged that the world faces increasing compute demand from machine learning, AI, gaming, and, especially, blockchain. Panel moderator Vivienne Sze, associate professor in EECS, noted the conundrum.

    “We can do a great job in making computing and communication really efficient. But there is this tendency that once things are very efficient, people use more of them, and this might result in an overall increase in the usage of these technologies, which will then increase our overall carbon footprint,” Sze said.

    Presenters saw great potential in academic/industry partnerships, particularly from research efforts on the academic side. “By combining these two forces together, you can really amplify the impact,” concluded Gousev.

    Presenters at the Climate Implications of Computing and Communications workshop also included Joel Emer, professor of the practice in EECS at MIT; David Perreault, the Joseph F. and Nancy P. Keithley Professor of EECS at MIT; Jesús del Alamo, MIT Donner Professor and professor of electrical engineering in EECS at MIT; Heike Riel, IBM Fellow and head of science and technology at IBM; and Takashi Ando, principal research staff member at IBM Research. The recorded workshop sessions are available on YouTube.

  • Machine learning, harnessed to extreme computing, aids fusion energy development

    MIT research scientists Pablo Rodriguez-Fernandez and Nathan Howard have just completed one of the most demanding calculations in fusion science — predicting the temperature and density profiles of a magnetically confined plasma via first-principles simulation of plasma turbulence. Solving this problem by brute force is beyond the capabilities of even the most advanced supercomputers. Instead, the researchers used an optimization methodology developed for machine learning to dramatically reduce the CPU time required while maintaining the accuracy of the solution.

    Fusion energy

    Fusion offers the promise of unlimited, carbon-free energy through the same physical process that powers the sun and the stars. It requires heating the fuel to temperatures above 100 million degrees, well above the point where the electrons are stripped from their atoms, creating a form of matter called plasma. On Earth, researchers use strong magnetic fields to isolate and insulate the hot plasma from ordinary matter. The stronger the magnetic field, the better the quality of the insulation that it provides.

    Rodriguez-Fernandez and Howard have focused on predicting the performance expected in the SPARC device, a compact, high-magnetic-field fusion experiment, currently under construction by the MIT spin-out company Commonwealth Fusion Systems (CFS) and researchers from MIT’s Plasma Science and Fusion Center. While the calculation required an extraordinary amount of computer time, over 8 million CPU-hours, what was remarkable was not how much time was used, but how little, given the daunting computational challenge.

    The computational challenge of fusion energy

    Turbulence, which is the mechanism for most of the heat loss in a confined plasma, is one of the science’s grand challenges and the greatest problem remaining in classical physics. The equations that govern fusion plasmas are well known, but analytic solutions are not possible in the regimes of interest, where nonlinearities are important and solutions encompass an enormous range of spatial and temporal scales. Scientists resort to solving the equations by numerical simulation on computers. It is no accident that fusion researchers have been pioneers in computational physics for the last 50 years.

    One of the fundamental problems for researchers is reliably predicting plasma temperature and density given only the magnetic field configuration and the externally applied input power. In confinement devices like SPARC, the external power and the heat input from the fusion process are lost through turbulence in the plasma. The turbulence itself is driven by the difference in the extremely high temperature of the plasma core and the relatively cool temperatures of the plasma edge (merely a few million degrees). Predicting the performance of a self-heated fusion plasma therefore requires a calculation of the power balance between the fusion power input and the losses due to turbulence.
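    In symbols, the steady-state power balance being solved has the schematic form below (a standard statement of the physics, not notation taken from the paper):

    ```latex
    % Steady state: self-heating from fusion alphas plus external input
    % must balance the turbulent losses set by the profiles and gradients.
    P_{\alpha}\!\left(T, n\right) \;+\; P_{\mathrm{aux}}
    \;=\;
    P_{\mathrm{turb}}\!\left(T,\, n,\, \nabla T,\, \nabla n\right)
    ```

    Because the alpha-heating term itself depends on the temperature and density profiles, the profiles must be adjusted until this balance holds everywhere — the self-consistency requirement described next.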

    These calculations generally start by assuming plasma temperature and density profiles at a particular location, then computing the heat transported locally by turbulence. However, a useful prediction requires a self-consistent calculation of the profiles across the entire plasma, which includes both the heat input and turbulent losses. Directly solving this problem is beyond the capabilities of any existing computer, so researchers have developed an approach that stitches the profiles together from a series of demanding but tractable local calculations. This method works, but since the heat and particle fluxes depend on multiple parameters, the calculations can be very slow to converge.

    However, techniques emerging from the field of machine learning are well suited to optimizing just such a calculation. Starting with a set of computationally intensive local calculations run with the full-physics, first-principles CGYRO code (provided by a team from General Atomics led by Jeff Candy), Rodriguez-Fernandez and Howard fit a surrogate mathematical model, which was used to explore and optimize a search within the parameter space. The results of the optimization were compared to the exact calculations at each optimum point, and the system was iterated to a desired level of accuracy. The researchers estimate that the technique reduced the number of runs of the CGYRO code by a factor of four.
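    In outline, the loop looks like the following sketch (a generic illustration of surrogate-assisted optimization; the function and parameter names are hypothetical stand-ins, and the real work couples the CGYRO turbulence code to far richer surrogate models):

    ```python
    # Surrogate-assisted optimization: fit a cheap model to a few expensive
    # simulations, search the model, verify the optimum with one more
    # expensive run, and repeat until converged.
    import numpy as np
    from scipy.optimize import minimize

    def expensive_flux_mismatch(x):
        """Toy stand-in for a first-principles turbulence run: squared
        mismatch between turbulent losses and input power vs. profile
        parameters x (the real objective comes from codes like CGYRO)."""
        return (x[0] - 1.3) ** 2 + 2.0 * (x[1] + 0.4) ** 2

    def fit_surrogate(X, y):
        """Least-squares quadratic surrogate of the expensive objective."""
        def features(x):
            x = np.atleast_2d(x)
            return np.column_stack([np.ones(len(x)), x, x**2, x[:, :1] * x[:, 1:]])
        coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
        return lambda x: (features(x) @ coef).item()

    rng = np.random.default_rng(1)
    X = rng.uniform(-2, 2, size=(8, 2))                 # seed evaluations
    y = np.array([expensive_flux_mismatch(x) for x in X])

    for _ in range(5):
        surrogate = fit_surrogate(X, y)
        best = minimize(surrogate, x0=X[np.argmin(y)])  # cheap inner search
        y_new = expensive_flux_mismatch(best.x)         # one expensive check
        X, y = np.vstack([X, best.x]), np.append(y, y_new)
        if y_new < 1e-6:                                # accuracy target met
            break

    print(f"optimum ~ {X[-1].round(3)} after {len(y)} expensive evaluations")
    ```

    The savings come from spending cheap surrogate evaluations to decide where the next expensive simulation is worth running, rather than sweeping the parameter space by brute force.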

    New approach increases confidence in predictions

    This work, described in a recent publication in the journal Nuclear Fusion, is the highest-fidelity calculation ever made of the core of a fusion plasma. It refines and confirms predictions made with less demanding models. Professor Jonathan Citrin, of the Eindhoven University of Technology and leader of the fusion modeling group for DIFFER, the Dutch Institute for Fundamental Energy Research, commented: “The work significantly accelerates our capabilities in more routinely performing ultra-high-fidelity tokamak scenario prediction. This algorithm can help provide the ultimate validation test of machine design or scenario optimization carried out with faster, more reduced modeling, greatly increasing our confidence in the outcomes.”

    In addition to increasing confidence in the fusion performance of the SPARC experiment, this technique provides a roadmap to check and calibrate reduced physics models, which run with a small fraction of the computational power. Such models, cross-checked against the results generated from turbulence simulations, will provide a reliable prediction before each SPARC discharge, helping to guide experimental campaigns and improving the scientific exploitation of the device. It can also be used to tweak and improve even simple data-driven models, which run extremely quickly, allowing researchers to sift through enormous parameter ranges to narrow down possible experiments or possible future machines.

    The research was funded by CFS, with computational support from the National Energy Research Scientific Computing Center, a U.S. Department of Energy Office of Science User Facility.