More stories

  • MIT’s work with Idaho National Laboratory advances America’s nuclear industry

    At the center of nuclear reactors across the United States, a new type of chromium-coated fuel is being used to make the reactors more efficient and more resistant to accidents. The fuel is one of many innovations sprung from the collaboration between researchers at MIT and the Idaho National Laboratory (INL) — a relationship that has altered the trajectory of the country’s nuclear industry.

    Amid renewed excitement around nuclear energy in America, MIT’s research community is working to further develop next-generation fuels, accelerate the deployment of small modular reactors (SMRs), and enable the first nuclear reactor in space.

    Researchers at MIT and INL have worked closely for decades, and the collaboration takes many forms, including joint research efforts, student and postdoc internships, and a standing agreement that lets INL employees spend extended periods on MIT’s campus researching and teaching classes. MIT is also a founding member of the Battelle Energy Alliance, which has managed the Idaho National Laboratory for the Department of Energy since 2005.

    The collaboration gives MIT’s community a chance to work on the biggest problems facing America’s nuclear industry while bolstering INL’s research infrastructure.

    “The Idaho National Laboratory is the lead lab for nuclear energy technology in the United States today — that’s why it’s essential that MIT works hand in hand with INL,” says Jacopo Buongiorno, the Battelle Energy Alliance Professor in Nuclear Science and Engineering at MIT. “Countless MIT students and postdocs have interned at INL over the years, and a memorandum of understanding that strengthened the collaboration between MIT and INL in 2019 has been extended twice.”

    Ian Waitz, MIT’s vice president for research, adds, “The strong collaborative history between MIT and the Idaho National Laboratory enables us to jointly contribute practical technologies to enable the growth of clean, safe nuclear energy. It’s a clear example of how rigorous collaboration across sectors, and among the nation’s top research facilities, can advance U.S. economic prosperity, health, and well-being.”

    Research with impact

    Much of MIT’s joint research with INL involves tests and simulations of new nuclear materials, fuels, and instrumentation. One of the largest collaborations was part of a global push for more accident-tolerant fuels in the wake of the nuclear accident that followed the 2011 earthquake and tsunami in Fukushima, Japan.

    In a series of studies involving INL and members of the nuclear energy industry, MIT researchers helped identify and evaluate alloy materials that could be deployed in the near term to not only bolster safety but also offer higher densities of fuel.

    “These new alloys can withstand much more challenging conditions during abnormal occurrences without reacting chemically with steam, which could result in hydrogen explosions during accidents,” explains Buongiorno, who is also the director of science and technology at MIT’s Nuclear Reactor Laboratory and the director of MIT’s Center for Advanced Nuclear Energy Systems. “The fuels can take much more abuse without breaking apart in the reactor, resulting in a higher safety margin.”

    The fuels tested at MIT were eventually adopted by power plants across the U.S., starting with the Byron Clean Energy Center in Ogle County, Illinois.

    “We’re also developing new materials, fuels, and instrumentation,” Buongiorno says. “People don’t just come to MIT and say, ‘I have this idea, evaluate it for me.’ We collaborate with industry and national labs to develop the new ideas together, and then we put them to the test, reproducing the environment in which these materials and fuels would operate in commercial power reactors. That capability is quite unique.”

    Another major collaboration was led by Koroush Shirvan, MIT’s Atlantic Richfield Career Development Professor in Energy Studies. Shirvan’s team analyzed the costs associated with different reactor designs, eventually developing an open-source tool to help industry leaders evaluate the feasibility of different approaches.

    “The reason we’re not building a single nuclear reactor in the U.S. right now is cost and financial risk,” Shirvan says. “The projects have gone over budget by a factor of two and their schedule has lengthened by a factor of 1.5, so we’ve been doing a lot of work assessing the risk drivers. There’s also a lot of different types of reactors proposed, so we’ve looked at their cost potential as well and how those costs change if you can mass manufacture them.”

    Other INL-supported research of Shirvan’s involves exploring new manufacturing methods for nuclear fuels and testing materials for use in a nuclear reactor on the surface of the moon.

    “You want materials that are lightweight for these nuclear reactors because you have to send them to space, but there isn’t much data around how those light materials perform in nuclear environments,” Shirvan says.

    People and progress

    Every summer, MIT students at every level travel to Idaho to conduct research in INL labs as interns.

    “It’s an example of our students getting access to cutting-edge research facilities,” Shirvan says.

    There are also several joint research appointments between the institutions. One such appointment is held by Sacit Cetiner, a distinguished scientist at INL who also currently runs the MIT and INL Joint Center for Reactor Instrumentation and Sensor Physics (CRISP) at MIT’s Nuclear Reactor Laboratory.

    CRISP focuses its research on key technology areas in the field of instrumentation and controls, which have long weighed on the bottom line of nuclear power generation.

    “For the current light-water reactor fleet, operations and maintenance expenditures constitute a sizeable fraction of unit electricity generation cost,” says Cetiner. “In order to make advanced reactors economically competitive, it’s much more reasonable to address anticipated operational issues during the design phase. One such critical technology area is remote and autonomous operations. Working directly with INL, which manages the projects for the design and testing of several advanced reactors under a number of federal programs, gives our students, faculty, and researchers opportunities to make a real impact.”

    The sharing of experts helps strengthen MIT and the nation’s nuclear workforce overall.

    “MIT has a crucial role to play in advancing the country’s nuclear industry, whether that’s testing and developing new technologies or assessing the economic feasibility of new nuclear designs,” Buongiorno says.
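    Shirvan’s cost figures invite a quick back-of-the-envelope illustration. The sketch below is not his group’s open-source tool; it simply applies the overrun factors quoted above and a textbook learning-curve assumption for mass manufacturing, with every baseline number chosen purely for illustration.

    ```python
    # Illustrative only: applies the overrun factors quoted in the story and a
    # textbook learning-curve model; baseline values are hypothetical, not from
    # Shirvan's open-source tool.
    import math

    baseline_cost_usd_per_kw = 6000      # assumed overnight capital cost
    baseline_schedule_years = 6          # assumed construction schedule
    cost_overrun_factor = 2.0            # "over budget by a factor of two"
    schedule_overrun_factor = 1.5        # "schedule has lengthened by a factor of 1.5"

    realized_cost = baseline_cost_usd_per_kw * cost_overrun_factor
    realized_schedule = baseline_schedule_years * schedule_overrun_factor
    print(f"First-of-a-kind: ~${realized_cost:,.0f}/kW over ~{realized_schedule:.1f} years")

    # Learning curve: each doubling of cumulative units built reduces unit cost
    # by (1 - learning_rate). A 10 percent learning rate is a common assumption
    # for factory-built small modular reactors, used here only as an example.
    learning_rate = 0.10

    def unit_cost(n_units_built, first_unit_cost):
        """Cost of the nth unit under a power-law learning curve."""
        b = math.log2(1 - learning_rate)
        return first_unit_cost * n_units_built ** b

    for n in (1, 10, 50):
        print(f"Unit {n}: ~${unit_cost(n, realized_cost):,.0f}/kW")
    ```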

  • Confronting the AI/energy conundrum

    The explosive growth of AI-powered computing centers is creating an unprecedented surge in electricity demand that threatens to overwhelm power grids and derail climate goals. At the same time, artificial intelligence technologies could revolutionize energy systems, accelerating the transition to clean power.

    “We’re at a cusp of potentially gigantic change throughout the economy,” said William H. Green, director of the MIT Energy Initiative (MITEI) and Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering, at MITEI’s Spring Symposium, “AI and energy: Peril and promise,” held on May 13. The event brought together experts from industry, academia, and government to explore solutions to what Green described as both “local problems with electric supply and meeting our clean energy targets” while seeking to “reap the benefits of AI without some of the harms.” Data center energy demand and the potential benefits of AI to the energy transition are research priorities for MITEI.

    AI’s startling energy demands

    From the start, the symposium highlighted sobering statistics about AI’s appetite for electricity. After decades of flat electricity demand in the United States, computing centers now consume approximately 4 percent of the nation’s electricity. Although there is great uncertainty, some projections suggest this demand could rise to 12-15 percent by 2030, largely driven by artificial intelligence applications.

    Vijay Gadepally, senior scientist at MIT’s Lincoln Laboratory, emphasized the scale of AI’s consumption. “The power required for sustaining some of these large models is doubling almost every three months,” he noted. “A single ChatGPT conversation uses as much electricity as charging your phone, and generating an image consumes about a bottle of water for cooling.”

    Facilities requiring 50 to 100 megawatts of power are emerging rapidly across the United States and globally, driven by both casual use and institutional research relying on large language models such as ChatGPT and Gemini. Gadepally cited congressional testimony by Sam Altman, CEO of OpenAI, highlighting how fundamental this relationship has become: “The cost of intelligence, the cost of AI, will converge to the cost of energy.”

    “The energy demands of AI are a significant challenge, but we also have an opportunity to harness these vast computational capabilities to contribute to climate change solutions,” said Evelyn Wang, MIT vice president for energy and climate and the former director of the Advanced Research Projects Agency-Energy (ARPA-E) at the U.S. Department of Energy.

    Wang also noted that innovations developed for AI and data centers — such as efficiency, cooling technologies, and clean-power solutions — could have broad applications beyond computing facilities themselves.

    Strategies for clean energy solutions

    The symposium explored multiple pathways to address the AI-energy challenge. Some panelists presented models suggesting that while artificial intelligence may increase emissions in the short term, its optimization capabilities could enable substantial emissions reductions after 2030 through more efficient power systems and accelerated clean technology development.

    Research shows regional variations in the cost of powering computing centers with clean electricity, according to Emre Gençer, co-founder and CEO of Sesame Sustainability and former MITEI principal research scientist. Gençer’s analysis revealed that the central United States offers considerably lower costs due to complementary solar and wind resources. However, achieving zero-emission power would require massive battery deployments — five to 10 times more than moderate carbon scenarios — driving costs two to three times higher.

    “If we want to do zero emissions with reliable power, we need technologies other than renewables and batteries, which will be too expensive,” Gençer said. He pointed to “long-duration storage technologies, small modular reactors, geothermal, or hybrid approaches” as necessary complements.

    Because of data center energy demand, there is renewed interest in nuclear power, noted Kathryn Biegel, manager of R&D and corporate strategy at Constellation Energy, adding that her company is restarting the reactor at the former Three Mile Island site, now called the “Crane Clean Energy Center,” to meet this demand. “The data center space has become a major, major priority for Constellation,” she said, emphasizing how their needs for both reliability and carbon-free electricity are reshaping the power industry.

    Can AI accelerate the energy transition?

    Artificial intelligence could dramatically improve power systems, according to Priya Donti, assistant professor and the Silverman Family Career Development Professor in MIT’s Department of Electrical Engineering and Computer Science and the Laboratory for Information and Decision Systems. She showcased how AI can accelerate power grid optimization by embedding physics-based constraints into neural networks, potentially solving complex power flow problems at “10 times, or even greater, speed compared to your traditional models.”

    AI is already reducing carbon emissions, according to examples shared by Antonia Gawel, global director of sustainability and partnerships at Google. Google Maps’ fuel-efficient routing feature has “helped to prevent more than 2.9 million metric tons of GHG [greenhouse gas] emissions since launch, which is the equivalent of taking 650,000 fuel-based cars off the road for a year,” she said. Another Google research project uses artificial intelligence to help pilots avoid creating contrails, which represent about 1 percent of global warming impact.

    AI’s potential to speed materials discovery for power applications was highlighted by Rafael Gómez-Bombarelli, the Paul M. Cook Career Development Associate Professor in the MIT Department of Materials Science and Engineering. “AI-supervised models can be trained to go from structure to property,” he noted, enabling the development of materials crucial for both computing and efficiency.

    Securing growth with sustainability

    Throughout the symposium, participants grappled with balancing rapid AI deployment against environmental impacts. While AI training receives most attention, Dustin Demetriou, senior technical staff member in sustainability and data center innovation at IBM, quoted a World Economic Forum article that suggested that “80 percent of the environmental footprint is estimated to be due to inferencing.” Demetriou emphasized the need for efficiency across all artificial intelligence applications.

    Jevons’ paradox, where “efficiency gains tend to increase overall resource consumption rather than decrease it,” is another factor to consider, cautioned Emma Strubell, the Raj Reddy Assistant Professor in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. Strubell advocated for viewing computing center electricity as a limited resource requiring thoughtful allocation across different applications.

    Several presenters discussed novel approaches for integrating renewable sources with existing grid infrastructure, including potential hybrid solutions that combine clean installations with existing natural gas plants that have valuable grid connections already in place. These approaches could provide substantial clean capacity across the United States at reasonable costs while minimizing reliability impacts.

    Navigating the AI-energy paradox

    The symposium highlighted MIT’s central role in developing solutions to the AI-electricity challenge. Green spoke of a new MITEI program on computing centers, power, and computation that will operate alongside the comprehensive spread of MIT Climate Project research. “We’re going to try to tackle a very complicated problem all the way from the power sources through the actual algorithms that deliver value to the customers — in a way that’s going to be acceptable to all the stakeholders and really meet all the needs,” Green said.

    Participants in the symposium were polled about priorities for MIT’s research by Randall Field, MITEI director of research. The real-time results ranked “data center and grid integration issues” as the top priority, followed by “AI for accelerated discovery of advanced materials for energy.”

    In addition, attendees revealed that most view AI’s potential regarding power as a “promise” rather than a “peril,” although a considerable portion remain uncertain about the ultimate impact. When asked about priorities in power supply for computing facilities, half of the respondents selected carbon intensity as their top concern, with reliability and cost following.
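    A minimal sketch of the arithmetic behind two figures cited at the symposium: power for some large models doubling roughly every three months, and data centers growing from about 4 percent toward 12-15 percent of U.S. electricity by 2030. The 10 MW starting load and the 2024 baseline year are assumptions for illustration, and the literal extrapolation of a three-month doubling time is shown only to make the growth rate concrete; real deployments would be constrained long before it plays out.

    ```python
    # Quick, purely illustrative arithmetic on two figures from the symposium.

    def doubling_growth(initial, doubling_time_months, horizon_months):
        """Size of a quantity that doubles every `doubling_time_months`."""
        return initial * 2 ** (horizon_months / doubling_time_months)

    # If the power for a class of large models doubles every 3 months,
    # an assumed 10 MW load scales dramatically on paper within two years.
    for months in (3, 12, 24):
        mw = doubling_growth(10, doubling_time_months=3, horizon_months=months)
        print(f"After {months:2d} months: ~{mw:,.0f} MW (unconstrained extrapolation)")

    # Implied compound annual growth if data centers go from 4% to 13.5% (midpoint
    # of 12-15%) of roughly flat total U.S. demand between an assumed 2024 and 2030.
    years = 2030 - 2024
    cagr = (13.5 / 4.0) ** (1 / years) - 1
    print(f"Implied annual growth in data center share: ~{cagr:.1%}")
    ```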

  • Q&A: The climate impact of generative AI

    Vijay Gadepally, a senior staff member at MIT Lincoln Laboratory, leads a number of projects at the Lincoln Laboratory Supercomputing Center (LLSC) to make computing platforms, and the artificial intelligence systems that run on them, more efficient. Here, Gadepally discusses the increasing use of generative AI in everyday tools, its hidden environmental impact, and some of the ways that Lincoln Laboratory and the greater AI community can reduce emissions for a greener future.

    Q: What trends are you seeing in terms of how generative AI is being used in computing?

    A: Generative AI uses machine learning (ML) to create new content, like images and text, based on data that is inputted into the ML system. At the LLSC we design and build some of the largest academic computing platforms in the world, and over the past few years we’ve seen an explosion in the number of projects that need access to high-performance computing for generative AI. We’re also seeing how generative AI is changing all sorts of fields and domains — for example, ChatGPT is already influencing the classroom and the workplace faster than regulations can seem to keep up.

    We can imagine all sorts of uses for generative AI within the next decade or so, like powering highly capable virtual assistants, developing new drugs and materials, and even improving our understanding of basic science. We can’t predict everything that generative AI will be used for, but I can certainly say that with more and more complex algorithms, their compute, energy, and climate impact will continue to grow very quickly.

    Q: What strategies is the LLSC using to mitigate this climate impact?

    A: We’re always looking for ways to make computing more efficient, as doing so helps our data center make the most of its resources and allows our scientific colleagues to push their fields forward in as efficient a manner as possible.

    As one example, we’ve been reducing the amount of power our hardware consumes by making simple changes, similar to dimming or turning off lights when you leave a room. In one experiment, we reduced the energy consumption of a group of graphics processing units by 20 percent to 30 percent, with minimal impact on their performance, by enforcing a power cap. This technique also lowered the hardware operating temperatures, making the GPUs easier to cool and longer lasting.

    Another strategy is changing our behavior to be more climate-aware. At home, some of us might choose to use renewable energy sources or intelligent scheduling. We are using similar techniques at the LLSC — such as training AI models when temperatures are cooler, or when local grid energy demand is low.

    We also realized that a lot of the energy spent on computing is often wasted, like how a water leak increases your bill but without any benefits to your home. We developed some new techniques that allow us to monitor computing workloads as they are running and then terminate those that are unlikely to yield good results. Surprisingly, in a number of cases we found that the majority of computations could be terminated early without compromising the end result.

    Q: What’s an example of a project you’ve done that reduces the energy use of a generative AI program?

    A: We recently built a climate-aware computer vision tool. Computer vision is a domain that’s focused on applying AI to images; so, differentiating between cats and dogs in an image, correctly labeling objects within an image, or looking for components of interest within an image.

    In our tool, we included real-time carbon telemetry, which produces information about how much carbon is being emitted by our local grid as a model is running. Depending on this information, our system will automatically switch to a more energy-efficient version of the model, which typically has fewer parameters, in times of high carbon intensity, or a much higher-fidelity version of the model in times of low carbon intensity.

    By doing this, we saw a nearly 80 percent reduction in carbon emissions over a one- to two-day period. We recently extended this idea to other generative AI tasks such as text summarization and found the same results. Interestingly, the performance sometimes improved after using our technique!

    Q: What can we do as consumers of generative AI to help mitigate its climate impact?

    A: As consumers, we can ask our AI providers to offer greater transparency. For example, on Google Flights, I can see a variety of options that indicate a specific flight’s carbon footprint. We should be getting similar kinds of measurements from generative AI tools so that we can make a conscious decision on which product or platform to use based on our priorities.

    We can also make an effort to be more educated on generative AI emissions in general. Many of us are familiar with vehicle emissions, and it can help to talk about generative AI emissions in comparative terms. People may be surprised to know, for example, that one image-generation task is roughly equivalent to driving four miles in a gas car, or that it takes the same amount of energy to charge an electric car as it does to generate about 1,500 text summarizations.

    There are many cases where customers would be happy to make a trade-off if they knew the trade-off’s impact.

    Q: What do you see for the future?

    A: Mitigating the climate impact of generative AI is one of those problems that people all over the world are working on, and with a similar goal. We’re doing a lot of work here at Lincoln Laboratory, but it’s only scratching the surface. In the long term, data centers, AI developers, and energy grids will need to work together to provide “energy audits” to uncover other unique ways that we can improve computing efficiencies. We need more partnerships and more collaboration in order to forge ahead.

    If you’re interested in learning more, or collaborating with Lincoln Laboratory on these efforts, please contact Vijay Gadepally.


    Video: MIT Lincoln Laboratory
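    The model-switching behavior Gadepally describes can be pictured in a few lines. This is only a sketch in the spirit of the LLSC tool: the telemetry function, threshold, and model names are hypothetical placeholders, not the laboratory’s actual system.

    ```python
    # Hedged illustration of carbon-aware model selection, in the spirit of the
    # LLSC tool described above. The telemetry source, threshold, and model
    # names are invented placeholders.
    import random
    import time

    CARBON_THRESHOLD_G_PER_KWH = 400  # assumed cutoff between "high" and "low" carbon intensity

    def read_grid_carbon_intensity() -> float:
        """Stand-in for real-time carbon telemetry from the local grid (gCO2/kWh)."""
        return random.uniform(200, 700)  # replace with a real telemetry feed

    def pick_model(carbon_intensity: float) -> str:
        """Choose a smaller model when the grid is dirty, a larger one when it is clean."""
        if carbon_intensity > CARBON_THRESHOLD_G_PER_KWH:
            return "vision-model-small"   # fewer parameters, less energy per inference
        return "vision-model-large"       # higher fidelity when carbon intensity is low

    if __name__ == "__main__":
        for _ in range(5):
            intensity = read_grid_carbon_intensity()
            model = pick_model(intensity)
            print(f"grid: {intensity:5.0f} gCO2/kWh -> running {model}")
            time.sleep(1)  # a real system would poll on the telemetry's update interval
    ```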

  • The role of modeling in the energy transition

    Joseph F. DeCarolis, administrator for the U.S. Energy Information Administration (EIA), has one overarching piece of advice for anyone poring over long-term energy projections.

    “Whatever you do, don’t start believing the numbers,” DeCarolis said at the MIT Energy Initiative (MITEI) Fall Colloquium. “There’s a tendency when you sit in front of the computer and you’re watching the model spit out numbers at you … that you’ll really start to believe those numbers with high precision. Don’t fall for it. Always remain skeptical.”

    This event was part of MITEI’s new speaker series, MITEI Presents: Advancing the Energy Transition, which connects the MIT community with the energy experts and leaders who are working on scientific, technological, and policy solutions that are urgently needed to accelerate the energy transition.

    The point of DeCarolis’s talk, titled “Stay humble and prepare for surprises: Lessons for the energy transition,” was not that energy models are unimportant. On the contrary, DeCarolis said, energy models give stakeholders a framework that allows them to consider present-day decisions in the context of potential future scenarios. However, he repeatedly stressed the importance of accounting for uncertainty, and not treating these projections as “crystal balls.”

    “We can use models to help inform decision strategies,” DeCarolis said. “We know there’s a bunch of future uncertainty. We don’t know what’s going to happen, but we can incorporate that uncertainty into our model and help come up with a path forward.”

    Dialogue, not forecasts

    EIA is the statistical and analytic agency within the U.S. Department of Energy, with a mission to collect, analyze, and disseminate independent and impartial energy information to help stakeholders make better-informed decisions. Although EIA analyzes the impacts of energy policies, the agency does not make or advise on policy itself. DeCarolis, who was previously professor and University Faculty Scholar in the Department of Civil, Construction, and Environmental Engineering at North Carolina State University, noted that EIA does not need to seek approval from anyone else in the federal government before publishing its data and reports. “That independence is very important to us, because it means that we can focus on doing our work and providing the best information we possibly can,” he said.

    Among the many reports produced by EIA is the agency’s Annual Energy Outlook (AEO), which projects U.S. energy production, consumption, and prices. Every other year, the agency also produces the AEO Retrospective, which shows the relationship between past projections and actual energy indicators.

    “The first question you might ask is, ‘Should we use these models to produce a forecast?’” DeCarolis said. “The answer for me to that question is: No, we should not do that. When models are used to produce forecasts, the results are generally pretty dismal.”

    DeCarolis pointed to wildly inaccurate past projections about the proliferation of nuclear energy in the United States as an example of the problems inherent in forecasting. However, he noted, there are “still lots of really valuable uses” for energy models. Rather than using them to predict future energy consumption and prices, DeCarolis said, stakeholders should use models to inform their own thinking.

    “[Models] can simply be an aid in helping us think and hypothesize about the future of energy,” DeCarolis said. “They can help us create a dialogue among different stakeholders on complex issues. If we’re thinking about something like the energy transition, and we want to start a dialogue, there has to be some basis for that dialogue. If you have a systematic representation of the energy system that you can advance into the future, we can start to have a debate about the model and what it means. We can also identify key sources of uncertainty and knowledge gaps.”

    Modeling uncertainty

    The key to working with energy models is not to try to eliminate uncertainty, DeCarolis said, but rather to account for it. One way to better understand uncertainty, he noted, is to look at past projections, and consider how they ended up differing from real-world results. DeCarolis pointed to two “surprises” over the past several decades: the exponential growth of shale oil and natural gas production (which had the impact of limiting coal’s share of the energy market and therefore reducing carbon emissions), as well as the rapid rise in wind and solar energy. In both cases, market conditions changed far more quickly than energy modelers anticipated, leading to inaccurate projections.

    “For all those reasons, we ended up with [projected] CO2 [carbon dioxide] emissions that were quite high compared to actual,” DeCarolis said. “We’re a statistical agency, so we’re really looking carefully at the data, but it can take some time to identify the signal through the noise.”

    Although EIA does not produce forecasts in the AEO, people have sometimes interpreted the reference case in the agency’s reports as predictions. In an effort to illustrate the unpredictability of future outcomes in the 2023 edition of the AEO, the agency added “cones of uncertainty” to its projection of energy-related carbon dioxide emissions, with ranges of outcomes based on the difference between past projections and actual results. One cone captures 50 percent of historical projection errors, while another represents 95 percent of historical errors (a construction sketched in simplified form at the end of this story).

    “They capture whatever bias there is in our projections,” DeCarolis said of the uncertainty cones. “It’s being captured because we’re comparing actual [emissions] to projections. The weakness of this, though, is: who’s to say that those historical projection errors apply to the future? We don’t know that, but I still think that there’s something useful to be learned from this exercise.”

    The future of energy modeling

    Looking ahead, DeCarolis said, there is a “laundry list of things that keep me up at night as a modeler.” These include the impacts of climate change; how those impacts will affect demand for renewable energy; how quickly industry and government will overcome obstacles to building out clean energy infrastructure and supply chains; technological innovation; and increased energy demand from data centers running compute-intensive workloads.

    “What about enhanced geothermal? Fusion? Space-based solar power?” DeCarolis asked. “Should those be in the model? What sorts of technology breakthroughs are we missing? And then, of course, there are the unknown unknowns — the things that I can’t conceive of to put on this list, but are probably going to happen.”

    In addition to capturing the fullest range of outcomes, DeCarolis said, EIA wants to be flexible, nimble, transparent, and accessible — creating reports that can easily incorporate new model features and produce timely analyses. To that end, the agency has undertaken two new initiatives. First, the 2025 AEO will use a revamped version of the National Energy Modeling System that includes modules for hydrogen production and pricing, carbon management, and hydrocarbon supply. Second, an effort called Project BlueSky is aiming to develop the agency’s next-generation energy system model, which DeCarolis said will be modular and open source.

    DeCarolis noted that the energy system is both highly complex and rapidly evolving, and he warned that “mental shortcuts” and the fear of being wrong can lead modelers to ignore possible future developments. “We have to remain humble and intellectually honest about what we know,” DeCarolis said. “That way, we can provide decision-makers with an honest assessment of what we think could happen in the future.”
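    The “cones of uncertainty” DeCarolis describes can be approximated in a few lines: gather historical projection errors, take the bands that cover 50 percent and 95 percent of them, and apply those bands around the current projection. All numbers below are invented for illustration; they are not EIA data.

    ```python
    # Illustrative construction of uncertainty cones from historical projection
    # errors, in the spirit of the 2023 AEO approach. All values are made up.
    import numpy as np

    # Hypothetical historical errors: (actual - projected) / projected.
    # Each row is one past projection vintage; each column is years ahead (1..5).
    historical_errors = np.array([
        [0.01, 0.03, 0.06, 0.10, 0.15],
        [-0.02, -0.05, -0.09, -0.12, -0.18],
        [0.00, 0.02, 0.05, 0.08, 0.11],
        [-0.01, -0.04, -0.07, -0.11, -0.14],
        [0.02, 0.04, 0.07, 0.09, 0.13],
    ])

    current_projection = np.array([4.8, 4.7, 4.6, 4.6, 4.5])  # e.g., Gt CO2, invented

    for coverage in (0.50, 0.95):
        lo_q, hi_q = (1 - coverage) / 2, 1 - (1 - coverage) / 2
        lo = np.quantile(historical_errors, lo_q, axis=0)   # lower error band per horizon
        hi = np.quantile(historical_errors, hi_q, axis=0)   # upper error band per horizon
        cone_low = current_projection * (1 + lo)
        cone_high = current_projection * (1 + hi)
        print(f"{int(coverage*100)}% cone, year 5: "
              f"{cone_low[-1]:.2f} to {cone_high[-1]:.2f}")
    ```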

  • Solar-powered desalination system requires no extra batteries

    MIT engineers have built a new desalination system that runs with the rhythms of the sun.

    The solar-powered system removes salt from water at a pace that closely follows changes in solar energy. As sunlight increases through the day, the system ramps up its desalting process and automatically adjusts to any sudden variation in sunlight, for example by dialing down in response to a passing cloud or revving up as the skies clear.

    Because the system can quickly react to subtle changes in sunlight, it maximizes the utility of solar energy, producing large quantities of clean water despite variations in sunlight throughout the day. In contrast to other solar-driven desalination designs, the MIT system requires no extra batteries for energy storage, nor a supplemental power supply, such as from the grid.

    The engineers tested a community-scale prototype on groundwater wells in New Mexico over six months, working in variable weather conditions and water types. The system harnessed on average over 94 percent of the electrical energy generated from the system’s solar panels to produce up to 5,000 liters of water per day despite large swings in weather and available sunlight.

    “Conventional desalination technologies require steady power and need battery storage to smooth out a variable power source like solar. By continually varying power consumption in sync with the sun, our technology directly and efficiently uses solar power to make water,” says Amos Winter, the Germeshausen Professor of Mechanical Engineering and director of the K. Lisa Yang Global Engineering and Research (GEAR) Center at MIT. “Being able to make drinking water with renewables, without requiring battery storage, is a massive grand challenge. And we’ve done it.”

    The system is geared toward desalinating brackish groundwater — a salty source of water that is found in underground reservoirs and is more prevalent than fresh groundwater resources. The researchers see brackish groundwater as a huge untapped source of potential drinking water, particularly as reserves of fresh water are stressed in parts of the world. They envision that the new renewable, battery-free system could provide much-needed drinking water at low costs, especially for inland communities where access to seawater and grid power are limited.

    “The majority of the population actually lives far enough from the coast that seawater desalination could never reach them. They consequently rely heavily on groundwater, especially in remote, low-income regions. And unfortunately, this groundwater is becoming more and more saline due to climate change,” says Jonathan Bessette, MIT PhD student in mechanical engineering. “This technology could bring sustainable, affordable clean water to underreached places around the world.”

    The researchers report details of the new system in a paper appearing today in Nature Water. The study’s co-authors are Bessette, Winter, and staff engineer Shane Pratt.

    Pump and flow

    The new system builds on a previous design, which Winter and his colleagues, including former MIT postdoc Wei He, reported earlier this year. That system aimed to desalinate water through “flexible batch electrodialysis.”

    Electrodialysis and reverse osmosis are two of the main methods used to desalinate brackish groundwater. With reverse osmosis, pressure is used to pump salty water through a membrane and filter out salts. Electrodialysis uses an electric field to draw out salt ions as water is pumped through a stack of ion-exchange membranes.

    Scientists have looked to power both methods with renewable sources. But this has been especially challenging for reverse osmosis systems, which traditionally run at a steady power level that’s incompatible with naturally variable energy sources such as the sun.

    Winter, He, and their colleagues focused on electrodialysis, seeking ways to make a more flexible, “time-variant” system that would be responsive to variations in renewable, solar power.

    In their previous design, the team built an electrodialysis system consisting of water pumps, an ion-exchange membrane stack, and a solar panel array. The innovation in this system was a model-based control system that used sensor readings from every part of the system to predict the optimal rate at which to pump water through the stack and the voltage that should be applied to the stack to maximize the amount of salt drawn out of the water.

    When the team tested this system in the field, it was able to vary its water production with the sun’s natural variations. On average, the system directly used 77 percent of the available electrical energy produced by the solar panels, which the team estimated was 91 percent more than traditionally designed solar-powered electrodialysis systems.

    Still, the researchers felt they could do better.

    “We could only calculate every three minutes, and in that time, a cloud could literally come by and block the sun,” Winter says. “The system could be saying, ‘I need to run at this high power.’ But some of that power has suddenly dropped because there’s now less sunlight. So, we had to make up that power with extra batteries.”

    Solar commands

    In their latest work, the researchers looked to eliminate the need for batteries by shaving the system’s response time to a fraction of a second. The new system is able to update its desalination rate three to five times per second. The faster response time enables the system to adjust to changes in sunlight throughout the day, without having to make up any lag in power with additional power supplies.

    The key to the nimbler desalting is a simpler control strategy, devised by Bessette and Pratt. The new strategy is one of “flow-commanded current control,” in which the system first senses the amount of solar power that is being produced by the system’s solar panels. If the panels are generating more power than the system is using, the controller automatically “commands” the system to dial up its pumping, pushing more water through the electrodialysis stacks. Simultaneously, the system diverts some of the additional solar power by increasing the electrical current delivered to the stack, to drive more salt out of the faster-flowing water (a simplified version of this loop is sketched at the end of this story).

    “Let’s say the sun is rising every few seconds,” Winter explains. “So, three times a second, we’re looking at the solar panels and saying, ‘Oh, we have more power — let’s bump up our flow rate and current a little bit.’ When we look again and see there’s still more excess power, we’ll up it again. As we do that, we’re able to closely match our consumed power with available solar power really accurately, throughout the day. And the quicker we loop this, the less battery buffering we need.”

    The engineers incorporated the new control strategy into a fully automated system that they sized to desalinate brackish groundwater at a daily volume that would be enough to supply a small community of about 3,000 people. They operated the system for six months on several wells at the Brackish Groundwater National Desalination Research Facility in Alamogordo, New Mexico. Throughout the trial, the prototype operated under a wide range of solar conditions, harnessing over 94 percent of the solar panels’ electrical energy, on average, to directly power desalination.

    “Compared to how you would traditionally design a solar desal system, we cut our required battery capacity by almost 100 percent,” Winter says.

    The engineers plan to further test and scale up the system in hopes of supplying larger communities, and even whole municipalities, with low-cost, fully sun-driven drinking water.

    “While this is a major step forward, we’re still working diligently to continue developing lower cost, more sustainable desalination methods,” Bessette says.

    “Our focus now is on testing, maximizing reliability, and building out a product line that can provide desalinated water using renewables to multiple markets around the world,” Pratt adds.

    The team will be launching a company based on their technology in the coming months.

    This research was supported in part by the National Science Foundation, the Julia Burke Foundation, and the MIT Morningside Academy of Design. This work was additionally supported in-kind by Veolia Water Technologies and Solutions and Xylem Goulds.
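    A simplified sketch of a flow-commanded current control loop of the kind described above. The sensor functions, gains, and limits are placeholders invented for illustration; they do not reproduce the MIT controller or its tuning.

    ```python
    # Hypothetical sketch of flow-commanded current control: track available solar
    # power by nudging pump flow and stack current up or down a few times per second.
    import time

    FLOW_GAIN = 0.02       # L/min added per excess watt (assumed)
    CURRENT_GAIN = 0.01    # amps added per excess watt (assumed)
    MAX_FLOW, MAX_CURRENT = 40.0, 30.0

    def read_available_solar_power() -> float:
        """Stand-in for the PV power measurement (watts)."""
        return 3000.0

    def read_consumed_power() -> float:
        """Stand-in for the power currently drawn by pumps and the stack (watts)."""
        return 2800.0

    flow_lpm, stack_current = 20.0, 15.0

    # Run the loop a few times per second, as the story describes (3-5 Hz).
    for _ in range(10):
        excess = read_available_solar_power() - read_consumed_power()
        # More solar than we are using: push more water and more current;
        # less solar: back both off so consumption tracks the sun.
        flow_lpm = min(MAX_FLOW, max(0.0, flow_lpm + FLOW_GAIN * excess))
        stack_current = min(MAX_CURRENT, max(0.0, stack_current + CURRENT_GAIN * excess))
        print(f"excess {excess:6.0f} W -> flow {flow_lpm:5.1f} L/min, current {stack_current:5.1f} A")
        time.sleep(0.25)   # ~4 updates per second
    ```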

  • 3 Questions: Bridging anthropology and engineering for clean energy in Mongolia

    In 2021, Michael Short, an associate professor of nuclear science and engineering, approached professor of anthropology Manduhai Buyandelger with an unusual pitch: collaborating on a project to prototype a molten salt heat bank in Mongolia, Buyandelger’s country of origin and the place of her scholarship. It was also an invitation to forge a novel partnership between two disciplines that rarely overlap. Developed in collaboration with the National University of Mongolia (NUM), the device was built to provide heat for people in colder climates, and in places where clean energy is a challenge.

    Buyandelger and Short teamed up to launch Anthro-Engineering Decarbonization at the Million-Person Scale, an initiative intended to advance the heat bank idea in Mongolia, and ultimately demonstrate its potential as a scalable clean heat source in comparably challenging sites around the world. This project received funding from the inaugural MIT Climate and Sustainability Consortium Seed Awards program. In order to fund various components of the project, especially student involvement and additional staff, the project also received support from the MIT Global Seed Fund, New Engineering Education Transformation (NEET), Experiential Learning Office, Vice Provost for International Activities, and d’Arbeloff Fund for Excellence in Education.

    As part of this initiative, the partners developed a special topic course in anthropology to teach MIT undergraduates about Mongolia’s unique energy and climate challenges, as well as the historical, social, and economic context in which the heat bank would ideally find a place. The class 21A.S01 (Anthro-Engineering: Decarbonization at the Million-Person Scale) prepares MIT students for a January Independent Activities Period (IAP) trip to the Mongolian capital of Ulaanbaatar, where they embed with Mongolian families, conduct research, and collaborate with their peers. Mongolian students also engaged in the project. Anthropology research scientist and lecturer Lauren Bonilla, who has spent the past two decades working in Mongolia, joined to co-teach the class and lead the IAP trips to Mongolia.

    With the project now in its third year and yielding some promising solutions on the ground, Buyandelger and Bonilla reflect on the challenges for anthropologists of advancing a clean energy technology in a developing nation with a unique history, politics, and culture.

    Q: Your roles in the molten salt heat bank project mark departures from your typical academic routine. How did you first approach this venture?

    Buyandelger: As an anthropologist of contemporary religion, politics, and gender in Mongolia, I have had little contact with the hard sciences or building or prototyping technology. What I do best is listening to people and working with narratives. When I first learned about this device for off-the-grid heating, a host of issues came to mind right away, all rooted in the socioeconomic and cultural context of the place. The salt brick, which is encased in steel, must be heated to 400 degrees Celsius in a central facility, then driven to people’s homes. Transportation is difficult in Ulaanbaatar, and I worried about road safety when driving the salt brick to gers [traditional Mongolian homes] where many residents live.

    The device seemed a bit utopian to me, but I realized that this was an amazing educational opportunity: We could use the heat bank as part of an ethnographic project, so students could learn about the everyday lives of people — crucially, in the dead of winter — and how they might respond to this new energy technology in the neighborhoods of Ulaanbaatar.

    Bonilla: When I first went to Mongolia in the early 2000s as an undergraduate student, the impacts of climate change were already being felt. There had been a massive migration to the capital after a series of terrible weather events that devastated the rural economy. Coal mining had emerged as a vital part of the economy, and I was interested in how people regarded this industry that both provided jobs and damaged the air they breathed. I am trained as a human geographer, which involves seeing how things happening in a local place correspond to things happening at a global scale. Thinking about climate or sustainability from this perspective means making linkages between social life and environmental life. In Mongolia, people associated coal with national progress. Based on historical experience, they had low expectations for interventions brought by outsiders to improve their lives. So my first take on the molten salt project was that this was no silver bullet solution. At the same time, I wanted to see how we could make this a great project-based learning experience for students, getting them to think about the kind of research necessary to see if some version of the molten salt would work.

    Q: After two years, what lessons have you and the students drawn from both the class and the Ulaanbaatar field trips?

    Buyandelger: We wanted to make sure MIT students would not go to Mongolia and act like consultants. We taught them anthropological methods so they could understand the experiences of real people and think about how to bring people and new technologies together. The students, from engineering and anthropological and social science backgrounds, became critical thinkers who could analyze how people live in ger districts. When they stay with families in Ulaanbaatar in January, they not only experience the cold and the pollution, but they observe what people do for work, how parents care for their children, how they cook, sleep, and get from one place to another. This enables them to better imagine and test out how these people might utilize the molten salt heat bank in their homes.

    Bonilla: In class, students learn that interventions like this often fail because the implementation process doesn’t work, or the technology doesn’t meet people’s real needs. This is where anthropology is so important, because it opens up the wider landscape in which you’re intervening. We had really difficult conversations about the professional socialization of engineers and social scientists. Engineers love to work within boxes, but don’t necessarily appreciate the context in which their invention will serve.

    As a group, we discussed the provocative notion that engineers construct and anthropologists deconstruct. This makes it seem as if engineers are creators, and anthropologists are brought in as add-ons to consult and critique engineers’ creations. Our group conversation concluded that a project such as ours benefits from an iterative back-and-forth between the techno-scientific and humanistic disciplines.

    Q: So where does the molten salt brick project stand?

    Bonilla: Our research in Mongolia helped us produce a prototype that can work: Our partners at NUM are developing a hybrid stove that incorporates the molten salt brick. Supervised by instructor Nathan Melenbrink of MIT’s NEET program, our engineering students have been involved in this prototyping as well.

    The concept is for a family to heat it up using a coal fire once a day so that it warms their home overnight. Based on our anthropological research, we believe that this stove would work better than the device as originally conceived. It won’t eliminate coal use in residences, but it will reduce emissions enough to have a meaningful impact on ger districts in Ulaanbaatar. The challenge now is getting funding to NUM so they can test different salt combinations and stove models and employ local blacksmiths to work on the design.

    This integrated stove/heat bank will not be the ultimate solution to the heating and pollution crisis in Mongolia. But it will be something that can inspire even more ideas. We feel with this project we are planting all kinds of seeds that will germinate in ways we cannot anticipate. It has sparked new relationships between MIT and Mongolian students, and catalyzed engineers to integrate a more humanistic, anthropological perspective in their work.

    Buyandelger: Our work illustrates the importance of anthropology in responding to the unpredictable and diverse impacts of climate change. Without our ethnographic research — based on participant observation and interviews, led by Dr. Bonilla — it would have been impossible to see how the prototyping and modifications could be done, and where the molten salt brick could work and what shape it needed to take. This project demonstrates how indispensable anthropology is in moving engineering out of labs and companies and directly into communities.

    Bonilla: This is where the real solutions for climate change are going to come from. Even though we need solutions quickly, it will also take time for new technologies like molten salt bricks to take root and grow. We don’t know where the outcomes of these experiments will take us. But there’s so much that’s emerging from this project that I feel very hopeful about.
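    For a rough sense of scale, the heat a salt brick of this kind might store can be estimated with the sensible-heat relation Q = m·c·ΔT. Every number below is an assumption made for illustration — the story does not give the brick’s mass, salt chemistry, or working temperatures — and any latent heat from the salt actually melting would add to the total.

    ```python
    # Back-of-the-envelope estimate of sensible heat stored in a salt brick.
    # All values are assumptions for illustration, not project specifications.
    mass_kg = 150                     # assumed brick mass
    specific_heat_kj_per_kg_k = 1.5   # typical order of magnitude for nitrate salts
    charge_temp_c = 400               # "heated to 400 degrees Celsius"
    discharge_temp_c = 60             # assumed temperature at which the brick stops being useful

    stored_kj = mass_kg * specific_heat_kj_per_kg_k * (charge_temp_c - discharge_temp_c)
    stored_kwh = stored_kj / 3600
    print(f"Stored sensible heat: ~{stored_kwh:.0f} kWh")

    # A small, well-insulated ger might need a few kW of heat on a cold night
    # (assumed). Hours of heating the brick could cover at 3 kW:
    heating_load_kw = 3.0
    print(f"Rough overnight coverage: ~{stored_kwh / heating_load_kw:.0f} hours at {heating_load_kw} kW")
    ```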

  • Affordable high-tech windows for comfort and energy savings

    Imagine if the windows of your home didn’t transmit heat. They’d keep the heat indoors in winter and outdoors on a hot summer’s day. Your heating and cooling bills would go down; your energy consumption and carbon emissions would drop; and you’d still be comfortable all year ’round.

    AeroShield, a startup spun out of MIT, is poised to start manufacturing such windows. Building operations make up 36 percent of global carbon dioxide emissions, and today’s windows are a major contributor to energy inefficiency in buildings. To improve building efficiency, AeroShield has developed a window technology that promises to reduce heat loss by up to 65 percent, significantly reducing energy use and carbon emissions in buildings, and the company just announced the opening of a new facility to manufacture its breakthrough energy-efficient windows.

    “Our mission is to decarbonize the built environment,” says Elise Strobach SM ’17, PhD ’20, co-founder and CEO of AeroShield. “The availability of affordable, thermally insulating windows will help us achieve that goal while also reducing homeowners’ heating and cooling bills.” According to the U.S. Department of Energy, for most homeowners, 30 percent of that bill results from window inefficiencies.

    Technology development at MIT

    Research on AeroShield’s window technology began a decade ago in the MIT lab of Evelyn Wang, Ford Professor of Engineering, now on leave to serve as director of the Advanced Research Projects Agency-Energy (ARPA-E). In late 2014, the MIT team received funding from ARPA-E, and other sponsors followed, including the MIT Energy Initiative through the MIT Tata Center for Technology and Design in 2016.

    The work focused on aerogels, remarkable materials that are ultra-porous, lighter than a marshmallow, strong enough to support a brick, and an unparalleled barrier to heat flow. Aerogels were invented in the 1930s and used by NASA and others as thermal insulation. The team at MIT saw the potential for incorporating aerogel sheets into windows to keep heat from escaping or entering buildings. But there was one problem: Nobody had been able to make aerogels transparent.

    An aerogel is made of transparent, loosely connected nanoscale silica particles and is 95 percent air. But an aerogel sheet isn’t transparent because light traveling through it gets scattered by the silica particles.

    After five years of theoretical and experimental work, the MIT team determined that the key to transparency was having the silica particles both small and uniform in size. This allows light to pass directly through, so the aerogel becomes transparent. Indeed, as long as the particle size is small and uniform, increasing the thickness of an aerogel sheet to achieve greater thermal insulation won’t make it less clear.

    Teams in the MIT lab looked at various applications for their super-insulating, transparent aerogels. Some focused on improving solar thermal collectors by making the systems more efficient and less expensive. But to Strobach, increasing the thermal efficiency of windows looked especially promising and potentially significant as a means of mitigating climate change.

    The researchers determined that aerogel sheets could be inserted into the gap in double-pane windows, making them more than twice as insulating. The windows could then be manufactured on existing production lines with minor changes, and the resulting windows would be affordable and as wide-ranging in style as the window options available today. Best of all, once purchased and installed, the windows would reduce electricity bills, energy use, and carbon emissions.

    The impact on energy use in buildings could be considerable. “If we only consider winter, windows in the United States lose enough energy to power over 50 million homes,” says Strobach. “That wasted energy generates about 350 million tons of carbon dioxide — more than is emitted by 76 million cars.” Super-insulating windows could help home and building owners reduce carbon dioxide emissions by gigatons while saving billions in heating and cooling costs.

    The AeroShield story

    In 2019, Strobach and her MIT colleagues — Aaron Baskerville-Bridges MBA ’20, SM ’20 and Kyle Wilke PhD ’19 — co-founded AeroShield to further develop and commercialize their aerogel-based technology for windows and other applications. And in the subsequent five years, their hard work has attracted attention, recently leading to two major accomplishments.

    In spring 2024, the company announced the opening of its new pilot manufacturing facility in Waltham, Massachusetts, where the team will be producing, testing, and certifying their first full-size windows and patio doors for initial product launch. The 12,000-square-foot facility will significantly expand the company’s capabilities, with cutting-edge aerogel R&D labs, manufacturing equipment, assembly lines, and testing equipment. Says Strobach, “Our pilot facility will supply window and door manufacturers as we launch our first products and will also serve as our R&D headquarters as we develop the next generation of energy-efficient products using transparent aerogels.”

    Also in spring 2024, AeroShield received a $14.5 million award from ARPA-E’s “Seeding Critical Advances for Leading Energy technologies with Untapped Potential” (SCALEUP) program, which provides new funding to previous ARPA-E awardees that have “demonstrated a viable path to market.” That funding will enable the company to expand its production capacity to tens of thousands, or even hundreds of thousands, of units per year.

    Strobach also cites two less-obvious benefits of the SCALEUP award.

    First, the funding is enabling the company to move more quickly on the scale-up phase of their technology development. “We know from our fundamental studies and lab experiments that we can make large-area aerogel sheets that could go in an entry or patio door,” says Strobach. “The SCALEUP award allows us to go straight for that vision. We don’t have to do all the incremental sizes of aerogels to prove that we can make a big one. The award provides capital for us to buy the big equipment to make the big aerogel.”

    Second, the SCALEUP award confirms the viability of the company to other potential investors and collaborators. Indeed, AeroShield recently announced $5 million of additional funding from existing investors Massachusetts Clean Energy Center and MassVentures, as well as new investor MassMutual Ventures. Strobach notes that the company now has investor, engineering, and customer partners.

    She stresses the importance of partners in achieving AeroShield’s mission. “We know that what we’ve got from a fundamental perspective can change the industry,” she says. “Now we want to go out and do it. With the right partners and at the right pace, we may actually be able to increase the energy efficiency of our buildings early enough to help make a real dent in climate change.”
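    The claimed heat-loss reduction can be roughed out with the standard conduction relation Q = U·A·ΔT. The U-values, window area, and temperature difference below are illustrative assumptions, not AeroShield measurements; the aerogel U-value is simply chosen to reflect “more than twice as insulating.”

    ```python
    # Rough illustration of window heat loss using Q = U * A * delta_T.
    # U-values, area, and temperatures are assumptions, not product data.
    U_DOUBLE_PANE = 2.8      # W/(m^2*K), typical order for an ordinary double-pane unit
    U_AEROGEL = 1.0          # assumed: "more than twice as insulating"
    window_area_m2 = 15      # assumed total glazing for a small home
    delta_t_k = 20           # e.g., 20 C indoors vs. 0 C outdoors

    def heat_loss_w(u_value: float) -> float:
        """Steady-state conductive heat loss through the glazing, in watts."""
        return u_value * window_area_m2 * delta_t_k

    baseline = heat_loss_w(U_DOUBLE_PANE)
    improved = heat_loss_w(U_AEROGEL)
    print(f"Ordinary double pane: {baseline:.0f} W")
    print(f"Aerogel-filled pane:  {improved:.0f} W")
    print(f"Reduction: {100 * (baseline - improved) / baseline:.0f}%")
    ```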

  • MIT students combat climate anxiety through extracurricular teams

    Climate anxiety affects nearly half of young people aged 16-25. Students like second-year Rachel Mohammed find hope and inspiration through their involvement in innovative climate solutions, working alongside peers who share their determination. “I’ve met so many people at MIT who are dedicated to finding climate solutions in ways that I had never imagined, dreamed of, or heard of. That is what keeps me going, and I’m doing my part,” she says.

    Hydrogen-fueled engines

    Hydrogen offers the potential for zero or near-zero emissions, with the ability to reduce greenhouse gases and pollution by 29 percent. However, the hydrogen industry faces many challenges related to storage solutions and costs.

    Mohammed leads the hydrogen team on MIT’s Electric Vehicle Team (EVT), which is dedicated to harnessing hydrogen power to build a cleaner, more sustainable future. EVT is one of several student-led build teams at the Edgerton Center focused on innovative climate solutions. Since its founding in 1992, the Edgerton Center has been a hub for MIT students to bring their ideas to life.

    Hydrogen is mostly used in large vehicles like trucks and planes because it requires a lot of storage space. EVT is building its second iteration of a motorcycle based on what Mohammed calls a “goofy hypothesis” that you can use hydrogen to power a small vehicle. The team employs a hydrogen fuel cell system, which generates electricity by combining hydrogen with oxygen. However, the technology faces challenges, particularly in storage, which EVT is tackling with innovative designs for smaller vehicles.

    Presenting at the 2024 World Hydrogen Summit reaffirmed Mohammed’s confidence in this project. “I often encounter skepticism, with people saying it’s not practical. Seeing others actively working on similar initiatives made me realize that we can do it too,” Mohammed says.

    The team’s first successful track test last October allowed them to evaluate the real-world performance of their hydrogen-powered motorcycle, marking a crucial step in proving the feasibility and efficiency of their design.

    MIT’s Sustainable Engine Team (SET), founded by junior Charles Yong, uses the combustion method to generate energy with hydrogen. This is a promising technology route for high-power-density applications, like aviation, but Yong believes it hasn’t received enough attention. Yong explains, “In the hydrogen power industry, startups choose fuel cell routes instead of combustion because gas turbine industry giants are 50 years ahead. However, these giants are moving very slowly toward hydrogen due to its not-yet-fully-developed infrastructure. Working under the Edgerton Center allows us to take risks and explore advanced tech directions to demonstrate that hydrogen combustion can be readily available.”

    Both EVT and SET are publishing their research and providing detailed instructions for anyone interested in replicating their results.

    Running on sunshine

    The Solar Electric Vehicle Team powers a car built from scratch with 100 percent solar energy.

    The team’s single-occupancy car Nimbus won the American Solar Challenge two years in a row. This year, the team pushed boundaries further with Gemini, a multiple-occupancy vehicle that challenges conventional perceptions of solar-powered cars.

    Senior Andre Greene explains, “The challenge comes from minimizing how much energy you waste because you work with such little energy. It’s like the equivalent power of a toaster.”

    Gemini looks more like a regular car and less like a “spaceship,” as NBC’s 1st Look affectionately called Nimbus. “It more resembles what a fully solar-powered car could look like versus the single-seaters. You don’t see a lot of single-seater cars on the market, so it’s opening people’s minds,” says rising junior Tessa Uviedo, team captain.

    All-electric since 2013

    The MIT Motorsports team switched to an all-electric powertrain in 2013. Captain Eric Zhou takes inspiration from China, the world’s largest market for electric vehicles. “In China, there is a large government push towards electric, but there are also five or six big companies, almost as large as Tesla, building out these electric vehicles. The competition drives the majority of vehicles in China to become electric.”

    The team is also switching to four-wheel drive and regenerative braking next year, which reduces the amount of energy needed to run. “This is more efficient and better for power consumption because the torque from the motors is applied straight to the tires. It’s more efficient than having a rear motor that must transfer torque to both rear tires. Also, you’re taking advantage of all four tires in terms of producing grip, while you can only rely on the back tires in a rear-wheel-drive car,” Zhou says.

    Zhou adds that Motorsports wants to help prepare students for the electric vehicle industry. “A large majority of upperclassmen on the team have worked, or are working, at Tesla or Rivian.”

    Former Motorsports powertrain lead Levi Gershon ’23, SM ’24 recently founded CRABI Robotics — a fully autonomous marine robotic system designed to conduct in-transit cleaning of marine vessels by removing biofouling, increasing vessels’ fuel efficiency.

    An Indigenous approach to sustainable rockets

    First Nations Launch, the all-Indigenous student rocket team, recently won the Grand Prize in the 2024 NASA First Nations Launch High-Power Rocket Competition. Using Indigenous methodologies, this team considers the environment in the materials and methods they employ.

    “The environmental impact is always something that we consider when we’re making design decisions and operational decisions. We’ve thought about things like biodegradable composites and parachutes,” says rising junior Hailey Polson, team captain. “Aerospace has been a very wasteful industry in the past. There are huge leaps and bounds being made with forward progress in regard to reusable rockets, which is definitely lowering the environmental impact.”

    Collecting climate change data with autonomous boats

    Arcturus, the recent first-place winner in design at the 16th Annual RoboBoat Competition, is developing autonomous surface vehicles that can greatly aid in marine research. “The ocean is one of our greatest resources to combat climate change; thus, the accessibility of data will help scientists understand climate patterns and predict future trends. This can help people learn how to prepare for potential disasters and how to reduce each of our carbon footprints,” says Arcturus captain and rising junior Amy Shi.

    “We are hoping to expand our outreach efforts to incorporate more sustainability-related programs. This can include more interactions with local students to introduce them to how engineering can make a positive impact in the climate space or other similar programs,” Shi says.

    Shi emphasizes that hope is a crucial force in the battle against climate change. “There are great steps being taken every day to combat this seemingly impending doom we call the climate crisis. It’s important to not give up hope, because this hope is what’s driving the leaps and bounds of innovation happening in the climate community. The mainstream media mostly reports on the negatives, but the truth is there is a lot of positive climate news every day. Being more intentional about where you seek your climate news can really help this feeling of doom about our planet subside.”
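    Greene’s “power of a toaster” remark can be made concrete with a rough power budget. The array area, efficiency, irradiance, and cruise draw below are generic assumptions, not measurements from Nimbus or Gemini.

    ```python
    # Rough, hypothetical power budget for a solar race car; all values assumed.
    array_area_m2 = 4.0         # assumed usable solar array area
    panel_efficiency = 0.22     # assumed cell efficiency
    irradiance_w_per_m2 = 1000  # bright midday sun, standard test condition

    array_power_w = array_area_m2 * panel_efficiency * irradiance_w_per_m2
    print(f"Peak array power: ~{array_power_w:.0f} W (a toaster draws roughly 800-1,500 W)")

    # That power must cover aerodynamic drag, rolling resistance, and electronics,
    # which is why minimizing waste dominates the design.
    cruise_power_w = 750        # assumed total draw at cruise
    margin_w = array_power_w - cruise_power_w
    print(f"Margin at cruise: ~{margin_w:.0f} W")
    ```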