More stories

  • J-WAFS: Supporting food and water research across MIT

    MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) has transformed the landscape of water and food research at MIT, driving faculty engagement and catalyzing new research and innovation in these critical areas. With philanthropic, corporate, and government support, J-WAFS’ strategic approach spans the entire research life cycle, from support for early-stage research to commercialization grants for more advanced projects.

    Over the past decade, J-WAFS has invested approximately $25 million in direct research funding to support MIT faculty pursuing transformative research with the potential for significant impact. “Since awarding our first cohort of seed grants in 2015, it’s remarkable to look back and see that over 10 percent of the MIT faculty have benefited from J-WAFS funding,” observes J-WAFS Executive Director Renee J. Robins ’83. “Many of these professors hadn’t worked on water or food challenges before their first J-WAFS grant.” By fostering interdisciplinary collaborations and supporting high-risk, high-reward projects, J-WAFS has amplified the capacity of MIT faculty to pursue groundbreaking research that addresses some of the most pressing challenges facing the world’s water and food systems.

    Drawing MIT faculty to water and food research

    J-WAFS’ open calls for proposals enable faculty to explore bold ideas and develop impactful approaches to tackling critical water and food system challenges. Professor Patrick Doyle’s work in water purification exemplifies this impact. “Without J-WAFS, I would have never ventured into the field of water purification,” Doyle reflects. While previously focused on pharmaceutical manufacturing and drug delivery, exposure to J-WAFS-funded peers led him to apply his expertise in soft materials to water purification. 
    “Both the funding and the J-WAFS community led me to be deeply engaged in understanding some of the key challenges in water purification and water security,” he explains.

    Similarly, Professor Otto Cordero of the Department of Civil and Environmental Engineering (CEE) leveraged J-WAFS funding to pivot his research into aquaculture. Cordero explains that his first J-WAFS seed grant “has been extremely influential for my lab because it allowed me to take a step in a new direction, with no preliminary data in hand.” Cordero’s expertise is in microbial communities. He was previously unfamiliar with aquaculture, but he saw the relevance of microbial communities to the health of farmed aquatic organisms.

    Supporting early-career faculty

    New assistant professors at MIT have particularly benefited from J-WAFS funding and support. J-WAFS has played a transformative role in shaping the careers and research trajectories of many new faculty members by encouraging them to explore novel research areas and, in many instances, providing their first MIT research grant.

    Professor Ariel Furst reflects on how pivotal J-WAFS’ investment has been in advancing her research. “This was one of the first grants I received after starting at MIT, and it has truly shaped the development of my group’s research program,” Furst explains. With J-WAFS’ backing, her lab has achieved breakthroughs in chemical detection and remediation technologies for water. “The support of J-WAFS has enabled us to develop the platform funded through this work beyond the initial applications to the general detection of environmental contaminants and degradation of those contaminants,” she elaborates.

    Karthish Manthiram, now a professor of chemical engineering and chemistry at Caltech, explains how J-WAFS’ early investment enabled him and other young faculty to pursue ambitious ideas. “J-WAFS took a big risk on us,” Manthiram reflects. 
    His research on breaking the nitrogen triple bond to make ammonia for fertilizer was initially met with skepticism. However, J-WAFS’ seed funding allowed his lab to lay the groundwork for breakthroughs that later attracted significant National Science Foundation (NSF) support. “That early funding from J-WAFS has been pivotal to our long-term success,” he notes. These stories underscore the broad impact of J-WAFS’ support for early-career faculty and its commitment to empowering them to address critical global challenges and innovate boldly.

    Fueling follow-on funding

    J-WAFS seed grants enable faculty to explore nascent research areas, but external funding for continued work is usually necessary to achieve the full potential of these novel ideas. “It’s often hard to get funding for early-stage or out-of-the-box ideas,” notes J-WAFS Director Professor John H. Lienhard V. “My hope, when I founded J-WAFS in 2014, was that seed grants would allow PIs [principal investigators] to prove out novel ideas so that they would be attractive for follow-on funding. And after 10 years, J-WAFS-funded research projects have brought more than $21 million in subsequent awards to MIT.”

    Professor Retsef Levi led a seed study on how agricultural supply chains affect food safety, with a team of faculty spanning the MIT schools of Engineering and Science as well as the MIT Sloan School of Management. The team parlayed their seed grant research into a multi-million-dollar follow-on initiative. Levi reflects, “The J-WAFS seed funding allowed us to establish the initial credibility of our team, which was key to our success in obtaining large funding from several other agencies.”

    Dave Des Marais was an assistant professor in CEE when he received his first J-WAFS seed grant. The funding supported his research on how plant growth and physiology are controlled by genes and interact with the environment. 
    The seed grant helped launch his lab’s work on enhancing climate change resilience in agricultural systems. The work led to his Faculty Early Career Development (CAREER) Award from the NSF, a prestigious honor for junior faculty members. Now an associate professor, Des Marais’ ongoing project to further investigate the mechanisms and consequences of genomic and environmental interactions is supported by the five-year, $1,490,000 NSF grant. “J-WAFS provided essential funding to get my new research underway,” comments Des Marais.

    Stimulating interdisciplinary collaboration

    Des Marais’ seed grant was also key to developing new collaborations. He explains, “the J-WAFS grant supported me to develop a collaboration with Professor Caroline Uhler in EECS/IDSS [the Department of Electrical Engineering and Computer Science/Institute for Data, Systems, and Society] that really shaped how I think about framing and testing hypotheses. One of the best things about J-WAFS is facilitating unexpected connections among MIT faculty with diverse yet complementary skill sets.”

    Professors A. John Hart of the Department of Mechanical Engineering and Benedetto Marelli of CEE also launched a new interdisciplinary collaboration with J-WAFS funding. They partnered to combine expertise in biomaterials, microfabrication, and manufacturing to create printed silk-based colorimetric sensors that detect food spoilage. “The J-WAFS Seed Grant provided a unique opportunity for multidisciplinary collaboration,” Hart notes.

    Professors Stephen Graves of the MIT Sloan School of Management and Bishwapriya Sanyal of the Department of Urban Studies and Planning (DUSP) partnered to pursue new research on agricultural supply chains. With fieldwork in Senegal, their J-WAFS-supported project brought together international development specialists and operations management experts to study how small firms and government agencies influence access to and uptake of irrigation technology by poorer farmers. 
    “We used J-WAFS to spur a collaboration that would have been improbable without this grant,” they explain. Being part of the J-WAFS community also introduced them to researchers in Professor Amos Winter’s lab in the Department of Mechanical Engineering working on irrigation technologies for low-resource settings. DUSP doctoral candidate Mark Brennan notes, “We got to share our understanding of how irrigation markets and irrigation supply chains work in developing economies, and then we got to contrast that with their understanding of how irrigation system models work.”

    Timothy Swager, professor of chemistry, and Rohit Karnik, professor of mechanical engineering and J-WAFS associate director, collaborated on a sponsored research project supported by Xylem, Inc. through the J-WAFS Research Affiliate program. The cross-disciplinary research, which targeted the development of ultra-sensitive sensors for toxic PFAS chemicals, was conceived following a series of workshops hosted by J-WAFS. Swager and Karnik were two of the participants, and their involvement led to the collaborative proposal that Xylem funded. “J-WAFS funding allowed us to combine Swager lab’s expertise in sensing with my lab’s expertise in microfluidics to develop a cartridge for field-portable detection of PFAS,” says Karnik. “J-WAFS has enriched my research program in so many ways,” adds Swager, who is now working to commercialize the technology.

    Driving global collaboration and impact

    J-WAFS has also helped MIT faculty establish and advance international collaboration and impactful global research. By funding and supporting projects that connect MIT researchers with international partners, J-WAFS has not only advanced technological solutions, but also strengthened cross-cultural understanding and engagement.

    Professor Matthew Shoulders leads the inaugural J-WAFS Grand Challenge project. 
    In response to the first J-WAFS call for “Grand Challenge” proposals, Shoulders assembled an interdisciplinary team based at MIT to provide climate resilience to agriculture by improving the most inefficient aspect of photosynthesis: the carbon dioxide-fixing plant enzyme RuBisCO. J-WAFS funded this high-risk/high-reward project following a competitive process that engaged external reviewers through several rounds of iterative proposal development. The reviewers’ technical feedback led the team to researchers with complementary expertise at the Australian National University. “Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists and field trial experts, yielding a robust feedback loop for enzyme engineering,” Shoulders says. “Together, this team will be able to make a concerted effort using the most modern, state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.”

    Professor Leon Glicksman and Research Engineer Eric Verploegen’s team designed a low-cost cooling chamber to preserve fruits and vegetables harvested by smallholder farmers with no access to cold chain storage. J-WAFS’ guidance motivated the team to prioritize practical considerations informed by local collaborators, ensuring market competitiveness. “As our new idea for a forced-air evaporative cooling chamber was taking shape, we continually checked that our solution was evolving in a direction that would be competitive in terms of cost, performance, and usability to existing commercial alternatives,” explains Verploegen. 
    Following its initial seed grant, the team secured a J-WAFS Solutions commercialization grant, which Verploegen says “further motivated us to establish partnerships with local organizations capable of commercializing the technology earlier in the project than we might have done otherwise.” The team has since shared an open-source design as part of its commercialization strategy to maximize accessibility and impact.

    Bringing corporate-sponsored research opportunities to MIT faculty

    J-WAFS also plays a role in driving private partnerships, enabling collaborations that bridge industry and academia. Through its Research Affiliate Program, for example, J-WAFS provides opportunities for faculty to collaborate with industry on sponsored research, helping to convert scientific discoveries into licensable intellectual property (IP) that companies can turn into commercial products and services.

    J-WAFS introduced professor of mechanical engineering Alex Slocum to a challenge presented by its research affiliate company, Xylem: how to design a more energy-efficient pump for fluctuating flows. With centrifugal pumps consuming an estimated 6 percent of U.S. electricity annually, Slocum and his then-graduate student Hilary Johnson SM ’18, PhD ’22 developed an innovative variable volute mechanism that reduces energy usage. “Xylem envisions this as the first in a new category of adaptive pump geometry,” comments Johnson. The research produced a pump prototype and related IP that Xylem is working on commercializing. 
    Johnson notes that these outcomes “would not have been possible without J-WAFS support and facilitation of the Xylem industry partnership.” Slocum adds, “J-WAFS enabled Hilary to begin her work on pumps, and Xylem sponsored the research to bring her to this point … where she has an opportunity to do far more than the original project called for.”

    Swager speaks highly of the impact of corporate research sponsorship through J-WAFS on his research and technology translation efforts. His PFAS project with Karnik, described above, was also supported by Xylem. “Xylem was an excellent sponsor of our research. Their engagement and feedback were instrumental in advancing our PFAS detection technology, now on the path to commercialization,” Swager says.

    Looking forward

    What J-WAFS has accomplished is more than a collection of research projects; a decade of impact demonstrates how J-WAFS’ approach has been transformative for many MIT faculty members. As Professor Mathias Kolle puts it, his engagement with J-WAFS “had a significant influence on how we think about our research and its broader impacts.” He adds that it “opened my eyes to the challenges in the field of water and food systems and the many different creative ideas that are explored by MIT.” This thriving ecosystem of innovation, collaboration, and academic growth around water and food research has not only helped faculty build interdisciplinary and international partnerships, but has also led to the commercialization of transformative technologies with real-world applications. C. 
    Cem Taşan, the POSCO Associate Professor of Metallurgy, who is leading a J-WAFS Solutions commercialization team that is about to launch a startup company, sums it up by noting, “Without J-WAFS, we wouldn’t be here at all.” As J-WAFS looks to the future, its continued commitment — supported by the generosity of its donors and partners — builds on a decade of success enabling MIT faculty to advance water and food research that addresses some of the world’s most pressing challenges.

  • Streamlining data collection for improved salmon population management

    Sara Beery came to MIT as an assistant professor in the Department of Electrical Engineering and Computer Science (EECS) eager to focus on ecological challenges. She has fashioned her research career around the opportunity to apply her expertise in computer vision, machine learning, and data science to tackle real-world issues in conservation and sustainability. Beery was drawn to the Institute’s commitment to “computing for the planet,” and set out to bring her methods to global-scale environmental and biodiversity monitoring.

    In the Pacific Northwest, salmon have a disproportionate impact on the health of their ecosystems, and their complex reproductive needs have attracted Beery’s attention. Each year, millions of salmon embark on a migration to spawn. Their journey begins in freshwater stream beds where the eggs hatch. Young salmon fry (newly hatched salmon) make their way to the ocean, where they spend several years maturing to adulthood. As adults, the salmon return to the streams where they were born in order to spawn, ensuring the continuation of their species by depositing their eggs in the gravel of the stream beds. Both male and female salmon die shortly after supplying the river habitat with the next generation of salmon.

    Throughout their migration, salmon support a wide range of organisms in the ecosystems they pass through. For example, salmon bring nutrients like carbon and nitrogen from the ocean upriver, enhancing their availability to those ecosystems. In addition, salmon are key to many predator-prey relationships: They serve as a food source for various predators, such as bears, wolves, and birds, while helping to control other populations, like insects, through predation. After they die from spawning, the decomposing salmon carcasses also replenish valuable nutrients to the surrounding ecosystem. 
    The migration of salmon not only sustains their own species but plays a critical role in the overall health of the rivers and oceans they inhabit. At the same time, salmon populations play an important role both economically and culturally in the region. Commercial and recreational salmon fisheries contribute significantly to the local economy. And for many Indigenous peoples in the Pacific Northwest, salmon hold notable cultural value, as they have been central to their diets, traditions, and ceremonies.

    Monitoring salmon migration

    Increased human activity, including overfishing and hydropower development, together with habitat loss and climate change, has had a significant impact on salmon populations in the region. As a result, effective monitoring and management of salmon fisheries is important to ensure balance among competing ecological, cultural, and human interests. Accurately counting salmon during their seasonal migration to their natal river to spawn is essential in order to track threatened populations, assess the success of recovery strategies, guide fishing season regulations, and support the management of both commercial and recreational fisheries. Precise population data help decision-makers employ the best strategies to safeguard the health of the ecosystem while accommodating human needs. Yet monitoring salmon migration is a labor-intensive and inefficient undertaking.

    Beery is currently leading a research project that aims to streamline salmon monitoring using cutting-edge computer vision methods. This project fits within Beery’s broader research interest, which focuses on the interdisciplinary space between artificial intelligence, the natural world, and sustainability. Its relevance to fisheries management made it a good fit for funding from MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Beery’s 2023 J-WAFS seed grant was the first research funding she was awarded since joining the MIT faculty. 
    Historically, monitoring efforts relied on humans to manually count salmon from riverbanks using eyesight. In the past few decades, underwater sonar systems have been implemented to aid in counting the salmon. These sonar systems are essentially underwater video cameras, but they differ in that they use acoustics instead of light sensors to capture the presence of a fish. Use of this method requires people to set up a tent alongside the river to count salmon based on the output of a sonar camera that is hooked up to a laptop. While this system is an improvement over the original method of monitoring salmon by eyesight, it still relies significantly on human effort and is an arduous and time-consuming process.

    Automating salmon monitoring is necessary for better management of salmon fisheries. “We need these technological tools,” says Beery. “We can’t keep up with the demand of monitoring and understanding and studying these really complex ecosystems that we work in without some form of automation.”

    In order to automate counting of migrating salmon populations in the Pacific Northwest, the project team, including Justin Kay, a PhD student in EECS, has been collecting data in the form of videos from sonar cameras at different rivers. The team annotates a subset of the data to train the computer vision system to autonomously detect and count the fish as they migrate. Kay describes the process of how the model counts each migrating fish: “The computer vision algorithm is designed to locate a fish in the frame, draw a box around it, and then track it over time. If a fish is detected on one side of the screen and leaves on the other side of the screen, then we count it as moving upstream.” On rivers where the team has created training data for the system, it has produced strong results, with only 3 to 5 percent counting error. This is well below the target of no more than 10 percent counting error that the team and partnering stakeholders set. 
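    The track-then-count logic Kay describes can be sketched in a few lines. This is an illustrative toy, not the team's actual implementation: the frame width, edge thresholds, and input format (a tracker's time-ordered box centers per fish) are all assumptions.

```python
# Illustrative sketch of tracking-based fish counting: a fish is counted as
# moving upstream when its track enters near one edge of the sonar frame and
# exits near the opposite edge. Thresholds and frame size are hypothetical.

FRAME_WIDTH = 640  # assumed sonar frame width in pixels


def count_upstream(tracks):
    """tracks maps each tracker ID to the time-ordered x-centers of its boxes."""
    count = 0
    for positions in tracks.values():
        entered_left = positions[0] < 0.1 * FRAME_WIDTH   # appeared near left edge
        exited_right = positions[-1] > 0.9 * FRAME_WIDTH  # left near right edge
        if entered_left and exited_right:  # crossed the full frame upstream
            count += 1
    return count


# Example: two fish cross the frame left to right; one loiters mid-frame.
tracks = {
    1: [20, 180, 350, 610],   # crosses the frame -> counted
    2: [30, 90, 300, 600],    # crosses the frame -> counted
    3: [320, 340, 330, 310],  # stays mid-frame -> not counted
}
print(count_upstream(tracks))  # prints 2
```

    In the real system, the hard part is upstream of this step: detecting and tracking fish reliably in noisy acoustic imagery, which is where the annotated training data comes in.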
    Testing and deployment: Balancing human effort and use of automation

    The researchers’ technology is being deployed to monitor the migration of salmon on the newly restored Klamath River. Four dams on the river were recently demolished, making it the largest dam removal project in U.S. history. The dams came down after a more than 20-year-long campaign to remove them, led by Klamath tribes in collaboration with scientists, environmental organizations, and commercial fishermen. After the removal of the dams, 240 miles of the river now flow freely and nearly 800 square miles of habitat are accessible to salmon. Beery notes the almost immediate regeneration of salmon populations in the Klamath River: “I think it was within eight days of the dam coming down, they started seeing salmon actually migrate upriver beyond the dam.” In a collaboration with California Trout, the team is currently processing new data to adapt and create a customized model that can then be deployed to help count the newly migrating salmon.

    One challenge with the system revolves around training the model to accurately count the fish in unfamiliar environments with variations such as riverbed features, water clarity, and lighting conditions. These factors can significantly alter how the fish appear in the output of a sonar camera and confuse the computer model. When deployed in new rivers where no data have been collected before, like the Klamath, the performance of the system degrades and the margin of error increases substantially, to 15 to 20 percent. To overcome this challenge and create a scalable system that can be deployed to any site without human intervention, the researchers built an automatic adaptation algorithm into the system. This self-initializing technology automatically calibrates to the new conditions and environment to accurately count the migrating fish. 
    In testing, the automatic adaptation algorithm was able to reduce the counting error to the 10 to 15 percent range. This improvement means that the technology is closer to being deployable to new locations without much additional human effort.

    Enabling real-time management with the “Fishbox”

    Another challenge faced by the research team was the development of an efficient data infrastructure. In order to run the computer vision system, the video produced by sonar cameras must be delivered via the cloud or by manually mailing hard drives from a river site to the lab. These methods have notable drawbacks: a cloud-based approach is limited by the lack of internet connectivity at remote river sites, and shipping the data introduces delays. Instead of relying on these methods, the team has implemented a power-efficient computer, coined the “Fishbox,” that can be used in the field to perform the processing. The Fishbox consists of a small, lightweight computer with optimized software that fishery managers can plug into their existing laptops and sonar cameras. The system is then capable of running salmon counting models directly at the sonar sites without the need for internet connectivity. This allows managers to make hour-by-hour decisions, supporting more responsive, real-time management of salmon populations.

    Community development

    The team is also working to bring a community together around monitoring for salmon fisheries management in the Pacific Northwest. “It’s just pretty exciting to have stakeholders who are enthusiastic about getting access to [our technology] as we get it to work and having a tighter integration and collaboration with them,” says Beery. 
    “I think particularly when you’re working on food and water systems, you need direct collaboration to help facilitate impact, because you’re ensuring that what you develop is actually serving the needs of the people and organizations that you are helping to support.”

    This past June, Beery’s lab organized a workshop in Seattle that convened nongovernmental organizations, tribes, and state and federal departments of fish and wildlife to discuss the use of automated sonar systems to monitor and manage salmon populations. Kay notes that the workshop was an “awesome opportunity to have everybody sharing different ways that they’re using sonar and thinking about how the automated methods that we’re building could fit into that workflow.” The discussion continues now via a shared Slack channel created by the team, with over 50 participants. Convening this group is a significant achievement, as many of these organizations would not otherwise have had an opportunity to come together and collaborate.

    Looking forward

    As the team continues to tune the computer vision system, refine their technology, and engage with diverse stakeholders — from Indigenous communities to fishery managers — the project is poised to make significant improvements to the efficiency and accuracy of salmon monitoring and management in the region. And as Beery advances the work of her MIT group, the J-WAFS seed grant is helping to keep challenges such as fisheries management in her sights. 
    “The fact that the J-WAFS seed grant existed here at MIT enabled us to continue to work on this project when we moved here,” comments Beery, adding that “it also expanded the scope of the project and allowed us to maintain active collaboration on what I think is a really important and impactful project.” As J-WAFS marks its 10th anniversary this year, the program aims to continue supporting and encouraging MIT faculty to pursue innovative projects that advance knowledge and create practical solutions with real-world impacts on global water and food system challenges.

  • Explained: Generative AI’s environmental impact

    In a two-part series, MIT News explores the environmental implications of generative AI. In this article, we look at why this technology is so resource-intensive. A second piece will investigate what experts are doing to reduce genAI’s carbon footprint and other impacts.

    The excitement surrounding potential benefits of generative AI, from improving worker productivity to advancing scientific research, is hard to ignore. While the explosive growth of this new technology has enabled rapid deployment of powerful models in many industries, the environmental consequences of this generative AI “gold rush” remain difficult to pin down, let alone mitigate.

    The computational power required to train generative AI models that often have billions of parameters, such as OpenAI’s GPT-4, can demand a staggering amount of electricity, which leads to increased carbon dioxide emissions and pressures on the electric grid. Furthermore, deploying these models in real-world applications, enabling millions to use generative AI in their daily lives, and then fine-tuning the models to improve their performance draws large amounts of energy long after a model has been developed.

    Beyond electricity demands, a great deal of water is needed to cool the hardware used for training, deploying, and fine-tuning generative AI models, which can strain municipal water supplies and disrupt local ecosystems. The increasing number of generative AI applications has also spurred demand for high-performance computing hardware, adding indirect environmental impacts from its manufacture and transport.

    “When we think about the environmental impact of generative AI, it is not just the electricity you consume when you plug the computer in. There are much broader consequences that go out to a system level and persist based on actions that we take,” says Elsa A. 
    Olivetti, professor in the Department of Materials Science and Engineering and lead of the Decarbonization Mission of MIT’s new Climate Project. Olivetti is senior author of a 2024 paper, “The Climate and Sustainability Implications of Generative AI,” co-authored by MIT colleagues in response to an Institute-wide call for papers exploring the transformative potential of generative AI, in both positive and negative directions for society.

    Demanding data centers

    The electricity demands of data centers are one major factor contributing to the environmental impacts of generative AI, since data centers are used to train and run the deep learning models behind popular tools like ChatGPT and DALL-E. A data center is a temperature-controlled building that houses computing infrastructure, such as servers, data storage drives, and network equipment. For instance, Amazon has more than 100 data centers worldwide, each of which has about 50,000 servers that the company uses to support cloud computing services.

    While data centers have been around since the 1940s (the first was built at the University of Pennsylvania in 1945 to support the first general-purpose digital computer, the ENIAC), the rise of generative AI has dramatically increased the pace of data center construction.

    “What is different about generative AI is the power density it requires. Fundamentally, it is just computing, but a generative AI training cluster might consume seven or eight times more energy than a typical computing workload,” says Noman Bashir, lead author of the impact paper, who is a Computing and Climate Impact Fellow at the MIT Climate and Sustainability Consortium (MCSC) and a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

    Scientists have estimated that the power requirements of data centers in North America increased from 2,688 megawatts at the end of 2022 to 5,341 megawatts at the end of 2023, partly driven by the demands of generative AI. 
    Globally, the electricity consumption of data centers rose to 460 terawatt-hours in 2022. This would have made data centers the 11th largest electricity consumer in the world, between the nations of Saudi Arabia (371 terawatt-hours) and France (463 terawatt-hours), according to the Organization for Economic Co-operation and Development. By 2026, the electricity consumption of data centers is expected to approach 1,050 terawatt-hours (which would bump data centers up to fifth place on the global list, between Japan and Russia). While not all data center computation involves generative AI, the technology has been a major driver of increasing energy demands.

    “The demand for new data centers cannot be met in a sustainable way. The pace at which companies are building new data centers means the bulk of the electricity to power them must come from fossil fuel-based power plants,” says Bashir.

    The power needed to train and deploy a model like OpenAI’s GPT-3 is difficult to ascertain. In a 2021 research paper, scientists from Google and the University of California at Berkeley estimated that the training process alone consumed 1,287 megawatt-hours of electricity (enough to power about 120 average U.S. homes for a year), generating about 552 tons of carbon dioxide.

    While all machine-learning models must be trained, one issue unique to generative AI is the rapid fluctuation in energy use that occurs over different phases of the training process, Bashir explains. Power grid operators must have a way to absorb those fluctuations to protect the grid, and they usually employ diesel-based generators for that task.

    Increasing impacts from inference

    Once a generative AI model is trained, the energy demands don’t disappear. Each time a model is used, perhaps by an individual asking ChatGPT to summarize an email, the computing hardware that performs those operations consumes energy. 
Researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple web search.“But an everyday user doesn’t think too much about that,” says Bashir. “The ease-of-use of generative AI interfaces and the lack of information about the environmental impacts of my actions means that, as a user, I don’t have much incentive to cut back on my use of generative AI.”With traditional AI, the energy usage is split fairly evenly between data processing, model training, and inference, which is the process of using a trained model to make predictions on new data. However, Bashir expects the electricity demands of generative AI inference to eventually dominate since these models are becoming ubiquitous in so many applications, and the electricity needed for inference will increase as future versions of the models become larger and more complex.Plus, generative AI models have an especially short shelf-life, driven by rising demand for new AI applications. Companies release new models every few weeks, so the energy used to train prior versions goes to waste, Bashir adds. New models often consume more energy for training, since they usually have more parameters than their predecessors.While electricity demands of data centers may be getting the most attention in research literature, the amount of water consumed by these facilities has environmental impacts, as well.Chilled water is used to cool a data center by absorbing heat from computing equipment. It has been estimated that, for each kilowatt hour of energy a data center consumes, it would need two liters of water for cooling, says Bashir.“Just because this is called ‘cloud computing’ doesn’t mean the hardware lives in the cloud. 
Data centers are present in our physical world, and because of their water usage they have direct and indirect implications for biodiversity,” he says.The computing hardware inside data centers brings its own, less direct environmental impacts.While it is difficult to estimate how much power is needed to manufacture a GPU, a type of powerful processor that can handle intensive generative AI workloads, it would be more than what is needed to produce a simpler CPU because the fabrication process is more complex. A GPU’s carbon footprint is compounded by the emissions related to material and product transport.There are also environmental implications of obtaining the raw materials used to fabricate GPUs, which can involve dirty mining procedures and the use of toxic chemicals for processing.Market research firm TechInsights estimates that the three major producers (NVIDIA, AMD, and Intel) shipped 3.85 million GPUs to data centers in 2023, up from about 2.67 million in 2022. That number is expected to have increased by an even greater percentage in 2024.The industry is on an unsustainable path, but there are ways to encourage responsible development of generative AI that supports environmental objectives, Bashir says.He, Olivetti, and their MIT colleagues argue that this will require a comprehensive consideration of all the environmental and societal costs of generative AI, as well as a detailed assessment of the value in its perceived benefits.“We need a more contextual way of systematically and comprehensively understanding the implications of new developments in this space. Due to the speed at which there have been improvements, we haven’t had a chance to catch up with our abilities to measure and understand the tradeoffs,” Olivetti says.
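The per-home and per-kilowatt-hour figures quoted in this article can be sanity-checked with simple arithmetic. A minimal sketch follows; the roughly 10,700 kilowatt-hours per year for an average U.S. household is an assumed figure (drawn from EIA data) that the article itself does not state:

```python
# Back-of-envelope check of the GPT-3 training figures cited above.
# ASSUMPTION: an average U.S. home uses ~10,700 kWh per year (EIA estimate,
# not stated in the article).

GPT3_TRAINING_MWH = 1_287        # estimated training energy, megawatt-hours
HOME_KWH_PER_YEAR = 10_700       # assumed average U.S. household consumption

training_kwh = GPT3_TRAINING_MWH * 1_000
homes_powered_one_year = training_kwh / HOME_KWH_PER_YEAR
print(round(homes_powered_one_year))        # ~120 homes, matching the article

# Applying the ~2 liters of cooling water per kilowatt-hour cited above:
cooling_liters = training_kwh * 2
print(f"{cooling_liters / 1e6:.2f} million liters")
```

This is illustrative only; real consumption depends on data center efficiency and cooling design.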


    How hard is it to prevent recurring blackouts in Puerto Rico?

    Researchers at MIT’s Laboratory for Information and Decision Systems (LIDS) have shown that using decision-making software and dynamic monitoring of weather and energy use can significantly improve resiliency in the face of weather-related outages, and can also help to efficiently integrate renewable energy sources into the grid.The researchers point out that the system they suggest might have prevented or at least lessened the kind of widespread power outage that Puerto Rico experienced last week by providing analysis to guide rerouting of power through different lines and thus limit the spread of the outage.The computer platform, which the researchers describe as DyMonDS, for Dynamic Monitoring and Decision Systems, can be used to enhance the existing operating and planning practices used in the electric industry. The platform supports interactive information exchange and decision-making between the grid operators and grid-edge users — all the distributed power sources, storage systems and software that contribute to the grid. It also supports optimization of available resources and controllable grid equipment as system conditions vary. It further lends itself to implementing cooperative decision-making by different utility- and non-utility-owned electric power grid users, including portfolios of mixed resources, users, and storage. Operating and planning the interactions of the end-to-end high-voltage transmission grid with local distribution grids and microgrids represents another major potential use of this platform.This general approach was illustrated using a set of publicly-available data on both meteorology and details of electricity production and distribution in Puerto Rico. An extended AC Optimal Power Flow software developed by SmartGridz Inc. is used for system-level optimization of controllable equipment. 
This provides real-time guidance for deciding how much power, and through which transmission lines, should be channeled by adjusting plant dispatch and voltage-related set points, and in extreme cases, where to reduce or cut power in order to maintain physically implementable service for as many customers as possible. The team found that the use of such a system can help to ensure that the greatest number of critical services maintain power even during a hurricane, and at the same time can lead to a substantial decrease in the need for construction of new power plants thanks to more efficient use of existing resources.The findings are described in a paper in the journal Foundations and Trends in Electric Energy Systems, by MIT LIDS researchers Marija Ilic and Laurentiu Anton, along with recent alumna Ramapathi Jaddivada.“Using this software,” Ilic says, they show that “even during bad weather, if you predict equipment failures, and by using that information exchange, you can localize the effect of equipment failures and still serve a lot of customers, 50 percent of customers, when otherwise things would black out.”Anton says that “the way many grids today are operated is sub-optimal.” As a result, “we showed how much better they could do even under normal conditions, without any failures, by utilizing this software.” The savings resulting from this optimization, under everyday conditions, could be in the tens of percent, they say.The way utility systems plan currently, Ilic says, “usually the standard is that they have to build enough capacity and operate in real time so that if one large piece of equipment fails, like a large generator or transmission line, you still serve customers in an uninterrupted way. That’s what’s called N-minus-1.” Under this policy, if one major component of the system fails, they should be able to maintain service for at least 30 minutes. 
That system allows utilities to plan for how much reserve generating capacity they need to have on hand. That’s expensive, Ilic points out, because it means maintaining this reserve capacity all the time, even under normal operating conditions when it’s not needed.In addition, “right now there are no criteria for what I call N-minus-K,” she says. If bad weather causes five pieces of equipment to fail at once, “there is no software to help utilities decide what to schedule” in terms of keeping the most customers, and the most important services such as hospitals and emergency services, provided with power. They showed that even with 50 percent of the infrastructure out of commission, it would still be possible to keep power flowing to a large proportion of customers.Their work on analyzing the power situation in Puerto Rico started after the island had been devastated by hurricanes Irma and Maria. Most of the electric generation capacity is in the south, yet the largest loads are in San Juan, in the north, and Mayaguez in the west. When transmission lines get knocked down, a lot of rerouting of power needs to happen quickly.With the new systems, “the software finds the optimal adjustments for set points,” for example, changing voltages can allow for power to be redirected through less-congested lines, or can be increased to lessen power losses, Anton says.The software also helps in the long-term planning for the grid. As many fossil-fuel power plants are scheduled to be decommissioned soon in Puerto Rico, as they are in many other places, planning for how to replace that power without having to resort to greenhouse gas-emitting sources is a key to achieving carbon-reduction goals. 
And by analyzing usage patterns, the software can guide the placement of new renewable power sources where they can most efficiently provide power where and when it’s needed.As plants are retired or as components are affected by weather, “We wanted to ensure the dispatchability of power when the load changes,” Anton says, “but also when crucial components are lost, to ensure the robustness at each step of the retirement schedule.”One thing they found was that “if you look at how much generating capacity exists, it’s more than the peak load, even after you retire a few fossil plants,” Ilic says. “But it’s hard to deliver.” Strategic planning of new distribution lines could make a big difference.Jaddivada, director of innovation at SmartGridz, says that “we evaluated different possible architectures in Puerto Rico, and we showed the ability of this software to ensure uninterrupted electricity service. This is the most important challenge utilities have today. They have to go through a computationally tedious process to make sure the grid functions for any possible outage in the system. And that can be done in a much more efficient way through the software that the company developed.”The project was a collaborative effort between the MIT LIDS researchers and others at MIT Lincoln Laboratory and the Pacific Northwest National Laboratory, with support from SmartGridz software. 
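The N-minus-1 criterion Ilic describes can be sketched as a simple capacity feasibility check: after any single generator outage, remaining capacity must still cover the load. A minimal illustration with hypothetical unit sizes (not Puerto Rico data, and far simpler than the AC optimal power flow the software actually solves, which also models lines and voltages):

```python
# Minimal sketch of an N-minus-1 feasibility check: for every possible
# single-generator outage, the remaining capacity must still cover peak load.
# Unit capacities and loads below are hypothetical, for illustration only.

def n_minus_1_ok(capacities_mw, peak_load_mw):
    """True if load can be served after any one unit fails."""
    total = sum(capacities_mw)
    return all(total - c >= peak_load_mw for c in capacities_mw)

fleet = [400, 300, 250, 150]            # MW, hypothetical generating units
print(n_minus_1_ok(fleet, 700))         # any single loss leaves >= 700 MW
print(n_minus_1_ok(fleet, 800))         # losing the 400 MW unit leaves only 700 MW
```

The "N-minus-K" case Ilic raises, with several simultaneous weather-driven failures, is what current planning criteria and tools do not cover.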


    In a unique research collaboration, students make the case for less e-waste

    Brought together as part of the Social and Ethical Responsibilities of Computing (SERC) initiative within the MIT Schwarzman College of Computing, a community of students known as SERC Scholars is collaborating to examine the most urgent problems humans face in the digital landscape.Each semester, students at all levels from across MIT are invited to join a different topical working group led by a SERC postdoctoral associate. Each group delves into a specific issue — such as surveillance or data ownership — culminating in a final project presented at the end of the term.Typically, students complete the program with hands-on experience conducting research in a new cross-disciplinary field. However, one group of undergraduate and graduate students recently had the unique opportunity to enhance their resumes by becoming published authors of a case study about the environmental and climate justice implications of the electronics hardware life cycle.Although it’s not uncommon for graduate students to co-author case studies, it’s unusual for undergraduates to earn this opportunity — and for their audience to be other undergraduates around the world.“Our team was insanely interdisciplinary,” says Anastasia Dunca, a junior studying computer science and one of the co-authors. “I joined the SERC Scholars Program because I liked the idea of being part of a cohort from across MIT working on a project that utilized all of our skillsets. It also helps [undergraduates] learn the ins and outs of computing ethics research.”Case study co-author Jasmin Liu, an MBA student in the MIT Sloan School of Management, sees the program as a platform to learn about the intersection of technology, society, and ethics: “I met team members spanning computer science, urban planning, to art/culture/technology. I was excited to work with a diverse team because I know complex problems must be approached with many different perspectives. 
    Combining my background in humanities and business with the expertise of others allowed us to be more innovative and comprehensive.”Christopher Rabe, a former SERC postdoc who facilitated the group, says, “I let the students take the lead on identifying the topic and conducting the research.” His goal for the group was to challenge students across disciplines to develop a working definition of climate justice.From mining to e-wasteThe SERC Scholars’ case study, “From Mining to E-waste: The Environmental and Climate Justice Implications of the Electronics Hardware Life Cycle,” was published by the MIT Case Studies in Social and Ethical Responsibilities of Computing.The ongoing case studies series, which releases new issues twice a year on an open-source platform, is enabling undergraduate instructors worldwide to incorporate research-based education materials on computing ethics into their existing class syllabi.This particular case study broke down the electronics life cycle from mining to manufacturing, usage, and disposal. It offered an in-depth look at how this cycle promotes inequity in the Global South. Mining for the roughly 60 minerals that power the average everyday device leads to illegal deforestation, compromises air quality in the Amazon, and triggers armed conflict in Congo. Manufacturing leads to proven health risks for both formal and informal workers, some of whom are child laborers.Life cycle assessment and circular economy are proposed as mechanisms for analyzing environmental and climate justice issues in the electronics life cycle. 
Rather than posing solutions, the case study offers readers entry points for further discussion and for assessing their own individual responsibility as producers of e-waste.Crufting and crafting a case studyDunca joined Rabe’s working group, intrigued by the invitation to conduct a rigorous literature review examining issues like data center resource and energy use, manufacturing waste, ethical issues with AI, and climate change. Rabe quickly realized that a common thread among all participants was an interest in understanding and reducing e-waste and its impact on the environment.“I came in with the idea of us co-authoring a case study,” Rabe said. However, the writing-intensive process was initially daunting to those students who were used to conducting applied research. Once Rabe created sub-groups with discrete tasks, the steps for researching, writing, and iterating a case study became more approachable.For Ellie Bultena, an undergraduate student studying linguistics and philosophy and a contributor to the study, that meant conducting field research on the loading dock of MIT’s Stata Center, where students and faculty go “crufting” through piles of clunky printers, broken computers, and used lab equipment discarded by the Institute’s labs, departments, and individual users.Although not a formally sanctioned activity on-campus, “crufting” is the act of gleaning usable parts from these junk piles to be repurposed into new equipment or art. 
Bultena’s respondents, who opted to be anonymous, said that MIT could do better when it comes to the amount of e-waste generated and suggested that formal strategies could be implemented to encourage community members to repair equipment more easily or recycle more formally.Rabe, now an education program director at the MIT Environmental Solutions Initiative, is hopeful that through the Zero-Carbon Campus Initiative, which commits MIT to eliminating all direct emissions by 2050, MIT will ultimately become a model for other higher education institutions.Although the group lacked the time and resources to travel to communities in the Global South that they profiled in their case study, members leaned into exhaustive secondary research, collecting data on how some countries are irresponsibly dumping e-waste. In contrast, others have developed alternative solutions that can be duplicated elsewhere and scaled.“We source materials, manufacture them, and then throw them away,” Lelia Hampton says. A PhD candidate in electrical engineering and computer science and another co-author, Hampton jumped at the opportunity to serve in a writing role, bringing together the sub-groups’ research findings. “I’d never written a case study, and it was exciting. Now I want to write 10 more.”The content directly informed Hampton’s dissertation research, which “looks at applying machine learning to climate justice issues such as urban heat islands.” She said that writing a case study that is accessible to general audiences upskilled her for the non-profit organization she’s determined to start. 
“It’s going to provide communities with free resources and data needed to understand how they are impacted by climate change and begin to advocate against injustice,” Hampton explains.Dunca, Liu, Rabe, Bultena, and Hampton are joined on the case study by fellow authors Mrinalini Singha, a graduate student in the Art, Culture, and Technology program; Sungmoon Lim, a graduate student in urban studies and planning and EECS; Lauren Higgins, an undergraduate majoring in political science; and Madeline Schlegal, a Northeastern University co-op student.Taking the case study to classrooms around the worldAlthough PhD candidates have contributed to previous case studies in the series, this publication is the first to be co-authored with MIT undergraduates. Like any other peer-reviewed journal, before publication, the SERC Scholars’ case study was anonymously reviewed by senior scholars drawn from various fields.The series editor, David Kaiser, also served as one of SERC’s inaugural associate deans and helped shape the program. “The case studies, by design, are short, easy to read, and don’t take up lots of time,” Kaiser explained. “They are gateways for students to explore, and instructors can cover a topic that has likely already been on their mind.” This semester, Kaiser, the Germeshausen Professor of the History of Science and a professor of physics, is teaching STS.004 (Intersections: Science, Technology, and the World), an undergraduate introduction to the field of science, technology, and society. The last month of the semester has been dedicated wholly to SERC case studies, one of which is: “From Mining to E-Waste.”Hampton was visibly moved to hear that the case study is being used at MIT but also by some of the 250,000 visitors to the SERC platform, many of whom are based in the Global South and directly impacted by the issues she and her cohort researched. 
    “Many students are focused on climate, whether through computer science, data science, or mechanical engineering. I hope that this case study educates them on environmental and climate aspects of e-waste and computing.”


    MIT delegation mainstreams biodiversity conservation at the UN Biodiversity Convention, COP16

    For the first time, MIT sent an organized delegation to the global Conference of the Parties for the Convention on Biological Diversity, which this year was held Oct. 21 to Nov. 1 in Cali, Colombia.The 10 delegates to COP16 included faculty, researchers, and students from the MIT Environmental Solutions Initiative (ESI), the Department of Electrical Engineering and Computer Science (EECS), the Computer Science and Artificial Intelligence Laboratory (CSAIL), the Department of Urban Studies and Planning (DUSP), the Institute for Data, Systems, and Society (IDSS), and the Center for Sustainability Science and Strategy.In previous years, MIT faculty had participated sporadically in the discussions. This organized engagement, led by the ESI, is significant because it brought representatives from many of the groups working on biodiversity across the Institute; showcased the breadth of MIT’s research in more than 15 events including panels, roundtables, and keynote presentations across the Blue and Green Zones of the conference (with the Blue Zone representing the primary venue for the official negotiations and discussions and the Green Zone representing public events); and created an experiential learning opportunity for students who followed specific topics in the negotiations and throughout side events.The conference also gathered attendees from governments, nongovernmental organizations, businesses, other academic institutions, and practitioners focused on stopping global biodiversity loss and advancing the 23 goals of the Kunming-Montreal Global Biodiversity Framework (KMGBF), an international agreement adopted in 2022 to guide global efforts to protect and restore biodiversity through 2030.MIT’s involvement was particularly pronounced when addressing goals related to building coalitions of sub-national governments (targets 11, 12, 14); technology and AI for biodiversity conservation (targets 20 and 21); shaping equitable markets (targets 3, 11, and 19); and 
informing an action plan for Afro-descendant communities (targets 3, 10, and 22).Building coalitions of sub-national governmentsThe ESI’s Natural Climate Solutions (NCS) Program was able to support two separate coalitions of Latin American cities, namely the Coalition of Cities Against Illicit Economies in the Biogeographic Chocó Region and the Colombian Amazonian Cities coalition, who successfully signed declarations to advance specific targets of the KMGBF (the aforementioned targets 11, 12, 14).This was accomplished through roundtables and discussions where team members — including Marcela Angel, research program director at the MIT ESI; Angelica Mayolo, ESI Martin Luther King Fellow 2023-25; and Silvia Duque and Hannah Leung, MIT Master’s in City Planning students — presented a set of multi-scale actions including transnational strategies, recommendations to strengthen local and regional institutions, and community-based actions to promote the conservation of the Biogeographic Chocó as an ecological corridor.“There is an urgent need to deepen the relationship between academia and local governments of cities located in biodiversity hotspots,” said Angel. “Given the scale and unique conditions of Amazonian cities, pilot research projects present an opportunity to test and generate a proof of concept. These could generate catalytic information needed to scale up climate adaptation and conservation efforts in socially and ecologically sensitive contexts.”ESI’s research also provided key inputs for the creation of the Fund for the Biogeographic Chocó Region, a multi-donor fund launched within the framework of COP16 by a coalition composed of Colombia, Ecuador, Panamá, and Costa Rica. 
The fund aims to support biodiversity conservation, ecosystem restoration, climate change mitigation and adaptation, and sustainable development efforts across the region.Technology and AI for biodiversity conservationData, technology, and artificial intelligence are playing an increasing role in how we understand biodiversity and ecosystem change globally. Professor Sara Beery’s research group at MIT focuses on this intersection, developing AI methods that enable species and environmental monitoring at unprecedented spatial, temporal, and taxonomic scales.During the International Union of Biological Diversity Science-Policy Forum, the high-level COP16 segment focused on outlining recommendations from the scientific and academic community, Beery spoke on a panel alongside María Cecilia Londoño, scientific information manager of the Humboldt Institute and co-chair of the Global Biodiversity Observations Network, and Josh Tewksbury, director of the Smithsonian Tropical Research Institute, among others, about how these technological advancements will help humanity achieve our biodiversity targets. 
The panel emphasized that AI innovation is needed, but stressed direct human-AI partnership, AI capacity building, and data and AI policy that ensures equitable access to and benefit from these technologies.As a direct outcome of the session, for the first time, AI was emphasized in the statement on behalf of science and academia delivered by Hernando Garcia, director of the Humboldt Institute, and David Skorton, secretary general of the Smithsonian Institution, to the high-level segment of COP16.That statement read, “To effectively address current and future challenges, urgent action is required in equity, governance, valuation, infrastructure, decolonization and policy frameworks around biodiversity data and artificial intelligence.”Beery also organized a panel at the GEOBON pavilion in the Blue Zone on Scaling Biodiversity Monitoring with AI, which brought together global leaders from AI research, infrastructure development, capacity and community building, and policy and regulation. The panel was initiated and experts selected from the participants at the recent Aspen Global Change Institute Workshop on Overcoming Barriers to Impact in AI for Biodiversity, co-organized by Beery.Shaping equitable marketsIn a side event co-hosted by the ESI with CAF-Development Bank of Latin America, researchers from ESI’s Natural Climate Solutions Program — including Marcela Angel; Angelica Mayolo; Jimena Muzio, ESI research associate; and Martin Perez Lara, ESI research affiliate and director for Forest Climate Solutions Impact and Monitoring at World Wide Fund for Nature of the U.S. 
— presented results of a study titled “Voluntary Carbon Markets for Social Impact: Comprehensive Assessment of the Role of Indigenous Peoples and Local Communities (IPLC) in Carbon Forestry Projects in Colombia.” The report highlighted the structural barriers that hinder effective participation of IPLC, and proposed a conceptual framework to assess IPLC engagement in voluntary carbon markets.Communicating these findings is important because the global carbon market has experienced a credibility crisis since 2023, influenced by critical assessments in academic literature, journalism questioning the quality of mitigation results, and persistent concerns about the engagement of private actors with IPLC. Nonetheless, carbon forestry projects have expanded rapidly in Indigenous, Afro-descendant, and local communities’ territories, and there is a need to assess the relationships between private actors and IPLC and to propose pathways for equitable participation. 

    Panelists pose at the equitable markets side event at the Latin American Pavilion in the Blue Zone.


    The research presentation and subsequent panel with representatives of Asocarbono (the Colombian association of carbon project developers), Fondo Acción, and CAF further discussed recommendations for all actors in the value chain of carbon certificates — including those focused on promoting equitable benefit-sharing and safeguarding compliance, increased accountability, enhanced governance structures, strengthened institutions, and regulatory frameworks — necessary to create an inclusive and transparent market.Informing an action plan for Afro-descendant communitiesThe Afro-Interamerican Forum on Climate Change (AIFCC), an international network working to highlight the critical role of Afro-descendant peoples in global climate action, was also present at COP16.At the Afro Summit, Mayolo presented key recommendations prepared collectively by the members of AIFCC to the technical secretariat of the Convention on Biological Diversity (CBD). The recommendations emphasize: creating financial tools for conservation and supporting Afro-descendant land rights; including a credit guarantee fund for countries that recognize Afro-descendant collective land titling and research on their contributions to biodiversity conservation; calling for increased representation of Afro-descendant communities in international policy forums; capacity-building for local governments; and strategies for inclusive growth in green business and the energy transition.These actions aim to promote inclusive and sustainable development for Afro-descendant populations.“Attending COP16 with a large group from MIT contributing knowledge and informed perspectives at 15 separate events was a privilege and honor,” says MIT ESI Director John E. Fernández. “This demonstrates the value of the ESI as a powerful research and convening body at MIT. Science is telling us unequivocally that climate change and biodiversity loss are the two greatest challenges that we face as a species and a planet. 
MIT has the capacity, expertise, and passion to address not only the former, but also the latter, and the ESI is committed to facilitating the very best contributions across the institute for the critical years that are ahead of us.”A fuller overview of the conference is available via The MIT Environmental Solutions Initiative’s Primer of COP16.


    Advancing urban tree monitoring with AI-powered digital twins

    The Irish philosopher George Berkeley, best known for his theory of immaterialism, once famously mused, “If a tree falls in a forest and no one is around to hear it, does it make a sound?”What about AI-generated trees? They probably wouldn’t make a sound, but they will be critical nonetheless for applications such as adaptation of urban flora to climate change. To that end, the novel “Tree-D Fusion” system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Google, and Purdue University merges AI and tree-growth models with Google’s Auto Arborist data to create accurate 3D models of existing urban trees. The project has produced the first-ever large-scale database of 600,000 environmentally aware, simulation-ready tree models across North America.“We’re bridging decades of forestry science with modern AI capabilities,” says Sara Beery, MIT electrical engineering and computer science (EECS) assistant professor, MIT CSAIL principal investigator, and a co-author on a new paper about Tree-D Fusion. “This allows us to not just identify trees in cities, but to predict how they’ll grow and impact their surroundings over time. We’re not ignoring the past 30 years of work in understanding how to build these 3D synthetic models; instead, we’re using AI to make this existing knowledge more useful across a broader set of individual trees in cities around North America, and eventually the globe.”Tree-D Fusion builds on previous urban forest monitoring efforts that used Google Street View data, but branches it forward by generating complete 3D models from single images. While earlier attempts at tree modeling were limited to specific neighborhoods, or struggled with accuracy at scale, Tree-D Fusion can create detailed models that include typically hidden features, such as the back side of trees that aren’t visible in street-view photos.The technology’s practical applications extend far beyond mere observation. 
City planners could use Tree-D Fusion to one day peer into the future, anticipating where growing branches might tangle with power lines, or identifying neighborhoods where strategic tree placement could maximize cooling effects and air quality improvements. These predictive capabilities, the team says, could change urban forest management from reactive maintenance to proactive planning.A tree grows in Brooklyn (and many other places)The researchers took a hybrid approach to their method, using deep learning to create a 3D envelope of each tree’s shape, then using traditional procedural models to simulate realistic branch and leaf patterns based on the tree’s genus. This combo helped the model predict how trees would grow under different environmental conditions and climate scenarios, such as different possible local temperatures and varying access to groundwater.Now, as cities worldwide grapple with rising temperatures, this research offers a new window into the future of urban forests. In a collaboration with MIT’s Senseable City Lab, the Purdue University and Google team is embarking on a global study that re-imagines trees as living climate shields. Their digital modeling system captures the intricate dance of shade patterns throughout the seasons, revealing how strategic urban forestry could hopefully change sweltering city blocks into more naturally cooled neighborhoods.“Every time a street mapping vehicle passes through a city now, we’re not just taking snapshots — we’re watching these urban forests evolve in real-time,” says Beery. 
“This continuous monitoring creates a living digital forest that mirrors its physical counterpart, offering cities a powerful lens to observe how environmental stresses shape tree health and growth patterns across their urban landscape.”

AI-based tree modeling has emerged as an ally in the quest for environmental justice: By mapping urban tree canopy in unprecedented detail, a sister project from the Google AI for Nature team has helped uncover disparities in green space access across different socioeconomic areas. “We’re not just studying urban forests — we’re trying to cultivate more equity,” says Beery. The team is now working closely with ecologists and tree health experts to refine these models, ensuring that as cities expand their green canopies, the benefits branch out to all residents equally.

It’s a breeze

While Tree-D Fusion marks some major “growth” in the field, trees can be uniquely challenging for computer vision systems. Unlike the rigid structures of buildings or vehicles that current 3D modeling techniques handle well, trees are nature’s shape-shifters — swaying in the wind, interweaving branches with neighbors, and constantly changing their form as they grow. The Tree-D Fusion models are “simulation-ready” in that they can estimate the shape of the trees in the future, depending on the environmental conditions.

“What makes this work exciting is how it pushes us to rethink fundamental assumptions in computer vision,” says Beery. “While 3D scene understanding techniques like photogrammetry or NeRF [neural radiance fields] excel at capturing static objects, trees demand new approaches that can account for their dynamic nature, where even a gentle breeze can dramatically alter their structure from moment to moment.”

The team’s approach of creating rough structural envelopes that approximate each tree’s form has proven remarkably effective, but certain issues remain unsolved. 
Perhaps the most vexing is the “entangled tree problem”: when neighboring trees grow into each other, their intertwined branches create a puzzle that no current AI system can fully unravel.

The scientists see their dataset as a springboard for future innovations in computer vision, and they’re already exploring applications beyond street view imagery, looking to extend their approach to platforms like iNaturalist and wildlife camera traps.

“This marks just the beginning for Tree-D Fusion,” says Jae Joong Lee, a Purdue University PhD student who developed, implemented, and deployed the Tree-D Fusion algorithm. “Together with my collaborators, I envision expanding the platform’s capabilities to a planetary scale. Our goal is to use AI-driven insights in service of natural ecosystems — supporting biodiversity, promoting global sustainability, and ultimately, benefiting the health of our entire planet.”

Beery and Lee’s co-authors are Jonathan Huang, Scaled Foundations head of AI (formerly of Google); and four others from Purdue University: PhD student Bosheng Li, Professor and Dean’s Chair of Remote Sensing Songlin Fei, Assistant Professor Raymond Yeh, and Professor and Associate Head of Computer Science Bedrich Benes. Their work is based on efforts supported by the United States Department of Agriculture’s (USDA) Natural Resources Conservation Service and is directly supported by the USDA’s National Institute of Food and Agriculture. The researchers presented their findings at the European Conference on Computer Vision this month.
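The article publishes no code, but the hybrid pipeline it describes, a learned model that estimates a coarse 3D crown envelope from a single image followed by a procedural model that grows branches constrained to that envelope, can be sketched in miniature. Everything below is hypothetical: the function names, the radius-of-height envelope representation, and the genus parameters are illustrative stand-ins, not the Tree-D Fusion implementation.

```python
import math
import random

def predict_envelope(tree_height_m, crown_ratio):
    """Stand-in for the learned step: return the crown envelope as a
    radius-at-height function. A real system would regress this from a
    street-view image; here it's a simple ellipsoidal crown profile."""
    crown_base = tree_height_m * (1 - crown_ratio)
    mid = (crown_base + tree_height_m) / 2
    half = (tree_height_m - crown_base) / 2
    def radius_at(h):
        if h < crown_base or h > tree_height_m:
            return 0.0  # no crown below the base or above the top
        return 2.5 * math.sqrt(max(0.0, 1 - ((h - mid) / half) ** 2))
    return radius_at

def grow_branches(radius_at, genus_params, rng):
    """Stand-in for the procedural step: propose branch tips and keep
    only those that land inside the predicted envelope."""
    tips = []
    for _ in range(genus_params["n_shoots"]):
        h = rng.uniform(0, genus_params["max_height"])
        r = rng.uniform(0, genus_params["max_reach"])
        if r <= radius_at(h):  # reject shoots outside the envelope
            tips.append((h, r))
    return tips

rng = random.Random(0)
envelope = predict_envelope(tree_height_m=12.0, crown_ratio=0.6)
tips = grow_branches(
    envelope,
    {"n_shoots": 500, "max_height": 12.0, "max_reach": 3.0},
    rng,
)
print(len(tips), "branch tips inside the crown envelope")
```

The design point this toy captures is the division of labor: the neural network only has to get the overall silhouette right, while decades of procedural tree-growth modeling supply plausible branching within it.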


    Nanoscale transistors could enable more efficient electronics

Silicon transistors, which are used to amplify and switch signals, are a critical component in most electronic devices, from smartphones to automobiles. But silicon semiconductor technology is held back by a fundamental physical limit that prevents transistors from operating below a certain voltage.

This limit, known as “Boltzmann tyranny,” hinders the energy efficiency of computers and other electronics, especially with the rapid development of artificial intelligence technologies that demand faster computation.

In an effort to overcome this fundamental limit of silicon, MIT researchers fabricated a different type of three-dimensional transistor using a unique set of ultrathin semiconductor materials.

Their devices, featuring vertical nanowires only a few nanometers wide, can deliver performance comparable to state-of-the-art silicon transistors while operating efficiently at much lower voltages than conventional devices.

“This is a technology with the potential to replace silicon, so you could use it with all the functions that silicon currently has, but with much better energy efficiency,” says Yanjie Shao, an MIT postdoc and lead author of a paper on the new transistors.

The transistors leverage quantum mechanical properties to simultaneously achieve low-voltage operation and high performance within an area of just a few square nanometers. Their extremely small size would enable more of these 3D transistors to be packed onto a computer chip, resulting in fast, powerful electronics that are also more energy-efficient.

“With conventional physics, there is only so far you can go. The work of Yanjie shows that we can do better than that, but we have to use different physics. 
There are many challenges yet to be overcome for this approach to be commercial in the future, but conceptually, it really is a breakthrough,” says senior author Jesús del Alamo, the Donner Professor of Engineering in the MIT Department of Electrical Engineering and Computer Science (EECS).

They are joined on the paper by Ju Li, the Tokyo Electric Power Company Professor in Nuclear Engineering and professor of materials science and engineering at MIT; EECS graduate student Hao Tang; MIT postdoc Baoming Wang; and professors Marco Pala and David Esseni of the University of Udine in Italy. The research appears today in Nature Electronics.

Surpassing silicon

In electronic devices, silicon transistors often operate as switches. Applying a voltage to the transistor causes electrons to move over an energy barrier from one side to the other, switching the transistor from “off” to “on.” By switching, transistors represent binary digits to perform computation.

A transistor’s switching slope reflects the sharpness of the “off” to “on” transition. The steeper the slope, the less voltage is needed to turn on the transistor and the greater its energy efficiency.

But because of how electrons move across an energy barrier, Boltzmann tyranny requires a certain minimum voltage to switch the transistor at room temperature.

To overcome the physical limit of silicon, the MIT researchers used a different set of semiconductor materials — gallium antimonide and indium arsenide — and designed their devices to leverage a unique phenomenon in quantum mechanics called quantum tunneling.

Quantum tunneling is the ability of electrons to penetrate barriers. 
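The “Boltzmann tyranny” floor described above is easy to put a number on. Because the current of electrons going over a barrier scales as exp(qV/kT), the subthreshold swing of a conventional transistor can never be sharper than (kT/q)·ln 10, about 60 mV per decade of current at room temperature. A quick check (my own arithmetic, not a figure from the article):

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
Q = 1.602176634e-19      # elementary charge, C

def thermionic_swing_limit(temp_kelvin):
    """Minimum subthreshold swing (volts per decade of current) for a
    switch that relies on electrons going *over* an energy barrier:
    SS = (kT/q) * ln(10)."""
    return (K_B * temp_kelvin / Q) * math.log(10)

ss = thermionic_swing_limit(300.0)
print(f"limit at 300 K: {ss * 1000:.1f} mV/decade")  # prints 59.5 mV/decade
```

A tunneling transistor can beat this figure precisely because its electrons pass through the barrier rather than over it, so their current is not governed by the Boltzmann exponential.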
The researchers fabricated tunneling transistors, which leverage this property to encourage electrons to push through the energy barrier rather than going over it.

“Now, you can turn the device on and off very easily,” Shao says.

But while tunneling transistors can enable sharp switching slopes, they typically operate with low current, which hampers the performance of an electronic device. Higher current is necessary to create powerful transistor switches for demanding applications.

Fine-grained fabrication

Using tools at MIT.nano, MIT’s state-of-the-art facility for nanoscale research, the engineers were able to carefully control the 3D geometry of their transistors, creating vertical nanowire heterostructures with a diameter of only 6 nanometers. They believe these are the smallest 3D transistors reported to date.

Such precise engineering enabled them to achieve a sharp switching slope and high current simultaneously. This is possible because of a phenomenon called quantum confinement.

Quantum confinement occurs when an electron is confined to a space that is so small that it can’t move around. When this happens, the effective mass of the electron and the properties of the material change, enabling stronger tunneling of the electron through a barrier.

Because the transistors are so small, the researchers can engineer a very strong quantum confinement effect while also fabricating an extremely thin barrier.

“We have a lot of flexibility to design these material heterostructures so we can achieve a very thin tunneling barrier, which enables us to get very high current,” Shao says.

Precisely fabricating devices that were small enough to accomplish this was a major challenge.

“We are really into single-nanometer dimensions with this work. Very few groups in the world can make good transistors in that range. 
Yanjie is extraordinarily capable to craft such well-functioning transistors that are so extremely small,” says del Alamo.

When the researchers tested their devices, the sharpness of the switching slope was below the fundamental limit that can be achieved with conventional silicon transistors. Their devices also performed about 20 times better than similar tunneling transistors.

“This is the first time we have been able to achieve such sharp switching steepness with this design,” Shao adds.

The researchers are now striving to enhance their fabrication methods to make transistors more uniform across an entire chip. With such small devices, even a 1-nanometer variance can change the behavior of the electrons and affect device operation. They are also exploring vertical fin-shaped structures, in addition to vertical nanowire transistors, which could potentially improve the uniformity of devices on a chip.

“This work definitively steps in the right direction, significantly improving the broken-gap tunnel field effect transistor (TFET) performance. It demonstrates steep-slope together with a record drive-current. It highlights the importance of small dimensions, extreme confinement, and low-defectivity materials and interfaces in the fabricated broken-gap TFET. These features have been realized through a well-mastered and nanometer-size-controlled process,” says Aryan Afzalian, a principal member of the technical staff at the nanoelectronics research organization imec, who was not involved with this work.

This research is funded, in part, by Intel Corporation.
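The two levers the researchers describe, a thinner barrier and a confinement-reduced effective mass, both appear in the textbook WKB estimate of tunneling probability through a rectangular barrier, T ≈ exp(−2w√(2m*ΔE)/ħ). The sketch below uses that standard formula with made-up numbers for illustration; the barrier height, widths, and effective masses are not the parameters of the MIT device.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # free-electron mass, kg
EV = 1.602176634e-19     # joules per electron-volt

def wkb_transmission(barrier_width_m, barrier_height_ev, eff_mass_ratio):
    """WKB estimate of the probability that an electron tunnels through
    a rectangular barrier: T ~ exp(-2*kappa*w), with
    kappa = sqrt(2 * m* * dE) / hbar."""
    kappa = math.sqrt(
        2 * eff_mass_ratio * M_E * barrier_height_ev * EV
    ) / HBAR
    return math.exp(-2 * kappa * barrier_width_m)

# Illustrative numbers only: a 0.3 eV barrier, light effective masses.
thick = wkb_transmission(3e-9, 0.3, 0.05)   # 3 nm barrier
thin  = wkb_transmission(1e-9, 0.3, 0.05)   # shrink the barrier to 1 nm
light = wkb_transmission(1e-9, 0.3, 0.02)   # confinement lowers m* further
print(f"3 nm barrier:        T ~ {thick:.2e}")
print(f"1 nm barrier:        T ~ {thin:.2e}")
print(f"1 nm, lighter m*:    T ~ {light:.2e}")
```

Because the barrier width and the square root of the effective mass sit inside an exponential, even modest reductions in either raise the transmission probability by an order of magnitude in this toy model, which is the qualitative point behind pairing an extremely thin barrier with strong quantum confinement.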