YouTube content on wildlife engages audiences but rarely drives meaningful conservation action


Abstract

Biodiversity loss is accelerating despite decades of conservation efforts, highlighting the need for new strategies to engage the public and influence policy. Digital platforms, particularly social media, offer powerful opportunities to shape conservation discourse at scale. Here, we analyze wildlife-related YouTube videos to assess dominant themes, audience engagement, and the frequency of conservation-related calls to action. We combined human-guided coding with machine learning to classify thousands of videos and associated comments in our sample. We find that appreciation for wildlife is the most common attitude, while explicit calls to action (e.g., “Contact your senator”) are uncommon. Conservation-themed videos represent a small share of wildlife content and compete with entertainment-based content. These findings highlight the need for conservationists to rethink their digital strategy, moving from awareness-only content to messages that foster deeper engagement and action. Our study illustrates how social media analytics can inform biodiversity conservation and broader sustainability goals.


Introduction

Recent global assessments indicate that biodiversity is declining at an alarming rate1. Because these declines are driven by anthropogenic pressures2,3,4, changes in behavior and policy towards nature are needed to slow or reverse these losses. Achieving this will require changing attitudes towards wildlife, which in turn demands a deeper understanding of attitudes toward conservation and how wildlife is valued5,6. Although methods for understanding these attitudes at scale have been elusive, globally ubiquitous social media content offers an important opportunity to examine how the public perceives and engages with nature-related content7,8. Furthermore, the extensive reach and active participation of the public as both consumers and creators of digital content has substantial potential to shape public attitudes and amplify biodiversity conservation efforts via social media platforms9.

Assessing human perceptions of nature and biodiversity conservation, particularly the thematic framing of wildlife and human-nature relationships, has traditionally relied on public surveys10,11 and qualitative assessments12. However, there is a growing need for scalable methods that can analyze contemporary media and keep pace with its speed and the evolving dynamics of these issues13,14. In response to these demands, research increasingly uses online data sources, including social media platforms, to study human–nature interactions15,16,17. On average, working-age internet users (ages 16–64) spend over 2.5 h per day engaging with social media platforms18, and there is increasing recognition of its influence on values, attitudes, and knowledge acquisition19,20. Although the influence of social media on attitudes toward wildlife remains scarcely explored, broader research suggests that content creation supports online communities and raises awareness of environmental concerns, potentially reinforcing pro-environmental behavior19,20. Emerging research further indicates that viewing such content can strengthen positive attitudes towards wildlife21,22. However, whether this influence translates into passive appreciation or active conservation efforts remains widely debated20,22.

One major global platform that has received little attention in conservation is YouTube23. YouTube, an online video sharing platform, has over 2.5 billion monthly active users as of 202524. Wildlife is a prominent subject on YouTube, with the content category “pets and animals”, which includes wildlife, accounting for an estimated 6.4% of its videos25. The platform has an active community of users who post comments on content, which can allow nuanced analyses of attitudes toward wildlife videos26,27. The limited number of applied studies on YouTube stems from challenges in annotating its vast number of videos and related engagement content (e.g., comments, captions)28. Additionally, potential bias can arise when interpreting contextual nuances of video content29,30. Although large language models (LLMs) applied to caption text show promise in improving content extraction31,32, accurate classification often still requires supplementary manual context for precision33,34,35. For example, a video showing animals in an outdoor reserve might be misinterpreted as capturing wildlife in a natural habitat rather than a confined space. Similarly, capturing the themes and attitudes in comments can be challenging due to nuanced reactions and contextual factors15. Nevertheless, with targeted model development and human-guided concept extraction, these complexities can be addressed, enhancing the interpretation of the public’s interactions with wildlife videos.

Here, we examine key themes in wildlife-related content, how audiences engage with it, whether calls to action emerge, and how these are linked to expressions of appreciation or concern for wildlife and people. We present an analytical approach that leverages machine learning techniques to surface complex attitudes toward wildlife in YouTube content. Specifically, we (i) implement both supervised learning methods and qualitative coding to categorize wildlife-related videos, (ii) measure popularity and user engagement across these videos, and (iii) develop methods to analyze public reactions and engagement through comments in response to the videos. We find that calls to action (attempts to motivate others to act, regardless of whether that action is supportive or oppositional toward conservation) are relatively rare. Appreciation is the dominant attitude among viewers of YouTube wildlife videos, suggesting challenges in moving from entertainment to action. Methodologically, we find that capturing nuanced attitudes toward wildlife requires human qualitative coding that goes beyond off-the-shelf sentiment models.

Results

Overview of wildlife YouTube video content and themes

Manual annotation of diverse wildlife content on YouTube surfaced 7 salient wildlife themes within the open-access YouTube-8M dataset25. The results yielded 1751 usable videos featuring wildlife (Table 1; SI: Tab. 2). The most common videos with a definitive theme were hunting (17%), animals in captivity (15%) (e.g., in zoos and animal reserves), and wildlife viewing (14%) (e.g., encounters in a natural setting, camera traps), followed by content related to national parks (9%), safari (7%), conservation (6%), and educational content (6%). Comparing engagement across themes, we found that educational (μ = 20.12), animals in captivity (μ = 18.59), and conservation videos (μ = 14.32) received the most comments per video, suggesting strong viewer interaction (Table 1). Safari (μ = 2.9 M), animals in captivity (μ = 2.3 M), and educational videos (μ = 1.8 M) had the highest views, while captivity-related (μ = 23,730), educational (μ = 11,550), and conservation (μ = 8650) content received the most likes (Table 1), indicating varying audience preferences for viewing versus active engagement.

Table 1 Number of videos available for analysis on YouTube by wildlife theme

Although the exact location of these videos was not always available, a search of the titles and location descriptions revealed that most originated in North America (n = 851; 47%), followed by Africa (n = 332; 19%) and Europe (n = 143; 8%). This suggests that English-language YouTube content creation is largely driven by individuals from relatively few locations, reflecting regionally specific attitudes toward wildlife and human–wildlife relationships, while it may be consumed and commented on by a more global audience. An assessment of the dominant species featured in the videos revealed that mammals (n = 1151; 66%) were the most common, followed by a group of videos showcasing various species (n = 276; 16%), birds (n = 211; 12%), and reptiles (n = 29; 2%). A qualitative assessment, supported by keyword analysis of video captions and titles (see section “Video Topic—Full Gold Set (VFGS)” of “Methods”), revealed that hunting videos focused predominantly on deer and turkey hunts, as well as fishing. Wildlife and ecosystem-related videos covered a range of content, from backyard encounters and wildlife camera footage (e.g., camera traps, first-hand recordings) to observations in primarily natural settings. Animals in captivity or zoo-related videos showcased charismatic species (e.g., the African Big Five, large mammals of North America, and other iconic wildlife) that often dominate public imagery and conservation narratives (see Albert et al.36 for a partial list). National park and conservation videos depicted various animals, often highlighting specific ecozones. Safari videos commonly featured “big ten” species and predator–prey interactions, capturing iconic wildlife encounters.

Comments on wildlife YouTube videos

An analysis of all comments (n = 24,917) on videos with engagement (i.e., at least one English-language comment) showed that videos featuring animals in captivity received the most comments (Fig. 1), followed by wildlife viewing, hunting, educational content, safari footage, and national park videos. To further capture salient attitudes toward wildlife content, we developed a multi-label pipeline built on BERT37, which performed best in our setting. For each attitude (i.e., appreciation (wildlife; humans), concern (wildlife; humans)), we trained a separate BERT-based binary classifier, allowing a single comment to receive multiple attitude labels. Most comments expressed appreciation for wildlife (32%), followed by appreciation for humans (23%), concern for wildlife (7%), calls to action (2%), and concern for humans (0%) (Fig. 1; examples in SI: Tab. 3).

Fig. 1: The distribution of all comments across the relevant wildlife video themes and their corresponding breakdown by attitude.

The diagram represents how viewer engagement differs across themes such as hunting, educational content, wildlife viewing, and animals in captivity, while also showing the proportion of attitudes, including appreciation, concern, and calls to action. Here we consider only comments in these main categories, excluding those in the “other” category. A figure including all comments, including those classified as “other”, is provided in SI: Fig. 1.


These models were trained using annotations from a subset of 2778 comments verified by at least two out of three annotators (see SI: Tab. 1). Independent annotators (n = 28) were trained to identify calls to action, i.e., explicit attempts to motivate others to act, whether supportive or oppositional toward conservation (e.g., “Contact your senator,” “We should do something about these wolves”); appreciation of wildlife and humans (i.e., expressions of reverence for the beauty and abilities of wildlife or humans); and concern for wildlife and humans (i.e., expressions of worry, distress, or fear for wildlife or human welfare). Results showed that appreciation for wildlife and humans was the most accurately predicted attitude, while concern for wildlife and calls to action posed the greatest challenge for accurate classification (see Table 3, “Methods”). See subsection “Classifying comment attitudes” for full model architecture and training details.

Further analysis across these wildlife video themes, breaking down the percentage of comments by video, showed that appreciation for wildlife was highest in videos featuring animals in captivity (i.e., zoos), consistently high across most themes, and lower than appreciation for humans in hunting videos (Fig. 2). A qualitative review of a sample of these videos revealed that comments on videos about animals in captivity often focused on playful or funny wildlife moments, as well as appreciation for baby and charismatic animals. Together with the high number of appreciation comments, this suggests that the consumption of wildlife videos is primarily for entertainment purposes. Appreciation of humans was highest in videos themed conservation and hunting. However, the nature of this appreciation varied greatly (see SI, Tab. 3 for examples of comments). In conservation videos, it focused on prominent conservationists; in hunting videos, comments often expressed admiration for hunting gear, as well as the skill and ingenuity of hunters. Attitudes of concern for wildlife and calls to action were highest in wildlife conservation videos, although these comments were relatively few compared to those expressing appreciation for animals and humans. This suggests that prominent conservationists may be successfully prompting calls to action through their content. Themes such as hunting, educational content, and animals in captivity also generated high rates of calls to action. However, in hunting videos, these calls to action were often polarized, either opposing hunting or advocating for hunting to manage wildlife (SI, Tab. 3). Concern for humans was minimal.

Fig. 2: The percentage of total comments expressing appreciation and concern for wildlife and humans, and calls to action, across different wildlife video themes.

Confidence intervals are included to visualize the distribution and variability of these comments across videos.


Attitudes of video comments for charismatic species

We finally evaluated charismatic species (see Albert et al.36) to determine their communicative relevance, labeling them according to keywords and selecting the species most frequently mentioned when multiple were present (see section 5 in the SI for more details). Charismatic species have long served as flagship symbols in conservation campaigns, valued for their esthetic appeal and their capacity to elicit strong emotional engagement36,38. On platforms like YouTube, their portrayal by content creators may amplify viewer calls to action and influence audience responses to conservation messaging.

Our analysis of comments on top species-specific videos reveals that wolves are highly polarizing, generating a mix of concern and appreciation for both humans and animals, with a similar percentage of comments falling into these categories (Fig. 3). This likely reflects the ongoing debates surrounding their reintroduction in numerous regions39. In contrast, videos featuring cheetahs and gorillas predominantly receive comments expressing appreciation, often highlighting their majestic qualities, such as speed, strength, and attractiveness. Videos featuring rhinoceroses, lions, tigers, and wolves receive the highest percentage of comments related to calls to action.

Fig. 3: Heatmap showing the percentage of comments categorized as appreciation and concern for wildlife and humans, and calls to action, for the most frequently featured charismatic species in each video.

Species were identified through keyword searches and manual verification, with primary focus on charismatic species as defined by ref. 36.


Discussion

Our analysis of wildlife-related YouTube videos and viewer comments offers insights into how wildlife is thematically framed on digital platforms and the attitudes of audiences engaging with this content. We find that YouTube serves as a major source of engagement with wildlife, yet the representation of wildlife within it presents challenges for conservation efforts. The largest category of wildlife content we identified was animals in captivity, accounting for 30% of all videos, likely reflecting the most accessible form of public interaction with wildlife. In contrast, conservation videos made up just 9% of the total, occurring at nearly half the rate of hunting videos (17%). These findings suggest that viewers most often encounter wildlife in domesticated or other human-centered contexts that mirror the everyday lives and habits of typical content creators. This pattern is consistent with YouTube’s user-generated origins and creators’ intuitions about audience interest. Nevertheless, it is striking that such content outperforms polished, conservation-focused videos, even highly visible ones, which, given platform dynamics and monetization incentives, might be expected to perform better40.

A key question guiding this study was whether exposure to wildlife content on YouTube increases a sense of responsibility or motivation to protect wildlife. To explore this, we analyzed comments, classifying responses into appreciation and concern for wildlife, as well as explicit calls to action. The results indicate that while appreciation was the most prevalent attitude (32% of all comments), expressions of concern (7%) and calls to action (2%) were comparatively rare. These findings challenge the assumption that exposure to wildlife content on social media naturally translates into conservation calls to action or advocacy22. Instead, viewers are primarily engaged in passive admiration, rather than actively supporting protective measures for wildlife.

Nevertheless, the presence of even a small number of calls to action suggests that YouTube can serve as a platform for conservation messaging and mobilization, given its wide viewership and near-global reach. Our findings indicate that concern and calls to action were more common in response to conservation-themed videos. However, given that such videos account for only a small share of all wildlife-related content, their overall impact may be limited by their lower visibility. This highlights a clear opportunity for conservation organizations, educators, and content creators to expand the presence of content that centers pro-environmental and conservation messaging on YouTube.

Future efforts should explore strategies to improve the effectiveness of conservation messaging on social media41,42. Algorithmic approaches that increase the reach of conservation content, enhance the emotional and narrative appeal of videos, and stimulate interactive discussion that encourages civic participation may help translate passive appreciation (e.g., clicktivism, hashtag activism) into meaningful conservation action. Expanding the proportion of conservation-focused content and understanding the types of media that elicit stronger engagement could be essential steps in harnessing digital platforms for calls to action. To achieve this kind of active and meaningful engagement, several key questions require further investigation.

There is a pressing need to better understand the emotional pathways that lead to calls to action on digital platforms. While general appreciation for wildlife did not consistently prompt advocacy, content emphasizing animal welfare or controversial topics, such as wolf reintroduction or endangered charismatic species like rhinoceroses and elephants, was more likely to do so (SI, Tab. 3). Due to data limitations, we were unable to further analyze the tone within these messages. Behavioral science research suggests that hopeful messages can encourage small pro-environmental actions (e.g., recycling), whereas alarmist messages may be more effective in driving deeper behavioral changes needed to address biodiversity loss43. Future work should explore which emotional responses (e.g., empathy, outrage, awe) and which tones are most effective at motivating action44.

Researchers must critically examine how to interpret digital calls to action, especially as social media become a growing platform for public engagement with wildlife and conservation. Our work contributes to a small but growing body of research exploring how social media can facilitate collective action45,46,47. While some dismiss online advocacy as superficial or performative, this perspective may overlook genuine expressions of public sentiment. Research shows that online engagement often correlates with offline political behavior48,49,50, and this may extend to the wildlife context, where widespread appreciation suggests that pro-wildlife values are becoming more mainstream. However, the relative absence of explicit calls to action indicates that appreciation does not consistently translate into conservation-oriented advocacy, even among those who support such values.

This gap highlights a broader challenge: how to foster action without overemphasizing a single narrative. Doing so risks alienating broader coalitions that value wildlife, but engage differently51. At the same time, vocal minorities who advocate for action could represent a strategic lever for change. Political science research increasingly shows that environmental policy is often shaped by small but engaged constituencies52. A central challenge for conservation communication, then, is how to amplify meaningful engagement, particularly among these core advocates, while still creating space for more moderate or passive supporters to participate.

Another important area for future research lies in reconciling the growing public concern for animal welfare53 with the population-level focus of traditional, often utilitarian, wildlife conservation efforts. While conservation science typically prioritizes ecosystem- and species-level outcomes, animal welfare emphasizes the well-being of individual animals (e.g., popular zoo animals and live camera feeds of wild species). Qualitative analysis of comments revealed tensions between these perspectives.

Table 2 Example Comments from Dataset

For instance, some commenters supported controlled hunting of wolves as a means of maintaining ecosystem balance and preventing starvation, while others condemned any form of lethal control on moral grounds. Similarly, in the case of rehabilitated elephants, some praised their release into the wild as restoring dignity and freedom, whereas others questioned the ethics of returning animals to environments where poaching risks remain high. Failing to acknowledge animal welfare concerns risks alienating an engaged and emotionally invested audience. Conservation practitioners must consider how to incorporate these concerns, not only to reduce animal suffering but also to build broader public support. This aligns with critiques of wildlife tourism, where entertainment value often overshadows animal well-being54.

Our analysis is subject to several limitations. First, to maintain classification accuracy, we excluded non-English comments, introducing an English-language bias that limits the global inclusivity of our findings. Second, although the videos represented a wide range of global locations, and our qualitative assessment found that the video titles were often in English, even when produced by non-native speakers, the dataset remains shaped by the geographic and political availability of YouTube, with notable gaps such as China, where the platform is restricted17. Third, while we successfully identified calls to action in viewer comments, we did not systematically assess whether such appeals were prompted by video content itself (e.g., creators urging viewers to donate or take action). Although we noted such instances during manual review, our initial efforts using machine learning to classify video-level features from titles and captions did not yield reliable categories. As such, this dimension was excluded from the model. Future work should prioritize the development of robust multilingual models and explore multimodal approaches to video-level analysis to better understand the relationship between content and audience response.

This paper highlights an emerging research agenda at the intersection of digital media and conservation, particularly how wildlife and nature are featured online. A key question remains: Can social media content effectively integrate emotional elements and catalyze calls to action into meaningful behavior, empowering both audiences and content creators, especially those who already express care and concern for wildlife, to take real, impactful steps in support of conservation? The methods presented here offer a framework not only for examining public engagement with biodiversity conservation but also for exploring broader sustainability challenges, including climate change.

Methods

To identify complex attitudes toward wildlife (concern, appreciation, and calls to action), our modeling approach relied on human-guided concept extraction informed by crowdsourced annotations. These attitudes span a range of emotions and are not easily reducible to the simple polarity or valence measurements of positive, negative, and neutral sentiment that are commonly used in text analysis. For example, concern may reflect anger or fear, appreciation may include multiple positive emotions, and calls to action often mix emotional tones and motivations that standard sentiment tools fail to capture. There is growing recognition that sentiment alone does not recover target-specific stance55, so our attitude-based approach better reflects public perspectives on conservation. We recorded valence during annotation but do not analyze it here, as pilot checks showed it was noisier and less reliable than labeling concern, appreciation, and calls to action.

Our primary data source is the YouTube-8M dataset25, a large-scale multi-label corpus comprising approximately 6.1 million public YouTube videos, annotated across 3862 Knowledge Graph classes, with an average of 3.0 labels per video. Each video is required to have at least 1000 views and a duration between 120 and 500 s (i.e., 2–8.3 min). These criteria ensure that each video achieved a minimum level of audience exposure. From this dataset, we selected all videos tagged as wildlife, resulting in a subset of 3895 videos. A table describing the data processing pipeline is shown in SI 2 in the appendix. To analyze this large dataset, we produced and integrated three additional curated datasets:

  • COMMENT ATTITUDES – GOLD SET (CSGS): Manually annotated labels of the target attitudes (n = 2778): appreciation for wildlife, appreciation for humans, concern for wildlife, concern for humans, and call to action.

  • COMMENT ATTITUDES – FULL SET (CSFS): Machine-learning generated labels (n = 24,917) of the same concepts using the manually annotated labels as training data.

  • VIDEO TOPIC – FULL GOLD SET (VFGS): Manually annotated labels of the topics covered in the videos (n = 1751).

Our high-quality training dataset (CSGS) was used to train a series of BERT-based machine learning models37 for attitude classification. These models were then applied to produce the CSFS dataset. In total, we manually annotated 2778 comments from 301 videos and inferred attitude labels for 24,917 comments. Additionally, we created a second dataset of manually annotated video topics to better understand the thematic content. We annotated 2490 videos in total. Of these, 1751 videos (VFGS) contained at least one English-language comment and were included in our analysis.
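The YouTube-8M selection criteria described above (at least 1000 views and a 120–500 s duration) amount to a simple record filter. A minimal sketch, using a hypothetical metadata structure for illustration rather than the actual YouTube-8M format:

```python
# Filter video records by the minimum-exposure thresholds used for the
# YouTube-8M subset (>=1000 views, duration between 120 and 500 s).
# The dict structure below is hypothetical, for illustration only.

def eligible(video: dict) -> bool:
    """Return True if a video meets the view and duration criteria."""
    return video["views"] >= 1000 and 120 <= video["duration_s"] <= 500

videos = [
    {"id": "a", "views": 5200, "duration_s": 300},  # eligible
    {"id": "b", "views": 450,  "duration_s": 300},  # too few views
    {"id": "c", "views": 9000, "duration_s": 90},   # too short
]

subset = [v for v in videos if eligible(v)]
print([v["id"] for v in subset])  # → ['a']
```

Boundary values (exactly 1000 views, exactly 120 or 500 s) are treated as eligible here, matching the inclusive phrasing "at least" and "between" in the dataset description.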

Producing the comment attitude gold set

Annotators first determined whether a comment under a wildlife video conveyed appreciation, concern, or a call to action. If a comment expressed appreciation or concern, they further evaluated its intensity, as outlined in Table 2. For comments classified as calls to action, annotators specified whether the call was vague or specific. To ensure consistency and reliability, all annotators completed a structured training program before beginning the task.

Table 2 Question format for qualifying expressions of appreciation and concern

The training program was designed to ensure annotators could reliably perform key classification tasks, each centered on a distinct concept: (1) identifying whether a video pertains to wildlife, (2) determining if a comment expresses appreciation or concern, and (3) recognizing whether a comment contains a call to action.

To facilitate this process, we first provided clear definitions of each concept, accompanied by both positive and negative examples. Annotators were then asked to complete a series of questions to assess their ability to detect and classify each concept (Table 2).

Following this assessment, we offered immediate feedback for each response. If an answer was correct, we reinforced the rationale behind it. If incorrect, we provided clarifications to enhance the annotators’ comprehension of the task. This iterative approach ensured consistency and reliability in annotations. During the annotation process annotators were given access to a reference document, which is shown in the supplement, along with additional details about the annotation process. The training quiz and annotation task were deemed exempt by the University of Michigan Institutional Review Board (IRB), HUM00249546.

The gold-standard dataset for comment attitude analysis was created by applying a majority voting approach to determine the final label for each comment. If two out of three annotators classified a comment as expressing concern for wildlife while one did not, the label for concern was assigned a value of 1. This approach ensured that attitude annotations reflected consensus among evaluators, reducing individual biases.
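The majority-voting rule above can be sketched directly; for each attitude, a comment receives a label of 1 when a strict majority of its three annotators marked the attitude as present:

```python
# Majority vote over annotators for one binary attitude label
# (e.g., "concern for wildlife"), as used to build the gold set.

def majority_label(votes: list[int]) -> int:
    """Assign 1 if a strict majority of annotators marked the attitude."""
    return int(sum(votes) > len(votes) / 2)

# Two of three annotators saw concern for wildlife -> label 1
print(majority_label([1, 1, 0]))  # → 1
print(majority_label([0, 1, 0]))  # → 0
```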

In total, we compiled a dataset of 2778 manually annotated comments, which served as high-quality training data for our machine learning models. These models were then trained on the annotated subset and applied to the remaining comments, enabling scalable and consistent attitude annotation while maintaining alignment with human judgment.

Classifying comment attitudes

We used the gold set annotations to systematically detect the presence of five target attitudes: concern for wildlife, appreciation for wildlife, concern for humans, appreciation for humans, and call to action. Each attitude was treated as an independent binary classification task, where a comment was either classified as expressing the attitude or not. That is, for an attitude type $c_j$ and a dataset $\mathbb{D} = \{(x_1, y_1), \ldots, (x_N, y_N)\}$, where $y_i^{c_j} \in \{0, 1\}$ is a binary variable such that $y_i^{c_j} = \mathbb{I}(c_j \text{ in comment } i)$, and $x_i$ belongs to a multimodal feature space $\mathcal{X}$ which depends on the available data, our goal is to learn a function $f: \mathcal{X} \to \{0, 1\}$. This approach allowed us to scale attitude analysis across the dataset while maintaining alignment with expert-validated annotations.
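Because each attitude is an independent binary decision, a single comment can carry several labels at once. A minimal sketch of this multi-label setup, with placeholder keyword rules standing in for the trained per-attitude BERT classifiers:

```python
# Multi-label attitude tagging via independent binary decisions.
# The keyword rules are illustrative placeholders; the study trains a
# separate fine-tuned BERT classifier per attitude instead.

ATTITUDES = {
    "appreciation_wildlife": lambda t: "beautiful" in t or "majestic" in t,
    "concern_wildlife":      lambda t: "worried" in t or "endangered" in t,
    "call_to_action":        lambda t: "contact your" in t or "we should" in t,
}

def label_comment(text: str) -> list[str]:
    """Run every binary classifier; return all attitudes that fire."""
    t = text.lower()
    return [name for name, clf in ATTITUDES.items() if clf(t)]

print(label_comment("Such a majestic animal, we should protect its habitat"))
# → ['appreciation_wildlife', 'call_to_action']
```

The key design point is that the classifiers do not compete: a comment expressing both admiration and a call to action receives both labels, which a single multi-class model would not allow.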

To annotate the full dataset with respect to each of the attitudes above, we utilized pre-trained LLMs56, specifically BERT-base-uncased37, which has demonstrated strong performance in textual classification tasks.

We trained a separate BERT-based classifier for each attitude using stratified 5-fold cross-validation. Each comment was tokenized using the BERT tokenizer with a maximum sequence length of 128, and the labels were converted into a binary format, mapping attitude presence to 1 and absence to 0. Given the inherent class imbalance, we dynamically computed class weights during training to mitigate bias toward the majority class.
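The paper does not state the exact formula behind the dynamically computed class weights; a common choice is the inverse-frequency ("balanced") heuristic, w_c = N / (K · n_c), sketched here under that assumption:

```python
# Inverse-frequency class weights for an imbalanced binary task,
# following the standard "balanced" heuristic: w_c = N / (2 * n_c).
# This is one common option, not necessarily the paper's exact formula.

def class_weights(labels: list[int]) -> dict[int, float]:
    n = len(labels)
    counts = {c: labels.count(c) for c in (0, 1)}
    return {c: n / (2 * counts[c]) for c in (0, 1)}

# About 8% positives, as with the call-to-action labels:
labels = [1] * 8 + [0] * 92
w = class_weights(labels)
print(w[1])  # → 6.25
```

The rare positive class gets a proportionally larger weight in the loss, so the classifier is penalized more for missing minority-class examples.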

Each model was initialized with two output labels and optimized using AdamW with a learning rate of 2e-5 and a batch size of 32. We applied CrossEntropyLoss with computed class weights and incorporated gradient clipping (max norm = 1.0) to stabilize training. Additionally, we used a linear learning rate scheduler with warmup to enhance convergence. Training was carried out over five epochs, and model performance was evaluated using accuracy, precision, recall, F1-score, and ROC-AUC. The best-performing model, based on F1-score, was saved for final testing (SI: Tab. 4).
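The linear warmup schedule mentioned above can be expressed as a multiplier on the base learning rate: a linear ramp over the warmup steps, then linear decay to zero. A sketch of the usual formulation (warmup and total step counts here are illustrative, not reported values):

```python
def lr_multiplier(step: int, warmup: int, total: int) -> float:
    """Linear warmup then linear decay, as in common transformer training."""
    if step < warmup:
        return step / max(1, warmup)
    return max(0.0, (total - step) / max(1, total - warmup))

base_lr = 2e-5  # learning rate used for the BERT classifiers
# Halfway through a 100-step warmup, the rate is half of base:
print(lr_multiplier(50, 100, 1000) * base_lr)   # → 1e-05
# Halfway through the decay phase, it is also half of base:
print(lr_multiplier(550, 100, 1000) * base_lr)  # → 1e-05
```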

The performance of all models on each of the attitude classification tasks is shown in Table 3. The models detected appreciation for both humans and wildlife with high precision and recall. Concern for wildlife proved slightly more difficult, but the model still achieved reasonable performance.

Table 3 We report the precision, recall, F1 score, AUC, class distribution, and Krippendorff’s alpha (Kα) for each attitude

We observed a marked class imbalance for calls to action (≈8% positives), which lowers Krippendorff’s α for this category57 and makes the minority class harder to detect. To compensate, we ran an expanded hyperparameter search for the call-to-action model, tuning epochs, batch size, learning rate, weight decay, and dropout (hidden layers and attention), to improve recall without excessive precision loss. For the other four attitudes, the imbalance was milder and the validation more stable, so we tuned only the number of epochs and the batch size.
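One way to operationalize "improve recall without excessive precision loss" is to enumerate a grid and select the configuration with the best recall among those meeting a precision floor; the grid values and the 0.5 floor below are illustrative assumptions, not the authors' actual search space:

```python
from itertools import product

# Illustrative search space over the tuned hyperparameters (values hypothetical).
grid = {
    "epochs": [3, 5, 8],
    "batch_size": [16, 32],
    "lr": [1e-5, 2e-5, 3e-5],
    "weight_decay": [0.0, 0.01],
    "dropout": [0.1, 0.3],
}
configs = [dict(zip(grid, vals)) for vals in product(*grid.values())]

def best_config(results, min_precision=0.5):
    """Maximize recall subject to a precision floor; None if nothing qualifies."""
    ok = [r for r in results if r["precision"] >= min_precision]
    return max(ok, key=lambda r: r["recall"]) if ok else None

# Synthetic validation results for three candidate configurations.
results = [
    {"precision": 0.62, "recall": 0.41},
    {"precision": 0.48, "recall": 0.88},  # fails the precision floor
    {"precision": 0.55, "recall": 0.69},
]
chosen = best_config(results)
```

The constrained selection rule prevents the search from trivially maximizing recall by predicting the positive class everywhere.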

Video Topic – Full Gold Set (VFGS)

We extensively evaluated unsupervised methods for identifying latent video topics, yet none proved sufficiently effective for our research objectives. Instead, we opted for manual annotation, ensuring high-quality, interpretable labels.

Three trained annotators were each assigned one-third of the videos for topic classification. Annotators were instructed to review the video description and watch at least the first minute of each video. If the video’s theme remained unclear, they were encouraged to continue watching. This protocol was informed by our own extensive testing, which indicated that the first minute was generally sufficient for accurate annotation, while allowing flexibility for more ambiguous cases. To ensure consistency and accuracy, the research team held weekly meetings with the annotators to develop codebooks, discuss challenging cases, and refine the classification criteria.

Each video was categorized into one of the following themes: Bird Songs, Safari, National Park, Hunting, Baby Animal, Conservation, Animals in Captivity, Educational, Photography, Wildlife and Ecosystem (No Human Interaction), Funny/Entertaining, Vlog, and Other. For analytical clarity, these categories were consolidated into seven broader topics that together account for 75% of the videos in VFGS: Animals in Captivity, Wildlife Viewing, Educational, Hunting, Safari, Conservation, and National Park.

Data availability

All processed data necessary to reproduce the figures are openly available in a Code Ocean compute capsule58 (https://doi.org/10.24433/CO.1453230.v1). We used the YouTube-8M dataset for video-level labels and precomputed audio-visual features; YouTube-8M is publicly accessible. Raw YouTube video content and user comments are governed by YouTube’s Terms of Service; accordingly, we do not redistribute raw video or identifiable user data.

Code availability

All code necessary to reproduce the figures is openly available in a Code Ocean compute capsule58 (https://doi.org/10.24433/CO.1453230.v1). No custom code was used to generate or process the data described in the manuscript.

References

  1. Ceballos, G. & Ehrlich, P. R. Mutilation of the tree of life via mass extinction of animal genera. Proc. Natl. Acad. Sci. USA 120, e2306987120 (2023).


  2. Maxwell, S. L., Fuller, R. A., Brooks, T. M. & Watson, J. E. Biodiversity: the ravages of guns, nets and bulldozers. Nature 536, 143–145 (2016).


  3. Ceballos, G., Ehrlich, P. R. & Dirzo, R. Biological annihilation via the ongoing sixth mass extinction signaled by vertebrate population losses and declines. Proc. Natl. Acad. Sci. USA 114, E6089–E6096 (2017).


  4. Senior, R. A. et al. Global shortfalls in documented actions to conserve biodiversity. Nature 630, 387–391 (2024).


  5. IPBES. Summary for policymakers of the thematic assessment report on the underlying causes of biodiversity loss and the determinants of transformative change and options for achieving the 2050 vision for biodiversity of the intergovernmental science-policy platform on biodiversity and ecosystem services. IPBES secretariat, Bonn, Germany https://doi.org/10.5281/zenodo.11382230 (2024). Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES).

  6. Bosone, L., Chaurand, N. & Chevrier, M. To change or not to change? perceived psychological barriers to individuals’ behavioural changes in favour of biodiversity conservation. Ecosyst. People 18, 315–328 (2022).


  7. Correia, R. A. et al. Digital data sources and methods for conservation culturomics. Conserv. Biol. 35, 398–411 (2021).


  8. Reynolds, S. A. et al. The potential for ai to revolutionize conservation: a horizon scan. Trends Ecol. Evol. 40, 191–207 (2025).


  9. Correia, R. A., Guedes-Santos, J., Vardi, R. & Minin, E. D. Digital data sources and methods for conservation culturomics. Conserv. Biol. 35, 398–411 (2021).


  10. Berl, R. E., Sekar, S., Markevich, A., Camara, C. & Niemiec, R. M. Assessing the impacts of normative and efficacy-based messaging on the social diffusion of conservation science. Conserv. Sci. Pract. 4, e12647 (2022).


  11. Huber, B., Barnidge, M., Gil de Zúñiga, H. & Liu, J. Fostering public trust in science: the role of social media. Public Underst. Sci. 28, 759–777 (2019).


  12. Killion, A. K., Melvin, T., Lindquist, E. & Carter, N. H. Tracking a half century of media reporting on gray wolves. Conserv. Biol. 33, 645–654 (2019).


  13. Niemiec, R. et al. Rapid changes in public perception toward a conservation initiative. Conserv. Sci. Pract. 4, e12632 (2022).


  14. Rinne, J., Kulkarni, R., Soriano-Redondo, A., Correia, R. & Di Minin, E. Using automated content analysis to monitor global online trade in endemic reptile species. Diversity Distrib. 31, e13771 (2025).


  15. Hartmann, J., Heitmann, M., Siebert, C. & Schamp, C. More than a feeling: accuracy and application of sentiment analysis. Int. J. Res. Mark. 40, 75–87 (2023).


  16. Derrien, M. M. et al. Where wilderness is found: evidence from 70,000 trip reports. People Nat. 6, 202–219 (2024).


  17. Ghermandi, A. et al. Social media data for environmental sustainability: a critical review of opportunities, threats, and ethical use. One Earth 6, 236–250 (2023).


  18. DataReportal. Digital 2025: The state of social media in 2025 https://datareportal.com/reports/digital-2025-sub-section-state-of-social (2025).

  19. Carvalho, A. F., de Morais, I. O. B. & Souza, T. B. Profiting from cruelty: digital content creators abuse animals worldwide to incur profit. Biol. Conserv. 287, 110321 (2023).


  20. Shim, D. Personalising climate change—how activists from fridays for future visualise climate action on instagram. Human. Soc. Sci. Commun. 11, 1–9 (2024).


  21. Ballejo, F., Plaza, P. I. & Lambertucci, S. A. Framing of visual content shown on popular social media may affect viewers’ attitudes to threatened species. Sci. Rep. 11, 13512 (2021).


  22. Bergman, J. N. et al. Evaluating the benefits and risks of social media for wildlife conservation. Facets 7, 360–397 (2022).


  23. Allgaier, J. Science and environmental communication on youtube: strategically distorted communications in online videos on climate change and climate engineering. Front. Commun. 4, 36 (2019).


  24. DataReportal. Youtube users, stats, data & trends for 2025. https://datareportal.com/essential-youtube-stats. accessed: April 8, 2025.

  25. Abu-El-Haija, S. et al. Youtube-8m: a large-scale video classification benchmark. Preprint at https://doi.org/10.48550/arXiv.1609.08675 (2016).

  26. Toivonen, T. et al. Social media data for conservation science: a methodological overview. Biol. Conserv. 233, 298–315 (2019).


  27. Cheng, X., Dale, C. & Liu, J. Statistics and social network of youtube videos. In 2008 16th Interntional Workshop on Quality of Service, 229–238 (IEEE, 2008).

  28. Fischer, O., Jeitziner, L. T. & Wulff, D. U. Affect in science communication: a data-driven analysis of ted talks on youtube. Human. Soc. Sci. Commun. 11, 1–9 (2024).


  29. Real, E., Shlens, J., Mazzocchi, S., Pan, X. & Vanhoucke, V. Youtube-boundingboxes: A large high-precision human-annotated data set for object detection in video. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 5296–5305 (IEEE, 2017).

  30. Von Essen, E. et al. Wildlife in the digital anthropocene: examining human-animal relations through surveillance technologies. Environ. Plan. E: Nat. Space 6, 679–699 (2023).


  31. Yue, L., Chen, W., Li, X., Zuo, W. & Yin, M. A survey of sentiment analysis in social media. Knowl. Inf. Syst. 60, 617–663 (2019).


  32. Wang, Y. et al. A systematic review on affective computing: emotion models, databases, and recent advances. Inf. Fusion 83, 19–52 (2022).


  33. Zhang, W., Deng, Y., Liu, B., Pan, S. J. & Bing, L. Sentiment analysis in the era of large language models: a reality check. In Findings of the Association for Computational Linguistics: NAACL 2024 pp. 3881–3906 (2024).

  34. Kikuchi, Y., Nishimura, I. & Sasaki, T. Wild birds in youtube videos: presence of specific species contributes to increased views. Ecol. Inform. 71, 101767 (2022).


  35. Moloney, G. K., Tuke, J., Dal Grande, E., Nielsen, T. & Chaber, A.-L. Is youtube promoting the exotic pet trade? analysis of the global public perception of popular youtube videos featuring threatened exotic animals. PLoS One 16, e0235451 (2021).


  36. Albert, C., Luque, G. M. & Courchamp, F. The twenty most charismatic species. PloS ONE 13, e0199149 (2018).


  37. Devlin, J., Chang, M., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR abs/1810.04805 http://arxiv.org/abs/1810.04805 (2018).

  38. Smith, R. J., Veríssimo, D., Isaac, N. J. & Jones, K. E. Identifying cinderella species: uncovering mammals with conservation flagship appeal. Conserv. Lett. 5, 205–212 (2012).


  39. Williams, C. K., Ericsson, G. & Heberlein, T. A. A quantitative summary of attitudes toward wolves and their reintroduction (1972-2000). Wildlife Soc. Bull. 30, 575–584 (2002).

  40. Chu, A., Arunasalam, A., Ozmen, M. O. & Celik, Z. B. Behind the tube: Exploitative monetization of content on {YouTube}. In 31st USENIX Security Symposium (USENIX Security 22), 2171–2188 (2022).

  41. Silk, M., Correia, R., Veríssimo, D., Verma, A. & Crowley, S. L. (Eds.). Nature on screen: the implications of visual media for human-nature relationships. (2021).

  42. McCormack, C. M., K Martin, J. & Williams, K. J. The full story: understanding how films affect environmental change through the lens of narrative persuasion. People Nat. 3, 1193–1204 (2021).


  43. Clemm von Hohenberg, B. & Hager, A. Wolf attacks predict far-right voting. Proc. Natl. Acad. Sci. USA 119, e2202224119 (2022).


  44. Zelenski, J. M. & Desrochers, J. E. Can positive and self-transcendent emotions promote pro-environmental behavior? Curr. Opin. Psychol. 42, 31–35 (2021).


  45. Dekoninck, H. & Schmuck, D. The mobilizing power of influencers for pro-environmental behavior intentions and political participation. Environ. Commun. 16, 458–472 (2022).


  46. Boulianne, S. & Ohme, J. Pathways to environmental activism in four countries: social media, environmental concern, and political efficacy. J. Youth Stud. 25, 771–792 (2022).


  47. Pera, A. & Aiello, L. M. Narratives of collective action in youtube’s discourse on veganism. In Proc. International AAAI Conference on Web and Social Media, Vol. 18, 1220–1236 (2024).

  48. Settle, J. E. et al. From posting to voting: The effects of political competition on online political engagement. Polit. Sci. Res. Methods 4, 361–378 (2016).


  49. Kang, S. & Gearhart, S. E-government and civic engagement: how is citizens’ use of city web sites related with civic involvement and political behaviors? J. Broadcasting Electron. Media 54, 443–462 (2010).


  50. DiGrazia, J., McKelvey, K., Bollen, J. & Rojas, F. More tweets, more votes: Social media as a quantitative indicator of political behavior. PloS ONE 8, e79449 (2013).


  51. Allen, B. L. et al. Why humans kill animals and why we cannot avoid it. Sci. Total Environ. 896, 165283 (2023).


  52. Einstein, K. L., Palmer, M. & Glick, D. M. Who participates in local government? evidence from meeting minutes. Perspect. Polit. 17, 28–46 (2019).


  53. Greving, H. & Kimmerle, J. You poor little thing! the role of compassion for wildlife conservation. Hum. Dimens. Wildl. 26, 115–131 (2021).


  54. Moorhouse, T. P., Dahlsjö, C. A., Baker, S. E., D’Cruze, N. C. & Macdonald, D. W. The customer isn’t always right—conservation and animal welfare implications of the increasing demand for wildlife tourism. PloS ONE 10, e0138939 (2015).


  55. Bestvater, S. E. & Monroe, B. L. Sentiment is not stance: Target-aware opinion classification for political text analysis. Polit. Anal. 31, 235–256 (2023).


  56. Wang, H., Li, J., Wu, H., Hovy, E. & Sun, Y. Pre-trained language models and their applications. Engineering 25, 51–65 (2023).


  57. Jeni, L. A., Cohn, J. F. & De La Torre, F. Facing imbalanced data–recommendations for the use of performance metrics. In 2013 Humaine association conference on affective computing and intelligent interaction, 245–251 (IEEE, 2013).

  58. Van Berkel, D. et al. YouTube content on wildlife engages audiences but rarely drives meaningful conservation action. https://doi.org/10.24433/CO.1453230.v1.


Acknowledgements

E.D.M. was funded by the European Union (ERC, BIOBANG, 101171602). Views and opinions expressed are however those of the author only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. E.D.M. would also like to thank the KONE Foundation under project 202309134.

Author information

Authors and Affiliations

Authors

Contributions

D.V.B.: Conceptualization; Methodology; Supervision; Project administration; Writing. N.G.: Data curation; modeling; Formal analysis; Validation; Visualization; Writing- review & editing. N.H.C: Conceptualization; Writing – review & editing. E.D.M.: Conceptualization; Writing – review & editing. Y.Z.: Data curation; Project administration; Formal analysis; Writing – review & editing. H.M.: Data curation; Formal analysis. S.Y.: Data curation; Formal analysis. S.T.: Conceptualization; Methodology; Modeling; Project administration; Supervision; Writing. All authors reviewed and approved the final manuscript.

Corresponding author

Correspondence to
Derek Van Berkel.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Communications Sustainability thanks Grace Nolan and Marco Palomino for their contribution to the peer review of this work. Primary Handling Editor: Yann Benetreau. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Transparent Peer Review file

Supplemental Material

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article

Cite this article

Van Berkel, D., Gautam, N., Carter, N.H. et al. YouTube content on wildlife engages audiences but rarely drives meaningful conservation action.
Commun. Sustain. 1, 26 (2026). https://doi.org/10.1038/s44458-025-00018-2


