More stories


    Researchers in dismay as US exits world science body UNESCO … again

    The United States is once again withdrawing from the United Nations science and cultural organization UNESCO, ending its short two-year return to the agency. The decision by the US State Department, announced on 22 July, will take effect on 31 December 2026.

    Researchers say that the US departure from UNESCO is a setback for global cooperation in science and education. The agency, which is headquartered in Paris and has offices in more than 50 countries, supports programmes on biodiversity, girls’ education, closing the gender gap in science and protecting natural heritage. Its work is especially important in low- and middle-income countries, where it also helps to train teachers and rebuild universities in nations experiencing wars, such as Lebanon and Ukraine. UNESCO also supports open science, and, in 2023, it released global guidelines on the use of generative artificial intelligence (AI) in education and research.

    “It’s never been wise to pull out of UNESCO, and now is particularly poor timing,” says Daniel Wagner, UNESCO chair in learning and literacy at the University of Pennsylvania in Philadelphia. Biomedical scientist Peter Gluckman, president of the International Science Council, which works closely with UNESCO and is also based in Paris, agrees. At the end of this year, UNESCO’s member states will choose a new director-general to succeed Audrey Azoulay, formerly France’s culture minister. The United States will lose the opportunity to work with the organization’s new leader, says Gluckman.

    “The US will be at a significant disadvantage with this withdrawal,” says Barbara Finlayson-Pitts, an atmospheric chemist at the University of California, Irvine. The move weakens the United States’s position in global discussions about crucial issues such as climate change, she adds. Wagner agrees: “For generational challenges and opportunities such as AI adoption in education or improving literacy in low-income countries — areas in which the US is well-positioned to lead — we are, in effect, cutting off our nose to spite our face.”

    Not unexpected

    This decision was not a surprise. The White House announced in February that it was reviewing US membership of international agencies; in the case of UNESCO, it cited concerns about the organization’s failure to reform itself and its rhetoric against Israel. In a 22 July statement, the US administration added the UN Sustainable Development Goals to its list of criticisms: “UNESCO works to advance divisive social and cultural causes and maintains an outsized focus on the UN’s Sustainable Development Goals, a globalist, ideological agenda for international development at odds with our America First foreign policy.”

    Azoulay said in a statement that UNESCO was prepared for the US decision. The country last withdrew from UNESCO in 2017, during Trump’s first term, cutting off more than 22% of the agency’s funds. According to UNESCO, the latest withdrawal will not hit as hard because the US contribution now accounts for only 8% of UNESCO’s current annual budget of US$900 million. Azoulay also said that the US’s claims contradict the reality of UNESCO’s efforts as the only UN agency responsible for Holocaust education and the fight against antisemitism.
“We will continue to work hand in hand with all of our American partners in the private sector, academia and non-profit organizations,” she added in the statement.



    The deep sea is a globally connected habitat

    Rex, M. A. & Etter, R. J. Deep-Sea Biodiversity: Pattern and Scale (Harvard Univ. Press, 2010).
    Ramirez-Llodra, E. et al. PLoS ONE 6, e22588 (2011).
    O’Hara, T. D., Hugall, A. F., Woolley, S. N. C., Bribiesca-Contreras, G. & Bax, N. J. Nature 565, 636–639 (2019).
    Stöhr, S., O’Hara, T. D. & Thuy, B. PLoS ONE 7, e31940 (2012).
    Bribiesca-Contreras, G., Verbruggen, H., Hugall, A. F. & O’Hara, T. D. J. Biogeogr. 46, 1287–1299 (2019).


    This ancient mega-predator was built for stealth

    Adaptations for stealth in the wing-like flippers of a large ichthyosaur

    The extinct marine mega-predator Temnodontosaurus had specialized adaptations for stealthily hunting its prey, suggests an analysis of a fossil flipper. Temnodontosaurus’s lifestyle has been a mystery owing to a lack of preserved soft tissue, but fossil remains of a fore-fin have revealed several anatomical details that probably reduced low-frequency noise as the animal swam. The authors suggest that these adaptations show that Temnodontosaurus was a stealth predator. Hear more on the Nature Podcast.


    Map endemic species before they vanish unrecorded

    For a very biodiverse nation, Peru has alarmingly patchy knowledge of its plants. Whereas regions such as Machu Picchu are well documented, vast corridors between the Andes and the Amazon Basin remain scientific blind spots.
    Competing Interests
    The author declares no competing interests.


    Need to update your data? Follow these five tips

    Each week since 1977, researchers at the Portal Project have monitored how rodents, ants and plants interact with each other and respond to their climate on plots of land in Arizona. At first, the team shared those data informally. Then, beginning in the 2000s, the researchers would publish a data paper, wait several years and then publish a new one combining old and new data to keep the information current.

    “Data collection is not a one-time effort,” says Ethan White, an environmental data scientist at the University of Florida in Gainesville, who began collaborating with the project in 2002. New tools have allowed the team to automate and modernize its strategy. In 2019, White and his colleagues developed a data workflow based on the code-sharing site GitHub, the data repository Zenodo and the software-automation tool Travis CI to keep their data current while preserving earlier versions (G. M. Yenni et al. PLoS Biol. 17, e3000125; 2019); so far, the Zenodo repository holds around 620 versions. “We wanted an approach that would let us update things more consistently, but in a way that if someone ever wanted to replicate a past analysis, they could go back and find the precise original data that we used.”

    Long-term ecological research is not the only area that needs to maintain and update data for future use. Many researchers add to, revise or overhaul their data sets over the course of their projects or careers, all while continuing to publish articles. But despite the need to update and preserve versions of data, there is little guidance on how to do so, says Crystal Lewis, a freelance data-management consultant in St. Louis, Missouri. “There are no standards for repositories; the journals are not telling you how to correct a data set or how to cite new data, so people are just winging it.”

    Good data-science practice can make the process more methodical.
    Here are five tips to help alter and cite data sets.

    Choose a repository

    Although it’s easy to place data on personal websites or in the cloud, using a repository is the simplest way for researchers to store, share and maintain multiple versions of their data, says Kristin Briney, a librarian at the California Institute of Technology in Pasadena, who helps researchers to manage their data. “It’ll get it out of the supplemental information; it’ll stop being shared upon request; it’ll stop being shared on personal websites,” on which it can be lost.

    By the end of this year, US federal funding agencies will require researchers to put data in a repository, and some agencies, including the National Institutes of Health, are already implementing the policy. Some journals also require authors to use data repositories. PLoS ONE, for example, recommends several general and subject-specific repositories for its authors, including the Dryad Digital Repository and the Open Science Framework.

    A repository, or data archive, is more than just cloud storage. Repositories provide long-term storage with multiple backups. Zenodo, for example, says that data will be maintained for as long as Europe’s particle-physics laboratory CERN, which runs the site, continues to exist. Generally, repositories also promise that archived data will remain unaltered, and they assign a persistent identifier to each data set so that others can find it.

    Briney suggests that researchers check whether their funding agency has specific recommendations. There might also be a particular repository for the type of data, such as GenBank for genetic sequences, or a discipline-specific repository for the field of study. Some universities offer institutional options, which usually have the added benefit of technical support.
    When no specific repository is available, the non-profit Gates Foundation in Seattle, Washington, recommends generalist repositories such as Zenodo, Dataverse, Figshare and Dryad.

    Create multiple versions

    For transparency and accessibility, making a new version when data are added is essential. The alternative — overwriting the old data with the new — makes it impossible to repeat previous analyses or to see how the data have changed over time. Although best practice around versioning and data alterations tends to focus on future users and scientific reproducibility, the real beneficiary is the researcher, says Lewis. “Three months from now, you will forget what you did — you will forget which version you’re working on, what changes you made to a data set. You are your biggest collaborator.”

    This is when data repositories come into their own, because many create new versions by default when data are added. Some repositories, such as Zenodo, also mint a digital object identifier (DOI) for each version automatically. “Since the very beginning, Zenodo has provided versionable data with individual DOIs that will take you to a specific version of the data, and also an overarching DOI that will link together all of those versions,” says White. That creates an umbrella link, as well as a mechanism to cite specific versions of the data.

    Managing versions without a repository is also possible. Researchers who store their data on GitHub, for instance, can use automation to create new ‘releases’ whenever they update their data. They can also create versions of the data set manually, using distinct file names to differentiate new files from the earlier set, Briney says.

    Define file names and terminology

    Briney regularly helps researchers to wrangle their data. Her favourite tips for data management are to establish a file-naming convention that includes the date (often given as YYYYMMDD or YYYY-MM-DD) and to store files in their correct folders.
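The manual approach described above, saving each update under a distinct, dated file name, can be sketched in a few lines of Python. The function name, directory layout and naming pattern here are illustrative choices, not taken from the article:

```python
from datetime import date
from pathlib import Path
import shutil


def archive_version(data_file: str, archive_dir: str = "versions") -> Path:
    """Copy the current data set to a dated, numbered file name so that
    every earlier version stays available for re-running past analyses."""
    src = Path(data_file)
    out = Path(archive_dir)
    out.mkdir(exist_ok=True)
    stamp = date.today().strftime("%Y-%m-%d")  # ISO-style date sorts correctly
    n = 1
    # Bump the numeric suffix if a version was already archived today
    while (dest := out / f"{src.stem}_{stamp}_v{n}{src.suffix}").exists():
        n += 1
    shutil.copy2(src, dest)  # copy, never move: the working file stays put
    return dest
```

Calling `archive_version("rodent_counts.csv")` would produce names such as `versions/rodent_counts_2025-01-15_v1.csv`, so earlier snapshots are never overwritten.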
    This is true whether you’re storing data locally or in a remote repository. “It takes 10 minutes to come up with a file-naming convention, everything gets organized, and that way you can tell related files apart,” she says. “It’s like putting your clothes away at the end of the day.”

    Briney also recommends documenting metadata: explaining the different variables used and the location of data in the various files and folders. These practices “help you, but are also good for data sharing, because somebody else can pick up your spreadsheet” and understand it.

    Sabina Leonelli, who studies big-data methods at the Technical University of Munich in Germany, says that researchers should also explicitly document the terminology and queries used to generate and analyse their data. She gives an example of research using a biomedical database: “When you access certain databases, you frame your query” based on current definitions, she says. As knowledge develops, definitions shift and change, and if the specific definitions you used aren’t captured, you might forget the query that originally shaped your data.

    Write a change log
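A change log is typically a plain-text file kept alongside the data, with one dated entry per version. A minimal illustrative sketch of what such a file might contain (the file name, versions and wording are invented for illustration):

```
CHANGELOG.txt

2025-03-01  v3  Added 2024 field-season counts; renamed column
                "weight" to "body_mass_g" to make the units explicit.
2024-06-10  v2  Corrected mislabelled plot IDs in the 2023 records;
                earlier versions retain the error.
2023-11-02  v1  First public release.
```

Newest entries go at the top, and each one names the version it describes, so a reader can match any archived snapshot to the changes that produced it.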



    A motorcycle ride through the forest: how I protect Nigeria’s wildlife

    “I’m a botanist and wildlife expert working as a research coordinator in the Gashaka-Gumti National Park in eastern Nigeria. Unfortunately, the park has suffered from decades of illegal logging, poaching, uncontrolled grazing and bush clearing. But since 2017, the charity I work for, the Africa Nature Investors Foundation, based in Lagos, has been restoring the park as a haven for wildlife and indigenous plants, in partnership with Nigeria’s National Park Service.

    This photo was taken in March, at the end of Nigeria’s dry season. I was riding my motorcycle down a track in the heart of the forest, 15 minutes from our base camp. I saw a striped kingfisher (Halcyon chelicuti) in the afternoon light and wanted to take a picture of the bird.



    Three weeks in a hide to spot one elusive bear: the life of a wildlife film-maker

    Nature hits the books | 11 July 2025

    Vianet Djenguet is an award-winning wildlife film-maker and camera operator whose work has featured in a number of major nature documentaries. In this podcast, Vianet joins us to talk about his career, how wildlife film-making has changed and his experiences working with local researchers to capture footage of endangered animals on the new television series The Wild Ones.

    Music supplied by SPD/Triple Scoop Music/Getty Images

    Never miss an episode. Subscribe to the Nature Podcast on Apple Podcasts, Spotify, YouTube Music or your favourite podcast app. An RSS feed for the Nature Podcast is available too.


    Will AI speed up literature reviews or derail them entirely?

    Over the past few decades, evidence synthesis has greatly increased the effectiveness of medicine and other fields. The process of systematically combining findings from multiple studies into comprehensive reviews helps researchers and policymakers to draw insights from the global literature1. AI promises to speed up parts of the process, including searching and filtering. It could also help researchers to detect problematic papers2. But in our view, other potential uses of AI mean that many of the approaches being developed won’t be sufficient to ensure that evidence syntheses remain reliable and responsive. In fact, we are concerned that the deployment of AI to generate fake papers presents an existential crisis for the field.

    What’s needed is a radically different approach: one that can respond to the updating and retracting of papers over time. We propose a network of continually updated evidence databases, hosted by diverse institutions as ‘living’ collections. AI could be used to help build the databases. Each database would hold findings relevant to a broad theme or subject, providing a resource for an unlimited number of ultra-rapid and robust individual reviews.

    Adding fuel to the fire

    Currently, the gold standard for evidence synthesis is the systematic review. These reviews are comprehensive, rigorous, transparent and objective, and aim to include as much relevant high-quality evidence as possible. They also use the best methods available for reducing bias. In part, this is achieved by getting multiple reviewers to screen the studies; declaring the criteria, databases, search terms and so on that are used; and detailing any conflicts of interest or potential cognitive biases.

    Yet these reviews require considerable resources.
    Some studies suggest that Cochrane reviews — systematic reviews of specific topics in health care and health policy that meet internationally recognized criteria for the highest standards in evidence-based health care — generally cost more than US$140,000 and take more than two years to complete3,4.

    It is also becoming ever harder for review authors to keep up with the rapidly expanding number of papers. The scientific literature is estimated5 to have doubled every 14 years since 1952. And because each reviewer tends to have access to different publications, and because databases are continually updated, systematic reviews are plagued by reproducibility issues. A study published last year concluded that only 1% of reviews report a search strategy that is fully reproducible6. Furthermore, many systematic reviews unwittingly cite publications that have been retracted, including those removed from the literature because of methodological or ethical issues and fraud7.

    We agree that AI could be part of the solution to these problems. It could help investigators to conduct reviews more comprehensively and more efficiently — by filtering many more papers, say, or by assessing the entire content of papers instead of just the title and abstract, as human reviewers tend to do as a first step. But one aspect seems to be underappreciated: the degree to which AI — particularly large language models (LLMs) — could exacerbate some of the problems.

    At this point, little is known about how many scientific papers generated entirely by AI are being published. As announced in March, a scientific paper8 generated by AI Scientist (an AI tool developed by the company Sakana AI in Tokyo and its collaborators) passed peer review for inclusion in a workshop at a key AI meeting.
    The reviewers did not detect that an AI model had formulated the hypotheses, designed and run the experiments, analysed the results, generated the figures and produced the manuscript.

    (Pictured: policymakers in China are using evidence synthesis to guide management of the invasive grass Spartina alterniflora. Credit: SIPA Asia/ZUMA Press/Alamy.)

    And a preprint posted on arXiv estimates that at least 10% of all PubMed abstracts published in 2024 were written with the help of LLMs, on the basis that an abrupt increase in the frequency of certain words coincided with widespread access to LLMs9. That proportion has almost certainly gone up since.

    Even if LLMs are used widely, it is difficult to separate cases in which they have been deployed to fabricate papers from those in which authors are simply using them to improve their writing10. Yet generative AI is likely to make the production of fake manuscripts easier, irrespective of whether those who use LLMs maliciously do so to further their careers, to manipulate the conclusions of evidence syntheses for a specific commercial or policy objective, or simply to be disruptive. The use of multiple LLMs will also make it more difficult for humans to detect the textual fingerprints associated with any one particular model.

    In other words, the use of generative AI is likely to supercharge the already growing problem of paper mills: businesses that sell fake work and authorships to researchers seeking journal publications to boost their careers. It could even replace the paper-mill market, given that fake papers can now be generated in minutes for free.

    What to do?

    The Campbell Collaboration (a group of researchers and policymakers dedicated to generating evidence syntheses for economic and social policy decisions) and Cochrane already provide guidance on how to identify studies that have raised concerns or that have been retracted11.
    This includes checking studies against the Retraction Watch database, which lists retractions gathered from publisher websites, and using the CENTRAL database, a repository for clinical-trial reports that flags retracted studies11. Cochrane guidance also states that the authors of published reviews containing retracted studies should recalculate all results and, while doing so, flag the review with an editorial note or withdraw it and then publish the updated version11.

    Even now, this kind of reanalysis often fails to happen, presumably because the original review authors have limited resources and little incentive. In one assessment of systematic reviews of pharmaceutical compounds tested in clinical trials, retracted papers continued to be cited in 89% of the reviews one year after the review authors had been notified of the retraction7. With the ever-increasing production of both legitimate and spurious scientific literature, researchers’ ability to maintain an accurate picture of what the data show is likely to be outstripped (see ‘More papers, more retractions’; chart source: data from OpenAlex, https://openalex.org/).

    So, what system might enable the continual and rapid removal, at scale, of fraudulent or otherwise problematic papers from databases? Although not developed with this goal in mind, our work on the Conservation Evidence project — an information resource hosted by the University of Cambridge, UK, to support decisions about how to maintain and restore global biodiversity — has convinced us that a network of AI-enabled, continually updated evidence databases is one possible solution.

    As part of this project, all of the authors of this article have been involved in developing subject-wide evidence synthesis. The aim here is to identify literature containing information that is relevant to a broad theme.
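The first step of the check described above, screening a review's citation list against a list of known retractions, reduces to a simple set comparison. A minimal sketch, assuming both lists are available as DOI strings (in practice the retracted set would be drawn from a source such as the Retraction Watch database; the function and variable names are illustrative):

```python
def split_retracted(review_dois, retracted_dois):
    """Partition a review's cited DOIs into those that are clean and
    those that appear in a list of retracted DOIs."""
    # DOIs are matched case-insensitively, so normalize before comparing
    retracted = {d.strip().lower() for d in retracted_dois}
    kept, flagged = [], []
    for doi in review_dois:
        (flagged if doi.strip().lower() in retracted else kept).append(doi)
    return kept, flagged
```

A review whose flagged list is non-empty would then need its results recalculated and an editorial note added, as the Cochrane guidance describes.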
    For the Conservation Evidence project, this broad theme is the effectiveness of management actions for biodiversity conservation.
