cff-version: 1.2.0
abstract: >-
  This study explores the potential of ChatGPT, a large language model, in
  scientometrics by assessing its ability to predict citation counts, Mendeley
  readers, and social media engagement. In this study, 2222 abstracts from
  PLOS ONE articles published during the initial months of 2022 were analyzed
  using ChatGPT-4, which rated each abstract against a set of 60 criteria. A
  principal component analysis identified three components: Quality and
  Reliability, Accessibility and Understandability, and Novelty and
  Engagement. The Accessibility and Understandability of the abstracts
  correlated with higher Mendeley readership, while Novelty and Engagement and
  Accessibility and Understandability were linked to citation counts
  (Dimensions, Scopus, Google Scholar) and social media attention. Quality and
  Reliability showed minimal correlation with citation and altmetrics
  outcomes. Finally, the predictive correlations of the ChatGPT-based
  assessments surpassed those of traditional readability metrics. The findings
  highlight the potential of large language models in scientometrics and
  possibly pave the way for AI-assisted peer review.
authors:
  - family-names: de Winter
    given-names: Joost
    orcid: "https://orcid.org/0000-0002-1281-8200"
title: "Supplementary data for the paper 'Can ChatGPT be used to predict citation counts, readership, and social media interaction? An exploration among 2222 scientific abstracts'"
keywords:
version: 1
identifiers:
  - type: doi
    value: 10.4121/710585da-ed2e-4d36-b8e4-ad02c3af1e65.v1
license: CC-BY-4.0
date-released: 2024-01-05