Article

Evaluating Research Impact Based on Semantic Scholar Highly Influential Citations, Total Citations, and Altmetric Attention Scores: The Quest for Refined Measures Remains Illusive

1. Community Health Nursing Department, School of Nursing, The University of Jordan, Amman 11942, Jordan
2. Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman 11942, Jordan
3. Department of Clinical Laboratories and Forensic Medicine, Jordan University Hospital, Amman 11942, Jordan
4. Lane Medical Library, Stanford University, Stanford, CA 94305, USA
5. Department of Medicine, Division of Endocrinology, Diabetes, and Metabolism, The University of Illinois at Chicago, Chicago, IL 60612, USA
6. School of Medicine, The University of Jordan, Amman 11942, Jordan
7. Department of Oral and Maxillofacial Surgery, Oral Medicine and Periodontology, School of Dentistry, The University of Jordan, Jordan University Hospital, Amman 11942, Jordan
8. Deanship of the Scientific Research, The University of Jordan, Amman 11942, Jordan
* Author to whom correspondence should be addressed.
Publications 2023, 11(1), 5; https://doi.org/10.3390/publications11010005
Submission received: 5 September 2022 / Revised: 5 January 2023 / Accepted: 16 January 2023 / Published: 29 January 2023

Abstract:
Background: The evaluation of scholarly articles’ impact has been heavily based on citation metrics despite the limitations of this approach. Therefore, the quest for meticulous and refined measures to evaluate publications’ impact is warranted. Semantic Scholar (SS) is an artificial intelligence-based database that allegedly identifies influential citations, defined as “Highly Influential Citations” (HICs). A citation is considered highly influential according to SS when the cited publication has a significant impact on the citing publication (i.e., the citer uses or extends the cited work). Altmetrics are measures of online attention to research mined from activity in online tools and environments. Aims: The current study aimed to explore whether SS HICs provide added value in measuring research impact compared to total citation counts and the Altmetric Attention Score (AAS). Methods: Dimensions was used to generate the dataset for this study, which included COVID-19-related scholarly articles published by researchers affiliated with Jordanian institutions. Altmetric Explorer was selected as the altmetrics harvesting tool, while Semantic Scholar was used to extract details related to HICs. A total of 618 publications comprised the final dataset. Results: Only 4.57% (413/9029) of the total SS citations compiled in this study were classified as SS HICs. Based on the SS categories of citation intent, 2626 were background citations (29.08%; providing historical context, justification of importance, and/or additional information related to the cited paper), 358 were result citations (3.97%; extending findings from previously conducted research), and 263 were method citations (2.91%; using previously established procedures or experiments to determine whether the results are consistent with findings in related studies). No correlation was found between HICs and AAS (r = 0.094). Manual inspection of the results revealed substantial contradictions, flaws, and inconsistencies in the SS HICs tool. Conclusions: The use of SS HICs in gauging research impact is significantly limited by the enigmatic method of their calculation and the tool’s total dependence on artificial intelligence. Along with the already documented drawbacks of total citation counts and AASs, continuous evaluation of existing tools and the conception of novel approaches are highly recommended to improve the reliability of publication impact assessment.

1. Introduction

The evaluation of a publication’s impact has been heavily dependent on the number of times it gets cited [1]. However, the citation count per se might not be a genuine and objective reflection of a publication’s scientific value, since it is often driven by citation behavior [2,3]. A few examples include citations out of courtesy, negative citations, and self-citations [1,4,5]. Additionally, a citation study in the field of ethnobotany suggested that not everything that is cited has necessarily been read, or it may only have been read superficially [6]. Thus, sole dependence on the citation count can be misleading, impeding literature searches and exaggerating a publication’s impact [5]. Furthermore, it takes several years after publication for citation counts to give reliable and valid measurements of research impact [7]. According to [2], it is difficult to delineate other dimensions of research quality such as “solidity/plausibility, originality, and societal value” based on citation metrics.
However, while citation metrics are frequently over-relied upon, they do function as measures of influence for those examining relationships across fields and in longitudinal and large-scale studies. Additionally, citations have been over-emphasized at the individual or micro-level, even though bibliometrics scholars still regard citations as a ‘rough proxy’ of research impact. Thus, opponents of the metric culture in academia are concerned with over-reliance on metrics without qualitative evaluation, rather than with the metrics themselves. It has been suggested that the various motives behind citing behaviour (i.e., providing background information, paying homage to peers and pioneers, substantiating claims, criticizing previous work, and many other motives) are not so dissimilar that the phenomenon of citation loses its utility as a measure of influence [1].
Nevertheless, considering that citation counts may not be ideal as the sole indicator of research impact, and given the aforementioned caveats of citation counts as a measure of publications’ impact, the quest to improve the assessment of literature impact evolves continuously to address the limitations compromising the validity of traditional citation metrics [8,9].
Altmetrics are one of the suggested measures that assess the impact of a given piece of research with an emphasis on public engagement with the research output [10,11]. The altmetrics weighted score is based on attention from various online sources, manifested in the “Altmetric Attention Score” (AAS) [12]. Some studies have reported a correlation between Mendeley readership and future citation counts, and to some extent between tweets and future citation counts. For instance, it has been shown that the more a social media community is dominated by people focusing on research, the higher the correlation between the corresponding altmetric and traditional citations, especially in biological and medical sciences [13,14]. Additionally, certain altmetrics such as tweets, Facebook wall posts, research highlights, blog mentions, mainstream media mentions, and forum posts have been shown to be associated with citation counts, at least in medical and biological sciences, and for articles with at least one altmetric mention [15]. Further, other studies showed that Mendeley readership scores are an effective tool to filter highly cited publications, as well as a potential evaluative tool that could be more useful than citation counts [14,16,17]. It is currently being investigated whether and how altmetrics could be used to assess the societal impact of research and to create new indicators that utilize these new online data sources. The potential societal impacts of research are, however, often less tangible than the scientific impacts of research, which can be traced through citations [18]. Based on the aforementioned, altmetrics can provide an almost instantaneous glimpse of the attention a publication receives; nevertheless, the AAS does not necessarily reflect publication quality [19].
A new approach to gauging scientific impact emerged in 2015, which entails relying on artificial intelligence (AI) to capture the subset of an author’s or a paper’s influence, with subsequent identification of highly influential citations (HICs) [20]. The scientific search engine Semantic Scholar (SS) is the first to automatically identify the subset of a paper’s citations in which the paper had a strong impact on the citing work (https://www.semanticscholar.org/, accessed on 1 December 2021). Semantic Scholar was developed by the Allen Institute for Artificial Intelligence (AI2) in Seattle, Washington, allowing users to navigate through millions of scientific papers [21]. In contrast with Google Scholar and PubMed, SS claims to highlight the most important and influential elements of a paper [22]. The AI technology is designed to identify hidden connections and links between research topics [23]. In particular, SS uses natural language processing to understand when a paper is discussing its own results or those of another study [24].
Semantic Scholar tries to identify the intent of citations in scientific papers by categorizing them into three different types: background information, use of methods, and comparing results [25]. To meet the growing need for researchers and institutions to show impact, SS emphasizes highly cited authors with influence scores: its signature metric, the “Highly Influential Citations” (HICs), total SS citations, a citations-per-year graph, and a citation velocity score [26]. The AI tool used by SS to identify HICs uses a classification approach to determine the importance of a citation. Based on features such as the number of citations and where in the body of the paper the article is cited, SS classifies citations as either important (i.e., the citer uses or extends the cited work) or incidental (the citer uses the cited work for comparison, or the cited work is related to the subject of study) [26]. Figure 1 displays an example of this process. Given sufficient citations, author maps indicate those most influenced by an author and those with the greatest influence on an author. The reference list brings deeper meaning to citations by showing where and how often a reference is cited in the paper through a display of the semantic context or contexts [26]. Semantic Scholar attempts to combine conventional citation metrics and altmetrics with the “cited by” function seen elsewhere in Web of Science and Google Scholar, as well as links to tweets about citations.
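The internals of the classifier are not public, but the description above (a model over features such as citation frequency and citation location) can be illustrated with a toy rule. Everything in this sketch — the feature names, the weighting, and the threshold — is a hypothetical illustration, not Semantic Scholar's actual model:

```python
# Toy sketch of a feature-based "important vs. incidental" citation
# classifier. Features and threshold are hypothetical, for illustration only.

def classify_citation(times_cited_in_body: int,
                      cited_in_methods_or_results: bool) -> str:
    """A reference cited repeatedly, or cited inside the Methods/Results
    sections, is treated as more likely to be used or extended by the citer."""
    score = times_cited_in_body + (2 if cited_in_methods_or_results else 0)
    return "influential" if score >= 3 else "incidental"

print(classify_citation(1, False))  # a single background mention -> incidental
print(classify_citation(4, True))   # cited repeatedly in Methods -> influential
```

As the Results section later shows, location-based heuristics of this kind can misfire: a reference cited several times in an introduction may still be purely incidental.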
In this study, we attempted to check the aforementioned claims and to evaluate the reliability of SS HICs as a superior metric of publication impact compared to traditional citation metrics that rely solely on counts. To this end, we utilized a dataset of coronavirus disease 2019 (COVID-19)-related scholarly articles published by researchers affiliated with Jordanian institutions. We specifically aimed to describe the bibliometric characteristics of the articles that scored the highest SS HICs in an effort to understand why the algorithm detects them as such. We also explored the correlation between the SS HICs and AAS of the retrieved records.
The rapid accumulation and growth of literature amid COVID-19 can provide a golden opportunity to track the publication impact almost instantaneously considering the rapid rate of growing literature tackling different aspects of the pandemic.
Findings from this study can highlight the need to remain vigilant about the best ways to disseminate the erudite work we are producing. Research, such as this study, will allow scholars to benchmark their progress as we adapt to the changing environment for measuring impact and quality in the digital age.

2. Materials and Methods

2.1. Search Methods and Outcome

2.1.1. Dimensions (Digital Science)

Digital Science’s Dimensions database was used to generate the dataset for this study [27]. The search was conducted on 31 January 2022 using keywords specific to COVID-19. The search strategy included the following terms: (“COVID-19” OR COVID OR COVID-19 OR “ncov 2019” OR “novel coronavirus” OR “SARS-CoV-2” OR “SARS-CoV-2” OR SARS-CoV-2 OR (wuhan AND coronavirus*) OR “corona virus*” OR “coronavirus disease 2019” OR “coronavirus disease 19” OR “2019ncov” OR “coronavirus 2”). The search results were limited to Jordan using the country/territory filter. The results were also filtered to publications from 2019 to 2022. The search resulted in 1149 records. The data were exported from Dimensions into Microsoft Excel (Microsoft Corporation) for screening. Excluded records included editorials, abstracts and conference proceedings, commentaries, letters to editors, viewpoints and opinions, preprints, and chapters. The final number of articles that were included in quantitative synthesis was 618 (Figure 2).
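The screening step described above amounts to a filter over the exported records. The sketch below is illustrative only: the record dictionaries and the "type" field are assumptions, not the actual Dimensions export schema, and screening in this study was done manually in Excel:

```python
# Illustrative sketch of the screening step: exclude non-article record
# types from exported data. Field names are assumed, not Dimensions' schema.

EXCLUDED_TYPES = {
    "editorial", "abstract", "conference proceeding", "commentary",
    "letter", "viewpoint", "preprint", "chapter",
}

def screen(records: list[dict]) -> list[dict]:
    """Keep only records whose type is not in the exclusion list."""
    return [r for r in records
            if r.get("type", "").lower() not in EXCLUDED_TYPES]

sample = [
    {"title": "A", "type": "article"},
    {"title": "B", "type": "preprint"},
    {"title": "C", "type": "editorial"},
]
print(len(screen(sample)))  # 1
```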

2.1.2. Altmetric Explorer

Altmetric Explorer (Altmetric LLP, London, UK) was searched on 9 April 2022 for all 618 articles. Altmetric Explorer tracks the amount of attention that research outputs receive on online media platforms. The number of these mentions is then used to calculate the AAS via an automated algorithm that weights each mention type according to the relative reach of the source. Out of the 618 articles’ Digital Object Identifiers (DOIs), 433 were tracked in Altmetric Explorer. The remaining 185 articles had no mentions to track at the time of the search; because Altmetric Explorer tracks by DOI, these articles will eventually receive Altmetric scores if they attract attention [12].
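Conceptually, the AAS is a weighted sum of mentions per source type. A minimal sketch follows; the weights below are hypothetical placeholders, since Altmetric's actual per-source weights, score modifiers, and rounding rules are applied by Altmetric.com itself (a point the Methods return to in Section 2.2):

```python
# Minimal sketch of a weighted attention score. The weights are hypothetical
# placeholders, not Altmetric's actual (proprietary) configuration.

HYPOTHETICAL_WEIGHTS = {"news": 8, "blog": 5, "twitter": 1, "facebook": 0.25}

def attention_score(mentions: dict[str, int]) -> float:
    """Sum mention counts per source, weighted by the source's assumed reach."""
    return sum(HYPOTHETICAL_WEIGHTS.get(source, 0) * count
               for source, count in mentions.items())

print(attention_score({"news": 2, "twitter": 10}))  # 26
```

The design point this sketch makes concrete: two news mentions outweigh ten tweets, so a single source type can dominate the final score.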

2.1.3. Semantic Scholar

Semantic Scholar was searched between 11 April 2022 and 16 April 2022 for each of the 618 articles individually. A deep semantic analytic engine underlies all SS-indexed publications, which helps to understand the meaning of a paper [24]. Natural language processing and machine learning models rank the results by relevance to help the researcher find the most up-to-date results [28]. Each paper hosted by SS is assigned a unique identifier called the SS Corpus ID (abbreviated S2CID). Semantic Scholar categorizes citation intents into three different types: (1) Background, (2) Method, and (3) Result Extension. Background citations provide historical context, justification of importance, and/or additional information directly related to that which exists in a cited paper. Method citations use previously established procedures or experiments to determine whether the results are consistent with findings in related studies. Result citations extend on findings from research that was previously conducted [24]. For each of the articles included in this study, the following citation metrics were retrieved if available: total citations, HICs, and citation intent. Given the lack of automated methods or access points for retrieving these analytics, two authors (NS & NS) manually and independently extracted the data from Semantic Scholar. The authors compared and contrasted the retrieved metrics to ensure reliability.
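Once the per-article metrics had been extracted manually, tallying them across the dataset is straightforward. The sketch below assumes a simple row structure (the field names are illustrative, not the actual extraction spreadsheet's columns):

```python
# Sketch of tallying manually extracted Semantic Scholar metrics per article.
# Row structure and field names are assumptions for illustration.
from collections import Counter

rows = [
    {"s2cid": 1, "total": 50, "hic": 3,
     "intents": {"background": 20, "method": 4, "result": 2}},
    {"s2cid": 2, "total": 10, "hic": 0,
     "intents": {"background": 5, "method": 1, "result": 0}},
]

totals = Counter()
for r in rows:
    totals["citations"] += r["total"]
    totals["hics"] += r["hic"]
    for intent, n in r["intents"].items():
        totals[intent] += n

print(totals["hics"] / totals["citations"])  # share of citations that are HICs
```

Note that, as the Results section reports for the real data, the intent categories need not sum to the total citation count: many citations go unclassified.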

2.1.4. Quality Appraisal

The first author (L.D.) and an academic health center reference librarian (A.W.) built the combination of index terms used according to the requirements of Dimensions database. Two authors (M.S. and N.S.) independently reviewed the search results to determine the eligibility of the records. The two researchers then discussed the included/excluded records and worked together to resolve discrepancies. The first author (L.D.) reviewed the final included articles to ensure eligibility. A structured study evaluation form was independently used by the three researchers to evaluate the elicited articles. The goal of the form was to minimize bias in locating, selecting, coding, and aggregating individual articles. Garrard’s matrix strategy was used for abstracting the selected 618 articles and building a database including the articles’ characteristics [29].

2.2. Why Dimensions, Altmetric Explorer and SS?

Dimensions was selected for its exhaustive coverage compared to Web of Science and Scopus. It draws together funded grants, clinical trials, publication records, patents, and policy, offering a way to track research from inception to its eventual impacts [30]. According to Singh et al., (2021) [30], about 99.11% and 96.61% of the journals indexed in Web of Science are also indexed in Scopus and Dimensions, respectively. Scopus has 96.42% of its indexed journals also covered by Dimensions. The Dimensions database has the most exhaustive journal coverage, with 82.22% more journals than Web of Science and 48.17% more journals than Scopus.
Altmetric Explorer reflects both the quantity (the higher the attention, the higher the score) and the quality (a weighted score per source) of attention received by published documents [31]. This tool was selected for this study as the source of Altmetric scores for several reasons: (a) it captures more alternative sources of impact than the other services; (b) it exhibits the highest percentage of papers by each metric [32]; (c) it has better coverage of blogs, news, and tweets [33]; (d) it uses more URL identifiers, such as DOIs, repository handles, and landing pages [32]; (e) it normalizes collected data before analyzing them; (f) it disambiguates links to outputs and recognizes that various types of links are related to the same article; and (g) it reduces the possibility of score manipulation by counting only one mention from each person per source. Overall, Altmetric Explorer is currently considered the service with the highest percentage of publications, surpassing the other tools for tracking alternative metrics in almost all indicators [32]. However, because the score is weighted per source, the AAS may be quite arbitrary despite not being easy to manipulate. Essentially, since the score can fluctuate or fall, altmetrics data may be irreproducible and unstable. Transparent documentation is necessary to track and explain these score changes, yet it is lacking. Indeed, some may argue that this indicator is far from transparent in and of itself. For instance, scores are rounded to integers, score modifiers are utilized, and only Altmetric.com determines the allocated weights and the derived score for each publication [34].
Semantic Scholar is a free, AI-powered search and discovery tool that helps researchers discover and understand the scientific literature most relevant to their work. Semantic Scholar extracts critical details such as methods and materials, and built-in filters allow outputs to be refined by topic, publication date, author, and publication venue. Semantic Scholar also includes smart, contextual recommendations for further keyword filtering [5,21]. Semantic Scholar has more than 50 direct partnerships with publishers, data providers, and aggregators that provide SS with content from 500+ academic journals, university presses, and scholarly societies around the globe [24].

2.3. Data Abstraction and Statistical Analysis

Dimensions and Altmetric data were downloaded in MS Excel sheet formats and then, along with Semantic Scholar data, imported into SAS 9.4 software (SAS Institute, Cary, NC, USA) for subsequent analyses. Descriptive statistics were used to detail the characteristics of the articles’ scores. Pearson’s correlation coefficient was used to analyze the relationships between scores as needed. Scatter plots were used to identify any non-linear associations. The interpretation of Pearson’s correlation coefficient (r) was based on [35]. For positive correlations, an r value of +1 was considered perfect, r of 0.800–0.999 a very strong correlation, r of 0.600–0.799 a moderate correlation, r of 0.300–0.599 a fair correlation, and r less than 0.300 a poor correlation, with p < 0.050 as the level of statistical significance. In addition to the quantitative analyses, we also conducted a manual inspection and qualitative analysis to gain a better understanding of the scope of articles that had the highest influential citations. The primary study measures were: (1) total SS citations, (2) AAS, and (3) SS HICs.
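The correlation step can be sketched as follows: Pearson's r computed from paired per-article scores, then labeled with the interpretation bands stated above. (The analysis itself was run in SAS; this stand-alone Python sketch is for illustration.)

```python
# Pearson's r from paired scores, plus the interpretation bands used above.
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def interpret(r: float) -> str:
    """Map |r| to the bands stated in the Methods."""
    a = abs(r)
    if a >= 0.800:
        return "very strong"
    if a >= 0.600:
        return "moderate"
    if a >= 0.300:
        return "fair"
    return "poor"

print(interpret(0.094))  # "poor" -- the band of the HICs-vs-AAS result
```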

3. Results

3.1. Characteristics of the Retrieved Publications

A total of 618 publications were included in the study as the final dataset. By publication year, 393 articles (63.6%) appeared in 2021, 197 (31.9%) in 2020, and 28 (4.5%) in 2022. The general features of the publications, categorized based on the four main study measures, are illustrated in (Table 1).
Of the 618 articles, there were 502 surveys, 30 qualitative studies, 25 retrospective studies, 16 systematic reviews, 13 meta-analyses, 10 prospective studies, 9 case reports, 5 randomized controlled trials, 5 case-control studies, and 3 longitudinal studies. The subjects of the retrieved studies spanned medical, psychological, marketing, economics, computing, linguistics, engineering, sociology, sport sciences, history, law, and educational disciplines. The majority (86%) were open access publications. In particular, 43% had a gold category, 25% bronze, 12% green, and 5% hybrid.
Supplementary Table S1 summarizes each article’s title, authors and their affiliations, year of publication, publishing journal, publisher, funder, type, bibliometric details, total SS citations, relative citation ratio scores (RCR, a weighted number of citations a paper receives to a comparison group within the same field), field citation ratio scores (FCR, a citation-based measure of scientific influence of one or more articles), AAS, and SS HICs. We also summarized the citation characteristics of all the 618 articles, including total SS citations, SS HICs, SS background citations, SS methods citations, SS results citations, and AASs.

3.2. SS HICs Characteristics and Correlation with AAS

The total number of SS citations received by all included publications was 9029, while the total number of SS HICs was 413. Based on the SS categories of citation intent, 2626 were background citations (29.08%; providing “historical context, justification of importance, and/or additional information directly related to that which exists in a cited paper”), 358 were result citations (3.97%; citations that “extend on findings from research that was previously conducted”), and 263 were method citations (2.91%; citations that “use the previously established procedures or experiments to determine whether the results are consistent with findings in related studies”) [24,25]. Interestingly, 5782 citations (64.04%) were not classified into any of these three SS citation categories.
Next, the evaluation of SS HICs was conducted. The majority of included records lacked SS HICs (n = 460, 74.8%), followed by records with a single SS HIC (n = 84, 13.7%), 2–5 SS HICs (n = 55, 8.9%), 6–10 SS HICs (n = 9, 1.5%), and more than 10 SS HICs (n = 7, 1.1%).
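The distribution above amounts to binning each article's HIC count into the reported categories; a minimal sketch:

```python
# Binning per-article HIC counts into the categories reported above.
from collections import Counter

def hic_bin(n: int) -> str:
    """Map a HIC count to its reporting category."""
    if n == 0:
        return "0"
    if n == 1:
        return "1"
    if n <= 5:
        return "2-5"
    if n <= 10:
        return "6-10"
    return ">10"

# Hypothetical per-article counts for illustration, not the study data.
counts = Counter(hic_bin(n) for n in [0, 0, 1, 3, 7, 21])
print(dict(counts))
```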
We did not find a correlation between AAS and HICs (r = 0.094, Figure 3).

3.3. Bibliometric Characteristics of the Publications That Scored the Highest SS HICs

The mean number of influential citations was 0.672, ranging from zero to 21. Of the 615 articles that were found in SS, 460 (74.8%) had zero SS HICs. The best-scoring article (SS HICs = 21) was a cross-sectional study published in 2020 exploring Jordanian dentists’ awareness, perception, and attitude regarding COVID-19 and infection control. This article had a total of 263 SS citations and an AAS of 4. Table 2 shows the top 10 included articles ranked based on SS HICs.

3.4. Validation of the HICs

In order to validate SS HICs, we manually inspected all HICs for the top 10 articles that received the highest HICs. A major finding of this study was the lack of intelligence revealed in the SS AI tool. Most of the HICs identified by the SS website can hardly be described as proper citations, let alone highly influential ones. Table 3 compares the total number of HICs originally detected by Semantic Scholar and the resulting values after manual inspection.
An example of an incorrectly assigned HIC was a communication entitled “COVID-19: Present and future challenges for dental practice” [45]. The citing article [46] referred to this publication only a single time, in the following context: “Steps were taken to decrease the impact, but it was inevitable. With current positive cases exceeding 54,000, the burden of disease was anticipated more here as compared to developed countries, due to lack of resources. Even developed nations were finding it difficult to address problems related to it [45]”. Notably, the citation in this context was inaccurate and definitely not a highly influential citation, as the cited study included an overview of how the pandemic is affecting dental practice. The article did not discuss issues related to developed nations’ response to the pandemic.
Another example was based on the article entitled “Dentists’ awareness, perception, and attitude regarding COVID-19 and infection control: Cross-sectional study among Jordanian dentists” [36]. A study [47] that was considered highly influenced by this publication used it solely as follows: “Other dental surveys showed that despite the potential high risk for the infection, dentists showed amazingly low rates of COVID-19 in 2020: …, 10% in Spain [27]”. The classification of this inaccurate reference as an SS HIC shows that AI-based classification can be misleading at times. We believe that these citations were likely categorized as HICs because of their location in the article, but in these cases the algorithm failed.
A third example is for the article entitled “Distance learning in clinical medical education amid COVID-19 pandemic in Jordan: current situation, challenges, and perspectives” [38]. A study [48] that was considered highly influenced by the previous publication used it in its introduction as follows: “COVID-19 has been declared as a pandemic disease by the WHO on 11 March 2020 [38]”. Classifying this reference as HIC is clearly incorrect and serves as an example of how this tool can be inaccurate.
As mentioned before, the SS AI tool uses a classification approach to determine the importance of a citation based on features such as the number of citations. The manual inspection showed that several studies considered to be influenced by the top 10 articles had cited the article several times in the introduction simply because it was related work. It seems that the AI treated the frequency of citation as indicative of influence, while in fact these were incidental citations.

3.5. Characteristics of the Top 10 Publications Based on AAS

Among the total sample, the articles were mostly read on Mendeley (total readers = 45,280), followed by Twitter mentions (total mentions = 8832) and news outlets (total mentions = 374). In the top 10 list, the best-scoring article (AAS = 1269) included the development of an informatics tool to study post-COVID-19-vaccine side effects [49]. The article appeared mostly on Twitter, with a total of 2725 mentions. Only two publications [37,43] ranked among the top 10 based on SS HICs and were also found in the top 10 AAS list. In addition, the same two articles were the only publications among the top 10 total SS citations list as well. Full details of the articles in the top 10 AAS list are shown in (Table 4).

4. Discussion

The quest to evaluate the impact and quality of scholarly articles is important for several reasons. To mention a few, research grants, tenure and promotion in academia, and academic awards are considerably influenced by the impact and quality of the published research [57,58]. However, the concept of “publication impact” remains subjective and obscure, with variable methods of measurement including different bibliometric analysis tools [59,60]. Additionally, publication impact is multifaceted and can be viewed from an academic perspective or in relation to its news value and socio-economic effects on the community [58]. Barriers that hinder the process of assessing publication impact include: (1) the time-lag between publishing and impact recognition; (2) costs of the assessment process, which might outweigh the benefits; (3) the absence of measurement units; (4) the lack of assessment standardization; and (5) issues of contribution and attribution [61]. Therefore, the continuous assessment of the currently available tools to measure publication impact, and the aspiration for refined, reliable, and accurate approaches, appears to be an important research priority [58,62,63].
Several bibliometric measures have been proposed for the evaluation of publication and journal impact, with citation count as the most commonly used method [2,64]. However, the controversy surrounding the utility of citation-based metrics in the assessment of publication impact necessitates the contemplation of novel measures with exploration of their validity [2,58,62].
The main motivation of the current study was to evaluate the relevance of adopting a recent AI-based tool, namely SS HICs, for the evaluation of publication impact [24,25]. The idea of using novel indicators to measure the publication impact is tempting considering the previous evidence that the reliance on citation count solely can be misleading, as total citations can be seen as a reflection of publication utility rather than quality [65,66].
The extensive number of publications that were published amid the COVID-19 pandemic could represent a golden opportunity to track publication impact in real-time, albeit disruptive on bibliometric measures to some extent [67]. Thus, we used a dataset of COVID-19-related publications to evaluate the utility of SS HICs compared to the traditional citation metrics and Altmetrics.
According to Valenzuela et al., (2015) [26], important citations are based on using the cited publication or extending such work. Adopting this viewpoint indicates that a majority of publications that were included in this study were cited incidentally without having an important impact on the citing research. Specifically, only 4.57% (413/9029) of the total SS citations compiled in this study were classified as SS HICs. However, this inference cannot be made due to several significant SS shortcomings revealed in this study.
The major finding of this study might be the lack of evidence supporting an added value for SS HICs over traditional total citation counts as a reliable indicator of research impact. Despite the importance of exploring novel methods in the pursuit of valid and reliable tools to evaluate publication impact, dependence on the SS HICs tool should be undertaken with extreme caution based on the findings of this study, including the manual inspection of the articles that scored the top SS HICs. One of the several drawbacks of the SS HICs tool is its reliance on AI. AI is an extremely helpful scientific tool that can construct programs exhibiting intelligence through processes similar to those used by humans in the same tasks (e.g., GPS navigation, driving a vehicle, language translation, chatbots, and beyond-human performance at complex board games) [68,69]. However, it can sometimes be “stupid” [70]. While inaccuracies in citations are a known limitation of blindly using total citation counts as an indicator of research impact [6,71,72], this issue may lead to much more misleading conclusions when using SS HICs, due to the significantly low number of SS HICs compared to total citations, as observed in this study.
There are several aspects of the measurement of SS HICs that lack clarity, including the algorithm used in the process. The importance of transparency in the identification of SS HICs should be highlighted considering the ambiguities related to the differentiation between incidental and important citations [26]. The importance of transparency in achieving scientific rigor and allowing the reproducibility of results in various settings should be emphasized in the case studied in the current work, namely the exact algorithm used to identify HICs. Researchers are currently developing alternative approaches to measuring research impact that go beyond collating citation counts. A recent example worth mentioning [73] involves a smart citation index called scite, which shows how a citation was used by displaying the surrounding textual context from the citing paper, to help indicate whether the statement provides supporting or contrasting evidence for a referenced work, or simply mentions it.
The disparity between AAS and HICs, together with the weak correlation between AAS and total SS citations, was a noteworthy finding in this study and is consistent with other studies that showed the inconsistencies and caveats of altmetrics [74,75,76]. Specifically, inspection of the article with the highest AAS in this study, Hajjo et al. (2021) [49], revealed that this high score was driven by its capture and sharing from Twitter accounts with high numbers of followers. In addition, the news websites that covered the results of this publication mentioned them in a context negative towards COVID-19 vaccination [77,78,79]. Thus, dependence on AAS for the assessment of publication impact can be limited by its susceptibility to social media influence or to misuse such as cherry-picking, misquotation, and misinterpretation [80,81].
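The weak correlation reported in this study (r = 0.094 between HICs and AAS) is a Pearson product-moment coefficient. A minimal, self-contained sketch of that computation is shown below; the article-level values are invented toy data, not the study dataset (which is available in Table S1):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length
    numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy per-article values only (hypothetical): HICs vs. AAS
hics = [0, 1, 0, 3, 0, 2, 0, 0, 1, 0]
aas = [5, 0, 120, 2, 40, 8, 0, 15, 3, 60]
r = pearson_r(hics, aas)
```

Because both HICs and AAS are heavily zero-inflated and skewed, a rank-based coefficient such as Spearman's rho is sometimes preferred in altmetrics studies, although either choice leaves the interpretive caveats discussed above intact.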
Finally, the current study should be interpreted in light of the following limitations: (1) Several aspects of the SS algorithm for identifying HICs were obscure, and correspondence with the SS website did not yield clear and unequivocal answers. One of these ambiguities was manifested in SS total citations that far exceeded the sum of background, methods, and results citations, as shown in this study. (2) We used a convenience sampling approach that limited the study to COVID-19-related publications with authors affiliated to Jordanian universities. The rapid growth of publications amid COVID-19 could have affected citation behavior and publication rigor, given the pressure on academic journals to handle a very high number of manuscripts in limited time. Further, each discipline has its own particularities in scientific communication, affecting not only the structure of the scholarly outputs themselves but also the citation patterns (for example, there are clear differences between Clinical Medicine and Humanities publications at several levels). Therefore, we recommend further studies tackling publications in different subjects, preferably on a global scale and over a longer timeline. (3) The SS tools depend on access to the full text of the citing research, which is not feasible in all cases and could have resulted in selection bias.

5. Conclusions

For SS HICs to be a helpful asset in the evaluation of publication impact, the limitations identified in the current study must be addressed properly. The utility of SS HICs suffers from several drawbacks, including the lack of transparency regarding its measurement and the limited availability of citation context. Thus, SS metrics can, at best, be used as a surrogate marker of publication impact, and continuous improvements are recommended to refine their value. In addition, new citation impact indicators should be introduced only if they provide an added value compared to existing indicators, and more focus should be given to how these indicators are actually used in practice [82].
To conclude, we adapt a quote from Winston S. Churchill, the prime minister of Great Britain from 1940 to 1945: “it has been said that [total citations] is the worst form of [publication impact measurement] except for all those other forms that have been tried from time to time…” [83]. Finally, the quest to find the ideal single measure of publication impact can be tagged “mission impossible”.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/publications11010005/s1, Table S1: Complete dataset used for the analyses.

Author Contributions

Conceptualization, L.A.D.; M.S.; A.W. & F.A.S.; methodology, L.A.D.; M.S.; A.W.; N.S. (Nadia Sweis); N.S. (Narjes Sweis) & F.A.S.; software, A.W.; validation, L.A.D.; M.S. & A.W.; formal analysis, L.A.D. & M.S.; investigation, L.A.D.; M.S.; A.W.; N.S. (Nadia Sweis); & N.S. (Narjes Sweis); resources, A.W.; data curation, L.A.D. & M.S.; writing—original draft preparation, L.A.D.; M.S.; A.W.; N.S. (Nadia Sweis); N.S. (Narjes Sweis) & F.A.S.; writing—review and editing, L.A.D.; M.S.; A.W.; N.S. (Nadia Sweis); N.S. (Narjes Sweis) & F.A.S.; visualization, L.A.D. & M.S.; supervision, L.A.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Data Availability Statement

The data supporting the reported results can be found in the Supplementary Materials.

Conflicts of Interest

Malik Sallam was an author of two publications that were ranked among the top ten articles with highly influential citations included in this study. The other authors declare no conflict of interest.

References

  1. Bornmann, L.; Daniel, H.D. What do citation counts measure? A review of studies on citing behavior. J. Doc. 2008, 64, 45–80. [Google Scholar] [CrossRef] [Green Version]
  2. Aksnes, D.W.; Langfeldt, L.; Wouters, P. Citations, citation indicators, and research quality: An overview of basic concepts and theories. SAGE Open 2019, 9, 2158244019829575. [Google Scholar] [CrossRef] [Green Version]
  3. van Wesel, M. Evaluation by citation: Trends in publication behavior, evaluation criteria, and the strive for high impact publications. Sci. Eng. Ethics 2016, 22, 199–225. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Catalini, C.; Lacetera, N.; Oettl, A. The incidence and role of negative citations in science. Proc. Natl. Acad. Sci. USA 2015, 112, 13823–13826. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Etzioni, O. Artificial intelligence: AI zooms in on highly influential citations. Nature 2017, 547, 32. [Google Scholar] [CrossRef] [Green Version]
  6. Ramos, M.A.; Melo, J.G.; Albuquerque, U.P. Citation behavior in popular scientific papers: What is behind obscure citations? The case of ethnobotany. Scientometrics 2012, 92, 711–719. [Google Scholar] [CrossRef]
  7. Clermont, M.; Krolak, J.; Tunger, D. Does the citation period have any effect on the informative value of selected citation indicators in research evaluations? Scientometrics 2021, 126, 1019–1047. [Google Scholar] [CrossRef]
  8. Hutchins, B.I.; Yuan, X.; Anderson, J.M.; Santangelo, G.M. Relative Citation Ratio (RCR): A new metric that uses citation rates to measure influence at the article level. PLoS Biol. 2016, 14, e1002541. [Google Scholar] [CrossRef] [Green Version]
  9. Oliveira J e Silva, L.; Maldonado, G.; Brigham, T.; Mullan, A.F.; Utengen, A.; Cabrera, D. Evaluating scholars’ impact and influence: Cross-sectional study of the correlation between a novel social media–based score and an author-level citation metric. J. Med. Internet Res. 2021, 23, e28859. [Google Scholar] [CrossRef]
  10. Kwok, P. Research impact: Altmetrics make their mark. Nature 2013, 500, 491–493. [Google Scholar] [CrossRef]
  11. Patthi, B.; Prasad, M.; Gupta, R.; Singla, A.; Kumar, J.K.; Dhama, K.; Niraj, L.K. Altmetrics—A collated adjunct beyond citations for scholarly impact: A systematic review. J. Clin. Diagn. Res. JCDR 2017, 11, ZE16–ZE20. [Google Scholar] [CrossRef] [PubMed]
  12. Altmetrics. The Donut and Altmetric Attention Score. 2022. Available online: https://www.altmetric.com/about-our-data/the-donut-and-score/ (accessed on 11 August 2022).
  13. Bornmann, L. Alternative metrics in scientometrics: A meta-analysis of research into three altmetrics. Scientometrics 2015, 103, 1123–1144. [Google Scholar] [CrossRef] [Green Version]
  14. Thelwall, M. Early Mendeley readers correlate with later citation counts. Scientometrics 2018, 115, 1231–1240. [Google Scholar] [CrossRef]
  15. Thelwall, M.; Haustein, S.; Larivière, V.; Sugimoto, C.R. Do Altmetrics work? Twitter and ten other social web services. PLoS ONE 2013, 8, e64841. [Google Scholar] [CrossRef] [Green Version]
  16. Maflahi, N.; Thelwall, M. When are readership counts as useful as citation counts? Scopus versus Mendeley for LIS journals. J. Assoc. Inf. Sci. Technol. 2016, 67, 191–199. [Google Scholar] [CrossRef] [Green Version]
  17. Zahedi, Z.; Costas, R.; Wouters, P. Mendeley readership as a filtering tool to identify highly cited publications. J. Assoc. Inf. Sci. Technol. 2017, 68, 2511–2521. [Google Scholar] [CrossRef] [Green Version]
  18. Holmberg, K.; Bowman, S.; Bowman, T.; Didegah, F.; Kortelainen, T. What is societal impact and where do altmetrics fit into the equation? J. Altmetr. 2019, 2, 6. [Google Scholar] [CrossRef]
  19. Elmore, S.A. The Altmetric Attention Score: What does it mean and why should I care? Toxicol. Pathol. 2018, 46, 252–255. [Google Scholar] [CrossRef] [Green Version]
  20. Jones, N. Artificial-Intelligence Institute Launches Free Science Search Engine. 2015. Available online: https://www.nature.com/articles/nature.2015.18703 (accessed on 12 June 2022).
  21. Fricke, S. Semantic scholar. J. Med. Libr. Assoc. 2018, 106, 145–147. [Google Scholar] [CrossRef]
  22. Gusenbauer, M.; Haddaway, N.R. Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Res. Synth. Methods 2020, 11, 181–217. [Google Scholar] [CrossRef]
  23. Baykoucheva, S. Driving Science Information Discovery in the Digital Age, 1st ed.; Chandos Publishing: Oxford, UK, 2021. [Google Scholar]
  24. Semantic Scholar. Resources: Frequently Asked Questions. 2022. Available online: https://www.semanticscholar.org/faq (accessed on 11 June 2022).
  25. Semantic Scholar. Tutorials—Semantic Scholar. 2022. Available online: https://www.semanticscholar.org/product/tutorials (accessed on 11 June 2022).
  26. Valenzuela, M.; Ha, V.; Etzioni, O. Identifying meaningful citations. In Proceedings of the Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015. [Google Scholar]
  27. Digital Science & Research Solutions Inc. Dimensions. 2022. Available online: https://www.dimensions.ai/ (accessed on 8 April 2022).
  28. Turney, P. Allen Institute for Artificial Intelligence: Vision, Projects, Results. 2015. Available online: https://www.kiv.zcu.cz/tsd2015/content/doc/tsd2015-kn-turney.pdf (accessed on 11 June 2022).
  29. Garrard, J. Health Sciences Literature Review Made Easy: The Matrix Method, 4th ed.; Jones & Bartlett Learning: Burlington, MA, USA, 2014. [Google Scholar]
  30. Singh, V.K.; Singh, P.; Karmakar, M.; Leta, J.; Mayr, P. The journal coverage of Web of Science, Scopus and Dimensions: A comparative analysis. Scientometrics 2021, 126, 5113–5142. [Google Scholar] [CrossRef]
  31. Costas, R.; Zahedi, Z.; Wouters, P. Do “altmetrics” correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective. J. Assoc. Inf. Sci. Technol. 2015, 66, 2003–2019. [Google Scholar] [CrossRef] [Green Version]
  32. Ortega, J.L. Disciplinary differences of the impact of altmetric. FEMS Microbiol. Lett. 2018, 365, fny049. [Google Scholar] [CrossRef] [Green Version]
  33. Baessa, M.; Lery, T.; Grenz, D.; Vijayakumar, J.K. Connecting the pieces: Using ORCIDs to improve research impact and repositories [version 1; peer review: 2 approved]. F1000Research 2015, 4, 195. [Google Scholar] [CrossRef] [PubMed]
  34. Gumpenberger, C.; Glänzel, W.; Gorraiz, J. The ecstasy and the agony of the altmetric score. Scientometrics 2016, 108, 977–982. [Google Scholar] [CrossRef] [Green Version]
  35. Chan, Y.H. Biostatistics 104: Correlational analysis. Singap. Med. J. 2003, 44, 614–619. [Google Scholar]
  36. Khader, Y.; Al Nsour, M.; Al-Batayneh, O.B.; Saadeh, R.; Bashier, H.; Alfaqih, M.; Al-Azzam, S. Dentists’ awareness, perception, and attitude regarding COVID-19 and infection control: Cross-sectional study among Jordanian dentists. JMIR Public Health Surveill. 2020, 6, e18798. [Google Scholar] [CrossRef] [PubMed]
  37. Sallam, M. COVID-19 vaccine hesitancy worldwide: A concise systematic review of vaccine acceptance rates. Vaccines 2021, 9, 160. [Google Scholar] [CrossRef]
  38. Al-Balas, M.; Al-Balas, H.I.; Jaber, H.M.; Obeidat, K.; Al-Balas, H.; Aborajooh, E.A.; Al-Taher, R.; Al-Balas, B. Distance learning in clinical medical education amid COVID-19 pandemic in Jordan: Current situation, challenges, and perspectives. BMC Med. Educ. 2020, 20, 341. [Google Scholar] [CrossRef]
  39. Rabi, F.A.; Al Zoubi, M.S.; Kasasbeh, G.A.; Salameh, D.M.; Al-Nasser, A.D. SARS-CoV-2 and coronavirus disease 2019: What we know so far. Pathogens 2020, 9, 231. [Google Scholar] [CrossRef]
  40. Khasawneh, A.I.; Humeidan, A.A.; Alsulaiman, J.W.; Bloukh, S.; Ramadan, M.; Al-Shatanawi, T.N.; Awad, H.H.; Hijazi, W.Y.; Al-Kammash, K.R.; Obeidat, N.; et al. Medical students and COVID-19: Knowledge, attitudes, and precautionary measures. A descriptive study from Jordan. Front. Public Health 2020, 8, 253. [Google Scholar] [CrossRef] [PubMed]
  41. Almaiah, M.A.; Al-Khasawneh, A.; Althunibat, A. Exploring the critical challenges and factors influencing the E-learning system usage during COVID-19 pandemic. Educ. Inf. Technol. 2020, 25, 5261–5280. [Google Scholar] [CrossRef]
  42. Islam, M.T.; Sarkar, C.; El-Kersh, D.M.; Jamaddar, S.; Uddin, S.J.; Shilpi, J.A.; Mubarak, M.S. Natural products and their derivatives against coronavirus: A review of the non-clinical and pre-clinical data. Phytother. Res. PTR 2020, 34, 2471–2492. [Google Scholar] [CrossRef] [PubMed]
  43. Sallam, M.; Dababseh, D.; Eid, H.; Al-Mahzoum, K.; Al-Haidar, A.; Taim, D.; Yaseen, A.; Ababneh, N.; Bakri, F.; Mahafzah, A. High rates of COVID-19 vaccine hesitancy and its association with conspiracy beliefs: A study in Jordan and Kuwait among other arab countries. Vaccines 2021, 9, 42. [Google Scholar] [CrossRef]
  44. Abuhammad, S. Barriers to distance learning during the COVID-19 outbreak: A qualitative review from parents’ perspective. Heliyon 2020, 6, e05482. [Google Scholar] [CrossRef] [PubMed]
  45. Dar-Odeh, N.; Babkair, H.; Abu-Hammad, S.; Borzangy, S.; Abu-Hammad, A.; Abu-Hammad, O. COVID-19: Present and future challenges for dental practice. Int. J. Environ. Res. Public Health 2020, 17, 3151. [Google Scholar] [CrossRef] [PubMed]
  46. Khan, M.I.H.; Maqsood, M.; Kataria, M.A.; Sagheer, A.; Basit, A.; Khan, Z.A. Comparison of features of corona virus in confirmed and unconfirmed patients in Lahore. J. Rawalpindi Med. Coll. 2020, 24, 166–172. [Google Scholar] [CrossRef]
  47. Vuković, A.; Mandić-Rajčević, S.; Sava-Rosianu, R.; D Betancourt, M.; Xhajanka, E.; Hysenaj, N.; Bajric, E.; Zukanović, A.; Philippides, V.; Zosimas, M.; et al. Pediatric Dentists’ Service Provisions in South-East Europe during the first wave of COVID-19 epidemic: Lessons learned about preventive measures and personal protective equipment use. Int. J. Environ. Res. Public Health 2021, 18, 11795. [Google Scholar] [CrossRef]
  48. Ibrahim, N.K.; Al Raddadi, R.; AlDarmasi, M.; Al Ghamdi, A.; Gaddoury, M.; AlBar, H.M.; Ramadan, I.K. Medical students’ acceptance and perceptions of e-learning during the Covid-19 closure time in King Abdulaziz University, Jeddah. J. Infect. Public Health 2021, 14, 17–23. [Google Scholar] [CrossRef]
  49. Hajjo, R.; Sabbah, D.A.; Bardaweel, S.K.; Tropsha, A. Shedding the light on post-vaccine myocarditis and pericarditis in COVID-19 and non-COVID-19 vaccine recipients. Vaccines 2021, 9, 1186. [Google Scholar] [CrossRef]
  50. Yusef, D.; Hayajneh, W.; Awad, S.; Momany, S.; Khassawneh, B.; Samrah, S.; Obeidat, B.; Raffee, L.; Al-Faouri, I.; Issa, A.B.; et al. Large Outbreak of coronavirus disease among wedding attendees, Jordan. Emerg. Infect. Dis. 2020, 26, 2165–2167. [Google Scholar] [CrossRef]
  51. Glasbey, J.; Nepogodiev, D.; Simoes, J.; Omar, O.; Li, E.; Venn, M.; Buarque, I. Elective cancer surgery in COVID-19-free surgical pathways during the SARS-CoV-2 pandemic: An international, multicenter, comparative cohort study. J. Clin. Oncol. 2021, 39, 66–78. [Google Scholar] [CrossRef] [PubMed]
  52. Alafeef, M.; Dighe, K.; Moitra, P.; Pan, D. Rapid, ultrasensitive, and quantitative detection of SARS-CoV-2 using antisense oligonucleotides directed electrochemical biosensor chip. ACS Nano 2020, 14, 17028–17045. [Google Scholar] [CrossRef]
  53. Qunaibi, E.A.; Helmy, M.; Basheti, I.; Sultan, I. A high rate of COVID-19 vaccine hesitancy in a large-scale survey on Arabs. eLife 2021, 10, e68038. [Google Scholar] [CrossRef] [PubMed]
  54. Alzoughool, F.; Alanagreh, L. Coronavirus drugs: Using plasma from recovered patients as a treatment for COVID-19. Int. J. Risk Saf. Med. 2020, 31, 47–51. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  55. Abu-Rumaileh, M.A.; Gharaibeh, A.M.; Gharaibeh, N.E. COVID-19 vaccine and hyperosmolar hyperglycemic state. Cureus 2021, 13, e14125. [Google Scholar] [CrossRef] [PubMed]
  56. Huy, N.T.; Chico, R.M.; Huan, V.T.; Shaikhkhalil, H.W.; Uyen, V.N.T.; Qarawi, A.T.A.; Alhady, S.T.M.; Vuong, N.L.; Van Truong, L.; Luu, M.N.; et al. Awareness and preparedness of healthcare workers against the first wave of the COVID-19 pandemic: A cross-sectional survey across 57 countries. PLoS ONE 2021, 16, e0258348. [Google Scholar] [CrossRef]
  57. Carpenter, C.R.; Cone, D.C.; Sarli, C.C. Using publication metrics to highlight academic productivity and research impact. Acad. Emerg. Med. 2014, 21, 1160–1172. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  58. Penfield, T.; Baker, M.J.; Scoble, R.; Wykes, M.C. Assessment, evaluations, and definitions of research impact: A review. Res. Eval. 2014, 23, 21–32. [Google Scholar] [CrossRef] [Green Version]
  59. Chavda, J.; Patel, A. Measuring research impact: Bibliometrics, social media, altmetrics, and the BJGP. Br. J. Gen. Pract. 2016, 66, e59–e61. [Google Scholar] [CrossRef] [Green Version]
  60. Morales, E.; McKiernan, E.C.; Niles, M.T.; Schimanski, L.; Alperin, J.P. How faculty define quality, prestige, and impact of academic journals. PLoS ONE 2021, 16, e0257340. [Google Scholar] [CrossRef] [PubMed]
  61. Adam, P.; Ovseiko, P.V.; Grant, J.; Graham, K.E.; Boukhris, O.F.; Dowd, A.M.; Balling, G.V.; Christensen, R.N.; Pollitt, A.; Taylor, M.; et al. ISRIA statement: Ten-point guidelines for an effective process of research impact assessment. Health Res. Policy Syst. 2018, 16, 8. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  62. Bornmann, L.; Marx, W. How good is research really? EMBO Rep. 2013, 14, 226–230. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  63. Searles, A.; Doran, C.; Attia, J.; Knight, D.; Wiggers, J.; Deeming, S.; Nilsson, M. An approach to measuring and encouraging research translation and research impact. Health Res. Policy Syst. 2016, 14, 60. [Google Scholar] [CrossRef] [Green Version]
  64. Bollen, J.; Van de Sompel, H.; Hagberg, A.; Chute, R. A principal component analysis of 39 scientific impact measures. PLoS ONE 2009, 4, e6022. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Aragón, A. A measure for the impact of research. Sci. Rep. 2013, 3, 1649. [Google Scholar] [CrossRef] [Green Version]
  66. Szomszor, M.; Pendlebury, D.A.; Adams, J. How much is too much? The difference between research influence and self-citation excess. Scientometrics 2020, 123, 1119–1147. [Google Scholar] [CrossRef]
  67. Fassin, Y. Research on Covid-19: A disruptive phenomenon for bibliometrics. Scientometrics 2021, 126, 5305–5319. [Google Scholar] [CrossRef]
  68. Nichols, J.A.; Herbert Chan, H.W.; Baker, M.A.B. Machine learning: Applications of artificial intelligence to imaging and diagnosis. Biophys. Rev. 2019, 11, 111–118. [Google Scholar] [CrossRef]
  69. Simon, H.A. Artificial intelligence: An empirical science. Artif. Intell. 1995, 77, 95–127. [Google Scholar] [CrossRef] [Green Version]
  70. Bishop, J. Artificial intelligence is stupid and causal reasoning will not fix it. Front. Psychol. 2021, 11, 513474. [Google Scholar] [CrossRef] [PubMed]
  71. Jergas, H.; Baethge, C. Quotation accuracy in medical journal articles—A systematic review and meta-analysis. PeerJ 2015, 3, e1364. [Google Scholar] [CrossRef]
  72. van de Weert, M.; Stella, L. The dangers of citing papers you did not read or understand. J. Mol. Struct. 2019, 1186, 102–103. [Google Scholar] [CrossRef] [Green Version]
  73. Nicholson, J.M.; Mordaunt, M.; Lopez, P.; Uppala, A.; Rosati, D.; Rodrigues, N.P.; Grabitz, P.; Rife, S.C. Scite: A smart citation index that displays the context of citations and classifies their intent using deep learning. Quant. Sci. Stud. 2021, 2, 882–898. [Google Scholar] [CrossRef]
  74. Erdt, M.; Nagarajan, A.; Sin, S.C.J.; Theng, Y.L. Altmetrics: An analysis of the state-of-the-art in measuring research impact on social media. Scientometrics 2016, 109, 1117–1166. [Google Scholar] [CrossRef]
  75. Meschede, C.; Siebenlist, T. Cross-metric compatability and inconsistencies of altmetrics. Scientometrics 2018, 115, 283–297. [Google Scholar] [CrossRef]
  76. Mukherjee, B.; Subotić, S.; Chaubey, A.K. And now for something completely different: The congruence of the Altmetric Attention Score’s structure between different article groups. Scientometrics 2018, 114, 253–275. [Google Scholar] [CrossRef]
  77. Altmetric News Mentions. Overview of Attention for Article Published in Vaccines, October 2021: Shedding the Light on Post-Vaccine Myocarditis and Pericarditis in COVID-19 and Non-COVID-19 Vaccine Recipients. 2022. Available online: https://mdpi.altmetric.com/details/115359251/news (accessed on 29 August 2022).
  78. Altmetric Twitter Mentions. Overview of Attention for Article Published in Vaccines, October 2021: Shedding the Light on Post-Vaccine Myocarditis and Pericarditis in COVID-19 and Non-COVID-19 Vaccine Recipients. 2022. Available online: https://mdpi.altmetric.com/details/115359251/twitter (accessed on 29 August 2022).
  79. Children’s Health Defense. Vaccine-Induced Myocarditis Injuring Record Number of Young People. Will Shots Also Bankrupt Families? 2022. Available online: https://www.globalresearch.ca/vaccine-induced-myocarditis-injuring-record-number-young-people-shots-bankrupt-families/5768958 (accessed on 29 August 2022).
  80. Crotty, D. Altmetrics. Eur. Heart J. 2017, 38, 2647–2648. [Google Scholar] [CrossRef]
  81. García-Villar, C. A critical review on altmetrics: Can we measure the social impact factor? Insights Imaging 2021, 12, 92. [Google Scholar] [CrossRef]
  82. Waltman, L. A review of the literature on citation impact indicators. J. Informetr. 2016, 10, 365–391. [Google Scholar] [CrossRef]
  83. The International Churchill Society. Quotes: The Worst Form of Government. 2016. Available online: https://winstonchurchill.org/resources/quotes/the-worst-form-of-government/ (accessed on 5 September 2022).
Figure 1. Semantic Scholar process of identifying influential citations with examples listed in increasing order of importance (adapted from [26]).
Figure 2. Flowchart for the search process and study selection.
Figure 3. The lack of correlation between Semantic Scholar Highly Influential Citations and the Altmetric Attention Scores.
Table 1. Characteristics of the included publications based on different metrics (N = 618).
Bibliometric | Category | n (%)
SS 1 total citations (n = 615) | zero | 139 (22.6)
 | 1–10 | 297 (48.3)
 | 11–100 | 164 (26.7)
 | >100 | 15 (2.4)
SS HICs 2 (n = 615) | zero | 460 (74.8)
 | 1 | 84 (13.7)
 | 2–5 | 55 (8.9)
 | 6–10 | 9 (1.5)
 | >10 | 7 (1.1)
AAS 3 (n = 433) | zero 4 | 18 (4.2)
 | 1–10 | 327 (75.5)
 | 11–100 | 75 (17.3)
 | >100 | 13 (3.0)
1 SS: Semantic Scholar; 2 HICs: Highly influential citations; 3 AAS: Altmetric attention score; 4 A score of 0 indicates that the software has not tracked any attention.
Table 2. Features of the top 10 included publications in descending order based on SS HICs.
PublicationHICs 1Total SS 2 CitationsAAS 3Mendeley ReadersTotal SS Citations RankingAAS RankingSS HICs RankingMend. Readers Ranking
(Khader et al., 2020) [36]212634744416814
(Sallam, 2021) [37]2056326516421422
(Al-Balas et al., 2020) [38]161671666734836
(Rabi et al., 2020) [39]1543886171121541
(Khasawneh et al., 2020) [40]151521554936558
(Almaiah et al., 2020) [41]15301--3-6-
(Islam et al., 2020) [42]12120344661332717
(Sallam et al., 2021) [43]922616552659810
(Abuhammad, 2020) [44]962147929386913
(Odeh et al., 2020) [45]71081503143121012
1 HICs: Highly influential citations; 2 SS: Semantic Scholar; 3 AAS: Altmetric attention score.
Table 3. Comparing the total number of HICs originally detected by semantic scholar and the resulting values after manual inspection.
Publication | SS HICs 1 | True HICs | Percentage
(Khader et al., 2020) [36] | 26 | 4 | 15.4%
(Sallam, 2021) [37] | 31 | 6 | 19.4%
(Al-Balas et al., 2020) [38] | 24 | 3 | 12.5%
(Rabi et al., 2020) [39] | 15 | 1 | 6.7%
(Khasawneh et al., 2020) [40] | 22 | 2 | 9.1%
(Almaiah et al., 2020) [41] | 38 | 0 | 0%
(Islam et al., 2020) [42] | 15 | 4 | 26.7%
(Sallam et al., 2021) [43] | 18 | 6 | 33.3%
(Abuhammad, 2020) [44] | 19 | 5 | 26.3%
(Odeh et al., 2020) [45] | 9 | 2 | 22.2%
1 The number of highly influential citations was updated on 3 January 2022; thus, the order of publications differs from that presented in Table 2.
Table 4. Characteristics of the top 10 COVID-19 related papers in the study in terms of Altmetric Attention Score (AAS).
Publication(Hajjo et al., 2021) [49](Yusef et al., 2020) [50](Glasbey et al., 2021) [51](Sallam, 2021) [37](Alafeef et al., 2020) [52](Qunaibi et al., 2021) [53](Alzoughool & Alanagreh, 2020) [54](Abu-Rumaileh et al., 2021) [55](Sallam et al., 2021) [43](Huy et al., 2021) [56]
AAS 11269773510265253195185168165146
Total SS 2 citations9471015631302821102261
HICs 300120611190
News mentions34138282842432118
Blog mentions3351701001
Policy mentions0301000000
Patent mentions0000000000
Twitter mentions27257557279152189162431617
Peer review mentions0000000000
Facebook mentions1000000101
Wikipedia mentions0020000000
Reddit mentions1200000000
Pinterest mentions0000000000
F1000 mentions0000000000
QampA mentions0000000000
Video mentions0010000000
Mendeley readers4810127916423381171893152627
SS citations ranking1934215111771011835417
AAS ranking12345678910
SS HICs ranking27816879214112731078233
Mendeley ranking216129372281116627710293
1 AAS: Altmetric attention score; 2 SS: Semantic Scholar; 3 HICs: Highly influential citations.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
