Article

Three Commonly Utilized Scholarly Databases and a Social Network Site Provide Different, But Related, Metrics of Pharmacy Faculty Publication

by Kyle J. Burghardt 1,*, Bradley H. Howlett 1, Audrey S. Khoury 1, Stephanie M. Fern 1 and Paul R. Burghardt 2

1 Wayne State University Eugene Applebaum College of Pharmacy and Health Sciences, 259 Mack Avenue, Suite 2190, Detroit, MI 48202, USA
2 Wayne State University Nutrition and Food Sciences, Science Hall, 5045 Cass Ave., Detroit, MI 48202, USA
* Author to whom correspondence should be addressed.
Publications 2020, 8(2), 18; https://doi.org/10.3390/publications8020018
Submission received: 24 February 2020 / Revised: 26 March 2020 / Accepted: 27 March 2020 / Published: 1 April 2020

Abstract
Scholarly productivity is a critical component of pharmacy faculty effort and is used for promotion and tenure decisions. Several databases are available to measure scholarly productivity; however, comparisons amongst these databases are lacking for pharmacy faculty. The objective of this work was to compare scholarly metrics from three commonly utilized databases and a social networking site, focusing on data from research-intensive colleges of pharmacy, and to identify factors associated with database differences. Scholarly metrics were obtained from Scopus, Web of Science, Google Scholar, and ResearchGate for faculty from research-intensive (Carnegie-rated R1, R2, or special focus) United States pharmacy schools with at least two million USD in funding from the National Institutes of Health. Metrics were compared between databases and correlations were performed. Regression analyses were utilized to identify factors associated with database differences. Significant differences in scholarly metric values were observed between databases despite high correlations, suggestive of systematic variation in database reporting. Time since first publication was the factor most commonly associated with database differences. Google Scholar tended to have higher metrics than all other databases, while Web of Science had lower metrics relative to the other databases. Differences in reported metrics between databases are apparent and may be attributable to time since first publication and database coverage of pharmacy-specific journals. These differences should be considered by faculty, reviewers, and administrative staff when evaluating scholarly performance.

1. Introduction

One of the activities of faculty within research-intensive colleges of pharmacy is to add to the scientific record through research and scholarly activities. Scholarly metrics, or bibliometrics, generally refer to the analysis of individual productivity and impact through publication and citation counts [1]. Broadly, methods of evaluating research performance and impact for the purposes of promotion, merit, and tenure have been offered that include the Hirsch index (H-index), journal impact factor, number of publications, authorship position, and citation rates [2,3,4]. Such metrics are used as “judgment devices” by reviewers and the scientific community in evaluating an individual researcher and the impact of their work [4]. Within schools of pharmacy, these same scholarly metrics are utilized, in addition to service, teaching, and clinical activities, in the review of faculty performance and promotion [5,6,7]. Previously, obtaining such measures required specialized software or was otherwise not easily accessible. Now, several common publication databases offer scholarly metric data for searched authors, including Scopus, Web of Science, and Google Scholar. Additionally, ResearchGate, a social networking site, provides scholarly metric data on user profiles. These easily accessible metrics allow for rapid assessment of individual researchers as well as the collection and comparison of data across large groups.
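The H-index referenced above is computed directly from a researcher's per-paper citation counts: it is the largest h such that h of the researcher's papers each have at least h citations. The following is a minimal sketch of that standard definition (our illustration, not code from the study):

```python
def h_index(citations):
    """Return the largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4, and 3 times yield an H-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```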
These multiple, distinct scholarly metric databases offer faculty sources that can be used to measure their own performance, to prepare promotion and tenure packets, or to assess research impact generally. Due to their inherent differences (i.e., software and algorithm designs), these databases may provide differing measures of scholarly productivity for the same faculty member, which can make reviews and evaluations of faculty performance more complex. For example, particular pharmacy-relevant journals may be covered in only a subset of databases, potentially leading to underestimation of a given faculty member's metrics in the databases that lack that coverage. To date, scholarly metrics from multiple databases have not been explored within college of pharmacy faculty to understand what differences, if any, may exist in their measurements. Therefore, it is not known whether scholarly productivity is measured differently between databases, what factors may predict such differences, and whether a given database is best able to measure the scholarly performance of pharmacy faculty.
The primary objective of this work was to identify differences, correlations, and predictors of differences in reporting metrics between Scopus, Web of Science, Google Scholar, and ResearchGate by collecting faculty scholarly metric data from research-intensive United States (U.S.) colleges of pharmacy. These findings will allow faculty members, peer reviewers of grants and promotion packets, administrators, and other key stakeholders to better understand how these databases represent faculty scholarly publication so that, in the future, objective standards of productivity and review may be set.

2. Materials and Methods

Faculty names from the included U.S. colleges of pharmacy were collected from the American Association of Colleges of Pharmacy (AACP) and individual college of pharmacy website directories according to our previous published methodology [8]. To be included in the present analysis, U.S. colleges of pharmacy must have had greater than 2 million USD in 2018 U.S. National Institutes of Health (NIH) funding according to AACP grant data and a Basic Carnegie Classification of R1 (very high), R2 (high), or special focus. We chose this cutoff since we wanted to analyze differences from a large, representative sample of research-intensive, NIH-funded U.S. colleges of pharmacy. The Carnegie classification utilizes several criteria to determine research intensity; “special focus” schools were included, since the number-one ranked pharmacy school is special focus, and we did not want to exclude research-intensive schools with this classification [9]. NIH funding was chosen due to its transparent reporting (as compared to foundation or industry funding, which is not readily available), and the median cutoff was used due to the skewed nature of funding across pharmacy schools. Although other definitions have been utilized to define research-intensive colleges of pharmacy, the inclusion of such a large sample size here may assist in minimizing these definitional differences [10,11,12]. Full-time academic faculty with rank were included in this analysis, while adjunct, research, and emeritus faculty were excluded. For each included faculty member, the following data were collected: (1) Name (for database searching), (2) position with rank, (3) department, and (4) university affiliation. Departments were categorized as “clinical” (e.g., Pharmacy Practice, etc.) or “basic” (e.g., Pharmaceutical Science, etc.).
Searches for scholarly metrics were performed in Scopus, Web of Science (WOS), Google Scholar, and the social networking site ResearchGate. Scopus, Web of Science, and Google Scholar were included because they each have wide coverage in the biomedical sciences and pharmacy in particular [13,14,15,16]. Although each has wide coverage, Google Scholar potentially covers more non-traditional sources, such as conference abstracts and graduate theses, which could influence reported metrics [17,18]. Additionally, differences do exist in pharmacy coverage. For example, Mendes and colleagues found that, of 285 identified pharmacy journals, 90% were found in Scopus, while 44.6% were found in Web of Science [19]. Additionally, on a qualitative level, citation counts from these three databases (or some combination thereof) are commonly reported on faculty promotion and tenure applications (within our university, at least one is required). Although ResearchGate, a social networking site, should not be considered a primary source for scholarly metric data, we chose to include it given that it is a highly used research social network with over 15 million users [20]. This means that researchers often view other researchers, their work, and their scholarly metrics through this site, making it a potential future model for combining standard scholarly metrics with proposed “altmetrics” [21,22]. However, other points discussed below should be considered for ResearchGate.
For Scopus, Web of Science, and Google Scholar, the “author search” utility was used, and for ResearchGate, the general query search bar was used. Each faculty member's first name, last name, and middle initial were entered along with their university affiliation (searches performed by B.H., S.F., and A.K.). If no record was returned, a secondary author (K.B.) performed a verification search. Alternative searches were also performed with additional information obtained from a faculty directory profile, such as past universities, alternate names, or listed publications. An additional, randomized confirmatory search process (K.B.) was used to confirm the presence or absence of database records and the accuracy of extracted data. For each database, the following information was collected: number of documents, H-index, total citations, and highest-cited article. For Scopus, the year of first publication was also collected to calculate the number of publishing years, as this information was readily obtainable from searches. For ResearchGate, the highest-cited article metric was not readily available for collection. Data for each faculty name were collected from their first publication up to June 2019, when the database searching was completed. This research utilized non-human-subject informational databases and, therefore, was exempt from ethics review.
The primary research question to be answered was: “What are the differences in scholarly metric reporting amongst major databases, and can any individual or college-level factors predict identified differences?” We believe that individual or college-level factors may contribute to any database differences, since they may expose the underlying nature of these databases. For example, clinical faculty may more regularly publish in pharmacy journals that are covered only in Scopus or Google Scholar; therefore, this distinction may be helpful in explaining database differences.
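For concreteness, the per-faculty record assembled from these searches can be pictured roughly as follows; this is our illustrative sketch (field names are hypothetical), mirroring the data items listed above:

```python
# Hypothetical sketch of the per-faculty record collected in this study; the
# field names are ours, but the contents mirror the data items listed above.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class DatabaseMetrics:
    documents: Optional[int] = None        # total indexed documents
    h_index: Optional[int] = None
    total_citations: Optional[int] = None
    highest_cited: Optional[int] = None    # citations of the single most-cited article (not available for ResearchGate)

@dataclass
class FacultyRecord:
    name: str                              # used for database searching
    rank: str                              # e.g., "Associate Professor"
    department_type: str                   # "clinical" or "basic"
    university: str
    first_publication_year: Optional[int] = None   # collected from Scopus only
    # keyed by "scopus", "wos", "google_scholar", "researchgate"
    metrics: Dict[str, DatabaseMetrics] = field(default_factory=dict)
```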
Due to the skewed nature of scholarly metrics, we report the data with both means ± standard deviation and medians with interquartile ranges (IQR), and we compared each common metric between databases using Wilcoxon signed-rank tests and assessed agreement with Spearman rho correlations [23]. For identification of predictive variables, Student's t-tests were first performed to identify significant associations between a given variable and a difference in metrics between databases. Student's t-tests were used here because the calculated differences in metrics (e.g., the difference in number of documents between Scopus and Web of Science) were normally distributed. Since 42 tests were performed for variable identification, a Bonferroni cutoff of 0.001 (0.05/42) was utilized. Next, significant variables were entered into regressions. Specifically, linear regressions were performed in which the calculated database difference (e.g., the difference in number of documents between Scopus and WOS) was the dependent variable and the significant variables from the preceding step were entered as independent variables. Beta estimates and p-values were determined from the regression analyses. A correction for multiple comparisons was made for the primary statistics in the manuscript (i.e., differences between databases, correlations between databases, and identification of predictors through regression analyses) by applying a Bonferroni significance level of <0.0004 (0.05/117 for 117 total tests). All analyses were performed with JMP statistical software version 14.0.
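The analyses were run in JMP 14.0; purely as an illustration of the workflow just described, a rough Python sketch (with hypothetical file and column names) might look like the following:

```python
# Illustrative sketch of the analysis workflow described above; the study itself
# used JMP 14.0, and the file/column names here are hypothetical.
import pandas as pd
from scipy.stats import wilcoxon, spearmanr, ttest_ind
import statsmodels.api as sm

df = pd.read_csv("faculty_metrics.csv")  # one row per faculty member

# 1. Paired, non-parametric comparison of a common metric between two databases,
#    plus a Spearman correlation to gauge agreement (metrics are skewed).
paired = df[["docs_scopus", "docs_wos"]].dropna()
w_stat, w_p = wilcoxon(paired["docs_scopus"], paired["docs_wos"])
rho, rho_p = spearmanr(paired["docs_scopus"], paired["docs_wos"])

# 2. Screen candidate predictors of the (approximately normal) database difference
#    with t-tests, using a Bonferroni cutoff of 0.05 / 42 ≈ 0.001.
diff = paired["docs_scopus"] - paired["docs_wos"]
dept = df.loc[paired.index, "dept_type"]
t_stat, t_p = ttest_ind(diff[dept == "clinical"], diff[dept == "basic"])

# 3. Regress the database difference on predictors that survived screening;
#    primary results are judged at the Bonferroni level 0.05 / 117 ≈ 0.0004.
X = sm.add_constant(df.loc[paired.index, ["years_since_first_pub"]])
fit = sm.OLS(diff, X, missing="drop").fit()
print(fit.params, fit.pvalues)
```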

3. Results

A total of 3023 full-time faculty from 48 U.S. colleges of pharmacy were included (Table S1), which yielded 7713 records from the four searched databases. Table 1 presents an overview of the included faculty. Of note, the number of available Google Scholar and ResearchGate records was lower compared to Scopus and WOS.

3.1. Differences in Scholarly Metrics between Scopus, Web of Science, Google Scholar, and ResearchGate

Table 2 presents data for common scholarly metrics across the dataset and the percentage differences between databases for each metric. All pairwise comparisons were significantly different.

3.2. Correlations between Scopus, Web of Science, Google Scholar, and ResearchGate

We performed correlations as another step to gauge agreement in metrics between the four included datasets (Table 3). The correlations between databases were above 0.89 for documents, and those for the H-index, total citations, and highest-cited document were all above 0.9. Expanded correlations are found in Supplementary Table S2.

3.3. Factors Associated with Metric Differences

To identify individual and college-level factors potentially associated with database differences, we first performed Student's t-tests with the available variables of interest to identify significant associations with database differences. Next, these significant variables were entered into a regression model with the database metric difference as the dependent variable. Table 4 depicts the beta estimates from these regressions.

4. Discussion

This study provides the first comparison of scholarly metrics from highly utilized, easily accessible databases and a social networking site using data from research-intensive U.S. colleges of pharmacy. The data indicate that, despite high correlations in their measures, these databases show clear differences in their metric values which, depending on the metric, are attributable to time- or experience-based factors (i.e., faculty rank and years since first publication). The data provided here contribute to the current bibliometric literature by offering insight into possible differences and similarities between major bibliometric databases, in addition to a popular research social network's metrics. To our knowledge, this is the first description comparing the social networking site ResearchGate to other accepted databases; however, other work has reported similar correlations and differences when comparing Google Scholar, Scopus, and Web of Science on metrics such as the H-index, citation growth rates, and number of citations per paper per author [24,25,26,27]. Similar to the findings described here, these previous studies found that Google Scholar provides higher estimates of citation and publication counts, while Web of Science tends to produce significantly lower citation counts [18,24,28]. The approaches in these previous studies differed in that they utilized smaller sample sizes, random selections of researchers across disciplines, or specific sets of journals or research papers for comparisons between databases. Our method of including a large sample of pharmacy researchers potentially reduces the bias of research field type and specialty. Ultimately, our work adds to this body of literature by aiming to understand how database coverage and algorithms can influence bibliometric estimations.
The Spearman rho correlations for corresponding metrics between Scopus, Web of Science, Google Scholar, and ResearchGate were all above 0.89, suggesting a high degree of similarity in scholarly metric measurement despite the statistically significant pairwise differences reported in Table 2. Indeed, the correlations were also significant at the Bonferroni cut-off level of 0.0004. On a qualitative level, Google Scholar had much higher metrics compared to all other databases. Scopus and ResearchGate tended to show the smallest differences from one another, while Web of Science tended to have the lowest estimates compared to the other databases. The combination of statistically significant differences with high correlations could be due in part to the large sample analyzed here, with over 3000 faculty members and over 7000 database records available for analysis. Such differences may be important factors for consideration during faculty evaluations through peer assessment or administrative reviews.
In our analyses to identify individual or college-level variables that could account for the observed differences between databases, it was evident that total publishing years and faculty rank had significant effects on the observed differences. For example, for the difference in documents between Scopus and Web of Science, each year of publication was associated with approximately 0.4 more documents in Scopus than in Web of Science. This suggests that, as a faculty member's total publishing time grows, more documents are captured in Scopus than in WOS. Less often, faculty rank had effects on database differences in a pattern similar to that observed with total publishing years. Effects of time and/or seniority factors on database differences may be expected, as they may reflect the wider reach of research and the collaborative efforts often observed with senior researchers. Another study, evaluating citation differences for three top journals across databases, identified group authorship (authorship by aggregated groups, such as the Consolidated Standards of Reporting Trials (CONSORT) group) as the only factor in their analyses significantly associated with citation differences between databases [29].
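As a back-of-the-envelope reading of that estimate (our illustration, using the Scopus-versus-WOS document beta of 0.387 from Table 4):

$$\Delta_{\text{docs}}(\text{Scopus} - \text{WOS}) \approx 0.387 \times \text{publishing years}, \qquad 0.387 \times 20 \approx 7.7,$$

so a faculty member with 20 publishing years would be expected to show roughly eight more documents in Scopus than in Web of Science, all else being equal.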
It is possible that systematic differences, including content coverage and sources for each database, may account for many of the differences described in Table 2 above [15,30]. Scopus covers over 22,800 titles with 69 million records, while Web of Science's default “Core Collection” includes more than 21,177 titles with 74 million records [31,32]. This coverage difference is relevant to pharmacy, where certain journals, such as Pharmacy Times, are included in Scopus but not Web of Science. This may be reflected in the data presented here, where all metrics were at least 8% greater in Scopus relative to Web of Science. Utilizing Scopus metrics in self-evaluations, yearly performance reviews, and promotion and tenure may therefore better represent the scholarly productivity of pharmacy faculty, especially clinical faculty who publish in such pharmacy-focused journals. Interestingly, our multivariate analyses did not identify a significant effect of department type (clinical versus basic) on the measured differences, possibly contradicting this hypothesis. It should be noted that the total citations difference was significantly associated with department type in univariate testing (but was a non-significant factor in the multivariate analyses), and the total documents difference was associated with department type at a level that did not survive the Bonferroni cutoff (p = 0.001).
Google Scholar's coverage is not fully known, but it is estimated to include at least 369 million records and may better capture international works, which likely explains the higher metrics reported here [30,33]. Additionally, Google Scholar's metrics include documents such as abstracts and theses/dissertations, which may not be covered as well in other databases. Content coverage information for ResearchGate is not readily available, and their website states that they “import citation data from different sources”. To this end, ResearchGate's transparency as a source of scholarly metric data is lacking. Social networking sites often aim to incorporate as many users as possible to increase advertising revenue, potentially making the accuracy of their scholarly metrics secondary to other goals. Additionally, ResearchGate has promoted metrics of its own, such as “Reads” and the “RG score”, which have not been validated and have been found to correlate poorly with established metrics but strongly with user activity on the site [34,35,36,37]. Of note, our study only utilized total documents, H-index, and total citations from ResearchGate, which had strong correlations with the corresponding Scopus metrics (Spearman rho > 0.8). Finally, ResearchGate's activities, including its methods of sharing full-text articles, have been denounced amongst researchers and cited as violating copyright law [38,39]. In our opinion, despite the high correlations observed in our study, ResearchGate should not be used as a primary source for scholarly metric data without further transparency and assessment.
From an expanded correlational analysis of all metrics between all databases (Table S2), it is evident that lower correlations are observed between publication number and total citations or highest-cited article. This may suggest that the impact of scholarly activity (as measured by citations alone) is not completely correlated with, nor fully dependent on, the number of publications by a given faculty member [40]. This consideration has been explored by others, with proposed alternatives for assessing scholarly impact, such as combined article-level metrics, web-based metrics, social-media-based metrics, and comprehensive approaches [41,42,43]. Future work should identify the predictors of total citations or highly cited articles amongst college of pharmacy faculty, along with the effect of the database. Such work would also be useful in training junior faculty in approaches to creating high-impact scholarly products.

5. Limitations and Conclusions

A few limitations should be considered for this study. First, Google Scholar and ResearchGate profiles are self-created, unlike records in Scopus and Web of Science. This is likely the primary reason for the lower number of Google Scholar and ResearchGate records available for analysis. Many authors with a high number of documents in Scopus and Web of Science did not return a profile in Google Scholar and/or ResearchGate, demonstrating the self-creation and self-curation required in these two latter databases. The high number of overall records allowed us to perform statistical testing between databases; however, it is possible that our results are biased toward faculty who are more likely to self-create these profiles (e.g., faculty with high scholarly productivity). Additionally, we did not define formal criteria for including a database. Based on our experience, Web of Science, Scopus, and Google Scholar are now commonly reported in pharmacy faculty promotion and tenure packets. ResearchGate is not reported as an official metric source; however, we thought it useful to include statistics from this social media site, given that many researchers now use it to “look up” one another, including the metrics in a profile. Second, although our method of collecting faculty profiles was comprehensive, it does not rule out missed or new faculty. To that end, this analysis should be considered a snapshot of the scholarly metrics, and future updates may be needed. The analyses also included covariates that may be related to one another (e.g., time since first publication and faculty rank); there may be other covariates not captured in this study, such as degree of collaboration and specialization or primary field of study, that could influence the results. Although these data are useful for faculty and administrative personnel in colleges of pharmacy, it should be noted that our analysis was restricted to research-intensive U.S. colleges of pharmacy with NIH funding of two million USD or more. Thus, these analyses and data may not represent faculty members from non-research-intensive schools. Due to the time-intensive nature of capturing multiple databases for such a large set of faculty from colleges of pharmacy, our snapshot of 2019 data does not represent the ever-changing landscape of metric data. Therefore, future updates may be needed as database coverage is expanded or altered and as the faculty makeup of colleges of pharmacy evolves over time.
The data presented here reveal differences between scholarly metric databases and a social media site despite high overall correlations. Many of these differences can be at least partly attributed to time since first publication, and they suggest systematic reporting differences between databases. This study provides objective, real-world data with which faculty, administrators, and promotion peer-reviewers can better understand possible database differences in the reporting of scholarly accomplishments. In our opinion, this study does not suggest that one database is more useful or accurate than another, but it does indicate that differences exist and that certain databases may provide superior coverage for particular research areas (e.g., clinical pharmacy journals). Future work should consider comparisons with newer, transparent databases, such as Dimensions from Digital Science, which integrates grant data with standard scholarly metric data. This will assist in validating and comparing the ever-expanding landscape of citation databases.

Supplementary Materials

The following are available online at https://www.mdpi.com/2304-6775/8/2/18/s1: Table S1: Included Schools; Table S2: Full Metric Correlation Table.

Author Contributions

Conceptualization, K.J.B. and P.R.B.; methodology, K.J.B. and P.R.B.; validation, K.J.B. and P.R.B.; formal analysis, K.J.B. and P.R.B.; investigation, K.J.B., B.H.H., A.S.K., S.M.F., and P.R.B.; data curation, K.J.B., B.H.H., A.S.K., S.M.F., and P.R.B.; writing—original draft preparation, K.J.B. and P.R.B.; writing—review and editing, K.J.B., B.H.H., A.S.K., S.M.F., and P.R.B.; supervision, K.J.B., B.H.H., and P.R.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

None.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Pritchard, A. Statistical bibliography or bibliometrics. J. Doc. 1969, 25, 348–349.
2. Egghe, L. The Hirsch index and related impact measures. Annu. Rev. Inf. Sci. Technol. 2010, 44, 65–114.
3. Hammarfelt, B. Recognition and reward in the academy: Valuing publication oeuvres in biomedicine, economics and history. Aslib J. Inf. Manag. 2017, 69, 607–623.
4. Hammarfelt, B.; Rushforth, A.D. Indicators as judgment devices: An empirical study of citizen bibliometrics in research evaluation. Res. Eval. 2017, 26, 169–180.
5. Kangethe, A.; Franic, D.M.; Huang, M.Y.; Huston, S.; Williams, C. U.S. publication trends in social and administrative pharmacy: Implications for promotion and tenure. Res. Soc. Adm. Pharm. 2012, 8, 408–419.
6. Yancey, A.M.; Pitlick, M.; Woodyard, J.L. Utilization of external reviews by colleges of pharmacy during the promotion and tenure process for pharmacy practice faculty. Curr. Pharm. Teach. Learn. 2017, 9, 255–260.
7. Kennedy, D.R.; Calinski, D.M. P&T and Me. Am. J. Pharm. Educ. 2018, 82, 7048.
8. Burghardt, K.J.; Howlett, B.H.; Fern, S.M.; Burghardt, P.R. A bibliometric analysis of the top 50 NIH-funded colleges of pharmacy using two databases. Res. Soc. Adm. Pharm. 2019.
9. Kosar, R.; Scott, D.W. Examining the Carnegie Classification Methodology for Research Universities. Stat. Public Policy 2018, 5, 1–12.
10. Bloom, T.J.; Schlesselman, L. Publication rates for pharmaceutical sciences faculty members at nonresearch-intensive US schools of pharmacy. Am. J. Pharm. Educ. 2015, 79, 136.
11. Thompson, D.F.; Nahata, M.C. Pharmaceutical science faculty publication records at research-intensive pharmacy colleges and schools. Am. J. Pharm. Educ. 2012, 76, 173.
12. Thompson, D.F.; Harrison, K. Basic science pharmacy faculty publication patterns from research-intensive US colleges, 1999–2003. Pharm. Educ. 2005, 5, 83–86.
13. Gorraiz, J.; Schloegl, C. A bibliometric analysis of pharmacology and pharmacy journals: Scopus versus Web of Science. J. Inf. Sci. 2008, 34, 715–725.
14. Harzing, A.-W. Two new kids on the block: How do Crossref and Dimensions compare with Google Scholar, Microsoft Academic, Scopus and the Web of Science? Scientometrics 2019, 120, 341–349.
15. Falagas, M.E.; Pitsouni, E.I.; Malietzis, G.A.; Pappas, G. Comparison of PubMed, Scopus, Web of Science, and Google Scholar: Strengths and weaknesses. FASEB J. 2008, 22, 338–342.
16. Sarkozy, A.; Slyman, A.; Wu, W. Capturing citation activity in three health sciences departments: A comparison study of Scopus and Web of Science. Med. Ref. Serv. Q. 2015, 34, 190–201.
17. Martín-Martín, A.; Orduna-Malea, E.; Thelwall, M.; Delgado-López-Cózar, E. Google Scholar, Web of Science, and Scopus: Which is best for me? Impact Soc. Sci. Blog 2019. Available online: https://blogs.lse.ac.uk/impactofsocialsciences/2019/12/03/google-scholar-web-of-science-and-scopus-which-is-best-for-me/ (accessed on 19 March 2020).
18. Martín-Martín, A.; Orduna-Malea, E.; Thelwall, M.; Delgado López-Cózar, E. Google Scholar, Web of Science, and Scopus: A systematic comparison of citations in 252 subject categories. J. Informetr. 2018, 12, 1160–1177.
19. Mendes, A.M.; Tonin, F.S.; Buzzi, M.F.; Pontarolo, R.; Fernandez-Llimos, F. Mapping pharmacy journals: A lexicographic analysis. Res. Soc. Adm. Pharm. 2019, 15, 1464–1471.
20. O’Brien, K. ResearchGate. J. Med. Libr. Assoc. 2019, 107, 284–285.
21. Patthi, B.; Prasad, M.; Gupta, R.; Singla, A.; Kumar, J.K.; Dhama, K.; Ali, I.; Niraj, L.K. Altmetrics—A Collated Adjunct Beyond Citations for Scholarly Impact: A Systematic Review. J. Clin. Diagn. Res. 2017, 11, ZE16–ZE20.
22. Huang, C.; Zha, X.; Yan, Y.; Wang, Y. Understanding the Social Structure of Academic Social Networking Sites: The Case of ResearchGate. Libri 2019, 69, 189–199.
23. Albarrán, P.; Crespo, J.A.; Ortuño, I.; Ruiz-Castillo, J. The skewness of science in 219 sub-fields and a number of aggregates. Scientometrics 2011, 88, 385–397.
24. Harzing, A.-W.; Alakangas, S. Google Scholar, Scopus and the Web of Science: A longitudinal and cross-disciplinary comparison. Scientometrics 2016, 106, 787–804.
25. De Groote, S.L.; Raszewski, R. Coverage of Google Scholar, Scopus, and Web of Science: A case study of the h-index in nursing. Nurs. Outlook 2012, 60, 391–400.
26. Walker, B.; Alavifard, S.; Roberts, S.; Lanes, A.; Ramsay, T.; Boet, S. Inter-rater reliability of h-index scores calculated by Web of Science and Scopus for clinical epidemiology scientists. Health Inf. Libr. J. 2016, 33, 140–149.
27. Miri, S.M.; Raoofi, A.; Heidari, Z. Citation Analysis of Hepatitis Monthly by Journal Citation Report (ISI), Google Scholar, and Scopus. Hepat. Mon. 2012, 12, e7441.
28. Harzing, A.-W.; Wal, R. Google Scholar as a New Source for Citation Analysis. Ethics Sci. Environ. Politics 2008, 8, 61–73.
29. Kulkarni, A.V.; Aziz, B.; Shams, I.; Busse, J.W. Comparisons of citations in Web of Science, Scopus, and Google Scholar for articles published in general medical journals. JAMA J. Am. Med. Assoc. 2009, 302, 1092–1096.
30. Gusenbauer, M. Google Scholar to overshadow them all? Comparing the sizes of 12 academic search engines and bibliographic databases. Scientometrics 2019, 118, 177–214.
31. Scopus. Scopus Content Coverage Guide. Available online: https://www.elsevier.com/__data/assets/pdf_file/0007/69451/0597-Scopus-Content-Coverage-Guide-US-LETTER-v4-HI-singles-no-ticks.pdf (accessed on 21 February 2020).
32. Web of Science. Web of Science Platform: Summary of Coverage. Available online: https://clarivate.libguides.com/webofscienceplatform/coverage (accessed on 21 February 2020).
33. Khabsa, M.; Giles, C.L. The number of scholarly documents on the public web. PLoS ONE 2014, 9, e93949.
34. Kraker, P.; Lex, E. A critical look at the ResearchGate score as a measure of scientific reputation. In Proceedings of the Quantifying and Analysing Scholarly Communication on the Web Workshop (ASCW’15), Web Science Conference, Oxford, UK, 28 June–1 July 2015.
35. Hoffmann, C.P.; Lutz, C.; Meckel, M. A relational altmetric? Network centrality on ResearchGate as an indicator of scientific impact. J. Assoc. Inf. Sci. Technol. 2016, 67, 765–775.
36. Shrivastava, R.; Mahajan, P. Relationship amongst ResearchGate altmetric indicators and Scopus bibliometric indicators. New Libr. World 2015, 116, 564–577.
37. Kraker, P.; Jordan, K.; Lex, E. The ResearchGate Score: A good example of a bad metric. Impact Soc. Sci. Blog 2015. Available online: https://blogs.lse.ac.uk/impactofsocialsciences/2015/12/09/the-researchgate-score-a-good-example-of-a-bad-metric/ (accessed on 26 March 2020).
38. Jamali, H.R. Copyright compliance and infringement in ResearchGate full-text journal articles. Scientometrics 2017, 112, 241–254.
39. Cleary, M.; Campbell, S.; Sayers, J.; Kornhaber, R. Using ResearchGate Responsibly: Another Resource for Building Your Profile as a Nurse Author. Nurse Author Ed. 2016, 26, 7.
40. Ruocco, G.; Daraio, C.; Folli, V.; Leonetti, M. Bibliometric indicators: The origin of their log-normal distribution and why they are not a reliable proxy for an individual scholar’s talent. Palgrave Commun. 2017, 3, 17064.
41. Weller, K. Social media and altmetrics: An overview of current alternative approaches to measuring scholarly impact. In Incentives and Performance; Springer International Publishing: Cham, Switzerland, 2015; pp. 261–276.
42. Dinsmore, A.; Allen, L.; Dolby, K. Alternative perspectives on impact: The potential of ALMs and altmetrics to inform funders about research impact. PLoS Biol. 2014, 12, e1002003.
43. Braithwaite, J.; Herkes, J.; Churruca, K.; Long, J.C.; Pomare, C.; Boyling, C.; Bierbaum, M.; Clay-Williams, R.; Rapport, F.; Shih, P.; et al. Comprehensive Researcher Achievement Model (CRAM): A framework for measuring researcher achievement, impact and influence derived from a systematic literature review of metrics and models. BMJ Open 2019, 9, e025320.
Table 1. Overview of faculty and records included in the analysis (n = 3023).

                                         N (%) or Mean ± SD
Academic Rank
    Assistant Professor                  1057 (35.0)
    Associate Professor                  910 (30.1)
    Professor                            1056 (34.9)
Department Type
    Clinical Science Faculty             1848 (61.1)
    Basic Science Faculty                1175 (38.9)
Number of Records
    Scopus                               2798 (92.6)
    Web of Science (WOS)                 2536 (83.9)
    Google Scholar                       871 (28.8)
    ResearchGate                         1508 (49.9)
Average years since first publication    19.9 ± 12.2
Table 2. Differences in common scholarly metrics across included datasets for College of Pharmacy faculty.

                             Total Documents     H-Index            Total Citations    Highest Cited
Scopus (n = 2798)
    Mean (SD)                56.6 (78.7)         16.4 (16.1)        2090 (4720)        298 (1490)
    Median (IQR)             30.0 (10.0–73.0)    12.0 (4.0–24.0)    569 (76.5–2170)    101 (27.8–270)
Web of Science (n = 2536)
    Mean (SD)                48.4 (67.0)         15.6 (15.4)        1900 (4380)        283 (1560)
    Median (IQR)             24.0 (8.00–62.0)    11.0 (4.00–23.0)   488 (78.0–1900)    94.0 (28.0–257)
Google Scholar (n = 871)
    Mean (SD)                141 (186)           27.4 (20.4)        5030 (9550)        790 (3430)
    Median (IQR)             86.0 (44.0–169)     23.0 (13.0–36.0)   2150 (673–5100)    273 (123–598)
ResearchGate (n = 1508)
    Mean (SD)                61.5 (80.9)         16.6 (14.6)        1950 (4680)        --
    Median (IQR)             36.0 (14.0–77.0)    13.0 (6.00–24.0)   661 (131–2130)     --
% Difference
    SC to WOS                +21.4               +8.83              +15.0              +9.07
    SC to GS                 −50.6               −20.0              −43.7              −41.2
    SC to RG                 −4.59               +2.08              −2.51              --
    WOS to GS                −64.8               −26.4              −53.9              −46.4
    WOS to RG                −25.2               −6.72              −16.6              --
    GS to RG                 +39.9               +18.5              +38.9              --

All differences above were statistically significant (p < 0.0004) based on a Wilcoxon Rank Sum test. -- indicates that the variable was not available for ResearchGate.
Table 3. Spearman Rho correlations of scholarly metric data between included datasets.

                                    Documents Scopus    H-Index Scopus    Total Citations Scopus    Highest Cited Scopus
Documents WOS                       0.960               0.947             0.921                     0.797
H-index WOS                         0.923               0.974             0.954                     0.848
Total citations WOS                 0.900               0.958             0.969                     0.901
Highest cited WOS                   0.781               0.854             0.903                     0.931
Documents Google Scholar            0.930               0.851             0.813                     0.634
H-index Google Scholar              0.917               0.965             0.935                     0.775
Total citations Google Scholar      0.875               0.943             0.967                     0.868
Highest cited Google Scholar        0.682               0.778             0.859                     0.933
Documents ResearchGate              0.888               0.830             0.802                     0.668
H-index ResearchGate                0.855               0.904             0.882                     0.766
Total citations ResearchGate        0.830               0.889             0.900                     0.828

Spearman Rho correlations were performed between corresponding metrics across databases. All correlations were statistically significant (p < 0.0004).
Table 4. Beta estimates for factors associated with metric differences.

                                     SC to WOS    SC to GS    WOS to GS    GS to RG
Total Documents
    Department type                  --           --          --           −4.31
    Assistant Professor              −2.16        15.2        15.8         −16.6
    Associate Professor              −2.49 *      11.3        13.0         −10.4
    Years since first publication    0.387 *      −2.38 *     −2.96 *      0.992
H-Index
    Department type                  --           0.156       --           --
    Assistant Professor              −0.424 *     1.52 *      1.78 *       −1.10
    Associate Professor              −0.0779      0.509       0.480        −0.568
    Years since first publication    0.0199       −0.0512     −0.0969 *    0.0677
Total Citations
    Department type                  −30.4        142         103          −232
    School funding rank              45.8         --          --           --
    Assistant Professor              −71.1        322         310          −501
    Associate Professor              −55.7        590         617          −409
    Years since first publication    5.21         −86.1 *     −104 *       32.4
Highest-Cited Article
    Assistant Professor              --           91.1        77.8         --
    Associate Professor              --           90.9        94.2         --
    Years since first publication    --           −10.8       −12.4        --

Abbreviations: GS = Google Scholar; RG = ResearchGate; SC = Scopus; WOS = Web of Science. * indicates a statistically significant parameter within the linear regression (p < 0.0004). -- indicates that the variable was not included in the model. Department type has two levels (clinical and basic), with basic serving as the reference category; values in these rows are therefore for clinical relative to basic science departments. Academic rank has three levels (assistant, associate, and full professor), with full professor serving as the reference category, so the assistant and associate professor values are relative to the full professor level within each database difference column; a positive value represents an increase at the assistant or associate level relative to the professor level for a given scholarly metric. SC to RG is not shown because only years since first publication was significantly associated with the H-index difference (beta = 0.0440, p = 0.0002). WOS to RG is not shown because only faculty rank and years since first publication were associated with the total document difference: assistant professor (beta = 7.18, p = 0.0007), associate professor (beta = 4.74, p = 0.0037), and years since first publication (beta = −0.0813, p = 0.547).
