
Current Issues on Research Conducted to Improve Women’s Health

Charalampos Siristatidis 1,*, Vasilios Karageorgiou 2 and Paraskevi Vogiatzi 3

1 Assisted Reproduction Unit, Second Department of Obstetrics and Gynecology, Medical School, National and Kapodistrian University of Athens, Aretaieion Hospital, 76 Vass Sofias, 11528 Athens, Greece
2 2nd Department of Psychiatry, Medical School, National and Kapodistrian University of Athens, Attikon Hospital, 1 Rimini Street, 12642 Athens, Greece
3 Andromed Health & Reproduction Diagnostic Lab, 3 Mesogion Str, 15126 Maroussi, Greece
* Author to whom correspondence should be addressed.
Healthcare 2021, 9(1), 92;
Submission received: 2 December 2020 / Revised: 5 January 2021 / Accepted: 13 January 2021 / Published: 17 January 2021


There are varied lessons to be learned regarding current methodological approaches to women’s health research. Amid a growing medical literature and an inflation of novel results claiming significance, the sheer amount of information can render evidence-based practice confusing. The factors that classically determined the impact of discoveries appear to be losing ground: citation count and publication rates, hierarchy in author lists according to contribution, and a journal’s impact factor. Through a comprehensive literature search of the currently available data from theses, opinion articles, original articles, and reviews on this topic, we seek to present to clinicians a narrative synthesis of three crucial axes underlying the research production chain: (a) critical advances in research methodology, (b) the interplay of academia and industry in trial conduct, and (c) developments associated with review and publication. We also provide specific recommendations on study design and conduct, review processes, dissemination of data, and the drawing and implementation of conclusions. Overall, clinicians and the public should be aware of the discourse behind the marketing of alleged breakthrough research. Still, multiple initiatives, such as patient review and strict, supervised literature synthesis, have become more widely accepted. The “bottom-up” approach of a wide dissemination of information to clinicians, together with practical incentives for stakeholders with competing interests to collaborate, promises to improve women’s healthcare.

1. Introduction

Women’s health has been at the center of interest and growing concern in the last few decades. As a measurable outcome, it has been studied at the level of mortality [1], serious morbidity [2], and nutritional status [3] and through proven, evidence-based interventions. The implementation of such interventions is essential to guide national and international policies and programs, targeting the achievement of universal coverage of health services. In this respect, conducting the best quality of research (research that provides firm and ethical evidence adhering to the principles of professionalism, transparency, and auditability) with the use of robust methods is mandatory. Towards this goal, the current reality is far from encouraging.
Adherence to reporting guidelines (e.g., the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)), impact factors, and citation counts are considered the norms in current research evaluation. However, recent works in research methodology challenge this simplifying notion [4,5,6].
The pitfalls reported are associated with various, albeit specific, “cultural, ethical, operational, regulatory, and infrastructural factors” linked with a lack of adequately trained researchers and subject attrition bias [7]. As a result, the clinical research environment is permeated by various types of bias, which discourages sponsors. The growing plethora of trials and reviews of questionable quality is another issue to consider. “From 14 reports of trials published per day in 1980 [to] 75 trials and 11 systematic reviews of trials per day, a plateau in growth has not yet been reached”, stated a policy forum article published in 2010 [8]; additionally, the authors noted that “the staple of medical literature synthesis remains the non-systematic narrative review”, further pointing out the need for freely available, simple yet valid answers to most patients’ questions [8]. In the same context, current data on women’s health derived from protocols, full study reports, and participant-level datasets are rarely available to a wide audience. At the same time, selective reporting of methods and results plagues reports. With the reduced quality of the information produced, a lot of money has been wasted; moreover, all kinds of bias affect the research itself and jeopardize the validity of the findings and, consequently, the care of women [9].
Issues have been raised by exploring different approaches to evaluating the quality of scientific input to the community. The ultimate goal remains a robust and uniform literature evaluation system, adapted to the evolving conduct of studies and to the application of modern tools, to ensure robust methodology and reporting of data and results. Here, we perform a narrative overview of current issues in study quality assessment in clinical medicine. We electronically searched PubMed using the following keywords: “clinical trials”, “meta-analysis”, “IPD”, “sponsor”, “challenges”, “regulatory”, “women’s health”, “evidence-based medicine/trends”, “policy making”, “publishing”, “research methods and practices”, “consumers network”, “bias”, “industry-sponsored trials”, “biomedical bibliographic databases”, and “quality research”, trying to collect data irrespective of the type of report and language. Based on this evidence, we propose a combination of interventions at various levels, underlining quality aspects that we consider significant, and other routes of judgment.

2. The “Standard” Factors

Citation count and publication rates in international databases, hierarchy in author lists according to contribution, and the impact factor of the journal are considered important indicators of the quality of a study, especially for the “scientific reader” seeking quality information on a specific topic. This has been extensively studied by other workgroups [10,11,12,13]: these factors account for the prestige of a scientific journal and of the publication itself, along with language and availability, ultimately skewing scientific trends and potentially leaving some important contributions in obscurity. Even though previous works contemplate the importance of the aforementioned factors for the true quality of research studies and publications, all considerations share a common denominator: the currently used quality standards, whether for the common user or for greater structures and institutions, most probably reflect popularity rather than quality. In this context, scrutiny of the citation rate (including self-citation and “negative citing”) and an impact statement on the individual author (a concise summary of the impact of somebody’s career) have been proposed. In addition, other metrics, including altmetrics, bibliometrics, and the H-index, combined with updated mathematical models, such as artificial neural networks, might be the tools of the future; these models may constitute more accurate tools due to the special characteristics of their “learning through training” processes, resembling the capacity of the brain to learn and judge [14].
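For readers unfamiliar with the H-index mentioned above, it is defined as the largest number h such that at least h of an author’s papers have each been cited at least h times. A minimal computational sketch follows; the citation counts are invented purely for illustration:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Hypothetical author with five papers cited 10, 8, 5, 4, and 3 times:
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Note that different bibliographic databases typically report different H-indices for the same author, because each counts only the citations contained in its own index.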

3. The Type of Research Question and Studies

Although multiple outcomes may be reported at once and study designs vary, a primary role belongs to the type of research question explored by a study or publication, which inevitably determines the methodology to be followed. For example, in recent years, there has been a disproportionate output of Systematic Reviews (SRs) and meta-analyses from Asian countries, produced on a massive scale [15,16] as a means of “publishing in order to publish”, with questionable quality and methods. Their numbers are so high that, in some cases, they overtake original trials. Of note, the use of such studies in the biomedical field was occasional until the 1990s [17]. Moreover, those from the Cochrane Collaboration, the fundamental organization for good-quality systematic reviews, are only a small fraction of this output [8].
With regard to Randomized Controlled Trials (RCTs), suggestions for better conduct have been made in recent reports [18]: trial protocols should be simple, reproducible, and well organized, with predefined and well-described study populations/participants, sound interventions, and representative comparisons and outcomes. Of note, these could be based on the conclusions of previously conducted SRs, which often point out issues in the quality and methodology of the original trials. Minimal deviations from protocols and a priori specification of useful core outcomes that translate directly to women’s wellbeing are the focus of the CROWN Initiative [19]. According to the authors, its multi-targeted set of suggestions aims to “ensure that critical and important outcomes with good measurement properties are incorporated and reported, in advancing the usefulness of research, in informing readers, including guideline and policy developers, who are involved in decision-making, and in improving evidence-based practice”.
A priori description of the outcomes of interest can alleviate the known issues/biases associated with exploratory analyses. A change in outcome, especially in cases where the results do not support the rationale of the study, can mask the original intentions of the authors and can recontextualize the same results in a more positive manner [20]. Still, an exploratory analysis has a significant role in deducing potentially valuable conjectures for future studies. However, it is central for transparency that the authors explicitly state when this is the case, i.e., when an analysis is conducted post hoc. In order to ease the distinction of post hoc and a priori analyses by SR authors and readers, Dwan et al. (2014) proposed the publication of both protocols and pre-specified analyses [21].
We cannot expect SRs to retrospectively solve potential gaps and inconsistencies in methodology and outcome reporting. For robust answers, research questions must be well defined from the start. However, more elaborate techniques of evidence synthesis can guide future research in more meaningful ways and are becoming more popular. Specifically, prospective and individual patient data meta-analyses (IPDMA) may need to become the norm in literature synthesis [22,23]. A major difficulty in IPDMA is that securing sensitive patient data is a time-consuming task that demands the establishment of mutual trust. Even when representative evidence has been secured, data availability may still affect the pooled evidence. A recent study assessing IPDMAs on oncological topics suggested that trials whose data were available differed significantly from trials whose authors did not share them [24]. Still, IPDMA is a trustworthy methodology that can assess the effect of patient-level covariates on treatment outcomes or diagnostic accuracy more thoroughly than the standard meta-regression used in aggregate-data meta-analyses [25]. Given the current ethos of openness in clinical trials and the increasing spread of common repositories, IPDMA is likely to become the mainstay of the critical synthesis of literature [26].
Finally, in an effort to improve women’s health in the context of greater personalization of care and stratified medicine, we have to include observational research. Such studies have traditionally served as tools for understanding the nature of particular clinical conditions, determining risk factors and mechanisms of action, and identifying potential intervention targets. Their disadvantages, associated with methodological issues such as confounding and limited internal validity, can be mitigated through guidelines such as the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement [27].

4. The Inclusion of Young Authors

The encouragement of younger and/or less experienced scientists, and ultimately their inclusion in the respective workgroups and in the list of contributors, may provide an unexpected topic or question and a clearer view of established research schemes. The production of highly cited papers is related to the advanced age of their authors; conversely, better funding opportunities for younger researchers would give them unique chances to build strong productivity [28]. Up-to-date knowledge from academic training, as well as a higher probability of compliance with robust methods of reporting, should encourage the inclusion of younger scientists. Tips and recommendations for young authors and early-career scientists are plentiful, including collaborating with researchers within as well as outside their field and/or country, sending their research article to an appropriate journal, and adequately highlighting the novelty and impact of their research [29,30].
Towards this goal, improving the scientific literacy of young scholars is the main step, and this burden falls onto the shoulders of the trainers. There are “uncomfortable truths” in training [31], but scientific research and its mode of thinking are continuously accumulated processes that must be taught by each director or responsible authority: they should improve the skills and capabilities of young scholars in scientific and technological literacy, communication, and productivity.

5. Quality in Reporting

Reporting quality must be ensured by avoiding bias, such as selective reporting, deliberate or not. Leaving statistically non-significant data and outcomes unreported can severely distort subsequent SRs [32]. Thus, flaws in the design, conduct, analysis, or reporting of RCTs can bias the estimates of a treatment effect.
For example, in a large meta-epidemiological study of 1973 RCTs, a lack of blinding was associated with an average 22% exaggeration of treatment effects among trials that reported subjectively assessed outcomes [33]. This deviation is enough to adversely affect the interpretation of the results and further negatively influences regulatory settings and clinical practice.
Another example involves the evidence base on recent cancer drug approvals. Between 2014 and 2016, a quarter of the relevant studies were not RCTs; of the RCTs, the majority of them did not measure overall survival or quality of life outcomes as primary endpoints, and half of them were judged to be at high risk of bias; the authors’ judgments changed for a fifth of them when they relied on information reported in regulatory documents and scientific publications separately [34].

6. Strict Implementation of Rules in the Peer Review Processes

These processes first appeared in 1665 in a collection of scientific essays by Denis de Sallo in the Journal des Sçavans, and in 1731, their implementation became a standard of practice in almost all biomedical journals [35]. Maintaining the quality and scientific integrity of publications; evaluating for competence, significance, and originality; and ensuring the internal and external validity of submissions are crucial points. Similarly, the appropriate selection and training of reviewers to provide quality, specialized reviews without bias is an essential part of the process [36].

7. Sponsorship

Ethical and other issues surrounding sponsorship, which bear on the credibility of a study, have been addressed in the past. One of the main sources of funding remains the industry [37]. Indeed, sponsored clinical research has always been questioned, influenced by reports of selective or biased disclosure of research results, ghostwriting and guest authorship, and inaccurate or incomplete reporting of potential conflicts of interest [38]. Although such practices may be rare nowadays, active monitoring of funded studies should be implemented throughout in order to eliminate this possibility or any other conflicts of interest. An alarming analysis of 319 trials indicated that only a small minority (three out of 182 funded trials) were funded by multiple sponsors with competing interests. The presence of industry funding also almost tripled (OR = 2.8, 95% CI: 1.6, 4.7) the odds of a study reporting favorable findings [39]. Furthermore, registered study protocols that announced funding were less likely to be published after their completion (non-publication rate: 32% vs. 18%) [40].
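To clarify how an odds ratio such as the one above is derived from a 2 × 2 table, a minimal sketch follows. The trial counts below are purely hypothetical, chosen only to yield an OR close to 2.8; they are not the data from [39]:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
    a = exposed with outcome,     b = exposed without outcome,
    c = non-exposed with outcome, d = non-exposed without outcome."""
    return (a / b) / (c / d)

# Hypothetical counts: 60 of 100 industry-funded trials report favorable
# findings, versus 35 of 100 trials without industry funding.
or_est = odds_ratio(60, 40, 35, 65)
print(round(or_est, 2))  # prints 2.79
```

The confidence interval reported alongside such an estimate is usually computed on the log-odds scale, which is why it is asymmetric around the point estimate (1.6 to 4.7 around 2.8).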

8. Change in the Notion of Publishing

The change in the notion and perception of the impact of outcomes is perhaps the most important part of improving research conduct and implementation. This can be achieved through differentiation and a modern adaptation of our scientific culture that resists internal and external perverse incentives. Every scientific input should target a wider human benefit. A change in the notion of and incentives for publishing is crucial, from the level of the investigator aiming to publish and individual behaviors up to the social forces that provide affordances and incentives for those behaviors [41,42,43].
In a specific area of research, a clinical evaluation should precede publication in order to ensure relevance. A dramatic example is the scientific literature’s overload of biomarkers for various diseases, of which only a few have been confirmed by subsequent research and fewer have entered routine clinical practice [44]. In addition, biomarkers should also be judged on the grounds of cost-effectiveness and incremental net benefit [45]. Multiple indices may have comparable diagnostic accuracy, but their costs, an unavoidable concern in public health, may differ significantly.
Therefore, the selection of information for publication should be conducted on safer grounds and be adequately supported by the authors, based on knowledge of the field to date; importantly, a summary of previous attempts should note the effective interventions and provide concluding remarks for the scientist through a good-quality review.

9. Patient’s Contribution to Evaluation and Sex/Gender Analyses

In the era of evidence-based medicine, feedback from the recipients of healthcare development is gaining importance, and platforms for opinion exchange between patients and investigators have been established. This has already been implemented by the Cochrane Collaboration, where patient review is an integral part of the SR publication process and plain-language summaries target a nontechnical audience. This process could be adopted as standard practice if modified accordingly. If patient review, for example, were widely implemented in other journals, it would constitute a potentially radical paradigm shift aiming to solidify the review process. Of course, technical difficulties, such as acknowledgement of and incentives for patients participating in review processes, are fields where further developments will enhance this policy [46].
It has been noted that women represent an “unequal majority” in health and health care. It is also well established that women’s health needs differ from those of men, because both the female body and brain differ critically from the male and react differently even to the same stimuli, such as medications or environmental events. It is indicative that, even though a large proportion of study protocols included women, only 3% of them planned an analytical approach for quantifying sex differences [47]; similarly, a recent report on therapies for atrial fibrillation concluded that sex-specific reporting in the trials comparing them was extremely low [48]. As a result, in many cases, women have not so far received ideal “personalized” health care. Thus, a specific design for studies on women’s health should be required.
There are several examples in the history of women’s health research where the contribution of the consumer women’s health movement in promoting research in women’s interests was critical. One of them concerns the collaboration between consumer groups and researchers in obtaining funding in the U.S. and France for the follow-up of a cohort of diethylstilboestrol-exposed people after the drug was discovered, in 1971, to be a transplacental carcinogen in pregnancy.
Another important issue is the nonavailability of sex/gender data from primary studies and consequently from SRs, which are the main tools to provide the necessary evidence for the formation of relevant policies [49]: the authors stated that even “Cochrane and the Campbell Collaboration have no specific policy on the reporting of sex/gender in systematic reviews, although Cochrane has endorsed the SAGER guidelines developed by the European Association of Science Editors” [50]. In their review, they found that the Methods sections of these collaborations included the most reports on sex/gender in both Campbell (50.8%) and Cochrane (83.1%) reviews, but the majority of these were descriptive considerations of sex/gender. They also reported that 62% of Campbell and 86% of Cochrane reviews did not report sex/gender in the abstract but included sex/gender considerations in a later section. A previous study on the subject reported that almost half of SRs described the sex/gender of the included populations but only 4% assessed sex/gender differences or included sex/gender when making judgments on the implications of the evidence [51].

10. An Improvement in the Dissemination of Studies

Despite advances in the dissemination of study information, half of health-related studies remain unpublished [52]. Problems in the publishing scheme in the selection of studies that appear to have a higher impact or that come from a respectable institution can lead to biased publishing. At the extreme, unsafe, ineffective, or even harmful interventions may enter clinical practice, as was the case with hormone replacement therapy [53]. In some instances, even a shift in healthcare resource allocation is reported [9]. It is standard practice in critical readings of literature to evaluate publication bias. This method attempts to address, with controversial success, precisely the unfortunate keenness of editors to promote positive results that imply novelty. A classic example of this inflation of positive and supposedly important results is the 2012 study by Fanelli [54], in which studies classified as related to clinical medicine showed a gradual increase in reporting positive findings. The author criticized the efficacy of measures taken to attenuate publication bias, e.g., protocol registration.
On the other hand, a respectable amount of research is published in other languages and not indexed by the U.S. National Library of Medicine [55], while its quality remains controversial [56]; the authors of the above studies stated that peer review processes need to be improved through guidelines aiming to identify the authenticity of the studies.
Peer review remains largely a voluntary occupation, with the main motivation being recognition by peers. In addition, statistical review, a time-consuming process, is not performed for all published research. This process can be accelerated by practices that promote data and code sharing. It has also been suggested that some retractions could have been avoided with the simple measure of an active data-sharing policy [57].

11. Role of the Stakeholders and Foundations

For stakeholders and collaborative systems, a more energetic role is required in ensuring the conduct of massive multicenter trials with increased clinical relevance. The main problem in the conduct of research is the lowered clinical value of results from small sample sizes, even in RCTs. Mathematical models have been developed to predict the sample sizes corresponding to the clinical value of the outcomes, while patient data from databases could easily increase the sample size of trials at much lower costs. Such paradigms include the Health Care Systems Research Collaboratory and the Patient-Centered Outcomes Research Network (PCORnet) [58]. In addition, new levels of patient engagement can raise the possibility of improving clinical outcomes. Involving multiple stakeholders (with potentially conflicting interests) in shared conversations on research has been proposed [59].
New foundations focused on improving research quality, such as the NIH and PCORI, should be supported [60]. The Cochrane Collaboration represents one of the very few large-scale initiatives in this context; importantly, both its conduct of high-quality reviews and its participant education at all levels rely mostly on volunteers who care about science and high-quality evidence.

12. Cooperation of All Forces: The Role of Industry/Funding

The central problem is funding. USA-affiliated, industry-funded trials and related activities represent more than 5% of US healthcare expenditure, with approximately $70 billion in commercial and $40 billion in governmental and non-profit funding annually. The NIH invests $41.7 billion annually in medical research: 80% is awarded for extramural research through 50,000 competitive grants to more than 300,000 researchers at more than 2500 universities, medical schools, and other research institutions [61]. Concerns have been raised that this approach is inefficient in how biomedical research is chosen, designed, regulated, financed, managed, disseminated, and reported [62,63,64].
The scheme, however, has been shifting in favor of Asian countries. Factors such as ease of recruitment, population size, and various epidemiological characteristics (e.g., an increased incidence of infectious disease) have contributed to an inflation of local clinical trials [7]. Severe accusations regarding clinical data management have been raised, although the magnitude of the problem cannot be safely evaluated [65]. This unavoidably hinders the validity and future usefulness of these results despite initial enthusiasm from editors and the industry.
Economic forces are important, and ultimately, the industry seeks to maximize profit by providing new products and services to the medical market [66]. In industry-funded clinical research, intentional and unintentional commercial motives can control the study design and comparators. Governmental involvement [66] has an important role in distributing research funds to areas important for the protection and restoration of human health, even when the prospects for commercial profit are poor or nonexistent. Specialized and qualified professionals should be recruited to set higher standards of rigor wherever commercial or unavoidably conflicted relationships are involved and to distribute resources evenly, especially now that they are scarce.
Funders and academic institutions bear moral responsibility, as research usually originates there and any shift in the process is determined there. Academics might be judged on the methodological rigor and full dissemination of their research, the quality of their reports, and the reproducibility of their findings. Previous reports suggest ways to increase relevance and to optimize resource allocation in biomedical research, indicating how resources should be allocated, along with revisions to the appropriateness of research design, methods, and analysis; efficient research regulation and management; and fully accessible information, promoting unbiased and usable reports. Additionally, authors must be motivated to share their data [67], as has been performed in the field of genetics [68]. Of note, synthesis of evidence at the meta-epidemiological level cannot always confidently answer practical clinical questions [69].
Compromised ethics should be traced and removed from independent research and academia, while journals should on no occasion put profit and publicity above quality. The solution lies in the progressive refinement of methods and in the improvement of objective and controlled processes.

13. Training

Essential training and interprofessional learning for clinicians and other hands-on scientists in the medical field are an absolute must. There is a growing need to improve their scientific insight and judgment. Reviewers should learn to apply unbiased critical thinking in evaluating the methods explored, the study questions, and the resulting impact towards good clinical practice and human welfare. This applies not only to organizational refinement by academic institutes and publishing organizations but also to the scientists themselves, who must obtain the drive to train alongside methodologists and statisticians, so that specialization and knowledge are shared and every contributor works soundly towards a common cause.

14. Conclusions

Research is a solid foundation for the progression of the sciences, and the key to maintaining the evolution of knowledge is “contributing and sharing”, but this has to be performed adequately. Although there are several criteria and controlled circumstances under which new data and overviews of data are published, research and publishing methods require continuous readjustments and modifications to ensure quality. An overview of the published literature on women’s health and its subtopics is an excellent paradigm, in a crucial field, of the different types of research and publications that one may encounter, but also an example of the vast variability in the available information, not only in terms of results but also in terms of design, analysis, quality of information, and implementation of results. In clinical practice, it is imperative to assess information collectively as a researcher, medical expert, funder, reviewer, and patient, and this should encourage the improvement of evidence-based patient management.
This review aimed to present the major nodal points of quality and to propose a combination of interventions at various levels, along with other routes of judgement. We also sought to address potential flaws and pitfalls in research conduct and to provide recommendations for improving study designs/methods and scientific reporting, to promote publication quality and stricter criteria for release, with support from the appropriate structures. The summary of recommendations towards evidence implementation presented in Table 1 could constitute valuable guidance to both health experts and health service recipients, for whom these standards are quality criteria. A meticulous study design that promotes the transparency of methods and potential conflicts, allows a clear distinction of the pathologies and targeted groups, and provides substantial scientific background should be pursued by both researchers and readers. Robust implementation of the pre-stated methods and approaches of analysis, with the active participation of collective fronts tied to the subject, should allow quality output to be published and should add value to the findings. Patient-first principles and common welfare should be considered throughout, in conjunction with supporting and providing evidence on robust outcomes for the improvement of healthcare, which may be facilitated by healthy network collaborations.
How these recommendations are accounted for, evaluated, and implemented relies on the individual discretion of the reader, the scientist, the author, or any entity affiliated with a publishing organization, and should be customized for each specialized academic and scientific field but also tailored across continents and countries. The latter derives from the realization that research conduct, funding, and even the monitoring authorities of clinical studies rely on nonuniform procedures among countries and unions and conform to different legal frameworks. Nevertheless, a core of actions, precautions, and gold standards should be constructed and widely applied to meet the standards of a representative scientific contribution: uniform, widely accepted, and practiced standards through policies, guidelines, and rules at a national and/or international level, created either by in-country legislation or by scientific entities; allocation of the resources for their implementation; and mechanisms of control for their application and for adherence by all.
In conclusion, multiple steps throughout the long and costly process of trial conduct are prone to bias. Notably, increasing international competition favors faster and cheaper patient recruitment, conduct, and analysis and, in turn, produces questionable research. Literature synthesis through systematic reviews and/or meta-analyses has a primarily retrospective role that guides future research and sheds light on arguable topics but cannot erase the wrongdoings of primary studies, which are often concealed. The "bottom-up" approach of wide dissemination of information to clinicians, together with practical incentives for stakeholders with competing interests to collaborate, promises to improve women's healthcare.

Author Contributions

C.S. conceived and designed the study and prepared the manuscript. P.V. and V.K. contributed to the design and reporting of the research. All authors approved the final version of the manuscript.

Funding

This research received no external funding. The APC was funded by the first author.

Informed Consent Statement

Not applicable due to the nature of this study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Table 1. Summary of the recommendations for the steps towards evidence implementation.

Study Design: declaration of competing interests; prospective and individual patient data meta-analyses; incorporation of evaluated observational research; inclusion of women and explicit analysis of gender differences.

Study Conduct: pre-specification of outcomes; abidance by protocols; transparency in reporting; inclusion of young trained authors.

Review: training and motivation for peer reviewers; active participation of patients; active monitoring of funded studies; stricter criteria to promote publication quality.

Dissemination: publication of both negative and positive results; clinical evaluation preceding publication; every scientific input should target a wider human benefit; groups to check for unpublished data; cooperation among multiple stakeholders.

Implementation of Findings: training of clinicians; critical attitude towards
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.