Peer-Review Record

Adherence to CONSORT Guidelines and Reporting of the Determinants of External Validity in Clinical Oncology Randomized Controlled Trials: A Review of Trials Published in Four Major Journals between 2013 and 2015

Curr. Oncol. 2023, 30(2), 2061-2072; https://doi.org/10.3390/curroncol30020160
by Sophie Audet 1, Catherine Doyle 2,3,4, Christopher Lemieux 2,3,4, Marc-Antoine Tardif 5, Andréa Gauvreau 3, David Simonyan 4, Hermann Nabi 4 and Julie Lemieux 2,3,4,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 9 December 2022 / Revised: 18 January 2023 / Accepted: 26 January 2023 / Published: 8 February 2023

Round 1

Reviewer 1 Report

Clinical trial findings are in general not generalizable; this is the reason for "real-world" data. Moreover, screen failure, for whatever reason, is an expected occurrence in the conduct of clinical trials. The authors of this manuscript discovered that only 51.8% of the 456 clinical oncology RCTs published between 2013 and 2015 in four major journals provided information for assessing the screen-failure rate, and among those reporting the entire enrolment process, 22% failed to provide insight into the reasons behind screen failure. The majority of publications omitted a discussion of generalizability.

My main concern would be to understand the shortcomings of what was discovered in the manuscript itself: how many studies were "screened"/"screen-failed", and how generalizable were the findings?

The most valuable lessons of this study are the clear recommendations for the next revision of the CONSORT statement; I believe the authors could expand on this.

Author Response

Clinical trial findings are in general not generalizable; this is the reason for "real-world" data. Moreover, screen failure, for whatever reason, is an expected occurrence in the conduct of clinical trials. The authors of this manuscript discovered that only 51.8% of the 456 clinical oncology RCTs published between 2013 and 2015 in four major journals provided information for assessing the screen-failure rate, and among those reporting the entire enrolment process, 22% failed to provide insight into the reasons behind screen failure. The majority of publications omitted a discussion of generalizability.

            Response: We thank the Reviewer for taking the time to review our manuscript and for the comments.

 

My main concern would be to understand the shortcomings of what was discovered in the manuscript itself: how many studies were "screened"/"screen-failed", and how generalizable were the findings?

            Response: We thank the Reviewer for the comment. As shown in Figure 2 and the Results, among the 592 trials initially identified, 136 were excluded: 94 reporting on subgroup, secondary endpoint, or follow-up studies, 10 interim results reports, 13 papers reporting results from multiple studies simultaneously, 29 trials investigating non-therapeutic interventions, and three trials in which the unit of randomization was not individual patients.

From the 456 included studies, the number of patients assessed for eligibility before randomization was reported in the main article in 219 trials, while 17 trials reported this information in a supplementary appendix, for a total of 236 trials (51.8%; 236/456). Among the 236 trials that reported the proportion of patients assessed for eligibility, 78% (184/236 trials) mentioned the reasons for patient exclusion.
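The reported proportions can be verified directly from the quoted counts; the following short sketch (variable names are ours, not from the manuscript) recomputes them:

```python
# Sanity check of the counts quoted above (an illustrative sketch, not the authors' code)
identified = 592                  # trials initially identified
excluded = 136                    # trials excluded (Figure 2)
included = identified - excluded  # 456 trials analyzed

reported = 219 + 17               # eligibility numbers in main article + supplementary appendix
pct_reporting = round(100 * reported / included, 1)  # trials reporting screening numbers
pct_with_reasons = round(100 * 184 / reported, 1)    # trials also giving reasons for exclusion

print(included, reported, pct_reporting, pct_with_reasons)  # 456 236 51.8 78.0
```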

Of course, the generalizability of the present study is limited to oncology trials. Nevertheless, the trials were published in four major oncology journals that maintain high publication standards. Whether the present findings can be generalized to less prominent journals and to fields other than oncology remains unknown; additional studies are necessary to examine these issues. We added a discussion of this point.

 

The most valuable lessons of this study are the clear recommendations for the next revision of the CONSORT statement; I believe the authors could expand on this.

            Response: We agree with the Reviewer. Indeed, there seemed to be confusion in the articles we reviewed between the terms “assessed for eligibility/screened” and “enrolled/registered”. The enrollment process of an RCT consists of three main steps: the first is to define a target population, the second is to screen potential participants to determine their eligibility, and the third is to invite eligible patients to enroll. These terms are not interchangeable, and we feel this could be the object of a clarification in the next revision of the CONSORT statement. The importance of the terms and steps of the enrollment process should be thoroughly emphasized, as well as the importance of using them appropriately. Examining the numbers of excluded patients against the exclusion criteria is crucial to the transparency and evaluation of the reliability of the enrollment process, and reporting the entire enrollment process should be mandatory. In addition, the CONSORT recommendation to provide details of the enrollment process appears in the suggested flow diagram but not in the checklist; these details should be added to the checklist. Indeed, such reporting provides an estimate of the proportion of all potentially eligible people who met the study requirements, which may indicate how stringently inclusion and exclusion criteria were applied to the selection of participants. It also allows the detection of arbitrary exclusions, which may introduce selection biases and thereby affect the representativeness of the participants included in the trial. Finally, it gives the opportunity to know the number of participants who refused to be enrolled in the study. Thus, details of the enrollment process are relevant not only for assessing the generalizability of trial findings, but also for optimizing recruitment to RCTs by helping to identify potential obstacles to accrual.

Reviewer 2 Report

Congratulations on this article! I understand that an enormous amount of work goes into an article like this, but given the period of data collection (2013–2015), as well as the method used to select the analyzed journals, everyone should take into account the limited reach the article will have.

There are some minor aspects that need to be reviewed:

Add the references at the end of the statement.

Place Table 1 and Figure 1 after the paragraph in which they are first mentioned. The same applies to the other tables and figures.

Line 84 - standardized form. I would like to be sent a model of it as an additional file.

Figure 2 – must be redone so that it looks like a flow chart.

Pay attention to the writing of references.

Author Response

Congratulations on this article! I understand that an enormous amount of work goes into an article like this, but given the period of data collection (2013–2015), as well as the method used to select the analyzed journals, everyone should take into account the limited reach the article will have.

            Response: We thank the Reviewer for taking the time to review our manuscript and for the comments.

 

There are some minor aspects that need to be reviewed:

Add the references at the end of the statement.

            Response: We thank the Reviewer for the comment; however, we are unsure of the Reviewer's meaning. If the Reviewer meant the references cited in the article, they are already presented at the end of the manuscript, after the Statements. If the Reviewer meant the 456 references included in the study, we fear that the reference list would be far too long; instead, we suggest providing them as Supplementary Material.

 

Place Table 1 and Figure 1 after the paragraph in which they are first mentioned. The same applies to the other tables and figures.

            Response: We thank the Reviewer. The Tables and Figures were moved accordingly.

 

Line 84 - standardized form. I would like to be sent a model of it as an additional file.

            Response: We now provide the MS Excel template that was used for data extraction.

 

Figure 2 – must be redone so that it looks like a flow chart.

            Response: We thank the Reviewer. The figure was redone.

 

Pay attention to the writing of references.

            Response: We thank the Reviewer for the comment. As above, we are unsure of the Reviewer's meaning. If the Reviewer meant the references cited in the article, they are already presented at the end of the manuscript, after the Statements. If the Reviewer meant the 456 references included in the study, we fear that the reference list would be far too long; instead, we suggest providing them as Supplementary Material.

 

Round 2

Reviewer 1 Report

Thank you for the revised manuscript; the added Discussion provides a clearer insight of your interpretation of the findings of the study.

Suggestions to update/further clarify:

1. "withdrew consent" to replace "refused" in line 55, page 2/14.

2. "Contrary" to replace "Compared" in line 257, page 11/14.

3. "screened" to replace "eligible" in line 264, page 11/14? Eligible patients are those who meet all eligibility criteria.

4. "did not find" due to trials failing to report vs all trials reported no specific eligibility criteria to have impeded enrolment?

5. Reason for screen failure is usually captured and collection may not have been challenging, re line 284, page 12/14. I believe there may have been a lack of emphasis on, or recognition of, the importance of reporting these data.

6. Data relating to "pre-screening" may be challenging to collect (including those who did not consent); the process of "pre-screening" is subject to selection bias as Investigators try to avoid screen failure. I suspect, however, that the fraction of eligible patients is not underestimated: it is nearly impossible to turn an ineligible patient eligible; paragraph from line 284, page 12/14.

7. The explanation of the RCT process may be improved (from line 308, page 12/14): Only "subjects" having accepted the invitation to enrol would be screened. In a sense, the first step is akin to "pre-screening" and the eligibility criteria guide the Investigators to select the target population to approach to consent. The number assessed for eligibility (i.e., "screened") would always be larger than the number "enrolled/registered".

8. What about inclusion criteria (line 317, page 12/14)? Not meeting inclusion criteria is a reason for exclusion as well.

9. Would you be able to explain why there was a difference between the four selected journals in the rate of reporting number of subjects screened? Is the reporting standard different among the four selected journals? (paragraph from lines 322, page 12/14)

10. I would respectfully disagree with the conclusion drawn, "to collect and report data on all patients approached" (line 332, page 13/14), unless the term "approached" refers to those who "consented (to proceed with screening and subsequently enrolled if found eligible)".

Author Response

Reviewer #1

 

  1. "withdrew consent" to replace "refused" in line 55, page 2/14.
  2. "Contrary" to replace "Compared" in line 257, page 11/14.
  3. "screened" to replace "eligible" in line 264, page 11/14? Eligible patients are those who meet all eligibility criteria.

            Response: We thank the Reviewer. They were revised as suggested.

 

  4. "did not find" due to trials failing to report vs all trials reported no specific eligibility criteria to have impeded enrolment?

            Response: We thank the Reviewer. It was revised as “A study looking specifically at which eligibility criteria was a barrier to the recruitment of patients in the trial could not identify a unique category of eligibility criteria precluding enrollment as all trials reported no shared specific eligibility criteria impeding enrolment [15].”.

 

  5. Reason for screen failure is usually captured and collection may not have been challenging, re line 284, page 12/14. I believe there may have been a lack of emphasis on, or recognition of, the importance of reporting these data.

            Response: We agree with the Reviewer. It was revised as “However, we recognize that it is challenging to collect these data, especially in large multicenter trials. Still, the reasons for screen failure are usually well captured in large multicenter clinical trials, and the data should be made available.”

 

  6. Data relating to "pre-screening" may be challenging to collect (including those who did not consent); the process of "pre-screening" is subject to selection bias as Investigators try to avoid screen failure. I suspect, however, that the fraction of eligible patients is not underestimated: it is nearly impossible to turn an ineligible patient eligible; paragraph from line 284, page 12/14.

            Response: The Reviewer raises a good point. It was revised as “The investigators should try to record the number of people identified as potentially eligible in the pre-screening, so we can estimate the number of patients that need to be screened for every patient enrolled in the study. Nevertheless, the number of patients to whom a physician offers the trial and have declined to participate before being screened for eligibility would be difficult to collect in a meaningful way. The process of pre-screening is subject to selection bias.”

 

  7. The explanation of the RCT process may be improved (from line 308, page 12/14): Only "subjects" having accepted the invitation to enrol would be screened. In a sense, the first step is akin to "pre-screening" and the eligibility criteria guide the Investigators to select the target population to approach to consent. The number assessed for eligibility (i.e., "screened") would always be larger than the number "enrolled/registered".

            Response: We agree with the Reviewer. It was revised as “The enrollment process of an RCT consists of three main steps; the first is to define a target population (i.e., a pre-screening in which the eligibility criteria guide the investigators to select the target population to approach to consent), the second is to screen the potential participants to determine their eligibility (the numbers of screened patients will always be larger than the numbers of enrolled patients), and the third is to invite eligible patients to enroll.”

 

  8. What about inclusion criteria (line 317, page 12/14)? Not meeting inclusion criteria is a reason for exclusion as well.

            Response: We agree: “The importance of the terms and steps of the enrollment process should be thoroughly emphasized, as well as the importance of using them appropriately. Examining the numbers of excluded patients against the unmet inclusion criteria and the met exclusion criteria is crucial to the transparency and evaluation of the reliability of the enrollment process.”

 

  9. Would you be able to explain why there was a difference between the four selected journals in the rate of reporting the number of subjects screened? Is the reporting standard different among the four selected journals? (paragraph from line 322, page 12/14)

            Response: We thank the Reviewer for the comment. One journal has a low percentage (30% for JNCI; see Table 4), but this journal published only a small number of trials. To our knowledge, the reporting standards are not explicitly different among the four journals.

 

  10. I would respectfully disagree with the conclusion drawn, "to collect and report data on all patients approached" (line 332, page 13/14), unless the term "approached" refers to those who "consented (to proceed with screening and subsequently enrolled if found eligible)".

            Response: We agree, and we clarified the conclusion. It was revised as suggested. “In order to facilitate the evaluation of the generalizability of trial results, investigators should be encouraged to collect and report data on all patients who consented to proceed with screening and subsequently enrolled if found eligible in the trial; if not enrolled, the reasons why they were not recruited should be reported as well.”

Round 3

Reviewer 1 Report

I have no further comments. I believe the conclusions that can be drawn from the study would inspire further consideration and discussion for the readers.
