(2) Development of the Tool
Microsoft Excel was chosen as the platform for the COR Tool because of its
wide accessibility. A basic spreadsheet was created that, through four
steps, enables a researcher to determine the comprehensiveness of outcome
reporting in a given study.
- Step 1: The researcher(s) first enter basic data pertaining to
  the clinical trial or systematic review being conducted, together with
  details of a core outcome set for the condition, if one is available
  for reference.
- Step 2: Not all outcome areas are relevant to every condition; for
  example, the outcome domain of ‘social functioning’ may not be
  applicable to the fetus, although it may remain pertinent to the
  mother. In the second step, the researchers therefore determine which
  outcome domains, under the six core areas, could be affected by the
  intervention(s), choosing simply between ‘Yes’ and ‘No’. It is
  recommended that, as far as possible, all outcome areas that could
  even remotely be affected by the intervention are included. Any
  outcome deemed not applicable is automatically greyed out in all
  further steps.
- Step 3: While four of the six core areas have pre-determined outcome
  domains that researchers can readily assess, the core areas comprising
  physiological/clinical outcomes and adverse events/effects could each
  have numerous relevant outcomes, which researchers need to populate
  themselves. For example, the ‘physiological/clinical’ outcome area
  lists 24 outcome domains (organ systems), each of which could include
  numerous clinically relevant outcomes. For each of these two core
  areas, researchers are required to select and list up to four maternal
  and four fetal-neonatal outcomes that would be vital to making
  decisions regarding clinical practice and health policy. This is akin
  to the subjective selection of the most important and second most
  important factors to control for in an observational study, as part of
  the risk-of-bias assessment in the Newcastle-Ottawa Scale.18
  Researchers can opt to select fewer than four outcomes, in which case
  the residual slots remain greyed out in all subsequent steps of the
  tool.
- Step 4: While the reporting of outcomes is crucial to the conduct of
  a trial, the standardization of outcome measurement is at least as
  crucial. In this step, researchers stipulate acceptable standards of
  measurement for each outcome, ensuring that the tool assesses not
  only the comprehensiveness of outcome reporting but also that of
  outcome measurement.
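The reference standard assembled in the four steps above can be pictured as a simple data structure. The sketch below is purely illustrative: the tool itself is an Excel spreadsheet, and every name in this code is hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative model of the reference standard built in Steps 1-4.
# The actual tool is an Excel spreadsheet; all names here are hypothetical.

@dataclass
class OutcomeDomain:
    name: str
    applicable: bool = True                  # Step 2: 'Yes'/'No' decision
    selected_outcomes: list = field(default_factory=list)  # Step 3: up to 4
    measurement_standard: str = ""           # Step 4: acceptable standard

@dataclass
class ReferenceStandard:
    study_details: dict                      # Step 1: trial/review + COS info
    core_areas: dict                         # core area name -> [OutcomeDomain]
```

A domain marked `applicable=False` would simply be excluded (greyed out) from all subsequent steps, mirroring the behaviour described in Step 2.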
The above process allows the user to define the reference standard for
comprehensive outcome reporting, allowing flexibility and use of the
tool in the specific context of the condition and intervention(s) being
studied. A trialist looking to select outcomes for a trial can use this
tool directly to obtain a full list of relevant outcomes, spanning all
outcome areas presented in Table 1, that should be reported in a trial.
A systematic reviewer, critical appraiser, or clinician can use this
tool to assess each trial for comprehensiveness of outcome inclusion and
measurement based on (1) whether the outcomes were reported, (2a)
whether they were measured, and (2b) whether the measurement/definition
was in keeping with the pre-specified standards. For the purposes of a
systematic review, each clinical trial is assessed using a separate
programmed Excel sheet; the current iteration of the tool allows for the
inclusion of up to fifty studies. There are four steps to assessing
individual trials, using this reference standard as the basis for
comparison.
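As a minimal sketch of the per-trial assessment just described, the function below tallies, for one outcome area, how many relevant outcome domains were reported and how many were properly measured. It assumes (an interpretation, not stated in the tool) that a domain counts as properly measured only when it was reported, measured, and measured to the pre-specified standard; all names are hypothetical.

```python
# Hypothetical sketch of assessing one trial's outcome area against the
# reference standard. Each assessment tuple answers, for one relevant
# outcome domain: (1) reported?, (2a) measured?, (2b) measurement met
# the pre-specified standard?

def tally_outcome_area(assessments):
    """Return (# reported, # properly measured, # relevant) for one area.

    A domain counts as properly measured only if it was reported,
    measured, and its measurement/definition met the stated standard
    (an assumption about how the three checks combine)."""
    reported = sum(1 for rep, meas, std in assessments if rep)
    proper = sum(1 for rep, meas, std in assessments if rep and meas and std)
    return reported, proper, len(assessments)
```

For instance, three relevant domains of which two are reported and one is measured to standard would tally as (2, 1, 3), the counts that feed the outcome-area scoring formula.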
Technical Considerations: Once data are entered into the Excel
spreadsheet, scores related to outcome reporting and measurement are
calculated for each outcome area and translated to a colour gradient.
These scores and colours are then transferred to generate a heatmap
representing all studies being assessed, which serves as the output of
the tool. For each outcome area (OA), the scoring formula is as follows,
adjusted to account for any outcome domains (OD) that have been
determined to be “not applicable”: \(\left(\frac{\#\,\text{of reported OD within OA}}{\#\,\text{of relevant OD within OA}}+\frac{\#\,\text{of properly measured OD within OA}}{\#\,\text{of relevant OD within OA}}\right)\).
This formula scores each outcome area into quartiles of
comprehensiveness, which translate into the colours on the heatmap:
yellow highlights outcome areas that are unreported, while a gradient of
blue represents outcome areas that are reported, with darker shades of
blue indicating more complete reporting within that particular outcome
area. Outcome domains are not shown individually on the heatmap, but
rather within the one of the six outcome areas to which they belong. Any
outcome area deemed not applicable in its entirety is automatically
shown as grey. Red and green were deliberately avoided to increase
accessibility for users who may find these colours difficult to
distinguish.
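Under the scoring formula above, each outcome area's score lies between 0 and 2. The sketch below shows one plausible way the score and its quartile-to-colour mapping could be computed; the formula follows the text, but the exact cut-offs and colour labels are assumptions, not taken from the tool.

```python
# Hedged sketch of the outcome-area score and its heatmap colour band.
# The formula follows the text; quartile cut-offs and labels are assumed.

def score_outcome_area(n_reported, n_properly_measured, n_relevant):
    """Score one outcome area: reported/relevant + properly measured/relevant.

    `n_relevant` excludes domains marked "not applicable". Returns None
    when the whole area is not applicable."""
    if n_relevant == 0:
        return None
    return n_reported / n_relevant + n_properly_measured / n_relevant

def heatmap_colour(score):
    """Map a score (0-2) to a colour band; band labels are illustrative."""
    if score is None:
        return "grey"      # entire outcome area not applicable
    if score == 0:
        return "yellow"    # outcome area entirely unreported
    if score <= 0.5:
        return "blue-light"
    if score <= 1.0:
        return "blue-medium"
    if score <= 1.5:
        return "blue-dark"
    return "blue-darkest"  # most complete reporting
```

For example, an area with 4 relevant domains, 3 of them reported and 2 properly measured, scores 3/4 + 2/4 = 1.25 and falls in the third (darker blue) band under these assumed cut-offs.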
Finally, the COR tool assesses a few additional items not directly
related to comprehensiveness of outcome reporting. These are as follows:
- If a core outcome set was developed for the condition, were all
  outcomes of the core outcome set used in the trial?: A core outcome
  set (COS) is a standard, minimum set of outcomes, derived through
  patient and stakeholder input, that must be reported in all trials on
  the topic.2 However, we have recently shown that, despite adherence
  to guidelines on conduct and reporting, COS for obstetric conditions
  do not necessarily ensure comprehensive outcome reporting.16 This
  question assesses adherence to a published core outcome set, which is
  now considered the bare minimum that should be reported.
- Were intermediary or surrogate outcomes reported?: Surrogate
  outcomes, which are cheaper to measure and can provide robust
  statistical significance, are sometimes chosen by trialists in
  preference to patient-centric or clinically meaningful endpoints such
  as death or functional capacity.19, 20 In a systematic review of 109
  trials that used surrogates as a primary outcome, only 35% discussed
  their clinical relevance and the rationale for their inclusion.21
  Where a trial appears deficient in its inclusion of core outcome
  areas, this section provides information on whether the scope of the
  trial was merely to study associations between interventions and
  surrogate measures that may be directly or indirectly related to
  patient-centric outcomes, thereby assisting the researcher in drawing
  relevant conclusions.
- Were the study conclusions supported by the reported outcomes?:
  Obstetric trials sometimes claim the benefit of one intervention over
  another based on a narrow set of outcomes, for example only maternal
  outcomes and no neonatal outcomes. While it must be acknowledged that,
  owing to funding, resource, and time constraints, not every trial can
  measure outcomes from all areas, it is still important that the
  conclusions drawn clearly state these limitations and do not broadly
  conclude that a certain intervention is the preferred one.
- Was the abridged conclusion in the abstract an accurate
  representation of the scope of the study, based on the outcomes
  selected?: Although a manuscript may draw appropriate conclusions
  that consider the study’s scope and limitations, the strict word
  limit of the abstract, which is often the only part of the paper read
  by busy clinicians, means the abstract may not accurately represent
  the study findings.22 Such omissions could influence clinical
  practice and patient care, and therefore need to be addressed.
These questions are accompanied by a drop-down menu of pre-generated
options, with each response associated with a shade of burgundy in the
final heatmap, providing additional information regarding the reporting
of outcomes. Darker shades of burgundy indicate ‘better quality’ outcome
reporting.
After the tool was programmed, each aspect was systematically tested to
refine the user interface and troubleshoot any glitches in scoring and
heatmap generation.