Understanding Bias in Primary Studies for Systematic Reviews

When conducting a systematic review, it's crucial to critically evaluate the quality of the primary studies included. Bias in these studies can distort the results and lead to misleading conclusions. This friendly guide will walk you through the essential aspects of understanding and dealing with bias in primary studies when conducting systematic reviews.

What is Bias in Primary Studies?

Bias in research refers to any systematic error in the design, conduct, or analysis of a study that can lead to inaccurate conclusions. It can occur at any stage of the research process, including data analysis and publication, and can significantly impact the reliability and validity of the findings. Bias is not a simple yes-or-no question; it's important to consider the degree of bias, its likely direction, and its potential impact on the study results.

Imagine a study investigating the effectiveness of a new drug. If the researchers unintentionally select healthier participants for the treatment group, the results might show the drug is more effective than it actually is. This is an example of selection bias, which we'll discuss in more detail later.

How Bias in Primary Studies Can Affect Systematic Reviews

Systematic reviews aim to synthesize the results of multiple primary studies to provide a comprehensive and unbiased overview of a research question. However, if the primary studies included in the review are biased, the conclusions of the systematic review will also be biased. This can lead to inaccurate or misleading recommendations for healthcare practice or policy decisions.

For example, if a systematic review includes several studies with a high risk of bias due to poor study design or selective reporting, the review might overestimate or underestimate the true effect of a treatment, which could lead to the adoption of ineffective or even harmful treatments. It's also important to remember that a given source of bias may vary in direction and magnitude across studies.

Furthermore, bias can be introduced in the initial steps of a systematic review itself. When primary studies are selected for inclusion, a risk of bias assessment must be carried out for each of them. If reviewers fail to critically appraise each primary study, bias can accumulate in the final outcomes of the systematic review. Additionally, problems with the comparability of participants or populations across studies can introduce selection bias, further impacting the review's findings.

Common Types of Bias in Primary Studies

There are several types of bias that can occur in primary studies. Here are some of the most common ones:

Selection Bias

Selection bias occurs when the study population does not accurately represent the target population. This can happen due to flaws in the study design or implementation, such as non-random selection of participants or differences between groups in how participants are selected.

It's also important to consider 'grey literature', which includes unpublished studies, reports, and other data not published in peer-reviewed journals. Excluding grey literature can lead to an incomplete picture of the available evidence and potentially bias the results of the systematic review.

Examples:

In a case-control study of smoking and chronic lung disease, if the control group is selected from a hospital population, the association between smoking and lung disease might appear weaker than if the controls were selected from the community. This is because smoking causes many diseases that lead to hospitalization, so hospital controls tend to have a higher prevalence of smoking than the general population.

In vaccine studies, a type of selection bias called the "healthy vaccinee effect" has to be considered. People who get vaccinated tend to be healthier and follow more health-related guidelines than those who do not, which can bias the results of studies comparing vaccinated and unvaccinated groups.
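To make the hospital-control problem concrete, here is a minimal sketch using invented numbers (not data from any real study) of how the choice of control group changes the odds ratio in a case-control study like the one described above.

```python
# Hypothetical illustration of selection bias in a case-control study.
# All counts are invented for demonstration purposes.

def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Odds ratio from a 2x2 table of exposure (smoking) by case status."""
    return (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

# 200 cases with chronic lung disease; 150 of them smoke.
cases_exposed, cases_unexposed = 150, 50

# Scenario A: 200 community controls with ~30% smoking prevalence.
or_community = odds_ratio(cases_exposed, cases_unexposed, 60, 140)

# Scenario B: 200 hospital controls with ~50% smoking prevalence
# (smoking causes many other diseases that lead to hospitalization).
or_hospital = odds_ratio(cases_exposed, cases_unexposed, 100, 100)

print(f"Odds ratio with community controls: {or_community:.1f}")  # 7.0
print(f"Odds ratio with hospital controls:  {or_hospital:.1f}")   # 3.0
```

With the same cases, simply drawing controls from a hospital population cuts the apparent association by more than half in this invented example.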

Performance Bias

Performance bias occurs when there are systematic differences between groups in the care provided or exposure to factors other than the intervention being studied. This can happen if participants or researchers are aware of the treatment assignment, leading to differences in behavior or treatment.

Example: In a clinical trial comparing two treatments for depression, if the participants in the treatment group receive more attention and support from the researchers than those in the control group, this could influence the outcome of the study.

Detection Bias

Detection bias occurs when there are systematic differences between groups in how outcomes are measured or assessed. This can happen if the outcome assessors are aware of the treatment assignment, leading to biased assessments.

Example: In a study investigating the effectiveness of a new pain medication, if the doctors assessing pain levels know which patients received the medication and which received a placebo, they might unintentionally overestimate pain relief in the medication group.

Attrition Bias

Attrition bias occurs when there are systematic differences between groups in the loss of participants during the study. This can happen if participants who experience side effects or don't respond to the treatment are more likely to drop out, leading to an overestimation of the treatment effect.

Example: In a clinical trial of a weight loss program, if participants who are not losing weight are more likely to drop out, the results might show the program is more effective than it actually is.
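As a rough illustration (again with invented numbers), the sketch below contrasts a complete-case analysis with one that counts dropouts as non-responders, showing how ignoring dropouts can inflate the apparent success rate of the program.

```python
# Hypothetical illustration of attrition bias in a weight-loss trial.
# Numbers are invented for demonstration only.

randomized = 100                    # participants assigned to the program
completed = 60                      # participants still enrolled at follow-up
responders_among_completers = 45    # completers who reached the weight-loss target

# Complete-case analysis: dropouts are simply ignored.
complete_case_rate = responders_among_completers / completed    # 0.75

# If dropouts left mostly because they were not losing weight, a simple
# (still simplified) correction counts them as non-responders.
all_randomized_rate = responders_among_completers / randomized  # 0.45

print(f"Apparent success rate (completers only):    {complete_case_rate:.0%}")
print(f"Success rate counting dropouts as failures: {all_randomized_rate:.0%}")
```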

Reporting Bias

Reporting bias occurs when there are systematic differences between reported and unreported findings. This can happen due to selective reporting of outcomes, where only statistically significant or favorable results are published.

Different types of reporting bias can occur, including:
Publication bias: The publication or non-publication of research findings, depending on the nature and direction of the results.
Time-lag bias: The rapid or delayed publication of research findings, depending on the nature and direction of the results.
Language bias: The publication of research findings in a particular language, depending on the nature and direction of the results.
Citation bias: The citation or non-citation of research findings, depending on the nature and direction of the results.
Multiple (duplicate) publication bias: The multiple or singular publication of research findings, depending on the nature and direction of the results.
Location bias: The publication of research findings in journals with different ease of access or levels of indexing in standard databases, depending on the nature and direction of the results.
Selective (non-) reporting bias: The selective reporting of some outcomes or analyses, but not others, depending on the nature and direction of the results.

Example: A pharmaceutical company might conduct a clinical trial of a new drug and only publish the results of the analyses that show a positive effect, while not reporting the analyses that show no effect or negative effects.
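A quick simulation can illustrate the effect of selective reporting. The sketch below (invented parameters, not data from any real trial) generates many outcomes with a small true effect and then "publishes" only the statistically significant ones; the published effects end up noticeably inflated.

```python
# Simulated illustration of selective outcome reporting.
# All parameters are invented for demonstration purposes.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.1        # small true effect (standardized mean difference)
n_per_arm = 50
n_outcomes = 1000        # many outcomes across many hypothetical trials

se = np.sqrt(2 / n_per_arm)                   # SE of a mean difference, sd = 1
observed = rng.normal(true_effect, se, n_outcomes)
significant = np.abs(observed / se) > 1.96    # two-sided p < 0.05

print(f"Mean effect, all outcomes:         {observed.mean():.2f}")
print(f"Mean effect, 'published' outcomes: {observed[significant].mean():.2f}")
```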

Tools and Techniques for Assessing Risk of Bias

Several tools and techniques are available to assess the risk of bias in primary studies. They provide a structured approach, helping reviewers identify potential sources of bias and make informed judgments about the quality of the evidence. Some of the most commonly used include:

Cochrane Risk of Bias tool (RoB 2): This tool is used to assess the risk of bias in randomized controlled trials. It covers five domains of trial design, conduct, and reporting: the randomization process, deviations from intended interventions, missing outcome data, measurement of the outcome, and selection of the reported result (a simplified sketch of recording these domain judgments follows this list).
ROBINS-I tool: This tool is used to assess the risk of bias in non-randomized studies of interventions. It considers various factors that can introduce bias, such as confounding, selection bias, and measurement bias.
Other tools: Several other tools are available for specific study designs or research areas, such as the QUADAS-2 tool for diagnostic accuracy studies and the Newcastle-Ottawa Scale for non-randomized studies.
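For reviewers who track their assessments in code or spreadsheets, here is a minimal sketch of recording RoB 2 domain-level judgments and deriving an overall judgment. The domain names follow the published tool, but the overall rule shown is a simplification: in the full guidance, several "some concerns" judgments can also justify an overall "high risk" rating.

```python
# Minimal sketch of RoB 2 domain judgments and a simplified overall rule.
from typing import Dict

ROB2_DOMAINS = [
    "randomization process",
    "deviations from intended interventions",
    "missing outcome data",
    "measurement of the outcome",
    "selection of the reported result",
]

def overall_rob2(judgments: Dict[str, str]) -> str:
    """Derive an overall judgment from per-domain ratings
    ('low', 'some concerns', or 'high'). Simplified: the published guidance
    also allows multiple 'some concerns' to be rated 'high' overall."""
    ratings = [judgments[domain] for domain in ROB2_DOMAINS]
    if "high" in ratings:
        return "high"
    if "some concerns" in ratings:
        return "some concerns"
    return "low"

example = {
    "randomization process": "low",
    "deviations from intended interventions": "some concerns",
    "missing outcome data": "low",
    "measurement of the outcome": "low",
    "selection of the reported result": "low",
}
print(overall_rob2(example))  # -> "some concerns"
```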

Strategies for Dealing with Bias in Systematic Reviews

Once the risk of bias in primary studies has been assessed, reviewers can use various strategies to deal with it in their systematic reviews. It's important to remember that no single strategy can completely eliminate the impact of bias, and reviewers should use a combination of approaches to minimize its influence. Some strategies include:

Sensitivity analysis: This involves analyzing the results of the review with and without the studies that have a high risk of bias to see how much they influence the overall findings (see the sketch after this list).
Subgroup analysis: This involves analyzing the results separately for different subgroups of studies based on their risk of bias.
Excluding studies with high risk of bias: In some cases, reviewers might choose to exclude studies with a very high risk of bias from the review to minimize the impact of biased results.
Exploring potential sources of bias: Reviewers can also explore the potential impact of bias by considering the direction and magnitude of the bias and how it might affect the review's conclusions.
Dual review: Having at least two people independently apply the risk-of-bias tool to each result in each included study, with disagreements resolved by discussion or a third reviewer, helps reduce error and subjectivity in the assessments.
Identifying publication bias: Funnel plots and statistical tests can be used to identify publication bias in systematic reviews, especially in meta-analyses with ten or more studies (a small funnel-plot sketch appears at the end of this section).
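To illustrate the sensitivity-analysis strategy, here is a minimal sketch that pools invented study effects with and without the studies judged at high risk of bias, using a simple fixed-effect inverse-variance model. Real reviews typically rely on dedicated software (for example RevMan, or the R packages meta and metafor) and often use random-effects models; the numbers below are made up.

```python
# Sensitivity-analysis sketch: pool effects with and without high-RoB studies.
# Fixed-effect inverse-variance pooling; all study data are invented.
import math

# (effect estimate on the log odds ratio scale, standard error, risk of bias)
studies = [
    (0.40, 0.15, "low"),
    (0.35, 0.20, "low"),
    (0.90, 0.25, "high"),            # high-risk study with a much larger effect
    (0.30, 0.18, "some concerns"),
]

def pooled(studies):
    """Fixed-effect inverse-variance pooled estimate with a 95% CI."""
    weights = [1.0 / se ** 2 for _, se, _ in studies]
    total = sum(weights)
    estimate = sum(w * effect for w, (effect, _, _) in zip(weights, studies)) / total
    se = math.sqrt(1.0 / total)
    return estimate, estimate - 1.96 * se, estimate + 1.96 * se

est, ci_low, ci_high = pooled(studies)
print(f"All studies:        est={est:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")

est, ci_low, ci_high = pooled([s for s in studies if s[2] != "high"])
print(f"Excluding high RoB: est={est:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

If the pooled estimate shifts noticeably when the high-risk studies are removed, the review's conclusions depend heavily on potentially biased evidence and should be interpreted with caution.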

By carefully considering and addressing the risk of bias in primary studies, reviewers can increase the reliability and validity of their systematic reviews.
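Similarly, the publication-bias check mentioned above can be sketched with a funnel plot and a simplified Egger-style regression. The data below are invented, and the intercept is reported without a formal significance test; in practice, dedicated meta-analysis software provides the full test.

```python
# Funnel-plot sketch with a simplified Egger-style asymmetry check.
# Study effects and standard errors are invented; asymmetry tests are
# generally only meaningful with roughly ten or more studies.
import numpy as np
import matplotlib.pyplot as plt

effects = np.array([0.10, 0.25, 0.30, 0.45, 0.15, 0.55, 0.60, 0.20, 0.70, 0.35])
ses     = np.array([0.05, 0.08, 0.10, 0.15, 0.07, 0.20, 0.25, 0.09, 0.30, 0.12])

# Funnel plot: effect estimate against standard error, smaller SE at the top.
plt.scatter(effects, ses)
plt.gca().invert_yaxis()
plt.xlabel("Effect estimate")
plt.ylabel("Standard error")
plt.title("Funnel plot (invented data)")
plt.savefig("funnel.png")

# Simplified Egger-style regression: regress the standardized effect
# (effect / SE) on precision (1 / SE); an intercept far from zero suggests
# small-study effects such as publication bias.
z = effects / ses
precision = 1.0 / ses
slope, intercept = np.polyfit(precision, z, 1)
print(f"Egger-style intercept: {intercept:.2f}")
```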

Illustrative Examples of Bias Impacting Systematic Reviews

Here are some examples of how bias in primary studies has impacted the results of systematic reviews:

Publication bias in antidepressant trials: A systematic review of antidepressant trials by Turner et al. found that studies with positive results were more likely to be published than studies with negative results, leading to an overestimation of the effectiveness of antidepressants. This highlights the importance of considering publication bias when interpreting the results of systematic reviews, especially in areas where there may be strong financial incentives to publish positive findings.
Selection bias in studies of surgical interventions: A systematic review of surgical interventions for a specific condition (e.g., knee arthroplasty) found that studies with less rigorous selection criteria tended to show larger treatment effects, highlighting the importance of careful participant selection. This example demonstrates how selection bias can lead to an overestimation of treatment effects and the need for reviewers to critically evaluate the selection criteria used in primary studies.
Reporting bias in studies of fibromyalgia: A systematic review of fibromyalgia interventions found that many studies only reported outcomes that showed improvement, potentially masking the true effectiveness of the interventions. This example illustrates how selective outcome reporting can bias the results of systematic reviews and the importance of considering all relevant outcomes when evaluating the effectiveness of interventions.

Conclusion

Understanding and addressing bias in primary studies is essential for conducting high-quality systematic reviews. Biased primary studies can undermine the validity and reliability of systematic reviews, potentially leading to incorrect conclusions and recommendations. Different types of bias can have different effects on study results, and reviewers need to be aware of the specific biases that are relevant to their research question.

By carefully assessing the risk of bias and using appropriate strategies to deal with it, reviewers can ensure that their reviews provide accurate and reliable summaries of the evidence. This, in turn, can help inform healthcare decisions and improve patient outcomes. It's also important to remember that the process of conducting a systematic review may itself introduce bias: incomplete searches or a lack of transparency in study selection can distort the results of the review. Additionally, the way authors present their conclusions, whether qualitatively or quantitatively, can influence how the findings are interpreted and used.