**Introduction:**

All quantitative clinical research studies and clinical trials, from observational epidemiology studies to large phase III or IV global trials of new drugs, produce data. These data are produced to answer the research question and to assess whether the intervention is effective, but they are meaningless unless subjected to statistical analysis. Statistics therefore encompasses the collection, arrangement, analysis, interpretation and reporting of data generated from research studies. Poor application of statistical tools to data generated from clinical trials can result in either exaggerating the effect of a drug that is not effective, or rejecting a potent drug that is in fact effective and safe for public use. In clinical trials, most of the problems resulting in defective statistical analysis and interpretation have been attributed to poor study design, poor adherence to inclusion and exclusion criteria, inadequate subject recruitment, missing data, bias, wrong application of statistical methods and confounding factors. A study of the GCP guidelines shows that the document appropriately recognises the critical importance of data analysis in clinical trials. The interpretations derivable from a statistical analysis are only as good as the data generated, and no statistical analysis, however good, can correct data generated by a poorly designed study. Given the importance attached to statistical analysis in clinical research, there is a constant need to teach beginners, and remind professionals, to apply the established statistical considerations needed to achieve the desired results.

**Some terminologies used in statistics in clinical research:**

**Hypothesis:** In the simplest terms, a hypothesis is the question under study, and it is usually expressed both as a hypothesis and as a null hypothesis (the null hypothesis states that there will be no difference between the things you are studying). To understand the meaning of hypothesis and null hypothesis, let us take a look at this case study.

*John is a researcher in diabetic pharmacology with good experience in the field of ethnomedicine. His research work and experience have focused on the metabolism, pathophysiology and pathology of diabetes. One day, a client came to him complaining of excessive urination, frequent thirst and excessive sweating. He instructed the client to have his blood sample tested for glucose. The patient did the investigation as instructed and came back with the result. On seeing the result, John noticed that the plasma glucose was high, and he decided to give the patient one of his remedies (Z), which he had not previously tried on any patient and whose efficacy and safety in diabetics he had not evaluated. He instructed the patient to take the remedy for four days and then return for a repeat of the test. On the fifth day, the patient came back as instructed and the test was repeated. The result showed that the plasma glucose of the patient, which was very high before the commencement of the herbal remedy (fasting glucose 350 mg/dl, reference ≤ 126 mg/dl), had reduced drastically to 110 mg/dl. Seeing this result, John was excited that his remedy had helped to reduce the plasma glucose of his client, but he could not conclude scientifically whether this reduction was solely due to the remedy he had given or due to other factors which the patient did not disclose. Given his interest in research and the evaluation of herbal remedies, he thought it important to evaluate the efficacy and safety of the herbal remedy for the treatment of diabetes.*

To elucidate his findings using scientific methods, he set out two hypotheses. These were:

**Null Hypothesis** (H0): Herbal remedy Z does not control diabetes.

**Alternative Hypothesis** (H1): Herbal remedy Z controls diabetes.

From this case study, we can simply define a hypothesis in clinical research as an unconfirmed, untested or unevaluated scientific explanation. Such a statement can only be accepted as scientifically true, valid and generally applicable after it has been subjected to robust evaluation using sound and acceptable statistical analysis. In clinical trials, different hypotheses can be proposed and investigated using specific trial designs. These are:

- The investigational drug (new drug) is better than, or superior to, the standard drug, control drug or placebo (superiority trial design)
- The new drug performs as well as the standard treatment (equivalence trial design)
- The new drug is not unacceptably worse than the standard treatment (non-inferiority trial design)

Premised on these hypotheses, an appropriate protocol (research proposal) is developed and used as a guiding template to conduct the study.

**P- value**

In statistics, two different hypotheses are tested using data: the null and the alternative hypothesis. The p value (calculated probability) is used to decide whether the null hypothesis should be rejected. By the usual convention, when the calculated probability is less than 0.05 (p < 0.05), there is taken to be a significant difference between the two observations, as long as the other factors that could affect the data have been adequately controlled.

When the p value is less than 0.05 (p < 0.05), the null hypothesis is an unlikely explanation for your data and is rejected, so you accept the alternative as the more probable explanation. When it is greater than 0.05 (p > 0.05), the data are consistent with the null hypothesis, and you fail to reject it. However, it is important to note that you have not **proved** either hypothesis; you have simply accepted the more probable explanation for your finding.

In clinical research, when the p value falls at or near 0.05, statisticians and clinical research professionals argue that this is not strong evidence on which to base a decision about the null hypothesis. They also argue that p values alone should not be used to interpret significance; the confidence interval (CI), which indicates the size of the effect being observed, should also be reported.
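To illustrate how a confidence interval conveys effect size alongside significance, here is a minimal sketch using only the Python standard library. The outcome values are invented for illustration, and the interval uses a large-sample normal approximation; a real analysis of groups this small would use a t-distribution.

```python
import statistics

def diff_ci95(treated, control):
    """95% CI for the difference in group means (large-sample normal approximation)."""
    d = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(treated) / len(treated)
          + statistics.variance(control) / len(control)) ** 0.5
    return d - 1.96 * se, d + 1.96 * se

# hypothetical outcome scores for two small groups (illustration only)
control = [90, 95, 100, 105, 110]
treated = [110, 115, 120, 125, 130]

low, high = diff_ci95(treated, control)
print(low, high)  # an interval excluding 0 corresponds to p < 0.05 (two-sided)
```

Because the whole interval lies above zero, the difference is significant at the 5% level, and the interval's width also communicates how precisely the effect has been estimated, which a bare p value does not.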

It should be noted that several factors affect the value of p obtained in a clinical trial. One very important factor is the number of subjects recruited for the study. When too few subjects are recruited, the data generated may be falsely interpreted as showing that the investigational drug is not active, or vice versa. On the other hand, when too many subjects are recruited, the data could falsely exaggerate the statistical significance of the safety and efficacy findings on the investigational product. To avoid this, clinical research professionals do their best to recruit enough patients to generate acceptable data, using an appropriate sample size calculation. The probability that a study design will reject the null hypothesis when it is in fact false is referred to as the power of the study.
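The relationship between sample size and power can be seen by simulation. The sketch below (Python standard library only) repeatedly draws two groups from normal distributions that truly differ, so the null hypothesis is false, and counts how often a simple large-sample z-test rejects it. The effect size of 0.5 standard deviations and the group sizes are arbitrary values chosen for illustration.

```python
import random
import statistics

def rejects_null(x, y, critical=1.96):
    """Two-sample z-test: reject H0 (equal means) at alpha = 0.05 if |z| > 1.96."""
    se = (statistics.variance(x) / len(x) + statistics.variance(y) / len(y)) ** 0.5
    z = (statistics.mean(x) - statistics.mean(y)) / se
    return abs(z) > critical

def estimated_power(n_per_group, effect=0.5, sims=500, seed=1):
    """Fraction of simulated trials that detect a true effect of the given size."""
    rng = random.Random(seed)  # seeded so the illustration is reproducible
    hits = 0
    for _ in range(sims):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [rng.gauss(effect, 1.0) for _ in range(n_per_group)]
        hits += rejects_null(control, treated)
    return hits / sims

print(estimated_power(64))  # roughly 0.8, the conventional target for power
print(estimated_power(10))  # much lower: the same real effect is often missed
```

With about 64 subjects per group the test detects this effect in roughly 80% of trials; with 10 per group the same real effect is usually missed, which is exactly the risk carried by an underpowered study.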

**Randomisation**

In practical terms, during clinical trials, patients are randomised: a method of assigning research subjects to groups by chance and NOT by choice. In its simplest form this could be achieved by tossing a coin, with tails going to one group and heads to the other, so that all participants have an equal probability of being assigned to either group. All the subjects recruited for the study have the disease of interest which the new drug is intended to treat. Through randomisation, the subjects are divided into groups (there could be two, three, four or more groups depending on the type of research and the hypothesis being tested). While the patients in one group receive the investigational product, the control group may receive an inactive drug, called a placebo, or the usual standard of care for their indication, depending on the study design (please note that it is the responsibility of the ethics committee to safeguard the rights and safety of all participants). It is important to note that where an existing, proven treatment for the condition exists, the control group is usually assigned that treatment, and the intervention group the new, presumably better, safer or less costly, treatment. This is because it would not be ethical to assign participants to an inactive placebo when a proven treatment already exists. The investigational product and control product are administered to the cases and controls respectively.
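A coin toss scales poorly beyond a handful of subjects, so in practice the allocation sequence is generated by computer. Here is a minimal sketch for a simple two-arm trial; the participant IDs and seed are invented for illustration, and real trials use concealed, pre-generated sequences, often with blocking or stratification.

```python
import random

def randomise(participants, arms=("intervention", "control"), seed=2024):
    """Shuffle participants, then deal them round-robin into the trial arms."""
    rng = random.Random(seed)  # fixed seed only so this illustration is reproducible
    order = list(participants)
    rng.shuffle(order)
    return {person: arms[i % len(arms)] for i, person in enumerate(order)}

participants = [f"P{i:02d}" for i in range(1, 11)]  # hypothetical IDs P01..P10
allocation = randomise(participants)
print(allocation)
```

Shuffling gives every participant an equal probability of ending up in either arm, and dealing round-robin keeps the group sizes balanced.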

Data are collected and the difference between the groups is tested using statistical analyses. Depending on the values obtained, the investigational product can be said to be more effective than the control, or to have no benefit.

**Blinding**

Where possible, to avoid bias in outcomes (described more fully below), the study team and participants are ‘blinded’ to allocation. Blinding prevents people from behaving differently because they know which group a participant has been allocated to. For example, a treating doctor may overestimate the good effect of a drug intervention if she knows that the participant is receiving the new, ‘better’, drug; she may (usually unconsciously) pay more attention to this group of participants. Participants who know they are receiving the new drug may make other healthy behavioural choices, or underreport issues, because of their belief in the power of the new, and so ‘better’, drug. In triple-blinded studies, neither the study staff allocating participants to the intervention or placebo, nor the participants, nor the study staff taking outcome measures know which drug or intervention the participants are receiving. Note that it is not always possible to blind participants or study staff: for example, when comparing two treatments that look very different (injections versus capsules), neither the participants nor the study staff administering the treatment can be blinded to the allocation.

**Bias:**

This is any error associated with the design, conduct, analysis or publication of a study that distorts the estimated effects of the investigational product. For example, when the inclusion and exclusion criteria for choosing subjects in a clinical trial are not properly defined, this can introduce bias because those factors can affect the result: if babies were included in a drug study, this could influence the results because a drug may affect babies differently from adults. When investigators publish only the ‘positive findings’ from a clinical trial while neglecting the ‘negative findings’, this results in publication bias. Bias, no matter the type, introduces distortion or error into a clinical trial, and a good clinical trial design should be devoid of it. All extraneous factors, other than the investigational intervention of interest, which could distort the result of the study should be identified and preventive actions taken at the design stage.

**Dependent and Independent Variables**

In statistics, there are usually two main types of variable: dependent and independent. By definition, the dependent variable is the manifestation or outcome achieved by manipulating the independent variable (the predictor). The independent variable is a characteristic that, when changed or modified, influences the observed outcome (the dependent variable). In summary, the dependent variable is the outcome of an event, while the independent variable predicts that outcome (so independent variables are also called predictors). Put another way, the dependent variable ‘depends’ on the value of the independent variable. Importantly, this is a characteristic of the design of the trial, not an innate property of the variable: a dependent variable in one study may be an independent variable in another, and vice versa.

Example 1: Identify the types of variables in this research question:

Does intake of antimalarial influence the level of parasitaemia in a blood sample?

- Independent variable: intake of antimalarial
- Dependent variable: the level of parasitaemia

Example 2: what is the effect of age on plasma cholesterol?

- Independent variable: age
- Dependent variable: plasma cholesterol

Example 3: Does adjusting dietary and life modifications have an impact on the aetiology and pathogenesis of cardiovascular disease?

- Independent variable: adjusting dietary and life modification
- Dependent variable: aetiology and pathogenesis of cardiovascular disease.

**Confounding variables**

These are variables or factors that are usually not measured in a study, but which may account for the effect seen in it. For example, a study may show that a drug’s efficacy is influenced by socioeconomic status. However, dietary status may be strongly associated with socioeconomic status; if diet is unmeasured in the study, it becomes impossible to determine its impact on drug efficacy, and it appears as though socioeconomic status is driving the outcome. In short, the relative contributions of diet and socioeconomic status cannot be unravelled. Confounding is usually dealt with at the design stage by making sure to measure potential confounding variables, so that their effects can be individually determined in the statistical analyses. Revealing confounding variables is most important when trying to understand causality, that is, the way in which an intervention impacts on the measured outcome.

To explain this further, in an epidemiological study designed to examine cardiovascular risk factors in smokers and non-smokers, thorough inclusion and exclusion criteria must be set in order to eliminate or reduce the effect of confounders on the study outcome. For example, if some smokers are also coffee drinkers, both habits can influence the outcome of the study. If the study also fails to consider contributory factors such as age, obesity and hypertension during subject selection, all of these can act as confounders and influence the outcome of the research, since each can exaggerate cardiovascular risk to a different degree.
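The smoking-and-coffee example can be made concrete with a small simulation (Python standard library only; all numbers are invented for illustration). Here smoking has no real effect on the outcome, but smokers drink coffee more often and coffee does raise the outcome, so a naive comparison wrongly blames smoking; stratifying by the measured confounder makes the spurious effect disappear.

```python
import random
import statistics

rng = random.Random(7)  # seeded so the illustration is reproducible

people = []
for _ in range(5000):
    smoker = rng.random() < 0.5
    coffee = rng.random() < (0.8 if smoker else 0.2)           # confounder tied to smoking
    outcome = rng.gauss(10.0 + (5.0 if coffee else 0.0), 1.0)  # only coffee has an effect
    people.append((smoker, coffee, outcome))

smokers = [o for s, c, o in people if s]
non_smokers = [o for s, c, o in people if not s]
naive_diff = statistics.mean(smokers) - statistics.mean(non_smokers)
print(naive_diff)  # around 3: smoking looks harmful, but the effect is spurious

# compare smokers with non-smokers separately within each coffee stratum
stratified_diffs = []
for drinks_coffee in (True, False):
    s = [o for sm, c, o in people if sm and c == drinks_coffee]
    n = [o for sm, c, o in people if not sm and c == drinks_coffee]
    stratified_diffs.append(statistics.mean(s) - statistics.mean(n))
print(stratified_diffs)  # both near 0 once the confounder is held fixed
```

This is why measuring potential confounders matters: the stratified comparison is only possible because coffee drinking was recorded.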

**Parametric and non - parametric statistics:**

In statistics, two main types of statistical methods are applied to analyse data generated from a population: parametric and non-parametric testing. Understanding the type of data generated usually helps in identifying the statistical method to be adopted for the analysis. Parametric testing assumes that the data generated from the population follow a normal distribution, and when this assumption holds, parametric statistical methods are applied. Most common statistical methods are parametric; these include the simple mean, standard deviation, coefficient of variation, t-test, F-test (ANOVA), and correlation and regression analyses. Parametric tests are easy to calculate and have simpler formulae, but because they rest on the assumption of normality, the statistics are correct only as far as the data follow a normal (Gaussian, not skewed) distribution; when the data deviate from the normal curve, the information gathered becomes prone to error. Most importantly, in skewed distributions the mean (arithmetic average) becomes a poor estimate of central tendency, and because parametric tests usually rely on the mean and its associated variance, they are likely to give untrustworthy or meaningless results.
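The point about skew can be seen in a few lines of Python (standard library only; the numbers are a made-up illustration). With one extreme value in an otherwise small data set, the mean no longer describes a typical observation, while the median, on which many non-parametric methods rest, is barely affected:

```python
import statistics

# hypothetical hospital length-of-stay data (days); one outlier skews the distribution
stays = [2, 2, 3, 3, 3, 4, 4, 5, 6, 60]

print(statistics.mean(stays))    # 9.2 — dragged upward by the single outlier
print(statistics.median(stays))  # 3.5 — still close to the typical patient
```

A parametric test built around the mean of 9.2 would describe a "typical" stay almost three times longer than any patient except the outlier actually experienced.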

Unlike parametric statistics, non-parametric statistics do not assume a normal distribution. For example, if a questionnaire gathers responses such as ‘like’, ‘dislike’, ‘agree’ and ‘disagree’, these responses cannot be meaningfully quantified as measurements on a continuous scale, and are usually analysed using non-parametric statistics. Examples of non-parametric tests are the Mann-Whitney test, the Wilcoxon signed-rank test, the Kruskal-Wallis test and the Friedman test (these methods are not discussed here). Understanding the statistical method required for a particular data set goes a long way towards gathering the required information, obtaining appropriate data, applying the right statistical methods and arriving at the right conclusions. Clinical research professionals should therefore decide, at the planning stage of the study, which statistical techniques will be applied.
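To make one of these methods concrete, here is a bare-bones sketch of the Mann-Whitney U statistic in plain Python. The two small samples are invented for illustration, and a real analysis would go on to convert U into a p value via tables or a normal approximation.

```python
def mann_whitney_u(a, b):
    """Rank-based U statistic: U = 0 means the two groups' ranks do not overlap at all."""
    pooled = sorted(a + b)

    def mid_rank(value):
        first = pooled.index(value)             # ranks are 1-based;
        last = first + pooled.count(value) - 1  # tied values share their average rank
        return (first + last) / 2 + 1

    rank_sum_a = sum(mid_rank(v) for v in a)
    u_a = rank_sum_a - len(a) * (len(a) + 1) / 2
    return min(u_a, len(a) * len(b) - u_a)

# hypothetical scores in two groups (illustration only)
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # 0.0: complete separation of ranks
print(mann_whitney_u([1, 3, 5], [2, 4, 6]))  # 3.0: heavily overlapping ranks
```

Because only the ranks of the observations are used, extreme values and skewed distributions cannot distort the statistic the way they distort a mean.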

**Types of Error:**

In statistics, some interpretational errors can occur. These are called type 1 and type 2 errors. Assume that we test the hypothesis stated above (John’s observation) statistically, and based on the information gathered we reject the null hypothesis (herbal remedy Z does not control diabetes) when it is in fact true; this is called a type 1 error. On the other hand, when we fail to reject the null hypothesis, and so conclude that herbal remedy Z does not control diabetes, when the alternative is in fact true, this is called a type 2 error. In clinical research terms, approving a drug that should not be approved because a poorly designed study or wrongly applied statistics exaggerated its effect (a type 1 error) means presenting a poor, ineffective or unsafe drug to the public, which could result in serious adverse events. Conversely, rejecting a drug that is in fact effective and safe for public use (a type 2 error) means depriving people of an effective and safe medicine.

**Basic statistical issues in Clinical research**

- Design issues: a good clinical research study should be well designed with the statistical analyses in mind. Planning is the most important phase of any clinical research. Good design means selecting the right patients (P), using an appropriate intervention (I), using the right comparator (C), which is a fair comparison to the investigational product, and measuring an appropriate outcome (O) or endpoint to determine when the desired result is achieved. So a good clinical research study should be ‘PICO’ designed. A wrong design equals wrong research, wrong interpretation and a wrong conclusion; no statistical analysis can correct a wrongly designed research study.
- Good patient selection (define inclusion criteria, exclusion criteria and patient population. Use the right sample size. Avoid missing data by designing an appropriate tool to follow up patients.)
- Define the measurable outcome and look out for it. Is it a primary outcome (e.g. total cure from cancer, for a cancer study) or a secondary outcome (e.g. reduction in tumour size, also for a cancer study)?
- State the null hypothesis clearly, and avoid type 1 and type 2 errors.
- Ensure randomization and blinding.
- Use the right and adequate control group. The control group should be identical, or as similar as possible, to the subjects being investigated, so that at the end of the study the only difference observed can be attributed to the interventional product received. This is usually achieved by random allocation to either the control or intervention group.
- Avoid confounding factors (factors that will affect the outcomes being measured).
- Apply the right statistical calculations (parametric statistics for parametric data, and non-parametric statistics for non-parametric data).

**Conclusion:**

In this brief discussion, we have covered the meaning of statistics, its importance and its application in clinical research. Statistics is an indispensable tool in research, including clinical research. Many reports from research studies have lacked wide application and interpretation because of deficiencies in the statistical measurements and analyses that generated them. In the same vein, many drugs have been denied approval as a result of exaggeration or underestimation of effects arising from a wrongly designed or poorly implemented study, or from the improper application of statistical methods during the clinical phase of drug development. It behoves all clinical research professionals to appreciate the importance of data management in clinical research and to make the necessary provisions for it, from the beginning of the research through to the documentation of the report.
