Research Methodologies

A guide on the different types of research methods, and how to determine which one to use in your own research.

Research Design

According to Jenkins-Smith et al. (2017), a research design is the set of steps you take to collect and analyze your research data.  In other words, it is the general plan for answering your research question.  You can also think of it as the combination of your research methodology and your research method.  Your research design should include the following:

  • A clear research question
  • Theoretical frameworks you will use to analyze your data
  • Key concepts
  • Your hypothesis/hypotheses
  • Independent and dependent variables (if applicable)
  • Strengths and weaknesses of your chosen design

There are two types of research designs:

  • Experimental design: This design is like a standard science lab experiment because the researcher controls as many variables as possible and assigns research subjects to groups.  The researcher manipulates the experimental treatment and gives it to one group.  The other group receives the unmanipulated treatment (or no treatment), and the researcher examines the effect of the treatment in each group (the dependent variable).  This design can have more than two groups depending on your study requirements.


  • Observational design: In this design, the researcher has no control over the independent variable or over which research participants are exposed to it.  Depending on your research topic, this may be the only design you can use.  It is a more natural approach to a study because you are not controlling the experimental treatment.  You are allowing the variable to occur on its own without your interference.  Weather studies are a great example of observational design because the researcher has no control over the weather and how it changes.
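The random assignment at the heart of an experimental design can be sketched in a few lines of code.  The sketch below is purely illustrative: the subject names, group names, and use of Python's random module are my own choices, not part of this guide.

```python
import random

# Hypothetical pool of research subjects (illustrative names only).
subjects = [f"subject_{i}" for i in range(1, 9)]

# In an experimental design, the researcher assigns subjects to
# groups at random, then manipulates the treatment for one group.
random.shuffle(subjects)
midpoint = len(subjects) // 2
treatment_group = subjects[:midpoint]  # receives the manipulated treatment
control_group = subjects[midpoint:]    # receives no (or unmanipulated) treatment

# The researcher then compares the dependent variable across the groups.
print("Treatment group:", treatment_group)
print("Control group:  ", control_group)
```

Random assignment is what separates the two designs sketched above: in an observational design, the researcher could not shuffle the subjects, because exposure to the independent variable is outside their control.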

When considering your research design, you will also need to consider your study's validity and any potential threats to it.  There are two types of validity: external and internal.  Each type reflects a degree of accuracy and rigor in a study, and both contribute to a study's reliability.  Information about external and internal validity is included below.

External Validity

External validity is the degree to which you can generalize the findings of your research study.  It determines whether or not the findings are applicable to other settings (Jenkins-Smith, 2017).  In many cases, the external validity of a study is strongly linked to the sample population.  For example, if you studied a group of twenty-five-year-old American males, you could potentially generalize your findings to all twenty-five-year-old American males.  External validity also refers to the ability of someone else to replicate your study and achieve the same results (Jenkins-Smith, 2017).  If someone replicates your exact study and gets different results, then your study may have weak external validity.

Questions to ask when assessing external validity:

  • Do my conclusions apply to other studies?
  • If someone were to replicate my study, would they get the same results?
  • Are my findings generalizable to a certain population?

Internal Validity

Internal validity is the degree to which a researcher can conclude that a causal relationship exists between the independent variable and the dependent variable.  It is a way to verify the study's findings because it draws a relationship between the variables (Jenkins-Smith, 2017).  In other words, it concerns the actual factors that produce the study's outcome (Singh, 2007).  According to Singh (2007), internal validity can be divided into four subcategories:

  • Face validity: This confirms that the measure accurately reflects the research question.
  • Content validity: This assesses the measurement technique's compatibility with other literature on the topic.  It determines how well the tool used to gather data measures the item or concept that the researcher is interested in.
  • Criterion validity: This demonstrates the accuracy of a study by comparing it to a similar study.
  • Construct validity: This measures the appropriateness of the conclusions drawn from a study.

Threats to Validity

According to Jenkins-Smith (2017), there are several threats that may impact the internal and external validity of a study:

Threats to External Validity

  • Interaction with testing: Any testing done before the actual experiment may decrease participants' sensitivity to the actual treatment.
  • Sample misrepresentation: Drawing a sample that is unrepresentative of the population as a whole.
  • Selection bias: Researchers may have bias towards selecting certain subjects to participate in the study who may be more or less sensitive to the experimental treatment.
  • Environment: If the study was conducted in a lab setting, the findings may not be able to transfer to a more natural setting.

Threats to Internal Validity

  • History: Unplanned events that occur during the experiment and affect the results.
  • Maturation: Changes in the participants during the experiment, such as fatigue, aging, etc.
  • Selection bias: When research subjects are not selected randomly.
  • Attrition: When participants drop out of the study without completing it.
  • Instrumentation: Changing the way the data is collected or measured during the study.