What type of research is used in psychology?

Various research methods in psychology are used to test different theories and obtain results.

Psychological research follows either a quantitative or qualitative research method. The most appropriate research method is chosen depending on the research objective.

Research methods and statistics in psychology

Let us look at types of research methods in psychology and some examples. The types of research in psychology can be divided into two main categories: 1) quantitative and 2) qualitative.

Quantitative research

Quantitative research uses mathematical modelling and statistical estimation or inference to describe variables, predict findings, and explore potential correlations and causality between variables.

Imagine a researcher would like to investigate the effects of learning through StudySmarter. There are two groups: group A is given access to StudySmarter as their revision medium, and group B continues with traditional textbook revision. After a month, the academic performance of the participants is measured and the statistics are compared.
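
A rough sketch of how such a group comparison is often analysed, assuming hypothetical test scores; the numbers and the choice of an independent-samples t-test are illustrative rather than taken from any real study:

```python
# Hypothetical comparison of two independent groups' test scores.
# All scores below are invented for illustration only.
from scipy import stats

group_a = [72, 85, 78, 90, 66, 81, 74, 88]  # revised with StudySmarter (hypothetical)
group_b = [70, 75, 68, 80, 64, 73, 71, 77]  # revised with textbooks (hypothetical)

# An independent-samples t-test compares the two group means.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value (conventionally p < 0.05) would suggest the difference
# between the groups is unlikely to be due to chance alone.
```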


Qualitative research

Qualitative research uses non-numerical data such as text, audio, and video, which investigates and attempts to understand or interpret various phenomena such as societal or individual perceptions and actions.

Compared to quantitative research, the aim is to focus on the human condition and the language people use, rather than on statistical differences. Interviews and focus groups are key tools in qualitative research.


Types of research methods in psychology

There are distinct approaches employed in psychological research under each of the two main categories. While this is not an exhaustive list, it highlights five of the most common strategies used in psychological research: experimental methods, observational techniques, self-report techniques, correlational studies, and case studies.

Experimental methods

The experimental method is a procedure carried out to support or reject a hypothesis. Experiments provide insight into cause-and-effect by demonstrating what outcome occurs when a particular variable is manipulated. Experimental studies are classified as quantitative research.

There are mainly four types of experiments in psychology:

  1. Laboratory experiments
  2. Field experiments
  3. Natural experiments
  4. Quasi-experiments

Each type of experiment has strengths and limitations.

Observational techniques

Observational techniques are used when a researcher observes how people behave and act in order to learn more about their ideas, actions, and beliefs. Observation studies are mostly categorised as qualitative in nature. However, they may also be quantitative or both (mixed-methods).

The two main observation techniques are:

  • Participant observation.

  • Non-participant observation.

Observations can also be overt or covert, and naturalistic or controlled.

Self-report techniques

Self-report techniques refer to data collection approaches in which participants report information about themselves without interference from the experimenter. Essentially, such methods require respondents to answer a set of pre-set questions. Thus, self-report techniques can provide researchers with both quantitative and qualitative data, depending on how the questions are framed.

Self-report techniques can include:

  • Questionnaires.

  • Interviews.

  • Psychometric testing.

Content analysis is a technique for analysing qualitative data. The researcher codes the data to look for common patterns and themes, then analyses and draws conclusions from the patterns and themes they find. Content analysis converts non-numerical data into categories to make it easier to analyse. This technique is applied to qualitative data such as interview transcripts, videotapes, and audio recordings. The coding scheme can vary considerably depending on the data being analysed.
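
As a minimal illustration of the coding-and-tallying step, the sketch below counts how often hypothetical codes appear in a few invented interview excerpts; real content analysis uses far richer coding schemes:

```python
# Minimal content-analysis tally: count occurrences of pre-defined codes
# (keywords) in interview excerpts. All data here are invented examples.
from collections import Counter

excerpts = [
    "I felt anxious before the exam but confident afterwards",
    "Revision apps made me feel more confident",
    "I was anxious about the time limit",
]

# Hypothetical coding scheme: code name -> keywords that indicate it
codes = {"anxiety": ["anxious", "worried"], "confidence": ["confident"]}

tally = Counter()
for text in excerpts:
    lowered = text.lower()
    for code, keywords in codes.items():
        if any(word in lowered for word in keywords):
            tally[code] += 1

print(tally)  # e.g. Counter({'anxiety': 2, 'confidence': 2})
```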

Correlational studies

Correlational studies measure the strength and direction of a statistical relationship between two co-variables. Correlational studies are quantitative in nature, and the findings are displayed in scattergrams. There are two types of correlations that the researcher may observe. These are:

Positive correlations (where one variable increases as the other variable increases). For example, umbrella sales increase as the number of rainy days increases.

Negative correlations (where one variable increases as the other decreases). For example, hot chocolate sales increase as the temperature decreases.
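
A minimal sketch of how the strength and direction of such a relationship might be quantified, using invented values for the two co-variables (a Pearson correlation coefficient is assumed here; other coefficients are possible):

```python
# Pearson correlation between two co-variables (values invented for illustration).
from scipy import stats

rainy_days = [2, 5, 7, 10, 12, 15]         # hypothetical monthly rainy days
umbrella_sales = [20, 34, 45, 60, 70, 88]  # hypothetical umbrella sales

r, p_value = stats.pearsonr(rainy_days, umbrella_sales)
print(f"r = {r:.2f}")  # close to +1 indicates a strong positive correlation
# A value near -1 would indicate a strong negative correlation,
# and a value near 0 would indicate no relationship (zero correlation).
```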

Case studies

Case studies belong to a qualitative research methodology. Case studies entail an in-depth investigation of persons, groups, communities, or events. They frequently employ a multi-methodological approach that includes participant interviews as well as unobtrusive observations. Case studies in psychology are conducted on targeted participants. A psychology case study typically gathers critical and influential biographical moments from a patient's past, and salient details in the individual's daily life that may drive the development of particular behaviours or thinking.

A famous psychological case study is that of patient H.M. From his case study we learned about the effects of hippocampal damage on memory.

Research Methods in Psychology - Key takeaways

  • Research methods in psychology can be divided into two main categories, namely quantitative and qualitative research.
  • Quantitative research employs numerical data.

  • Qualitative research employs non-numerical data; the focus is on language.

  • Experimental methods, observational techniques, self-report techniques, correlational studies and case studies are five of the most common methodologies employed in psychological research.

By Dr. Saul McLeod, updated 2022

The aim of the study is a statement of what the researcher intends to investigate.

The hypothesis of the study is a prediction, derived from psychological theory, that can be verified or disproved by some kind of investigation, usually an experiment.

A directional hypothesis indicates a direction in the prediction (one-tailed) e.g. ‘students with pets perform better than students without pets’.

A non-directional hypothesis does not indicate a direction in the prediction (two-tailed) e.g. ‘owning pets will affect students’ exam performances’.

A sample is the participants you select from a target population (the group you are interested in) to make generalisations about.

Representativeness means the extent to which a sample mirrors the researcher's target population and reflects its characteristics.

Generalisability means the extent to which findings can be applied to the larger population from which the sample was drawn.

A volunteer sample is where participants select themselves to take part, for example in response to newspaper adverts, noticeboards or online posts.

Opportunity sampling, also known as convenience sampling, uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.

Random sampling is when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.

Systematic sampling is when a system is used to select participants, such as picking every Nth person from all possible participants, where N = the number of people in the research population / the number of people needed for the sample.

Stratified sampling is when you identify the subgroups and select participants in proportion to their occurrence in the population.

Snowball sampling is when researchers find a few participants, and then ask them to find participants themselves and so on.

In quota sampling, researchers will be told to ensure the sample fits with certain quotas, for example they might be told to find 90 participants, with 30 of them being unemployed.
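
As a rough sketch of how random, systematic and stratified sampling differ in practice, the snippet below selects from a made-up population of 100 people; the population size, sample size and subgroup split are all invented for illustration:

```python
# Sketch of random, systematic and stratified selection from a
# hypothetical population of 100 people; all numbers are illustrative.
import random

population = list(range(100))   # stand-in for 100 potential participants
sample_size = 10

# Random sampling: every person has an equal chance of selection.
random_sample = random.sample(population, sample_size)

# Systematic sampling: pick every Nth person, where
# N = population size / sample size.
n = len(population) // sample_size          # here N = 10
systematic_sample = population[::n][:sample_size]

# Stratified sampling: select in proportion to subgroup sizes,
# e.g. 60 students and 40 non-students in the population.
strata = {"students": list(range(60)), "non_students": list(range(60, 100))}
stratified_sample = []
for group, members in strata.items():
    share = round(sample_size * len(members) / len(population))  # 6 and 4
    stratified_sample.extend(random.sample(members, share))

print(len(random_sample), len(systematic_sample), len(stratified_sample))
```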

Independent variable (IV) – the variable the experimenter manipulates, assumed to have a direct effect on the DV.

Dependent variable (DV) – the variable the experimenter measures after making changes to the IV.

We must use operationalisation to ensure that variables are in a form that can be easily tested e.g. Educational attainment → GCSE grade in maths.

Extraneous variables are all variables which are not the independent variable but could affect the results of the experiment. There are two types: situational variables (controlled through standardisation) and participant variables (controlled through randomisation).

In an independent measures design (between-groups design), a group of participants are recruited and divided into two. The first group does the experimental task with the IV set for condition 1 and the second group does the experimental task with the IV set for condition 2. The DV is measured for each group and the results are compared.

In a repeated measures design (within groups), a group of participants are recruited, and the group does the experimental task with the IV set for condition 1 and then again with the IV set for condition 2. The DV is measured for each condition and the results are compared.

In a matched pairs design, a group of participants are recruited. We find out what sorts of people we have in the group and recruit another group that matches them one for one. The experiment is then treated like an independent measures design and the results are compared.
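
As a hedged illustration of how the design shapes the analysis, the sketch below uses invented scores: an independent measures design is typically analysed with an independent-samples test, while a repeated measures design pairs each participant's scores and uses a related (paired) test. The numbers and the choice of tests are illustrative, not prescribed by any particular study:

```python
# Invented scores illustrating how the design determines the analysis.
from scipy import stats

# Independent measures: two different groups, one condition each.
condition_1 = [14, 18, 16, 20, 15]
condition_2 = [11, 13, 12, 15, 10]
print(stats.ttest_ind(condition_1, condition_2))   # independent-samples test

# Repeated measures: the SAME participants under both conditions,
# so their scores are paired and a related (paired) test is used.
before = [14, 18, 16, 20, 15]
after  = [16, 19, 18, 23, 17]
print(stats.ttest_rel(before, after))              # paired-samples test
```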

Laboratory experiments are conducted in a well-controlled environment – not necessarily a laboratory – and therefore accurate and objective measurements are possible.

The researcher decides where the experiment will take place, at what time, with which participants, in what circumstances and using a standardized procedure.

Field experiments are conducted in the everyday (i.e. natural) environment of the participants, but the situations are still artificially set up.

The experimenter still manipulates the IV, but in a real-life setting (so cannot really control extraneous variables).

Natural experiments investigate a naturally occurring IV that is not deliberately manipulated; it exists anyway.

Participants are not randomly allocated and the natural event may only occur rarely.

Case studies are in-depth investigations of a single person, group, event or community.

Case studies are widely used in psychology and amongst the best-known ones carried out were by Sigmund Freud. He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity.

Correlation means association - more precisely it is a measure of the extent to which two variables are related.

If an increase in one variable tends to be associated with an increase in the other then this is known as a positive correlation.

If an increase in one variable tends to be associated with a decrease in the other then this is known as a negative correlation.

A zero correlation occurs when there is no relationship between variables.

Unstructured (informal) interviews are like a casual conversation. There are no set questions, and the participant is given the opportunity to raise whatever topics he/she feels are relevant and to discuss them in their own way. In this kind of interview much qualitative data is likely to be collected.

Structured (formal) interviews are like a job interview. There is a fixed, predetermined set of questions that are put to every participant in the same order and in the same way. The interviewer stays within their role and maintains social distance from the interviewee.

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone or post.

The questions asked can be open ended, allowing flexibility in the respondent's answers, or they can be more tightly structured requiring short answers or a choice of answers from given alternatives.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent, or causing offence.

Covert observation is where the researcher doesn't tell the participants that they are being observed until after the study is complete. There could be ethical problems of deception and consent with this particular method of observation.

Overt observation is where a researcher tells the participants that they are being observed and what they are being observed for.

Controlled: behavior is observed under controlled laboratory conditions (e.g. Bandura's Bobo doll study).

Natural: Here spontaneous behavior is recorded in a natural setting.

Participant: Here the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.

Non-participant (aka "fly on the wall"): The researcher does not have direct contact with the people being observed.

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities (i.e. unclear wording) or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, because none of the participants can score at all or can complete the task – all performances are low. The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

In cross-sectional research, a researcher compares multiple segments of the population at the same time.

Sometimes we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies, the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period of time.

Triangulation means using more than one research method to improve the validity of the study.

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, then it is described as being reliable.

Test-retest reliability – Assessing the same person on two different occasions which shows the extent to which the test produces the same answers.

Inter-observer reliability – the extent to which there is agreement between two or more observers.
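
A minimal sketch, with invented scores, of how these two forms of reliability are often checked: test-retest reliability via a correlation between the two occasions, and inter-observer reliability via a simple percentage agreement between two observers (a correlation or Cohen's kappa could equally be used):

```python
# Invented scores illustrating simple reliability checks.
from scipy import stats

# Test-retest: the same people measured on two occasions.
occasion_1 = [10, 14, 9, 16, 12, 18]
occasion_2 = [11, 13, 9, 17, 12, 19]
r_test_retest, _ = stats.pearsonr(occasion_1, occasion_2)
print(f"test-retest r = {r_test_retest:.2f}")   # close to 1 = consistent

# Inter-observer: two observers tally the same behaviour categories.
observer_a = ["aggressive", "passive", "aggressive", "passive", "passive"]
observer_b = ["aggressive", "passive", "aggressive", "aggressive", "passive"]
agreement = sum(a == b for a, b in zip(observer_a, observer_b)) / len(observer_a)
print(f"inter-observer agreement = {agreement:.0%}")  # 80% here
```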

A meta-analysis is a systematic review that involves identifying an aim and then searching for research studies that have addressed similar aims/hypotheses.

This is done by looking through various databases and then decisions are made about what studies are to be included/excluded.

Strengths: Increases the validity of the conclusions drawn, as they are based on a wider range of studies.

Weaknesses: Research designs in studies can vary so they are not truly comparable.
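
Statistically, a meta-analysis usually combines the included studies by pooling their effect sizes. A minimal fixed-effect (inverse-variance weighted) sketch is shown below; the effect sizes and standard errors are invented for illustration and do not come from real studies:

```python
# Fixed-effect pooling of invented study effect sizes (inverse-variance weights).
effects = [0.30, 0.45, 0.20, 0.50]     # hypothetical effect sizes (e.g. Cohen's d)
std_errors = [0.10, 0.15, 0.12, 0.20]  # hypothetical standard errors

weights = [1 / se**2 for se in std_errors]   # more precise studies weigh more
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
print(f"pooled effect = {pooled:.2f}")
```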

A researcher submits an article to a journal. The choice of journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess: the methods and designs used, originality of the findings, the validity of the original research findings and its content, structure and language.

Feedback from the reviewers determines whether the article is accepted. The article may be: accepted as it is, accepted with revisions, sent back to the author to revise and re-submit, or rejected without the possibility of re-submission.

The editor makes the final decision on whether to accept or reject the research report based on the reviewers' comments/recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer review may be an ideal, but in practice there are lots of problems. For example, it slows publication down and may prevent unusual, new work being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that much more research and academic comment is being published without official peer review than before, though systems are evolving online where everyone has a chance to offer their opinions and police the quality of research.

Quantitative data is numerical data, e.g. reaction time or number of mistakes. It represents how much, how long, or how many of something there are. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.

Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.

Primary data is first hand data collected for the purpose of the investigation.

Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Further Information

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

Concurrent validity – the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.

Face validity – whether the test appears to measure what it is supposed to measure ‘on the face of it’. This is assessed by ‘eyeballing’ the measure or by passing it to an expert to check.

Ecological validity – the extent to which findings from a research study can be generalised to other settings / real life.

Temporal validity – the extent to which findings from a research study can be generalised to other historical times.

Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.

Paradigm shift – The result of scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.

Objectivity – When all sources of personal bias are minimised so not to distort or influence the research process.

Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.

Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.

Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Further Information

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we can accept our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In Psychology, we use p < 0.05 (as it strikes a balance between making a type I and type II error), but p < 0.01 is used in tests where an error could cause harm, such as when introducing a new drug.

A type I error is when the null hypothesis is rejected when it should have been accepted (happens when a lenient significance level is used, an error of optimism).

A type II error is when the null hypothesis is accepted when it should have been rejected (happens when a stringent significance level is used, an error of pessimism).
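
To make the trade-off concrete, here is a small simulation sketch with made-up population parameters: lowering the significance level from 0.05 to 0.01 reduces the type I error rate but increases the type II error rate. The effect size, sample size and number of simulations are arbitrary choices for illustration:

```python
# Simulation (invented parameters) of type I and type II error rates
# at two significance levels.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 2000, 20

for alpha in (0.05, 0.01):
    # Type I: the null is true (no real difference), but we reject it.
    type1 = np.mean([
        stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
        for _ in range(n_sims)
    ])
    # Type II: a real difference exists, but we fail to reject the null.
    type2 = np.mean([
        stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.5, 1, n)).pvalue >= alpha
        for _ in range(n_sims)
    ])
    print(f"alpha = {alpha}: type I rate ~ {type1:.2f}, type II rate ~ {type2:.2f}")
```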

Informed consent is when participants are able to make an informed judgement about whether to take part. However, revealing the aims may cause participants to guess what the study is about and change their behavior. To deal with this, we can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study and it is not guaranteed that the participants would fully understand.

Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study, but debriefing can't turn the clock back.

All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable. This can cause bias, as the participants who stay may be more obedient, and some may not withdraw because they have been given incentives or feel they would be spoiling the study. Researchers can offer the right to withdraw data after participation.

All participants should have protection from harm. The researcher should avoid risks greater than those experienced in everyday life and should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.

Confidentiality concerns the communication of personal information. Researchers should not record any names but use numbers or false names, though this may not always be enough, as it is sometimes possible to work out who the participants were.

How to reference this article:

McLeod, S. A. (2017). Research methods. Simply Psychology. www.simplypsychology.org/research-methods.html
