Which type of reliability are you estimating when you administer the same test on two different occasions?

When we call someone or something reliable, we mean that they are consistent and dependable. Reliability is also an important component of a good psychological test. After all, a test would not be very valuable if it were inconsistent and produced different results every time. How do psychologists define reliability? What influence does it have on psychological testing?

Reliability refers to the consistency of a measure. A test is considered reliable if we get the same result repeatedly. For example, if a test is designed to measure a trait (such as introversion), then each time the test is administered to a subject, the results should be approximately the same. Unfortunately, it is impossible to calculate reliability exactly, but it can be estimated in a number of different ways.

Test-retest reliability is a measure of the consistency of a psychological test or assessment across time. It is best suited to qualities that are stable over time, such as intelligence.

Test-retest reliability is measured by administering a test twice at two different points in time. This type of reliability assumes that there will be no change in the quality or construct being measured. In most cases, reliability will be higher when little time has passed between tests.
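As a minimal, hypothetical sketch of how that consistency is often quantified, the snippet below correlates two administrations of the same test. The scores and the pearson_r helper are invented for illustration and are not drawn from any particular instrument.

```python
# Minimal sketch: estimating test-retest reliability as the correlation
# between two administrations of the same test. All scores are hypothetical.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

time_1 = [12, 18, 25, 30, 22, 15, 28, 20]  # scores at the first administration
time_2 = [14, 17, 27, 29, 21, 16, 26, 22]  # same people, second administration

print(f"Test-retest reliability (r): {pearson_r(time_1, time_2):.2f}")
```

A value close to 1 suggests the test produces consistent scores from one occasion to the next; values near 0 suggest it does not.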

The test-retest method is just one of the ways that can be used to determine the reliability of a measurement. Other techniques that can be used include inter-rater reliability, internal consistency, and parallel-forms reliability.

It is important to note that test-retest reliability only refers to the consistency of a test, not necessarily the validity of the results.

Inter-rater reliability is assessed by having two or more independent judges score the same test. The scores are then compared to determine the consistency of the raters' estimates.

One way to test inter-rater reliability is to have each rater assign each test item a score. For example, each rater might score items on a scale from 1 to 10. Next, you would calculate the correlation between the two ratings to determine the level of inter-rater reliability.

Another means of testing inter-rater reliability is to have raters determine which category each observation falls into and then calculate the percentage of agreement between the raters. So, if the raters agree 8 out of 10 times, the test has an 80% inter-rater reliability rate.
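To make both approaches concrete, here is a small hypothetical sketch: it correlates two raters' 1-to-10 scores on the same items, then computes percent agreement for categorical judgments. The ratings and categories are invented, and statistics.correlation is a standard-library function that requires Python 3.10 or later.

```python
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical scores two raters gave the same ten test items (1-10 scale).
rater_a = [7, 5, 9, 6, 8, 4, 7, 6, 9, 5]
rater_b = [8, 5, 9, 7, 7, 4, 6, 6, 9, 6]

# Approach 1: correlate the two raters' numeric scores.
r = statistics.correlation(rater_a, rater_b)
print(f"Inter-rater correlation: {r:.2f}")

# Approach 2: percent agreement for categorical judgments.
# Hypothetical categories each rater assigned to ten observations.
cats_a = ["on-task", "off-task", "on-task", "on-task", "off-task",
          "on-task", "on-task", "off-task", "on-task", "on-task"]
cats_b = ["on-task", "off-task", "on-task", "off-task", "off-task",
          "on-task", "on-task", "off-task", "off-task", "on-task"]

agreements = sum(a == b for a, b in zip(cats_a, cats_b))
print(f"Percent agreement: {agreements / len(cats_a):.0%}")  # 8 of 10 -> 80%
```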

Parallel-forms reliability is gauged by comparing two different tests that were created using the same content. This is accomplished by creating a large pool of test items that measure the same quality and then randomly dividing the items into two separate tests. The two tests should then be administered to the same subjects at the same time.
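A rough sketch of that procedure, with a simulated item pool standing in for real data: the items are shuffled, split into two forms, each person is scored on both forms, and the two form scores are correlated. The simulate_person helper and all numbers are illustrative assumptions, not part of the original description.

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(0)

# Hypothetical item pool measuring one trait: each simulated person has a
# trait level that drives their chance of endorsing each of 20 items.
def simulate_person(trait_level, n_items=20):
    return [1 if random.random() < trait_level else 0 for _ in range(n_items)]

people = [simulate_person(random.random()) for _ in range(30)]

# Randomly divide the item pool into two parallel forms.
item_ids = list(range(20))
random.shuffle(item_ids)
form_a_items, form_b_items = item_ids[:10], item_ids[10:]

# Score every person on both forms (administered at the same time).
form_a = [sum(person[i] for i in form_a_items) for person in people]
form_b = [sum(person[i] for i in form_b_items) for person in people]

print(f"Parallel-forms reliability (r): {statistics.correlation(form_a, form_b):.2f}")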

Internal consistency reliability is used to judge the consistency of results across items on the same test. Essentially, you are comparing test items that measure the same construct to determine the test's internal consistency.

When you see a question that seems very similar to another test question, it may indicate that the two questions are being used to gauge reliability.

Because the two questions are similar and designed to measure the same thing, the test taker should answer both questions the same way, which indicates that the test has internal consistency.
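As a small hypothetical illustration of that idea: if two items are written to tap the same construct, responses to them should correlate across test takers. The items and response values below are invented.

```python
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical responses (1-5 agreement scale) from ten test takers to two
# questions written to measure the same construct (e.g., introversion).
item_a = [4, 2, 5, 3, 4, 1, 5, 2, 3, 4]   # "I prefer quiet evenings at home."
item_b = [5, 2, 4, 3, 4, 2, 5, 1, 3, 4]   # "Large parties drain my energy."

r = statistics.correlation(item_a, item_b)
print(f"Correlation between the two similar items: {r:.2f}")
# A high positive correlation suggests the items support internal consistency;
# a low one suggests at least one item may be measuring something else.
```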

There are a number of different factors that can have an influence on the reliability of a measure. First and perhaps most obviously, it is important that the thing that is being measured be fairly stable and consistent. If the measured variable is something that changes regularly, the results of the test will not be consistent.

Aspects of the testing situation can also have an effect on reliability. For example, if the test is administered in a room that is extremely hot, respondents might be distracted and unable to complete the test to the best of their ability. This can have an influence on the reliability of the measure.

Other things like fatigue, stress, sickness, motivation, poor instructions and environmental distractions can also hurt reliability.

It is important to note that just because a test has reliability it does not mean that it has validity. Validity refers to whether or not a test really measures what it claims to measure.

Think of reliability as a measure of precision and validity as a measure of accuracy. In some cases, a test might be reliable, but not valid.

For example, imagine that job applicants are taking a test to determine if they possess a particular personality trait. While the test might produce consistent results, it might not actually be measuring the trait that it purports to measure.


Researchers are vital to many industries, helping companies and organizations make advancements and appeal to customers. To conduct accurate research, they often use assessments to determine whether their research methods are producing reliable results. Learning how to test for reliability can help you succeed in your role as a researcher. In this article, we define the four types of research reliability assessments, discuss how to test for reliability in research and examine tips to help you get the best results.


What is research reliability?

Research reliability refers to whether research methods can reproduce the same results multiple times. If your research methods can produce consistent results, then the methods are likely reliable and not influenced by external factors. This valuable information can help you determine if your research methods are accurately gathering data you can use to support studies, reviews and experiments in your field.

How do you determine reliability in research?

To determine if your research methods are producing reliable results, you must perform the same task multiple times or in multiple ways. Typically, this involves changing some aspect of the research assessment while maintaining control of the research. For example, this could mean using the same test on different groups of people or using different tests on the same group of people. Both methods maintain control by keeping one element exactly the same and changing other elements to ensure other factors don't influence the research results.


Jobs that use reliability assessments for research

Many fields rely on researchers to find information and analyze data that can improve outcomes for a company, make better products for customers or increase public wellness. Most research jobs use some form of reliability testing to ensure their data is reliable and useful for their employers' purposes. Here are some careers that often test for reliability in data:

  • Media sociologists

  • Food scientists

  • Forensic science technicians

  • Marketing analysts

  • Medical scientists

  • Economists

  • Policy analysts

  • Behavioral scientists

  • Business analysts


4 Types of reliability in research

Depending on the type of research you're doing, you can choose between a few reliability assessments. The most common ways to check for reliability in research are:

1. Test-retest reliability

The test-retest reliability method in research involves giving a group of people the same test more than once over a set period of time. In this assessment, the research method and sample group stay the same; only the time at which you administer the test changes. If the results are similar each time you give the test to the sample group, that shows your research method is likely reliable and not influenced by external factors, like the sample group's mood or the day of the week.

Example: Give a group of college students a survey about their satisfaction with their school's parking lots on Monday and again on Friday, then compare the results to check the test-retest reliability.

2. Parallel forms reliability

When using parallel forms reliability to assess your research, you give the same group of people several different tests to determine whether the results stay the same when you use different research methods. The theory behind this assessment is that consistent results across methods suggest each method is gathering the same information from the group and the group is behaving similarly on each test. If the methods were not reliable, participants in the sample group might behave differently and change the results.

Example: In marketing, you may interview customers about a new product, observe them using the product and give them a survey about how easy the product is to use, then compare these results as a parallel forms reliability test.

3. Inter-rater reliability

With inter-rater reliability testing, you have multiple people perform assessments on a sample group and compare their results to guard against influencing factors, like an assessor's personal bias, mood or human error. If most of the results from different assessors are similar, the research method is likely reliable and can produce usable research because the assessors gathered the same data from the group. This is useful for research methods like observations, interviews and surveys, where each assessor may apply different criteria but can still end up with similar results.

Example: Multiple behavioral specialists may observe a group of children playing to determine their social and emotional development and then compare notes to check for inter-rater reliability.

4. Internal consistency reliability

There are two typical ways to check for internal consistency reliability in your research, both of which involve making sure your research methods, or parts of them, deliver the same results. One technique is split-half reliability: you split a research method, like a survey or test, in half, deliver both halves separately to a sample group and compare the results. If the results from the two halves are consistent, the research method is likely reliable.

The other internal consistency test checks for average inter-item reliability. With this assessment, you administer multiple test items to a sample group, as in parallel forms reliability testing, and calculate the correlation between the results for each pair of items. You then average these correlations and use that number to determine whether the results are reliable.

Example: You may give a company's cleaning department a questionnaire about which cleaning products work best, but split it in half, give each half to the department separately and calculate the correlation between the two halves to test for split-half reliability, as in the sketch below.
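Here is a minimal sketch of that split-half check, assuming a made-up set of product ratings. The odd/even item split and the Spearman-Brown correction (a common adjustment for estimating full-length reliability) are illustrative choices, not steps the article prescribes.

```python
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical responses: each row is one cleaning-department employee's
# ratings (1-5) of how well eight products work.
responses = [
    [4, 5, 4, 3, 5, 4, 4, 3],
    [2, 3, 2, 2, 3, 2, 3, 2],
    [5, 5, 4, 5, 4, 5, 5, 4],
    [3, 3, 4, 3, 3, 4, 3, 3],
    [1, 2, 1, 2, 2, 1, 2, 1],
    [4, 4, 5, 4, 4, 5, 4, 5],
]

# Split the questionnaire in half (odd items vs. even items), score each half.
half_1 = [sum(row[0::2]) for row in responses]
half_2 = [sum(row[1::2]) for row in responses]

r_halves = statistics.correlation(half_1, half_2)

# Spearman-Brown correction: estimates reliability of the full-length questionnaire.
split_half_reliability = 2 * r_halves / (1 + r_halves)

print(f"Correlation between halves: {r_halves:.2f}")
print(f"Split-half reliability (Spearman-Brown): {split_half_reliability:.2f}")
```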

Later, you interview the members of the cleaning department, bring them into small focus groups and observe them at work to determine which cleaning products get the most use and which ones people like best. You calculate the correlation between these answers and average the results to find the average inter-item reliability, illustrated in the sketch below.
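And a companion sketch of the averaging step, again with invented numbers: three measures of the same question (interview rating, focus group rating, observation count) are correlated pairwise and the correlations are averaged.

```python
from itertools import combinations
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical data: for each employee, a usefulness rating of one product
# gathered three different ways (interview, focus group, on-the-job observation).
scores = {
    "interview":   [4, 2, 5, 3, 4, 1, 5, 3],
    "focus_group": [4, 3, 5, 3, 3, 2, 4, 3],
    "observation": [5, 2, 4, 3, 4, 1, 5, 2],
}

# Correlate every pair of measures, then average the correlations.
pair_correlations = [
    statistics.correlation(scores[a], scores[b])
    for a, b in combinations(scores, 2)
]
average_inter_item = statistics.mean(pair_correlations)

print(f"Pairwise correlations: {[round(r, 2) for r in pair_correlations]}")
print(f"Average inter-item reliability: {average_inter_item:.2f}")
```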


Tips for testing the reliability of research methods

As you do research and review the results, consider the following tips for testing the reliability of your research methods and ensuring you have consistency in your work:

  • Plan ahead. Planning is a key step for most scientific experiments, so try to plan your research methods and studies in advance to ensure you and your team are prepared. You may plan where to conduct research with your sample group, how to distribute testing materials or criteria and how to describe the purpose of the research to participants.

  • Make note of the environment. If you're conducting research with the same sample group multiple times, it's often a good idea to make a note of what the environment is like when the group undergoes testing. This is because certain factors, like whether it's raining, the room is cold or someone is coughing, can influence the group's willingness to participate fully.

  • Consider the participants. As you create your research materials, remember to consider how your sample group might respond to and understand the materials you present to them. For example, a group of children may need a simple set of survey questions read to them, while a group of adults could likely read more complicated questions themselves.

  • Review results thoroughly. While comparing the results of your research, review the results thoroughly to make sure you catch any errors and accurately determine the reliability of the results. You may even ask a colleague to examine the results with you and offer their opinion on the reliability of the data gathered by your research methods.

  • Think about the type of research. Different types of research may benefit from certain reliability tests more than others because each field of research measures different things. In marketing, you may regularly use various focus groups to determine a product's appeal, while a sociologist might observe behavior and compare notes regularly to get various professional opinions on a matter.