Quality Assurance/Quality Control
Last modified on Jan 21, 2021
Inter- and Intra-rater reliability
There are many potential sources of error in any research project, and minimizing these errors increases confidence in the study results. Reliability with respect to scale readers falls into two categories: reliability across multiple scale readers, or inter-rater reliability, and reliability of a single scale reader over repeated readings, or intra-rater reliability. The assumption is that, presented with the same sample, a scale reader will produce the same estimate every time; however, Gwet (2014) provided examples where this assumption fails, reducing intra-rater reliability. Reader reliability is also affected by the fineness of discrimination that the samples require. If a variable has only two possible states, and the states are sharply differentiated, reliability is likely to be high: for example, a fish either survived or did not, and an otolith is either marked or not. On the other hand, when readers must discriminate among finer features, such as the width and number of scale annuli, both inter- and intra-rater reliability decline. Careful training of scale readers is therefore critical to increasing reader reliability.
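As a concrete illustration, inter-rater agreement between two scale readers can be summarized with Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. The sketch below is a minimal implementation under assumed inputs: the reader names and age estimates are hypothetical, not data from this document.

```python
from collections import Counter

def cohens_kappa(reader1, reader2):
    """Unweighted Cohen's kappa for two readers' categorical estimates."""
    assert len(reader1) == len(reader2) and len(reader1) > 0
    n = len(reader1)
    # Observed agreement: fraction of samples where both readers agree
    p_o = sum(a == b for a, b in zip(reader1, reader2)) / n
    # Chance agreement: from each reader's marginal category frequencies
    c1, c2 = Counter(reader1), Counter(reader2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical age estimates (years) for the same ten scales
reader_a = [3, 4, 4, 5, 3, 2, 4, 5, 3, 4]
reader_b = [3, 4, 5, 5, 3, 2, 4, 4, 3, 4]
print(round(cohens_kappa(reader_a, reader_b), 3))  # → 0.714
```

Values near 1 indicate strong agreement beyond chance; values near 0 indicate agreement no better than chance. The same function applied to one reader's two readings of the same scales gives a simple intra-rater summary.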
- Age error assessment
- Examples of labs accuracy and precision assessments
- Quality control monitoring
Gwet, K.L., 2014. Handbook of inter-rater reliability. Advanced Analytics, Gaithersburg, MD.