Measures of Interobserver Agreement and Reliability (English, Hardcover, Shoukri Mohamed M.)
Quick Overview
Agreement among two or more evaluators is an issue of prime importance to statisticians, clinicians, epidemiologists, psychologists, and many other scientists. Measuring interobserver agreement is a method used to evaluate inconsistencies in findings from different evaluators who collect the same or similar information. Highlighting applications over theory, Measures of Interobserver Agreement provides a comprehensive survey of this method and includes standards and directions for running sound reliability and agreement studies in clinical settings and other types of investigations.

The author clearly explains how to reduce measurement error, presents numerous practical examples of the interobserver agreement approach, and emphasizes measures of agreement among raters for categorical assessments. The models and methods are considered in two different but closely related contexts: 1) assessing agreement among several raters where the response variable is continuous, and 2) where the investigators have decided in advance to use categorical scales to judge the subjects enrolled in the study. While the author thoroughly discusses the practical and theoretical issues of case 1, a major portion of the book is devoted to case 2. He explores issues such as two raters randomly judging a group of subjects, interrater bias and its connection to marginal homogeneity, and statistical issues in determining sample size.

Statistical analyses of real and hypothetical datasets are presented to demonstrate the various applications of the models in repeatability and validation studies. To help with problem solving, the monograph includes SAS code, both within the book and on the CRC Web site. The author presents the material with the right amount of mathematical detail, making this a cohesive book that reflects new research and the latest developments in the field.
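For readers unfamiliar with measures of agreement for categorical ratings, the standard chance-corrected index for two raters is Cohen's kappa. The following is a minimal Python sketch of that computation, offered only as orientation: the book itself supplies SAS code, and the function name and toy data below are illustrative assumptions, not taken from the book.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters on categorical labels.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion
    of agreement and p_e is the agreement expected by chance from the two
    raters' marginal label frequencies.
    """
    if len(ratings_a) != len(ratings_b) or not ratings_a:
        raise ValueError("need two equal-length, non-empty rating lists")
    n = len(ratings_a)
    # Observed agreement: fraction of subjects both raters labelled the same.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement under independence, from the marginal frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    if p_e == 1.0:  # degenerate case: both raters always use one category
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters classify 10 subjects as "pos"/"neg".
rater1 = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg", "neg", "pos"]
rater2 = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "pos"]
print(f"kappa = {cohens_kappa(rater1, rater2):.3f}")  # 0.600 for these data
```

Here the raters agree on 8 of 10 subjects (p_o = 0.8) but would agree on half by chance alone (p_e = 0.5), so kappa = 0.6: the kind of distinction between raw and chance-corrected agreement that the book develops in depth for the categorical case.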