Technical Note
Kappa test
Selim Kılıç

Abstract: The kappa coefficient is a statistic that measures inter-rater agreement for categorical items. It is generally considered a more robust measure than a simple percent-agreement calculation, since κ takes into account the agreement occurring by chance. Cohen's kappa measures agreement between two raters only, whereas Fleiss' kappa is used when there are more than two raters. κ may take a value between -1 and +1. A value of +1 implies perfect agreement between the two raters, while -1 implies perfect disagreement. A kappa of 0 implies that there is no relationship between the ratings of the two observers, and any agreement or disagreement is due to chance alone.

Key words: observer, agreement, due to chance
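As a minimal sketch of the idea described above, the snippet below computes Cohen's kappa for two raters from first principles, using the standard definition κ = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e the agreement expected by chance from the raters' marginal proportions. The function name, variable names, and the example data are illustrative and not taken from the article.

```python
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Return Cohen's kappa for two equal-length lists of category labels."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("ratings must be non-empty and of equal length")

    n = len(rater_a)

    # Observed agreement: proportion of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement: for each category, the product of the two raters'
    # marginal proportions, summed over all categories.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(counts_a) | set(counts_b))

    if p_e == 1.0:  # degenerate case: chance agreement is already total
        return 1.0 if p_o == 1.0 else 0.0
    return (p_o - p_e) / (1 - p_e)


if __name__ == "__main__":
    # Two hypothetical raters classifying 10 items as "yes"/"no".
    a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
    b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
    print(f"kappa = {cohens_kappa(a, b):.3f}")
```

In this toy example the raters agree on 8 of 10 items (p_o = 0.80) while chance alone would predict p_e = 0.52, giving κ ≈ 0.58; extending the same chance-correction idea to more than two raters leads to Fleiss' kappa.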
REFERENCES
1. Cohen J. A coefficient of agreement for nominal scales. Educational and Psychological Measurement. 1960;20:37-46.
2. Gordis L. Epidemiology. 5th ed. Elsevier Saunders Inc.; 2014. p. 107-10.
3. Fleiss JL. Measuring nominal scale agreement among many raters. Psychological Bulletin. 1971;76:378-82.
4. Dawson B, Trapp RG. Basic and Clinical Biostatistics. 3rd ed. Lange Medical Books/McGraw-Hill; 2004. p. 115-16.
5. Sim J, Wright CC. The kappa statistic in reliability studies: use, interpretation, and sample size requirements. Physical Therapy. 2005;85:257-68.
6. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159-74.
7. Fleiss JL. Statistical Methods for Rates and Proportions. 2nd ed. New York, NY: John Wiley & Sons Inc; 1981.
8. Gwet K. Handbook of Inter-Rater Reliability. 2nd ed. 2010.
9. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Family Medicine. 2005;37:360-3.