Cicchetti (1994) gives often-cited interpretive guidelines for kappa and ICC inter-rater agreement measures.

Authors' contributions: All authors worked together on this manuscript. In particular, JYL, WT, and XMT made significant contributions to the correlation section; GQC, YL, and CYF made significant contributions to the agreement section; and JYL and XMT designed and finalized the manuscript. All authors have read and approved the final manuscript.

Note that, unlike correlation, the assessment of agreement makes no distinction between linear and non-linear association. This is because good agreement requires an approximately linear relationship between the outcomes. For example, in the case of two raters, good agreement requires that yi1 and yi2 be close, with yi1 = yi2 in the case of perfect agreement.

Agreement, or reproducibility, is another widely used approach to assessing the relationship between outcomes. As indicated in the introduction, the variables considered in an agreement analysis must measure the same construct, unlike the variables considered in a correlation analysis. Conversely, the correlation measures considered in Section 2 generally do not apply to agreement.
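The distinction above can be made concrete with a small numerical sketch. The data below are hypothetical (not from the paper): rater 2 scores every subject exactly 10 points higher than rater 1, so the Pearson correlation is perfect, yet yi1 and yi2 are never close, so agreement is poor.

```python
import numpy as np

# Hypothetical ratings of the same 6 subjects by two raters
# (illustrative data, not taken from the article).
y1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y2 = y1 + 10.0  # rater 2 is systematically 10 points higher

# Pearson correlation is perfect: the relationship is exactly linear.
r = np.corrcoef(y1, y2)[0, 1]

# But agreement is poor: the ratings are never close.
mean_abs_diff = np.mean(np.abs(y1 - y2))

print(round(r, 4))    # 1.0  (perfect correlation)
print(mean_abs_diff)  # 10.0 (large systematic disagreement)
```

A systematic shift like this is exactly the situation where a correlation measure would suggest a strong relationship while an agreement measure such as the ICC would not.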
Summary: Agreement and correlation are widely used concepts for assessing the association between variables. Although they are similar and related, they represent entirely different notions of association. Assessing agreement between variables assumes that the variables measure the same construct, whereas correlation can be evaluated for variables that measure completely different constructs. This conceptual difference calls for different statistical methods, and, whether agreement or correlation is being assessed, the appropriate statistical method may vary with the distribution of the data and the interest of the investigator. For example, the Pearson correlation, a popular measure of correlation between continuous variables, is informative only when applied to variables with a linear relationship; it can be uninformative, or even misleading, when applied to variables that are not linearly related. Similarly, the intraclass correlation, a popular measure of agreement between continuous variables, may not provide sufficient information to investigators if the type of disagreement is of interest. This report examines the concepts of agreement and correlation and discusses differences in the application of several frequently used measures.
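The claim that Pearson correlation can be misleading for non-linear relationships can be checked directly. In this hypothetical sketch (not from the paper), y is completely determined by x through y = x**2 over a symmetric range, yet the Pearson correlation is exactly zero, because Pearson captures only linear association.

```python
import numpy as np

# Hypothetical illustration: a perfect but non-linear relationship
# y = x**2 over a range symmetric about zero.
x = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
y = x ** 2

# y is a deterministic function of x, yet the Pearson correlation
# is 0 because the covariance of x and x**2 vanishes by symmetry.
r = np.corrcoef(x, y)[0, 1]
print(round(r, 10))  # 0.0
```

A rank-based measure would fare no better here; a plot of the data, or a non-linear association measure, is needed to reveal the dependence.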