The inter-rater reliability of mental capacity assessments.
Raymont V., Buchanan A., David AS., Hayward P., Wessely S., Hotopf M.
BACKGROUND: Assessing mental capacity involves complex judgements, and little information is available on the inter-rater reliability of capacity assessments. Assessment tools have been devised to offer guidelines. We aimed to assess the inter-rater reliability of judgements made by a panel of experts rating the same interview transcripts in which mental capacity had been assessed.

METHOD: We performed a cross-sectional study of consecutive acute general medical inpatients in a teaching hospital. Patients had a clinical interview and were assessed using the MacArthur Competence Assessment Tool for Treatment (MacCAT-T) and Thinking Rationally About Treatment (TRAT), two capacity assessment interviews. The assessment was audiotaped and transcribed. The raters were asked to judge, on the basis of the transcript, whether they thought the patient had mental capacity. We then divided participants into three groups: those in whom there was unanimous agreement that they had capacity; those in whom there was disagreement; and those in whom there was unanimous agreement that they lacked capacity.

RESULTS: We interviewed 40 patients. We found a high level of agreement between raters' assessments (mean kappa = 0.76). Compared with those unanimously thought not to have capacity, those unanimously thought to have capacity were more cognitively intact, more likely to be living independently, and performed consistently better on all subtests of the two capacity tools. The group in whom there was disagreement fell in between.

CONCLUSIONS: This study indicates that clinicians can rate mental capacity with a good level of consistency.
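The agreement statistic reported above (mean kappa = 0.76) is Cohen's kappa averaged over rater pairs. As a minimal sketch of how such a figure is derived, the following computes pairwise Cohen's kappa for binary capacity judgements; the rater panel and ratings below are hypothetical illustrations, not data from the study.

```python
# Sketch: Cohen's kappa for binary capacity judgements
# (1 = has capacity, 0 = lacks capacity). All ratings here are
# invented for illustration; the study's kappa = 0.76 refers to
# its real expert panel, not these toy numbers.

from itertools import combinations

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters over the same cases."""
    assert len(r1) == len(r2)
    n = len(r1)
    # Observed proportion of cases on which the two raters agree.
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Agreement expected by chance, from each rater's marginal rates.
    categories = set(r1) | set(r2)
    expected = sum(
        (r1.count(c) / n) * (r2.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical panel of three raters judging eight transcripts.
ratings = [
    [1, 1, 0, 1, 0, 0, 1, 1],
    [1, 1, 0, 1, 0, 1, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 1],
]

# Mean kappa over all rater pairs.
pairwise = [cohens_kappa(a, b) for a, b in combinations(ratings, 2)]
mean_kappa = sum(pairwise) / len(pairwise)
```

Kappa corrects raw percentage agreement for the agreement two raters would reach by chance given how often each says "has capacity"; values around 0.6-0.8, like the 0.76 reported here, are conventionally read as substantial agreement.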