[Image: A fish labeled 'Accurate Data' swimming in a sea labeled 'Context'.]

Coding/Rating Open-Ended Responses

Tip: Make the coding/rating procedures as anonymous as possible. Have participants put their names and any other identifying information at the end of their responses or on a separate page so coders/raters don't see it.
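One simple way to do this (a minimal sketch, assuming responses live in a spreadsheet with hypothetical identifying columns such as "name", "email", and "student_id") is to drop the identifying fields and shuffle the row order before the file is handed to coders/raters:

    import pandas as pd

    # Hypothetical file and column names -- adjust to your own data.
    responses = pd.read_csv("survey_responses.csv")

    # Drop identifying fields and shuffle the row order so coders/raters
    # cannot infer identity from a name column or from position in the file.
    deidentified = (
        responses.drop(columns=["name", "email", "student_id"])
        .sample(frac=1, random_state=42)
        .reset_index(drop=True)
    )

    deidentified.to_csv("responses_for_coding.csv", index=False)

Keeping a separate, securely stored key that links the shuffled rows back to respondents preserves the ability to merge ratings with other data later.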

Rationale: Knowing the gender, race/ethnicity, and even the first name of the person whose work is being rated has been shown to impact ratings.1 For example, people rated statements as less true when they were spoken by non-native speakers,2 and academics rated job applications for lab manager and instructor positions with male names higher than identical applications with female names (although there were no differences in their ratings of tenure applications).3

Tip: Have more than one coder/rater code the responses and check for inter-coder/rater reliability. If the reliability is not high enough, have coders/raters discuss the rationale behind their ratings and recode.
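A common reliability check for two coders is Cohen's kappa. Below is a minimal sketch, assuming each coder's codes are stored in parallel lists and that scikit-learn is available; the category labels and the 0.80 cutoff are illustrative, not a fixed standard:

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical codes assigned by two independent coders to the same
    # ten open-ended responses.
    coder_1 = ["barrier", "facilitator", "barrier", "neutral", "barrier",
               "facilitator", "neutral", "barrier", "facilitator", "neutral"]
    coder_2 = ["barrier", "facilitator", "neutral", "neutral", "barrier",
               "facilitator", "neutral", "barrier", "barrier", "neutral"]

    kappa = cohen_kappa_score(coder_1, coder_2)
    print(f"Cohen's kappa: {kappa:.2f}")

    # A common (though debated) rule of thumb treats kappa >= 0.80 as
    # strong agreement; lower values suggest discussing and recoding.
    if kappa < 0.80:
        print("Agreement is low -- discuss rating rationales and recode.")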

Tip: Unless the coding is being based on grounded theory, develop and test the coding/rating protocol in advance of doing the data analysis.

Rationale: While some question whether it is possible to generate reliable codings/ratings of open-ended responses, others consider inter-coder/rater reliability a useful concept in settings characterized by applied, multidisciplinary, or team-based work. Establishing high inter-coder/rater reliability is an attempt to reduce error and bias.4

1 Anderson-Clark, T., Green, R. & Henley, T. (2008). The relationship between first names and teacher expectations for achievement motivation. Journal of Language & Social Psychology, 27(1), 94-99.
Correll, S. J., & Benard, S. (2006). Biased estimators? Comparing status and statistical theories of gender discrimination. In S. R. Thye & E. J. Lawler (Eds.), Advances in group processes (Vol. 23, pp. 89-116). New York, NY: Elsevier.
Pellegrini, A. D. (2011). “In the eye of the beholder”: Sex bias in observations and ratings of students' aggression. Educational Researcher, 40(6), 281-286. doi:10.3102/0013189X11421983
2 Lev-Ari, S., & Keysar, B. (2010). Why don't we believe non-native speakers? The influence of accent on credibility. Journal of Experimental Social Psychology, 46(6), 1093-1096. doi:10.1016/j.jesp.2010.05.025
3 Moss-Racusin, C. A., Dovidio, J. F., Brescoll, V. L., Graham, M. J., & Handelsman, J. (2012). Science faculty's subtle gender biases favor male students. PNAS Early Edition, 1-6.
Steinpreis, R. E., Anders, K. A., & Ritzke, D. (1999). The impact of gender on the review of curricula vitae of job applicants and tenure candidates: A national empirical study. Sex Roles, 41(7/8), 509-528. doi:10.1023/A:1018839203698
4 Hruschka, D. J., Schwartz, D., St. John, D. C., Picone-Decaro, E., Jenkins, R. A., & Carey, J. W. (2004). Reliability in coding open-ended data: Lessons learned from HIV behavioral research. Field Methods, 16(3), 307-331.