How to Train a (Bad) Algorithmic Caseworker: A Quantitative Deconstruction of Risk Assessments in Child Welfare
Abstract
CHI EA '22: Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems
Child welfare (CW) agencies use risk assessment tools as a means to achieve evidence-based, consistent, and unbiased decision-making. These risk assessments act as data collection mechanisms and have been further developed into algorithmic systems in recent years. Moreover, several of these algorithms have reinforced biased theoretical constructs and predictors because of the easy availability of structured assessment data. In this study, we critically examine the Washington Assessment of Risk Model (WARM), a prominent risk assessment tool that has been adopted by over 30 states in the United States and has been repurposed into more complex algorithmic systems. We compared WARM against the narrative coding of casenotes written by caseworkers who used WARM. We found significant discrepancies between the casenotes and the WARM data, where WARM scores did not mirror caseworkers’ notes about family risk. We provide the SIGCHI community with some initial findings from the quantitative deconstruction of a child-welfare risk assessment algorithm.
Document Type
Article
Publication Date
April 2022
Recommended Citation
Saxena, Devansh; Repaci, Charles; Sage, Melanie D.; and Guha, Shion PhD, "How to Train a (Bad) Algorithmic Caseworker: A Quantitative Deconstruction of Risk Assessments in Child Welfare" (2022). Health Services and Informatics Research. 89.
https://researchrepository.parkviewhealth.org/informatics/89