Risk assessment and algorithmic tools have become increasingly popular in recent years, particularly with respect to detention and incarceration decisions. The emergence of big data and the increased sophistication of algorithmic design hold the promise of more accurately predicting whether an individual is dangerous or a flight risk, overcoming human bias in decision-making, and reducing detention without compromising public safety. But these tools also carry the potential to exacerbate racial disparities in incarceration, create a false veneer of objective scientific accuracy, and spawn opaque decision-making by “black box” computer programs.
While scholars have focused much attention on how judges in criminal cases use risk assessment to inform pretrial detention decisions, they have paid little attention to whether immigration judges should use risk assessment when deciding whether to detain noncitizens. Yet the federal immigration detention system is one of the largest in the world, incarcerating nearly 400,000 noncitizens a year. Immigration courts contribute to unnecessary detention and deprivation of liberty due to serious structural flaws: immigration judges are prone to racial bias, they focus on factors unrelated to danger and flight risk, their bond decisions are opaque, and they are subject to undue political influence that encourages them to err on the side of detention rather than release.
Given the rise of algorithmic decision-making, the time has come to investigate whether risk assessment has a role to play in immigration court bond decisions. This Article suggests that while there is no easy answer, a well-designed and transparent risk assessment tool could provide a check against the worst features of the current immigration court bond system. Alternatively, even if risk assessment tools prove to be flawed, the information obtained from using them could provide support for broader reform of immigration detention.