Abstract
In recent years, several UK police forces have increasingly adopted AI-driven tools for analysing biodata, including biometric recognition, to enhance crime investigation and prediction. These systems, while offering significant efficiencies, operate on probabilistic outputs that carry inherent uncertainties and biases. When these outputs are linked or "chained" across a sequence of AI-driven decisions, errors and biases can be amplified, leading to cascading effects throughout the criminal justice process. This raises critical concerns about reliability and fairness, particularly in evidential contexts. This paper explores these challenges by drawing lessons from the historical and contemporary use of tools, such as polygraphs and DNA technologies, that are similarly marked by probabilistic outputs and contested evidential value. It compares the UK approach with the Australian experience, highlighting differences in legal standards, evidential acceptance, and governance frameworks. This comparison is particularly relevant in light of ongoing discussions within the UK regarding proposed changes to the law on the admissibility of computer-generated evidence. The paper argues for a responsible AI governance model that learns from past technologies, emphasising transparency, accountability, and continuous oversight to mitigate the risks associated with "chaining" AI systems in biodata analysis.
Original language | English |
---|---|
Publication status | Accepted/In press - 3 Apr 2025 |
Event | Biodata, Surveillance and Society - University of Oslo, Norway |
Duration | 20 Nov 2025 → 21 Nov 2025 |
Internet address | https://www.jus.uio.no/ikrs/english/research/projects/digitaldna/events/conference-biodata-surveillance-society-2025.html |
Conference
Conference | Biodata, Surveillance and Society |
---|---|
Country/Territory | Norway |
Period | 20/11/25 → 21/11/25 |
Internet address | https://www.jus.uio.no/ikrs/english/research/projects/digitaldna/events/conference-biodata-surveillance-society-2025.html |