Abstract
Summary of Recommendations
• Taking ‘Computer Evidence’ Seriously: The original provision from the Police and
Criminal Evidence Act 1984 (PACE) s.69 as well as the succeeding presumption
introduced by the Law Commission are outdated in light of today’s technological
advances. Many parts of the criminal justice system now deploy
probabilistic algorithmic models (AI systems that use machine learning to make
decisions based on probabilities or likelihoods), and their uncertain outputs are being used
in evidentiary proceedings. This goes beyond the computerisation of records and
production of documents. The various uses of software-generated output include
biometric technologies to identify suspects, Generative AI to produce witness
statements and crime reports, and risk-prediction tools which can be used to identify
individuals and areas of interest to the police. We therefore propose that the terms ‘digital evidence’ and ‘software-generated digital evidence’ be used instead of ‘computer evidence’. Here, software-generated digital evidence refers to digital evidence that is the direct output of a computer process (as opposed to digital evidence that is a recording of human action or natural phenomena). We suggest that the statute or guidance could define “software-generated digital evidence” as “information intended to be relied upon as evidence of a fact, which information was produced by the application of computer software or an automated algorithm, rather than by direct human perception or input.” Specifically, we propose a three-tier hierarchical framework – complex probabilistic systems, semi-automated technologies, and regulated, rule-based devices – designed to calibrate legal scrutiny to the complexity, transparency, and regulatory oversight of each technology; a schematic sketch of this hierarchy follows this list.
• Shifting the Burden of Proof to Prosecution and Expert Witnesses: The current
presumption undermines the presumption of innocence, as illustrated in the Post-
Office’s Horizon technology scandal. Currently, the respective computer system is
assumed to be right, unless the defendant can prove otherwise. Many algorithmic
systems operate as a ‘black box’, and machine-learning systems may change continuously as they learn. Even a
software expert may find it challenging to explain the workings of an algorithm, let
alone a defendant who may lack such expertise, thereby posing challenges of
evidentiary reliability. Whilst reverse burden of proof clauses (also known as reverse
onus clauses) are not per se unlawful, this particular one is not only unmanageable for the defendant but also deleterious to the legal system. This became clear recently from
the inability of the sub-postmasters to rebut the presumption (that Fujitsu’s Horizon
system was working correctly) and prove their innocence. It is thus important to shift
the burden of proof for software-generated digital evidence. This would also involve
emphasising the need for the prosecution to acknowledge that errors of probabilistic
technology constitute important exculpatory evidence. With the proven biases and
errors associated with software-generated outputs, including biometric recognition
technologies, the responsibility for proving the reliability of a system should
rest with software developers and with the prosecution. This includes the need
for the Ministry of Justice to ensure specialist expertise in software engineering, digital
forensics, and related fields for court cases involving software-generated outputs.
• Increased Scrutiny of Digital Evidence: There should be robust disclosure obligations
regarding the reliability of digital systems used to produce evidentiary outputs. The
manufacturer of the technology and the user (the police or the prosecution) should
provide reports of errors or problems, maintenance and update records, transaction
logs, and information on whether the system has any remote access or human
interventions. This increased scrutiny should also extend to the courts, incorporating pre-trial case management processes for the assessment of evidence, such as reliability hearings at which the prosecution must justify its reliance on software-generated outputs. As a result, the trial process involving digital evidence can be streamlined.
• Auditability and Record-keeping: To ensure that the system generating evidential
output is functioning as required, such systems (particularly complex probabilistic ones) should undergo a quality assurance or certification process. A detailed record of logs, audit trails, and accuracy tests can be presented in court to
ensure accountability for how the system operates and produces outputs; an illustrative
sketch of such an audit record also follows this list. It is important that this audit be
conducted independently, and its legitimacy could be underpinned, for example, by
certification against recognised industry standards. Therefore, there is
an urgent need for an overarching regulatory framework (either through the
establishment of a new regulatory body or the restructuring of existing bodies) to
provide central oversight, accreditation, and continuous monitoring of algorithmic
tools used in evidentiary contexts. The level of scrutiny on digital evidence should be
dependent on the complexities and ambiguities associated with the technology. In
addition, if documents have been produced using an algorithmic system, such as a
police incident report or a witness statement, there should be an obligation to include
a statement that clarifies the use of the algorithmic system (such as a Generative AI
tool).
• Monitoring and Continuous Improvement: It is important that the effects of the
reforms to digital evidence are continuously monitored and assessed. Regular
reviews will ensure that our legal system does not fall behind advances in
emerging technologies; reforms should not be allowed to stagnate. As we have seen,
the Law Commission’s presumption remained in place for many years and led to
miscarriages of justice. The implementation of reforms entails significant resource
implications. The expertise required to facilitate adequate knowledge of new
technologies, including judicial training, can be costly. Therefore, the reforms should
be well planned in terms of budgeting. It is not enough to implement a reform; those
affected by it must have adequate resources for the reform to be effective.
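
For readers who find a schematic helpful, the proposed three-tier hierarchy could be expressed roughly as follows. This is purely an illustrative sketch in Python: the tier labels come from the recommendation above, but the example systems and the scrutiny mapping are our assumptions, not prescribed text.

```python
from enum import Enum

class EvidenceTier(Enum):
    """Sketch of the proposed three-tier framework (labels from the
    recommendation; everything else here is illustrative)."""
    COMPLEX_PROBABILISTIC = 1   # e.g. facial recognition, risk prediction
    SEMI_AUTOMATED = 2          # e.g. tools with a human analyst in the loop
    REGULATED_RULE_BASED = 3    # e.g. certified, routinely calibrated devices

# Hypothetical mapping: scrutiny is calibrated to the complexity,
# transparency, and regulatory oversight of each tier.
SCRUTINY = {
    EvidenceTier.COMPLEX_PROBABILISTIC: "full reliability hearing and independent audit",
    EvidenceTier.SEMI_AUTOMATED: "disclosure of logs plus targeted expert review",
    EvidenceTier.REGULATED_RULE_BASED: "proof of current certification and maintenance",
}

print(SCRUTINY[EvidenceTier.COMPLEX_PROBABILISTIC])
```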
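
Similarly, the kind of tamper-evident audit record envisaged under ‘Auditability and Record-keeping’ might look like the sketch below. All field names, the tool name ExampleFaceMatch, and the hash-chaining design are hypothetical assumptions, not a proposed standard.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class EvidenceAuditRecord:
    """One entry in a tamper-evident audit trail for a software-generated
    output. Field names are illustrative, not a prescribed format."""
    system_name: str      # the tool that produced the output
    system_version: str   # exact software/model version in use at the time
    generated_at: str     # ISO 8601 timestamp of the output
    output_summary: str   # what the system produced (e.g. a match score)
    confidence: float     # probabilistic systems report likelihoods, not facts
    human_reviewed: bool  # whether a person checked the output
    ai_disclosure: str    # the statement disclosing algorithmic involvement
    prev_hash: str        # hash of the previous record, chaining the log

    def record_hash(self) -> str:
        # Hash the record together with the previous record's hash, so any
        # later alteration of an earlier entry breaks every subsequent hash.
        payload = json.dumps(asdict(self), sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()

# Example: two chained entries for a hypothetical biometric tool.
first = EvidenceAuditRecord(
    system_name="ExampleFaceMatch",
    system_version="2.3.1",
    generated_at=datetime.now(timezone.utc).isoformat(),
    output_summary="candidate match against custody image",
    confidence=0.87,
    human_reviewed=True,
    ai_disclosure="This output was produced by an automated biometric system.",
    prev_hash="0" * 64,  # genesis entry
)
second = EvidenceAuditRecord(
    system_name="ExampleFaceMatch",
    system_version="2.3.1",
    generated_at=datetime.now(timezone.utc).isoformat(),
    output_summary="second candidate reviewed by an analyst",
    confidence=0.64,
    human_reviewed=False,
    ai_disclosure="This output was produced by an automated biometric system.",
    prev_hash=first.record_hash(),  # links the entries into a chain
)
print(second.prev_hash == first.record_hash())  # True while the log is intact
```

Chaining each entry to its predecessor’s hash means any retrospective edit invalidates every later hash, which is one concrete way an independent auditor could verify that disclosed logs have not been altered after the fact.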
| Original language | English |
| --- | --- |
| Publisher | Ministry of Justice UK |
| Number of pages | 24 |
| Publication status | Submitted - 14 Apr 2025 |