The legal assumption that computer evidence is always right was one of the many reasons why over 700 sub-postmasters were wrongly prosecuted. Dr Sam De Silva CITP FBCS explains how the law works, why it may have been drafted this way, and how it needs changing.
The Post Office Horizon IT scandal is a human tragedy of unparalleled scale. Last month, hundreds of postmasters whose lives were ruined by the scandal saw their wrongful criminal convictions quashed.
This is not, of course, the end of the scandal or the story. The Post Office Horizon IT Inquiry — an independent public statutory inquiry established to gather a clear account of the implementation and failings of the Horizon IT system at the Post Office — continues to hear evidence.
As you read on, BCS’ Dr Sam De Silva FBCS explores what part the legal assumption that ‘the computer is always right’ could have played in the scandal.
Speaking to BCS, Sam said: ‘If the Post Office had been required to prove that its computer system was operating reliably, the case may well have had a different outcome.’
So, why don’t you introduce yourself?
I am a qualified solicitor and a partner at a global law firm, CMS Cameron McKenna Nabarro Olswang LLP (CMS). I am also the co-head of CMS’ global commercial practice group which consists of about 500 commercial lawyers across the world.
Given my specialism in technology and law I became involved in the BCS Law Specialist Group and was elected as its chairperson in 2020. I’m also a member of BCS Fellows Technical Advisory Group (F-TAG) and sit on BCS’ Influence Board.
What does the law say today about computer evidence in trials?
The common law presumption is that a computer producing evidential records was working properly at the material time, and that the record is therefore admissible as real evidence, unless there is evidence to the contrary. In other words, evidence produced by computers is treated as reliable unless other evidence suggests otherwise. This way of handling evidence is known as a ‘rebuttable presumption’: a court will treat a computer as if it is working perfectly unless someone can show that it is not.
Did the law always assume computer evidence was correct by default?
No. During the 1980s, when computers began to be used in everyday life, it was necessary to consider how evidence in electronic form was to be presented in criminal proceedings.
At a high level, in criminal proceedings, evidence from a computer document was considered indirect or ‘hearsay’ evidence. Courts were cautious with such evidence because it was based on second-hand knowledge: the person presenting it did not have first-hand experience of the information. As a consequence, the general rule was (and still is) that hearsay evidence in criminal proceedings is not admissible unless one of the exceptions to the rule against hearsay applies.
Section 69 of the Police and Criminal Evidence Act 1984 (PACE 1984) provided a solution to the admissibility of computer evidence. The section said that, before any statement in a document produced by a computer could be admitted in evidence, the prosecution had to prove that the computer was operating properly. It was also necessary to show that the computer wasn’t being used improperly at the relevant time.
So the prosecution had to prove a computer was working correctly when the evidence was gathered. Why was this position changed?
It was changed because the volumes of computer evidence increased and, as a result, the requirement under PACE 1984 became burdensome and inconvenient.
In 1995, the Law Commission released a consultation paper, Evidence in Criminal Proceedings: Hearsay and Related Topics – A Consultation Paper, which provisionally proposed that section 69 of PACE 1984 should be repealed without replacement.
A formal recommendation to repeal the section followed shortly afterwards, in 1997.
In 2000, section 69 of PACE 1984 was repealed by section 60 of the Youth Justice and Criminal Evidence Act 1999 (YJCEA 1999), without replacement.
In summary, in 2000 the Law Commission’s 1995 recommendations on the admissibility of computer-generated evidence were adopted.
From the sub-postmasters’ perspective, did the shift in the burden of proof make their defences harder?
A presumption that a computer is ‘working properly’ is wholly unrealistic for anyone with computer science or software engineering expertise. It demands a binary, yes-or-no answer as to whether a computer is working correctly, and it presumes that the answer is trivially easy to give.
The reality is, of course, far more complex. All computers have a propensity to fail, possibly seriously. All computer systems contain bugs, errors or defects, and some of these rarely reveal themselves in any obvious or noticeable way, because they can masquerade as normal behaviour.
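To make that point concrete, here is a contrived sketch in Python. The scenario, the 1.5% fee and the figures are assumptions chosen purely for illustration and have nothing to do with the Horizon code itself. Two parts of a hypothetical retail system apply rounding at different points; every individual entry looks plausible, yet the two totals disagree and the gap presents itself as a cash shortfall.

```python
# Hypothetical illustration only: a subtle rounding defect that hides in
# plausible-looking figures. The fee rate, amounts and scenario are assumed.
from decimal import Decimal, ROUND_HALF_UP

FEE_RATE = Decimal("0.015")   # assumed 1.5% transaction fee
PENNY = Decimal("0.01")

sales = [Decimal("0.99")] * 10_000  # 10,000 small till entries

# Branch terminal: rounds the fee on each transaction, then sums them.
fee_per_txn = sum(
    (s * FEE_RATE).quantize(PENNY, rounding=ROUND_HALF_UP) for s in sales
)

# Central system: calculates the fee once on the daily total.
fee_on_total = (sum(sales) * FEE_RATE).quantize(PENNY, rounding=ROUND_HALF_UP)

print(f"Fees recorded transaction by transaction: {fee_per_txn}")   # 100.00
print(f"Fees recalculated on the daily total:     {fee_on_total}")  # 148.50
print(f"Apparent shortfall:                       {fee_on_total - fee_per_txn}")
```

Neither calculation looks obviously ‘broken’ on its own, which is precisely how a defect of this kind can pass for normal behaviour while the ledger quietly fails to balance.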
This leads us to disclosure, and this is where it can get difficult. Courts tend to reject general or unfocused disclosure requests during criminal proceedings as ‘fishing expeditions’.
Rather, a person challenging evidence derived from a computer must specifically identify the issue to which the disclosure request is relevant. In practice, the defence is often unable to demonstrate relevance, either because they will not have been privy to the circumstances in which the system in question is known to fail or has failed, or because it is not immediately obvious that there has been any such failure.
Before we go on, what is disclosure?
If we are talking about standard disclosure, this is where each party is obliged to provide to their opponents a list of all documents which both:
- Help or hinder the case of any party
- Are, or have been, in the party's control
A party is also obliged to permit inspection of the disclosed documents within their control, unless they have a right or a duty to withhold inspection, or inspection would be disproportionate to the matters in issue.
Can a prosecutor keep information about bugs secret and leave the defence to flounder in the dark?
The solicitor has a duty to ensure that all disclosable documents are made available to the court.
What can happen if a company is found to have misled a court by not disclosing or disclosing misleadingly?
Usually, parties to litigation in England disclose documents to each other by way of a list of documents, and that list must be verified by a disclosure statement. This is a statement made by the party disclosing the documents that:
- Sets out the extent of the search that has been made to locate documents which they are required to disclose
- Certifies that they understand the duty to disclose documents
- Certifies that, to the best of their knowledge, they have carried out that duty
A suspicion that documents that should have been disclosed have been withheld is, in effect, a suspicion that a false disclosure statement has been signed. Contempt proceedings may be brought against a person who makes, or causes to be made, a false disclosure statement without an honest belief in its truth.
What could happen when AI-based evidence appears at a criminal trial? AI systems can be so complex that their makers can’t explain how they came to a conclusion. If the law on computer evidence stays as it is, could we end up with a defendant having to prove the AI was wrong, and the prosecution remaining unable to explain how the system works?
In the EU this may not be as problematic as it seems. Under the EU AI Act, providers and deployers of so-called ‘high-risk’ AI systems will be subject to significant regulatory obligations when the act takes effect, with enhanced thresholds of diligence, initial risk assessment and transparency. In addition, the act contains a long list of examples in which AI systems used in a law enforcement context could be considered ‘high-risk’. The list includes AI systems intended to be used to assess individuals’ risk of offending or re-offending. It also covers AI systems used to evaluate the reliability of evidence in criminal investigations or prosecutions, or to profile individuals in the course of the detection, investigation or prosecution of criminal offences.
However, in the UK, the use of AI-based evidence is likely to be problematic if the law of evidence remains unchanged and/or AI-specific legislation is not introduced.
Finally, how did you become involved with BCS?
I first got involved with BCS when I was asked to speak at an event many years ago! I think that went well (as I continued to be asked to speak at other BCS events). I then applied for membership, eventually gaining Chartered IT Professional status and Fellowship. I continued to speak at various BCS events and then, in 2016, decided I wanted to ‘give something back’. I was elected to the BCS Council in 2016 and remained on Council until 2024 (including one year as Vice-Chair). I was also a Council-elected Trustee of BCS from 2019 to 2024.