Better quality research is needed before artificial intelligence can be trusted to diagnose breast cancer in the full range of UK patients, according to the professional body for the IT industry.
While AI may become an important method of breast screening, and of detecting most cancers, in the near future, there is simply not yet good enough evidence, said BCS, The Chartered Institute for IT.
There was a ‘significant risk of overdiagnosis’ should AI be adopted now in screening breast cancer, BCS added.
In its formal response to a consultation by the UK National Screening Committee (UK NSC), the Institute said it supports the UK NSC’s proposal that the use of AI for image analysis in breast cancer screening should not be endorsed in the UK at present.
Dr Philip Scott, Chair of the BCS Health and Care Executive, said: “The NSC review agrees with other recent studies that, while AI methods are promising, there is not yet enough scientific evidence to justify adoption in a cancer screening programme.”
Risk of bias
“Unfortunately, there is so much hype about AI that some people treat it like magic. Most AI in healthcare is early stage and not shown to work clinically. If you look at the scientific reviews, the experiments done with AI diagnostic tools are simply not good enough. Many studies are at risk of bias from selective use of patient data.”
“If AI were adopted now in the screening of breast cancer, there is significant risk of overdiagnosis with all the anxiety that would cause. We need to educate and inform the public to maintain trust, and that includes being honest about the immaturity of most AI tools.”
Dr Scott, who is also Reader in Health Informatics at the University of Portsmouth, added: “BCS is keen to support the process in the future, once it is evident that sufficient scientifically robust research has been done to ensure that AI breast screening will be safe for all who use it. AI has the potential to be of huge benefit or of huge harm to society, and standards for the design, development, and adoption of AI systems must be regulated to ensure we get the very best out of them.”
Clinical leaders and IT professionals working in digital healthcare will increasingly need to show evidence of commitment to ethics, competence and transparency to build public trust in the algorithms and AI used to make high-stakes medical decisions, BCS said.