Simon Rogerson FBCS explores recent ethical issues surrounding digital technology, illustrating why everyone needs digital ethics awareness.
There now exists a deep-seated global dependency on digital technology. You need look no further than the pandemic, when the social glue came unstuck and we turned to digital technology to keep life going and to stay connected. Communication channels provided information about the latest developments, advice and restrictions. Social media kept social groups and families emotionally together. Online outlets provided the products and services we needed in our everyday lives. For digital natives, the move to the virtual was straightforward, even welcome; for digital outcasts it was fraught and frequently frightening.
Such global reliance upon digital technology requires greater emphasis on digital ethics by all of us. There is a need to develop a new vision for digital ethics which is theoretically grounded but pragmatic. If we don’t, then a very bleak, discriminatory world beckons for us all. It would be one of privileged digital natives and an underclass of digital outcasts, resulting in a world of danger, domination and despair.
There are, and will always be, ethical hotspots associated with existing and emerging digital technologies. Some will be obvious whilst others will not. However, all must be addressed so that digital technology applications might become universally acceptable. It is important that we focus our ethical attention not only on the digital technologies currently in the media spotlight but also on those which are more mundane and less newsworthy, yet nevertheless ethically charged. Let us consider four example ethical hotspots which have recently occurred or recurred.
Information provenance

In 2005, I called for the establishment of information provenance within internet search engines. I argued that this would fix the origin and network of information ownership, thus providing a measure of integrity, authenticity and trustworthiness. It would give us an audit trail showing where information originated, where it had been altered and how it had been altered. In this way we could judge how much credence to give a piece of information before acting upon it. It never happened!
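To make the idea concrete, such an audit trail can be pictured as a chain of records, each one naming its source, describing the change made, and linking back to its predecessor by a cryptographic hash. The sketch below is purely illustrative (the function names, sources and field layout are my own invention, not any deployed scheme), but it shows how tampering anywhere in the chain becomes detectable:

```python
import hashlib
import json

def record(parent_hash, source, content, note):
    """Create one provenance entry linking back to its parent by hash."""
    entry = {"parent": parent_hash, "source": source,
             "content": content, "note": note}
    # Hash the entry's own fields so any later change is detectable.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

def verify(chain):
    """Check every entry's hash and its link to the previous entry."""
    parent = None
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["hash"] != expected or entry["parent"] != parent:
            return False
        parent = entry["hash"]
    return True

# A two-step history: an original report, then an edited derivative.
origin = record(None, "news-agency.example", "Original report text",
                "first publication")
edit = record(origin["hash"], "blog.example", "Edited report text",
              "paraphrased for republication")
chain = [origin, edit]
```

With such a chain, a reader could trace a claim back to its origin and see every alteration along the way; if any entry has been silently modified, `verify` fails.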
Writing in the latest RSA Journal, Nina Schick calls for the use of information provenance, this time to address concerns about generative AI. Generative AI is the latest headlining digital technology, with ChatGPT its frontrunner. The latest statistics suggest ChatGPT has around 200 million users, generating around 60 million visits daily. Concerns are growing about the information integrity of this platform, because the generated information relies on statistical probability across data sets which are likely to contain many false items. The introduction of visible content authentication would enable us to decide whether to trust our AI companion. Will this call be heeded?
Health and safety
Healthcare is important to us all. There are some interesting initiatives to use digital technology for routine healthcare tasks, freeing healthcare professionals to concentrate on more complex work. An NHS Digital Health Check service, designed to free up GP appointments and reduce NHS waiting times, is to be launched. Patients will be able to perform their own health checks online using a smartphone, tablet or laptop. It is unclear what will happen if you are a digital outcast. Monitoring and evaluation of patients along the complete healthcare journey, from admission through hospitalisation to home aftercare, is the subject of several AI-oriented research and development projects which could have a hugely positive impact on healthcare in general.
It is important to us all that we can call for assistance in times of emergency, and we rely on digital communication in these circumstances. There have been two recent incidents in which the 999 emergency call service was disrupted by system error. On Sunday 25 June 2023, 999 calls were delayed because of a technical fault at BT. Around the same time, it was reported that a new feature on some Android smartphones was creating high volumes of inadvertent silent 999 calls. In effect, these bogus calls created a type of denial of service attack on the emergency service.
Digital exclusion

On 29 June, the Communications and Digital Committee published its Digital Exclusion report. It states that ‘1.7 million households have no mobile or broadband internet at home. Up to a million people have cut back or cancelled internet packages in the past year as cost of living challenges bite. Around 2.4 million people are unable to complete a single basic task to get online, such as opening an internet browser. Over 5 million employed adults cannot complete essential digital work tasks.’
Concerns are being raised about developing and using digital technology which sustains, and even promotes, exclusion, albeit by accident rather than design. Age, socio-economic status, disability and regional location all contribute to the digital divide. Only proactive, rather than reactive, political action will bridge this divide. The report notes that it is nine years since the UK Government last published a digital inclusion strategy; this is hardly proactive.
Miscarriage of justice

We have all seen the protracted case of the wrongful conviction of sub-postmasters and sub-postmistresses because of a software system failure cover-up. In 2020, many of the convictions of 736 sub-postmasters and sub-postmistresses were overturned when it was proved that the Horizon digital accounting system, installed by the Post Office in 1999, had many faults which led to the massive financial discrepancies wrongly attributed to Post Office employees. Unverified digital forensic evidence was the sole basis for conviction: clearly an unethical action.
Phase four of the Post Office Horizon inquiry has just begun. It will focus mainly on the role of lawyers, as they were involved from the beginning of the Horizon project, drafting, for example, system testing contracts. Professor Richard Moorhead believes that incentives in some contracts may have encouraged Fujitsu engineers to act unethically. This fourth phase illustrates that digital technology development and operation are not isolated activities: they involve many disciplines and so are likely to create complex ethical situations which we must address.
An acceptable digital age must reflect the collective view of the global population. That population will undoubtedly include those who have suffered directly as a result of unethical situations: the hesitant user of a web-based public service who is the victim of poor system design; the junior software engineer who is pressured into unethical, yet commercially valuable, action by an internet organisation; and the vulnerable young adult who is the victim of incessant cyberbullying.
The complex interrelated ethical and social issues within the digital environment must be addressed during the digital technology process and embedded within the digital technology product. Digital education from an early age should engender virtue, wisdom and humility as well as instrumental skill and technological prowess. It is digital ethics, education and awareness which will develop our confidence and skills, through lifelong learning, and so provide the tools to enable us to act responsibly and ethically in the digital age.