Peter Singleton considers the application of autonomy to what are commonly called ‘secondary uses’ of healthcare data.

The principle of autonomy, or respect for the individual and their right to self-determination, is a well-established principle in medical ethics, forbidding treatment to which the patient has not consented, except in an emergency or in the interests of public welfare.

Generally, it is accepted that one ‘owns’ one’s own body and tissues, though little is said about what happens to tissue samples once taken (presumed to be consumed in testing or destroyed if surplus), and there are countries where organs may be presumed to be ‘donated’ after death without express consent.

When extending this to the use of healthcare data, a number of aspects change: data may be replicated and used rather than consumed, so there is no direct ‘loss’ through use of data - ownership is not reduced, though control may well be. Further, privacy issues usually come to the fore - concern about what the data might reveal when shared or used in a different context.

The area of risk shifts from potential physical harm to a reputational risk that may impact on family and social standing. Generally, ‘ownership’ does not work well with healthcare data, as the data is generated, managed and used by the healthcare provider and rarely (perhaps too rarely) by the data subject themselves.

Data protection laws may limit how data may be used, but usually require express consent only for unusual uses. EU data protection law puts the emphasis on ‘controllership’ rather than ‘ownership’.

Supporting choice

From an ethical perspective, it is clear that giving the data subject a choice about how data about them (I avoid the term ‘their data’) is used is to be commended and supported where possible. If a person chooses to seek healthcare, then they must expect that information will be recorded about that treatment, especially where a doctor or nurse is seen to be taking notes or using a computer.

The level of information recorded may or may not be open to discussion - notes of a consultation are rarely verbatim, containing only the information necessary for legal defence and continuity of care. Some systems may incorporate ‘sealed envelopes’ allowing recorded data to be restricted by default to those immediately involved in the patient’s care.
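The idea of a ‘sealed envelope’ can be thought of as a default access restriction on a record entry. The sketch below is purely illustrative - the class, field names and rules are hypothetical, not any real NHS implementation:

```python
# Hypothetical sketch of a 'sealed envelope': a record entry is restricted
# by default to the clinicians directly involved in the patient's care.
from dataclasses import dataclass, field

@dataclass
class SealedEntry:
    text: str
    care_team: set = field(default_factory=set)  # IDs of those treating the patient
    sealed: bool = True                          # restricted by default

    def read(self, user_id: str, patient_consented: bool = False) -> str:
        # The seal is lifted only for the care team, or where the patient
        # has explicitly consented to wider sharing.
        if not self.sealed or user_id in self.care_team or patient_consented:
            return self.text
        raise PermissionError("entry sealed: not part of the care team")

entry = SealedEntry("Consultation notes ...", care_team={"dr_smith", "nurse_jones"})
entry.read("dr_smith")      # care team member: allowed
# entry.read("auditor_01")  # would raise PermissionError without consent
```

The point of the sketch is only that the default answers ‘no’: anyone outside the immediate care team needs something extra (here, explicit consent) before the entry is released.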

Patients may also expect (or at least not be surprised by) a doctor sending details of their health status when referring them to a specialist. While they could choose not to be referred, there may be little room for discussion over what clinical information is shared.

Similarly, routine uses of data such as clinical audit (checking that care has been provided in accordance with professional standards) are presumed to be necessary to the provision of safe and effective care. Sharing of data in relation to reimbursement is also considered necessary to the basic operation of healthcare, where not paid for directly by the patient.

There is a wide range of ancillary uses of healthcare data across a health economy as complex as the NHS, but it can be hard to decide what processing is fundamental to the provision of care (financing, resourcing, planning etc.), what is needed but does not require 100 per cent of data (so we could support an opt-out perhaps), and what is wholly optional (and so should be subject to express consent).

The recent hoo-hah over care.data illustrates how divided opinion can be. It could be argued that the Health & Social Care Act 2012 allowed the Health & Social Care Information Centre (HSCIC) to require the collection of this data without any form of choice.

The decision to permit an opt-out opened the argument to different methods of providing choice, e.g. opt-in rather than opt-out. Part of the problem was the lack of a clearly stated purpose for the data collection (at least in public materials), so that making an informed choice was difficult, let alone arguing over which form of choice would be best.

Indeed, given the difficulty even an experienced data professional in this field has in deciding what the implications might be, it is hard to see how the average citizen can be expected to make an informed choice. This is not to argue for simply using data without any form of consent or control, merely that requiring consent in all circumstances may prove to be a burden on the individual rather than a freedom.

Anonymous versus pseudonymous data

The current EU DP Directive and the US HIPAA1 regulations support the idea that ‘anonymous data’ can be processed without further consent. While HIPAA lays down how data may be anonymised (or de-identified sufficiently that no consent is needed), the EU Directive requires that data should not allow the identification of an individual, thus leaving the burden of proof on the data controller.

The UK Information Commissioner’s Office (ICO) has produced a code of practice on anonymisation2, but apart from explaining some of the issues involved, it provides no clear guidance as to exactly what to do beyond applying a ‘motivated intruder’ test.

The recent draft EU DP Regulation (launched January 2012) has focused attention on the question of recognising ‘pseudonymous data’ (where the identity of the individual has been obscured and steps taken to limit, though not necessarily remove, the risk of re-identification) and allowing the further processing of such data without further consent, though perhaps with some limitations as to the scope or purposes of such processing.
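The distinction can be made concrete with a minimal sketch - not any regulator’s prescribed method. Replacing the identifier with a keyed hash obscures identity while keeping records about the same person linkable; re-identification is limited, though not removed, so long as the key is held separately by the data controller:

```python
# Sketch of pseudonymisation: the identifier is replaced with a keyed hash
# (HMAC-SHA256), so records about the same person remain linkable, but
# reversing the mapping requires the secret key, held by the controller.
import hashlib
import hmac

SECRET_KEY = b"held-separately-by-the-data-controller"  # illustrative only

def pseudonymise(patient_id: str) -> str:
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

# The same patient always maps to the same pseudonym...
assert pseudonymise("943 476 5919") == pseudonymise("943 476 5919")
# ...but without the key, the pseudonym does not reveal the patient.
```

Note that this is exactly why such data is ‘pseudonymous’ rather than ‘anonymous’: whoever holds the key (or can guess and test identifiers) can still re-identify individuals.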

HIPAA lays down explicit rules to create ‘de-identified health information’, which does not require further consent. While the rules do not make the data assuredly ‘anonymous’, they do prevent the data from being readily identified, which significantly reduces the risk of re-identification, though it is not necessarily proof against the ‘motivated intruder’.
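As an illustration only (the field names here are invented, and the real ‘safe harbour’ rule enumerates 18 categories of identifier), de-identification in this style amounts to dropping direct identifiers outright and coarsening quasi-identifiers such as dates:

```python
# Illustrative sketch of HIPAA-style de-identification: direct identifiers
# are removed and dates are truncated to the year. The field names are
# assumptions; the actual rule lists 18 identifier categories.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn", "mrn"}

def deidentify(record: dict) -> dict:
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue                 # drop direct identifiers outright
        if key.endswith("_date"):
            out[key] = value[:4]     # keep the year only, e.g. '1967'
        else:
            out[key] = value
    return out

record = {"name": "J. Bloggs", "birth_date": "1967-03-14",
          "phone": "020 7946 0000", "diagnosis": "hypertension"}
print(deidentify(record))  # {'birth_date': '1967', 'diagnosis': 'hypertension'}
```

The residual risk is in what remains: rare diagnoses combined with year of birth and locality can still single someone out, which is why this is a reduction of risk, not its elimination.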

This can be considered a compromise between the ‘consent or anonymise’ options, whereby data may be used to practical benefit without incurring large transaction costs (for all parties) through needing to obtain further consent.

Requiring consent for all data processing is an ideal ethical position in terms of respecting the autonomy of the individual, and is probably the motivation behind some of the suggested changes to the draft EU DP Regulation proposed by the European Parliament to try to counter the perceived abuses of personal data on the internet.

This autonomy may need to be compromised to provide wider public benefits (a safe, effective healthcare system), but such further processing should seek to limit the privacy risks to individuals, ideally by using anonymous or, failing that, pseudonymous data.

In the healthcare arena (and possibly others), there may be a range of providers as well as a complex range of data processing needed to support the supply chain as well as regulatory and safety requirements.

This can mean that an effective ‘consent process’ would be costly in terms of education prior to consent, both for the individual (to understand the issues and consequences) and the data processor (especially if there are low acceptance rates), so alternatives to consent are needed (apart from perhaps just ‘not processing’).

An effective regime for ‘pseudonymous data’ is needed as an alternative to simply processing fully identifiable information.