Reducing the risk

July 2010

The centralised data sharing model is outdated, high-risk and high-cost, says Justin Anderson, CEO of risk intelligence company Flexeye.

At the heart of the last government’s £12 billion NHS Connecting for Health programme is the controversial Summary Care Records system, which will, unless it is scrapped by the new coalition, hold the medical details of more than 50 million patients.

Experts have long argued against the pooling of such sensitive information, and quite rightly so. Any system that has to copy large amounts of information to a central database is outdated, costly, difficult to roll out and poses significant security risks to the information it contains.

Most of us would be shocked at the concept of Google copying all of the information that exists on the web into a huge central data warehouse just so that we could search for those specific pieces of information that we so readily demand. So why then do we not jump up and down in anguish when the government does the same thing with the masses of information it stores? 

Despite two years of evaluation of this central system (at significant cost to the taxpayer), there have been many reports criticising the SCR since it was first mooted in 2006. Indeed, Tony Collins’ blog this week highlights confidential information suggesting that even during the evaluation stage there was insufficient evidence to support the roll-out of such a complex and high-profile system.

The real problem of centralising so much data from so many bespoke systems is that it broadens access rights to an ocean of information. The reality, however, is that most requests for patient information are task specific, which means that the requester needs to see only a small subset of all the information that will exist in the new central records database.

For example, the A&E department wants access to essential details such as basic patient identifiers, allergies and current medication, but doesn’t need to know that the patient had measles when they were three years old, at least not immediately. Similarly, the pharmacy needs to view the patient’s name and address and the current prescription, not the person’s entire medical history.
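
To make this concrete, a minimal sketch of such task-specific views might look like the following. The record fields and task names here are hypothetical illustrations, not drawn from any real NHS schema:

```python
# A minimal sketch of task-specific views over a patient record.
# Field and task names are invented for illustration only.

PATIENT_RECORD = {
    "nhs_number": "123 456 7890",
    "name": "Jane Doe",
    "address": "1 Example Street",
    "allergies": ["penicillin"],
    "current_medication": ["ramipril 5mg"],
    "current_prescription": ["amoxicillin 500mg"],
    "full_history": ["measles (age 3)", "appendectomy (1998)"],
}

# Each task is entitled to a named subset of fields and nothing more.
TASK_VIEWS = {
    "a_and_e_triage": {"nhs_number", "name", "allergies", "current_medication"},
    "dispense_prescription": {"name", "address", "current_prescription"},
}

def view_for(task: str, record: dict) -> dict:
    """Return only the fields the given task is entitled to see."""
    allowed = TASK_VIEWS[task]
    return {field: value for field, value in record.items() if field in allowed}

# Neither view ever exposes 'full_history'.
print(view_for("a_and_e_triage", PATIENT_RECORD))
print(view_for("dispense_prescription", PATIENT_RECORD))
```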

Localised access and authentication

Acknowledging that only subsets of the information in a central database are required by any one partner in the network is the first step in identifying a far more practical, and as a result cheaper, way to deal with the real objectives of the system: to improve clinical decision making; to reduce the risk of harm and medication errors; and to provide more efficient care, speeding up consultations between doctors and patients.

With this in mind, it surely makes more sense to keep information where it is (the Google model). It is, after all, not only cheaper but far less risky. And in today’s information age, protecting patient data is paramount. Deciding how to share information with key stakeholders when they need it is the next stage.

It’s possible that in 2006 the technology or vision simply did not exist, but in the last four years access, authentication and authorisation (AAA) technology has moved forward significantly. Combine this technology with some well-defined rules and you end up with a model that essentially grants access to specific pieces of (pre-identified) information for a limited period of time. In other words, the system gives a single, one-time ‘view’ of specified information on demand.
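
As a rough illustration of that single-use, time-limited ‘view’, consider the sketch below. It assumes a simple in-memory grant store; the function names and token scheme are invented for illustration, not taken from any real AAA product:

```python
# A hedged sketch of a single-use, time-limited view grant.
# The grant store, field names and API are hypothetical.
import secrets
import time

GRANTS = {}  # token -> (allowed_fields, expiry_timestamp)

def issue_grant(allowed_fields: set, ttl_seconds: int = 300) -> str:
    """Authorise one view of the named fields, valid for ttl_seconds."""
    token = secrets.token_urlsafe(16)
    GRANTS[token] = (allowed_fields, time.time() + ttl_seconds)
    return token

def redeem_grant(token: str, record: dict) -> dict:
    """Redeem the token exactly once; return only the pre-identified fields."""
    allowed, expiry = GRANTS.pop(token)  # pop: a second redemption raises KeyError
    if time.time() > expiry:
        raise PermissionError("grant expired")
    return {field: value for field, value in record.items() if field in allowed}

record = {"name": "Jane Doe", "allergies": ["penicillin"], "full_history": ["..."]}
token = issue_grant({"allergies"})
print(redeem_grant(token, record))  # {'allergies': ['penicillin']} -- once only
```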

This is a system that would provide authorisation to relevant information supporting specific decisions and tasks - such as checking patient allergies and medication before an emergency operation; authorising care after checking insurance details; accessing an ID system (or otherwise) to verify the patient’s identity; and viewing a list of the nearest pharmacies to send a prescription.

It does not grant access to information that is unrelated to the reason for access, thereby limiting the risk that a single person’s entire medical history could be circulated on the internet.

Network of connected nodes

The model is quite simple and in many ways no different to the days of peer-to-peer networking. It does not permanently store terabytes of data in one big, uncontrollable pool of information with a single point of failure. What it does do is create a heterogeneous ‘network’ of smart, cheap, software-based nodes that are simple to deploy into the IT systems that already exist in every hospital, clinic and GP practice.

Each node on the virtual network is controlled by a powerful software-based engine that extracts, quantifies and shares actionable intelligence with people authorised to view that information in order to make critical decisions. Simultaneous queries are possible on multiple systems, including security-related cross-checking.
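
A sketch of how one such node might fan a single request out to several local systems at once could look like this; the three source systems and their lookup functions are invented stand-ins, not real interfaces:

```python
# A hypothetical sketch of simultaneous queries across distributed nodes.
# The three source systems below are invented stand-ins for real lookups.
from concurrent.futures import ThreadPoolExecutor

def query_gp_system(nhs_number: str) -> dict:
    return {"allergies": ["penicillin"]}              # stand-in GP record lookup

def query_pharmacy_system(nhs_number: str) -> dict:
    return {"current_prescription": ["amoxicillin"]}  # stand-in pharmacy lookup

def query_id_register(nhs_number: str) -> dict:
    return {"identity_verified": True}                # stand-in security cross-check

SOURCES = [query_gp_system, query_pharmacy_system, query_id_register]

def federated_query(nhs_number: str) -> dict:
    """Ask every source at once; the data stays in its home system and
    only the merged, task-scoped answer is returned to the requester."""
    merged = {}
    with ThreadPoolExecutor() as pool:
        for result in pool.map(lambda source: source(nhs_number), SOURCES):
            merged.update(result)
    return merged

print(federated_query("123 456 7890"))
```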

The risk of error associated with copying, moving and re-keying data from screen or paper is minimised, and the flexibility to give mobile workers such as midwives and home carers access to relevant information on the move (further improving operational efficiency) is also gained.

If money is going to be spent, let it be for something that really does improve patient care in equal proportion to the money that is invested and without the risk that patient data is compromised. There is more than one way to skin a cat. It’s time the NHS looked at different options before it leaps in and realises it doesn’t actually have nine lives.

Comments (10)

  • 1
    Nate Simpson wrote on 12th Jul 2010

    Quick point: while I agree centralisation is not the way forward, any healthcare professional may need further information. In your pharmacist example, they would also need any other current medication or conditions to be able to properly check a prescription for interactions.

  • 2
    Barrie Archer wrote on 15th Jul 2010

    Whilst the tenets of this article are interesting (distributed is better than centralised for NHS data) it would contribute a lot more to the debate if it did not depend on a hypothetical "powerful software-based (sic) engine" (located, for example, in every GP's practice) that can interact seamlessly with all existing and future systems and can provide guaranteed service delivery and (distributed) security. Are there existing examples of such beasts, not just in general but close to the parameters required for the NHS?

  • 3
    Garie Warne wrote on 15th Jul 2010

    The alternative option of a peer-to-peer style network to link together existing systems in the NHS and private companies (pharmacies) does appear to have its plus points, but what about those surgeries or pharmacies in very rural areas? Given the dire state of broadband infrastructure in rural areas, how would they get the bandwidth to transfer data 24/7 on request, rather than in a single overnight transfer?

    I am not saying it is impossible; however, without major investment somewhere, there will be gaps in the overall data coverage with the peer-to-peer approach.

  • 4
    john wrote on 20th Jul 2010

    As a data architect I can say the benefits are vast, but the security aspects are daunting!

  • 5
    Ian Wells wrote on 20th Jul 2010

    In answer to Barrie Archer - yes, such software specifically designed for healthcare does indeed exist. I saw one solution demonstrated back in 2003 and understood it was offered to the NHS but rejected. It appears that it has gone from strength to strength since!

  • 6
    Kay Hughes wrote on 20th Jul 2010

    You say that "in today’s information age, protecting patient data is paramount", and certainly confidentiality can be important, but when I turn up unexpectedly at an A&E department I think that availability of my information to the people who are treating me is perhaps more important. A balance must be struck. Can I rely on a 24/7 service provided by my local GP or would a centrally managed and controlled database be more likely to deliver the vital information in a timely fashion? The Summary Care Records do not (and need not) contain all my medical details.

  • 7
    Nigel Crawford wrote on 26th Jul 2010

    Even this model is out of date now.
    With my smartphone, already down to £200, I could access my record anywhere in the world, ...if only I had it... as the Health Foundation indicates in its new NHS culture change proposal "The patient will see you now".

    So no need for a 24/7 GP secure service (god forbid!) with the Cloud.

  • 8
    Simon Evans wrote on 12th Aug 2010

    Any large enough database will contain errors. Early trials of the SCR show that many users do not trust the records and prefer to ask the patient for the information they need anyway. GP practices have downloaded data in bulk without the time, budget or motivation to ensure it is correct. The person best motivated to ensure the data are correct is the patient, but he has no technical knowledge and cannot necessarily understand clinical data. Attempts by the NHS to engage patients to check their own information have met with a resounding yawn. Very few people have even bothered to register, and those that have usually log on once, find the matter beyond them and never revisit the site.

    The answer lies elsewhere. A distributed model will work best for all the reasons given in the article. Above all those using it will have a good reason to ensure it works well. The argument about unconscious patients with drug allergies being admitted to A&E is a red herring. A&E departments seldom if ever encounter this difficulty, mainly because those with allergies carry enough information to enable them to be identified.

  • 9
    Evan wrote on 18th Jan 2011

    What you are saying does not make sense. The Google model is a massive central farm with spiders poring over the distributed model. The Google model also has a consistent format - just HTML. The closest you can get to what you are describing is a massive central farm supporting a service-based interface. This ensures that the data is consistent and securely accessed. This data is supplied such that the location can create a middle tier to interface with their own system. Tbh, I have no idea why a hybrid model represents any problem in our day and age.

    I see this process starting as an administrative requirement to post data centrally. This process is progressively streamlined as locations acquire secure connections, access the basic services and then eventually interface their systems. Security is paramount, so you restrict information to each location based on the link, security and registered patients. The security can also be increased in many ways: hardware links/keys, data cleansing (i.e. no identifying data; this is assumed to be held).

  • 10
    James wrote on 19th Jan 2011

    A thought-provoking argument that I have heard before, but I remain unconvinced that it is realistic on a number of levels.

    1. Google may have a distributed model, but this is for performance, not to avoid copying data. I acknowledge that Google search results mainly send you to the source of the information, but look for the 'Cached' hyperlink on any Google search page: it points to the version of a webpage that Google's spiders copied while trawling the internet. Google almost certainly stores this in more than one place too.
    2. The author states that "most requests for patient information are task specific, which means that they need to see only a small subset of all the information"... This may or may not be true, but the very idea of constructing a set of rules to govern who needs to see what, under what circumstances, at a task-specific level is unrealistic. Getting agreement on the principles behind the rules would be hard enough, but coding the information and implementing the rules in a safe manner would be a nightmare.
    3. I would contend that the model proposed would be even more complex and costly to implement than the current Summary Care Record, which is far from uncontrollable. Instead of retrieving a single document from a robust and resilient infrastructure, the proposal would require multiple systems to all provide their own 'slice' of information - which may not agree with each other. This would not only introduce additional single points of failure and an element of non-determinism but would also remove the opportunity to resolve conflicting information and present a consistent and up-to-date summary clinical record.
    4. The proposal is much more complex than it seems, and it is novel. Safety is paramount as well as security. When dealing with safety-related systems, complexity is the enemy and novelty should require a cautious approach. The centralised model may be boring and simplistic, but most importantly it works and is easy enough for most people to understand and reason about.
