David Evans looks at the excruciating dilemma in a broken data system.

There is nothing unusual about a data sharing agreement using personal data for research; it is a routine occurrence. However, because it is Google, and because the data is so large and so comprehensive, news editors are not treating this one as routine. That reaction highlights the fundamental problems that could stop us from realising the benefits of data for our health and wellbeing.

1) We don’t have any concept of what’s happening to our data on a routine basis.

Every time we tap our smartphone, buy something, or visit a hospital, we create an overwhelming data footprint that travels all over the world with varying degrees of control and oversight. The BBC quotes a spokesperson as saying that this is “one of 1,500” data sharing agreements. I’m not sure whether knowing that will calm or alarm people, but it certainly highlights that few individuals in the country will have any conceptual basis for understanding the scale and breadth of data sharing already taking place.

2) Trust and trustworthiness are all messed up

Yet NHS information governance is pretty well developed, and Google is an organisation with vast expertise in the nuances of privacy, permissions and keeping data secure - and given that they want to do more of this, they’ve a lot at stake if they screw this up. So, rationally, this is perhaps less of an issue than the other 1,499 data sharing agreements (and yes, I know 1,500 was probably an approximation) in the NHS, and than what our smartphones get up to minute by minute.

It doesn’t work like that, though - we’re heuristically developing a fear of corporate and governmental data usage. The effect of past transgressions is cumulative, while the effect of doing the right thing is close to zero. At UK eHealth Week, hosted by BCS and HIMSS, Dame Fiona Caldicott (under EU referendum purdah) gave a clear indication that creating trust will be at the heart of her review’s recommendations, and that the review had arrived at a single, simple model for patients to use to determine the use of their data beyond their own personal care. That may well reflect progress, but under current models it is a practical impossibility for anyone to assure us that ‘the NHS can be trusted with our data’, because we see intentions, not assured results. The rules may be good, and the people running them well-intentioned, but neither guarantees the outcome. So the net effect is that we trust those we shouldn’t (and are harmed), and we fail to trust those we should (and lose the benefits), because we simply do not know who is who.

It’s about saving lives!

Just this little bit of research on kidney conditions could save lives and prevent harm to people. There are no guarantees, but it is a rational expectation that applying the latest large-scale data techniques to rich data sets will lead to real, life-saving, life-transforming benefits. It is rational because a) we understand the mechanism and b) there is a long track record of benefits from things like the Clinical Practice Research Datalink. It is difficult to connect with because it is abstract and about future potential, but it becomes very specific and real if it is you, or someone you care about, who is going to die and whose life aggregated personal data could save.

3) It’s only going to get more acute.

If we’re really serious about getting these benefits, then we need to recognise that the NHS only holds a tiny fraction of the useful data. The more dimensions we can add to the data, the more powerful it becomes. Imagine if the tiny window into one fraction of our lives that is our NHS interaction were combined with our search engine history, our emails, our shopping and eating habits, and our movement / exercise / vital signs data. It would only take data from a handful of corporations to provide that right here, right now, and we could be talking about finding the earliest of early warning signs of health and wellbeing issues. Beyond the here and now, the explosion of connected sensors and computing known as the Internet of Things (IoT) is going to increase the scale and richness of that data even further, beyond our comprehension. A golden age of healthcare could be the result.

Yet a dark grey future of statist, corporatist, 1984-style misery is what some fear; not an irrational position, just a pessimistic one (or, as they would have it, a realistic one). The drivers acting on large corporations and governments are not always perfectly aligned with what is good for individuals. If knowledge is power, then locating too much power away from the individual is a historically verifiable mistake. This data can be used to manipulate, harm, control or undermine us as well as to help us, and at the very least to leave us feeling powerless in the digital age.

Putting all this together: lives are at stake; to some degree, the future of our social wellbeing is also at stake; and things are only getting more complicated and difficult. The system is not functioning. We’re being forced to make a horrible choice between our personal control and our personal health, and for the most part we have no idea what’s at stake or who is making the decision on our behalf. So…

4) We need a better way

I want to have my cake and eat it. I don’t want my kids to die of kidney failure when I could have prevented it, but I want them to have freedom and control over their lives as well (a major driver of wellbeing). I want them to live and be powerful. I don’t think anyone wants anything else - and research into public attitudes supports this: 43% of people asked in the Digital Catapult Trust In Personal Data survey said they were willing to share their data if it was used to improve society, and 89% of those asked in a BCS/YouGov online survey said they think they should have more control of their personal data. I do think we’ve resigned ourselves to a level of trade-off that is decidedly un-digital. Our network of experts tells us that there are better ways to manage our data that can deliver both the large-scale benefits and the individual control - or at least more of both than our current trajectory will - but they require vision and collaboration, and they are not yet widely understood or listened to because the issue sits in the ‘too difficult’ box rather than the ‘too important’ box. We need to start designing health data models that are based not on incremental moves forward that undermine trust and deliver diminishing returns, but on a re-imagining of what good looks like in the future.

So my challenge to those working in health data - and in personal data everywhere - is not to give up the short-term benefits, but to work together towards a longer term that we’d actually like to live in. This is not a new struggle; the leaders of BCS in the 1970s were increasingly concerned about computing and personal data, and by helping to bring about the first legislative moves to protect our data they began a process we haven’t yet completed. That’s why we are continuing to carry that torch through our personal data challenge. Let’s keep going; keep striving for the full benefit, the minimum harm, and a world we’d be happy for our children to inherit, just as generations before us have done.