Interview

“The risks of basing digital health strategy on industry hype and alluring prototypes”

The World Health Organization recently hosted a landmark “Symposium on the Future of Digital Health” (Copenhagen, 6-8 February 2019), which was attended by healthcare leaders, innovators and analysts from across the European Region. Professor Claudia Pagliari, Director of Global eHealth at the University of Edinburgh, spoke in a plenary session entitled “Leaders of the Future – political, economic and ethical governance for digital health”. In a follow-up interview for ICT&health, Artur Olesch asked about some of the key points raised in the talk and sought further insights and views on the theme of making digital health count.

Many national healthcare digitalisation strategies fail. What are the main reasons?

Failure in large-scale digital programmes is common, alas, and while this also affects corporations, governments have a particular track record, or at least can appear to, since they are accountable to taxpayers and therefore attract more scrutiny. There are many reasons for these failures, but often they boil down to politically driven priorities and deadlines (e.g. a leadership commitment to “transform” a major service by a certain date), unrealistic delivery schedules (often based on election or spending cycles), inadequate implementation budgets (reflecting a failure to think through the complex human and organisational change issues required), flawed procurement and contracts (e.g. omission of customisation and ongoing maintenance costs), lack of user-centred design (compromising IT usability or workflow fit), workforce gaps (vulnerabilities in local knowledge or support), and insufficient public engagement (particularly in projects involving patient data sharing). Lack of interoperability and a failure to employ whole-systems thinking can also leave even superb innovations stranded on islands of excellence.

During the WHO conference in Copenhagen you mentioned a few key success factors of effective governance of national digitalisation strategy. Among them: connected people, connected strategies, evidence-based decision making, constructive cynicism and accountability. Could you please briefly describe them?

Governments and healthcare systems are complicated beasts, operating on a massive scale, at high pressure and with multiple demands. People are constantly busy, budgets need to be managed, services need to be delivered. Clinical, computing and administrative remits are split. Care is organised by specialties. These and other factors encourage silo-working, as stakeholders at all levels struggle to deliver on their core objectives.

Enlightened governments have begun to recognise the value of greater strategic alignment; for example, many are integrating health and social care strategies, budgets and services to cope with the challenges of an ageing society. In a similar way, digital health is creating strategic co-dependencies between technological innovation and healthcare. With the digitisation of other government services, such as education, transport and the environment, broader connected strategies offer even more opportunities to make better use of public money for the benefit of citizens, although overcoming traditional territories will be a challenge.

Using public money to buy promises is irresponsible government

The rapid pace of digital change and growing service demands have also heightened the need for interdisciplinary expertise and work practices. Connecting people in a way that allows them to benefit from knowledge in different areas is vital for articulating important co-dependencies, different ways of working and mutual goals. Our research on transformational change programmes points to misalignments in these understandings as a key reason why digital projects fail and why it is important to overcome them early in the innovation lifecycle. Building this lateral thinking is a key aim of our professional learning programmes, including the NHS Digital Academy and the Masters in Global eHealth, which bring together doctors, nurses, civil servants, innovators and computing professionals involved in designing and delivering health and social care informatics, supported by expert tutors from different backgrounds.

In my talk at the WHO, I referred to the risks of basing digital health strategy and procurement on industry hype and alluring prototypes. This is a common temptation in both the public and private sectors when trying to embrace innovation and be ‘ahead of the curve’. Although there are risks involved in all new ventures, using public money to buy promises is irresponsible government, while Silicon Valley’s call to “move fast and break things” may be inappropriate in the safety critical context of healthcare. Digital health leaders need to engage in constructive cynicism – being open to innovation whilst also asking tough questions about what is being offered and what evidence there is of its likely impacts. Evidence-based decision-making goes hand in hand with this need to be critical. It is somewhat ironic that in the healthcare sector, where evidence-based medicine has become the norm, we so often fail to do evidence-based policymaking or procurement. Knowing how to acquire, judge and use evidence is vital for healthcare leaders but few have the training or skills to be able to do this well. In the presentation, I discussed the need for evaluation to be integrated throughout the innovation lifecycle, helping to strengthen quality through cycles of insight building and iteration, as well as informing difficult decisions about disinvestment. I also pointed to recent guidelines for evaluating digital health from the UK’s National Institute for Health and Care Excellence and recommended partnering with academics to develop an evaluation strategy before embarking on expensive new projects and programmes.

By accountability, I was referring to several related issues. One is the public accountability of health leaders in an era where finances are already being stretched to the maximum. Linked to the point we’ve just discussed, it is important to place the onus on technology vendors and commissioners to provide evidence of the benefits of their products or services before spending money on them. At the very least, this requires being able to explain the theorised pathways through which a new digital platform, tool or service is expected to influence outcomes, backed by previous evidence and decent modelling, as well as having clear plans for audit and evaluation. Accountability also relates to issues of institutional governance and professional ethics, which I’ll discuss in answer to your next question.

You also emphasised the importance of “responsible and ethical innovation”. On what principles should this be based in digital health?

Many relevant frameworks exist in medical ethics, bioethics and now digital and data ethics, while the broader Framework for Responsible Research and Innovation advocated by the European Commission is gaining traction. None are entirely comprehensive, as situations differ across contexts and technology types; for example, using hardware or software as a medical device, linking pseudonymised data for population-based research, deploying patient records in clinical care, or applying algorithms and AI to assess risk or generate recommendations, to name but a few. These also overlap with laws and regulations around data protection and medical device safety, as well as issues like human rights and avoidance of fraud. Detailing all of these would take more time than we have in this piece, but there are some key principles that can be summarised as questions for digital health:

  • Drawing from biomedical ethics – is this digital intervention or use of data in the patient’s interest; has consent been given; am I avoiding harm; is it fair to this and other patients?
  • Following principles of good governance – are the purposes transparent, are the data users accountable, has there been sufficient participation from data subjects or consumers; have conflicts of interest been avoided?
  • Drawing on ethics in health data research – can we demonstrate that this initiative is trusted and trustworthy, that we are using only the minimum data required to answer the question; that consent and withdrawal options have been provided for identifiable data and evidence of assent for anonymised data; is the balance of benefits for data subjects and data users fair and reciprocal?
  • From ethics in software development – has privacy-by-design been built into the interfaces or data management systems; have the clinical algorithms been shown to be accurate, safe and unbiased?
  • From a social equity perspective – is this likely to reduce, perpetuate or increase the digital health divide?
  • From a human rights perspective – is this compromising the right to privacy, autonomy or fair access?
  • In terms of responsible innovation – does it align with society’s values, needs and expectations; have the future implications of these innovations been considered to avoid issues like discrimination, inequality or harm; are helpful interventions scalable, fairly applied and sustainable?

This is a non-exhaustive list, of course, and the different principles and frameworks overlap.
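The checklist question about whether clinical algorithms are “accurate, safe and unbiased” can be made concrete with a simple audit. The sketch below (entirely synthetic data; group labels, numbers and the helper name are illustrative, not a standard method) compares a risk model’s false-positive rate across two demographic groups – one of the basic checks a digital health team might run before deployment:

```python
# Each record: (group, model_predicted_high_risk, truly_high_risk).
# Synthetic, illustrative data only.
records = [
    ("group_a", True,  False), ("group_a", False, False),
    ("group_a", True,  True),  ("group_a", False, False),
    ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", False, False), ("group_b", True,  True),
]

def false_positive_rate(rows):
    """Share of truly low-risk patients the model wrongly flags as high-risk."""
    flags_for_negatives = [predicted for _, predicted, actual in rows if not actual]
    return sum(flags_for_negatives) / len(flags_for_negatives)

rates = {
    group: false_positive_rate([r for r in records if r[0] == group])
    for group in {g for g, _, _ in records}
}

# A large gap between groups would warrant investigation before deployment.
disparity = max(rates.values()) - min(rates.values())
print(rates, f"disparity={disparity:.2f}")
```

The same pattern extends to other error metrics (false negatives, calibration) and more than two groups; the point is that “unbiased” is an empirical question that can be asked of any clinical algorithm before it touches patients.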

In my talk, I also drew leaders’ attention to the importance of avoiding conflicts of interest in digital health, or even the appearance of them, when engaging with industry. This is a serious problem in digital health and the ‘revolving door’ between governments and corporations is well recognised. Mixing public service and private profit is to be avoided, as is selling patient data to industry except under the strictest conditions of anonymity and with a clear public benefit.

Regulations often lag far behind technologies: governments try to set standards when many technologies have already been adopted on the market. Interoperability has become the biggest challenge. How can we tackle these issues?

Firstly, let’s consider interoperability. Although things are improving, problems with incompatible software, hardware and data (structure, vocabulary, clinical coding) continue to hold back the vision of a connected, efficient, effective digital health ecosystem. Many government strategies over the years have included a commitment to interoperability, or even mandated it as a condition of procurement, yet successive policy documents continue to present this as an aspiration rather than a reality. One of the key challenges is aligning strategy with enforcement, particularly when new and exciting tools come along. It is also important to recognise the scale and complexity of the existing digital landscape within health organisations, where incompatible legacy systems can take years to replace (ironically the greatest problem for ‘early adopter’ countries). Requirements for data portability and sharing under GDPR are creating momentum, while evolving standards like FHIR offer a glimpse of consensus, but in the meantime innovations such as open APIs and natural language processing will be needed to bridge the gap.
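Part of FHIR’s appeal is that it represents clinical data as standardised JSON “resources”, so any conforming system can parse records produced by another. A minimal sketch of what that consensus looks like in practice – the resource content and the `display_name` helper are illustrative, not part of the standard:

```python
import json

# A minimal FHIR R4 Patient resource (all values are made up for illustration).
patient_json = """
{
  "resourceType": "Patient",
  "id": "example-001",
  "name": [{"family": "Smith", "given": ["Jane"]}],
  "birthDate": "1954-03-12"
}
"""

def display_name(patient: dict) -> str:
    """Build a human-readable name from the first 'name' entry of a Patient."""
    name = patient["name"][0]
    return f"{' '.join(name.get('given', []))} {name.get('family', '')}".strip()

patient = json.loads(patient_json)
assert patient["resourceType"] == "Patient"
print(display_name(patient))  # -> Jane Smith
```

Because the element names (`resourceType`, `name`, `birthDate`) and their structure are fixed by the specification rather than by any vendor, code like this works against any FHIR-conformant server exposed through an open API – which is precisely the lock-in-breaking property the standard is meant to deliver.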

Make sure the value proposition is sound

Longstanding issues of territory and ownership are also relevant to interoperability in a market where powerful vendors have traditionally been able to lock clients into their portfolio offerings; indeed, most government IT is still procured from a handful of companies. Creating a climate that is receptive to independent providers has helped in many countries, but with the emergence of ‘as a service’ software, infrastructure and data hosting, along with the expansion of enterprise systems, traditional centralising forces are now taking new forms. The growing risk of cyber-attacks, competing operational demands and technology recruitment difficulties in the health sector make a compelling case for outsourcing IT, and within a modular vendor-led ecosystem achieving interoperability is a realistic prospect. However, there are significant platform monopoly issues which have yet to be resolved, as well as unanswered questions about data guardianship, which need further scrutiny.

Coming back to the issue of connectedness, over the last few years – working with organisations such as the European Federation for Medical Informatics – I have also been promoting the concept of Social Interoperability, recognising that technical and regulatory harmonisation also require changes in working practices, professional communication and cultures.

In terms of regulation, keeping up with the pace of innovation is certainly a challenge. For example, while GDPR has helped to create generic principles around data protection, the regulation of consumer privacy is still uncertain. Similarly, while ‘software as a medical device’ is now subject to regulation, crossover innovations like digital implants and new types of impenetrable AI can make this difficult to enforce. Likewise, data protection regulations covering mobile health apps and IoT may fail to capture downstream privacy breaches caused by third party data sharing and jigsaw reidentification. What has been learned is that industry self-regulation is a blunt instrument, as we found recently in our studies of consumer genetic testing services sold online, which cross several regulatory boundaries, leaving ethical gaps that can be exploited.

On the up-side, digital developments are beginning to offer solutions for ethical governance; for example, using blockchain to support regulatory compliance through smart contracts, apps for spotting medication fraud, or AI bots for crawling privacy terms and conditions.

In the digitalisation of healthcare, many authorities choose a model of public-private cooperation. On what principles should it be based?

There is no one right or best model in this context. As already noted, there are often sensible and pragmatic reasons for devolving digital responsibilities to companies specialising in data management and cybersecurity, and innovations developed by the tech sector can help to support public health services and patient self-care. However, as I’ve already noted, making sure money is spent wisely, ethically and in the interest of patients is essential, as is transparency, accountability and avoiding conflicts of interest. Ensuring that health systems have the best legal expertise available to them when negotiating contracts is also vital. Not only can this help to avoid clauses or omissions that later bring unexpected costs, it is also needed for clarifying issues around intellectual property (IP), which are often overlooked. In digital health it is important to establish whether you are being sold an existing solution or invited to collaborate in creating one. Health organisations often discover the latter after many hours of staff time and knowledge have been invested, yet typically receive no financial return once the product is commercialised and sold elsewhere. To ensure sustainable public-sector health services, transparent and reciprocal IP models are needed, as well as appropriate training for staff involved in the procurement process.

What does “sustainable digitalisation of healthcare” mean for you?

Sustainability is about making the best use of digital innovations for delivering high quality, person-centred, evidence-based care to a growing number of patients, whilst also containing costs. This calls for whole systems thinking – recognising the value of collaboration across and within sectors, avoiding duplication when care processes have already been optimised, using data to understand and control waste, using technology to engage patients as partners, and being cautious about expensive ‘cutting edge’ innovations when proven, frugal ones exist. For example, in my talk I referred to reported benefits arising from changes to surgeons’ rotas, compared with investing in expensive surgical robots. Similarly, cheap innovations like text messaging may be as effective as expensive ones like wearables for promoting medication compliance or health behaviour change. Sustainability is also about recognising that the global health workforce cannot grow at the same rate as demand, so being smart and strategic in the use of digital innovations is essential, which requires us to gain a better understanding of users’ needs, preferences and behaviours.

Machine learning and artificial intelligence have potential to support sustainability through automating tasks such as image screening or administrative processes, as well as using data and algorithms to compute risks and offer tailored recommendations. This might, for example, help to decrease unnecessary drug use. At the same time, the area is arguably over-hyped and remains fraught with ethical tensions and uncertainties over algorithmic transparency, human-robot working, and patient rights, which still need to be resolved. Using technology to deliver services out of hours also has potential to cut waiting times and triage care, while supporting older people to live well in their own homes has both economic and societal benefits.

Let’s talk about some case studies. Which components of digitalisation in the NHS do you evaluate positively, and which negatively? What could be improved?

This is an enormous area and it would be impossible to comment on all of the digital projects and programmes that are underway across the UK. It’s also important to recognise that the opportunities and challenges facing the NHS are common to many countries.

From a contextual perspective, the UK benefits from a single-provider health system, presenting favourable conditions for implementing digital strategies and innovations, although whole-system interoperability has proven elusive, for some of the reasons I’ve already mentioned. Positive recent developments include the rollout of personal health records and teleconsulting in parts of the NHS, which are improving patient empowerment, involvement, choice, and access to care; the new NHS Apps Library, which provides a gateway to trustworthy consumer health tools; and the launch of the NHS App, allowing patients to access digital services from a smartphone. Recent moves to bring together digital leaders through initiatives such as the NHS Digital Academy and the establishment of NHSX also hold promise for addressing some of the issues discussed already around workforce and connected people. Large government investments in health data research have also placed the UK at an advantage, particularly in regions with a long history of using unique patient identifiers for record linkage, such as Scotland, and major programmes in genomics and artificial intelligence are now seeking to make better use of these data for biomedical innovation. Negative cases typically involve failures in large scale IT programmes and procurements, controversies over the governance of patient data sharing and major cyber-incidents, such as WannaCry, although debates over AI and chatbots are beginning to occupy the media.

Rather than trying to list ‘good’ and ‘bad’ projects, it may be more helpful to return to the question of what challenges digital health initiatives can face, using examples from both the UK and elsewhere.

Scale and Complexity: The NHS National Programme for IT (2005-13) is arguably the poster child for negative digital health experiences. This was a hugely ambitious project which aimed to develop and centralise NHS IT in collaboration with selected suppliers. The key challenges were the sheer scale and expected speed of the programme, which involved multiple changes to systems, people and processes, only a fraction of which had been anticipated. It faced resistance from frontline health workers, huge over-runs in delivery times and costs, expensive legal disputes with suppliers, interoperability challenges, media attacks, public fears over data sharing and much more, and was eventually shelved, at an estimated cost of £10bn. The unrealistic timescales and budgets mentioned earlier in this interview were partly to blame, although the failure has also been attributed to top-down leadership and lack of inclusive design.

There are signs of growing maturity in the digitalisation strategies and capabilities in many European countries

Contracts and Procurement: In Australia, the implementation of the Queensland public sector payroll system (2006-13), which includes the health sector, is an excellent example of the importance of effective contract management. The project was hit by huge cost overruns and delays, escalating an original budget of $6M to an eye-watering $1.2 billion. The government attempted to sue the suppliers for misrepresenting their ability to deliver on time, but a clause in the contract absolved them of responsibility and the state was, paradoxically, forced to pay compensation. This has been described as the worst public administration failure in the country’s history.

Public Consultation and Ethical Governance: Back in the UK, the care.data project (2013-16), which aimed to create a centralised database of healthcare records for secondary uses, was axed following media allegations that patient information could be sold to private companies and concerns over privacy and choice. More recently, public anxiety was raised when it was discovered that a London hospital had allowed Google’s DeepMind to access over a million non-consented patient records to develop an app, leading to a rebuke from the UK Information Commissioner. Cases like this show the importance of trust and trustworthiness when planning future uses of health data for research, government analytics and innovation, and how vital it is to fully consult with the public and take account of their concerns. Cultural issues may also play a role; for example, Estonia’s e-Health record initiative bears similarities to care.data but has encountered little public resistance, possibly due to high levels of trust in government and the explicit role of the patient as a data user, also illustrating the importance of reciprocity.

Translating research and innovation to impact: Billions in European Commission research and development funding have been committed to digital health and care projects in the last ten years. Although this has produced some valuable scientific insights and exciting innovations, it is an open secret that very few of these have translated to substantive changes in the quality or cost-effectiveness of healthcare. Progress is at best incremental, and while there have been some cumulative benefits – for example in informing digital service integration and ‘healthy aging’ strategies – attributing these to particular funding streams is challenging and probably demands new methodologies for assessing impact. Most funded projects cease after grant funding has ended, pointing to a stark gap between sponsorship of science and innovation and sponsorship of healthcare services, once again raising the need for connected strategies, as mentioned earlier.

It’s important to remember that successful digital innovations in the public sector rarely get the same attention as costly or controversial ones. Nevertheless, across governments and healthcare systems there has been a widespread failure to learn from past mistakes, which needs to be corrected if we are to move forward effectively, sustainably and ethically. This having been said, there are signs of growing maturity in the digitalisation strategies and capabilities of many European countries, which may not be obvious from the slow pace of technological change. Examples include the development of more robust guidelines for digital design and procurement in the NHS, which strengthen the foundations for further progress.

Let’s wrap up: what should a roadmap for the digitalisation of national health systems look like?

There has been a long history of digital road maps in Europe and around the world, with new ones appearing every year. There are common threads in all of them, some incremental, some disruptive, and some simply repetitive. Achieving interoperability continues to be a dominant theme, decades after first making it to the top of the health IT agenda in many countries. Telemedicine has regained momentum, with better digital communications and a stronger evidence-base, after a period of unfulfilled promise. Giving patients access to their health records is finally becoming standard practice, despite having been possible in principle for many years. Personalised, predictive and genetic medicine are starting to turn the corner from research to practice, after a long phase of hype and unfulfilled expectations. Likewise, robots and chatbots are gradually starting to normalise and scale, although we are still some way from achieving safe use of these technologies in most areas. Artificial intelligence is a dominant theme in recent roadmaps, with varying degrees of realism about when this will yield benefits. And the concept of using data and analytics to enable the ‘learning health system’ continues to feature but has proven difficult to shift from theory to reality. A broader desire to see digital service development and integration as enablers of citizen and patient health is also evident, as in NHS England’s “Empower the Person: Roadmap for digital health and care services”. In a similar vein, many countries are seeing the value of big data for understanding the social determinants of health, to inform policies that can benefit populations and communities. Alongside this is a growing awareness of the cyber-threats facing health systems, although investment in the human and technological resources needed to combat this still falls far short of the challenge.

Meanwhile, governments continue to wrestle with questions over digital centralisation versus distribution and about the balance of technologies developed and managed by the health service or by the private sector. Spectacular failures involving home-grown IT, coupled with the drip-fed mantra that ‘big tech knows best’, have arguably disempowered the public sector by creating insecurity and decreasing investment in in-house skills and infrastructure, thus increasing its dependency on commercial suppliers. How this relationship pans out in the platform economy remains to be seen, but now is the time to strengthen the expertise needed to recognise the implications and ensure that investments are strategically sound, technologically robust, evidence-informed and legally watertight.

Returning to the need for connected strategies, it is useful to look beyond the obvious innovations in digital health, and consider what else needs to change for sustainable, responsible, equitable healthcare. This includes innovations in workforce optimisation, identity management, fraud prevention, cybersecurity, tackling online health risks and many other areas. For example, our research on human resource information systems in healthcare suggests their considerable potential for cost savings and care quality improvements.

It is important to remember the different starting positions of countries in Europe. While a degree of leapfrogging may be possible in regions where digital health is still a fairly new concept, entrenched organisational structures and cultures can create unfavourable conditions for innovation. For example, countries like the UK and Denmark, where primary care computing is well established, are considerably ahead of countries like Germany and Austria, where negative attitudes to data sharing and strongly hierarchical health systems prevail. The balance of public to private healthcare spending also affects the value proposition of personal health technologies in different regions, which are likely to affect their uptake.

Europeans are now in an excellent position to harness the power of digital to improve and sustain their public health systems, providing high quality care for all citizens despite the pressures of an ageing population. The stakes are high, however, and making wise choices about where to invest for the best value and impact can be challenging in an area which often promises much but fails to deliver. Critical, connected, evidence-informed and accountable leadership will help us to move forward confidently, cost-effectively and responsibly.

Thank you for your time.

 

Video and slides from Claudia Pagliari’s presentation “Making Digital Health Count – Some Thoughts for Leaders in Government”, The WHO Symposium on the Future of Digital Health, Copenhagen, Feb 6th, 2019.

Professor Claudia Pagliari is a senior lecturer and researcher within the Usher Institute of Population Health Sciences and Informatics at the University of Edinburgh, where she directs the eHealth Interdisciplinary Research Group and the MSc in Global eHealth. She holds a first class degree in Psychology from the University of Ulster, a PhD in Psychology from the University of Edinburgh and was elected Fellow of the Royal College of Physicians of Edinburgh in 2012. She is a member of the UK College of Experts in Health Informatics, the British Computer Society and the NHS Digital Academy (theme leader) and has held advisory roles with the American Health Information Management Association, the European Commission (scientific expert) and other agencies.
