It is not the technology itself but how it is used – this, the five experts on the Digital Health Society Virtual Summit's panel agreed, is the most crucial issue in implementing AI for health and well-being at the workplace. AI-based solutions offer enormous benefits for monitoring workers' health and preventing accidents or, currently, the spread of COVID-19. Still, there are also considerable risks of misuse, if not outright abuse. These range from privacy concerns and gender or racial prejudice to bias stemming from low-quality data, data collected with inadequate tools, or the imbalance of power between employer and employees. Such risks should and can be limited in order to benefit fully from the opportunities AI solutions bring.
And those opportunities seem to be endless. According to the recent Deloitte and MedTech Europe report, implementing AI in European healthcare systems could save 380,000 to 403,000 lives and €170.9 to €212.4 billion per year. Moreover, as the paper “Working through COVID-19 and its aftermath”, published by the Horizon 2020 SmartWork project, shows, the AI solutions developed for different scenarios might be particularly useful both during the pandemic and in the longer perspective.
However, the EU-OSHA report on occupational safety and health reminds us of the risks. People would prefer AI in the workplace as an on-demand helper rather than as a manager, co-worker, or proactive assistant. Workers believe that, if applied properly, AI could improve safety, help reduce mistakes, and limit routine work.
The “if applied properly” mentioned above is the critical differentiator. With technologies based on personal data – often sensitive data, such as health-related data of various kinds – we need to ensure that the solutions are safe and secure, that they follow legal standards on privacy, and that they put the individual’s control over their data, and their well-being, at the centre. According to the experts participating in the panel at the DHSS, in practice this means:
- Developing ethical standards and policy frameworks to build trust and foster the adoption of AI in healthcare in conjunction with the working environment;
- Securing access to high-quality data by building data policies and infrastructure that foster access to, and interoperability of, harmonized data;
- Respecting employees’ rights to privacy and confidentiality by making sure the data is collected correctly, used meaningfully, gathered with informed consent, and handled in compliance with fundamental human rights;
- Improving explainability and accountability, as well as digital literacy, among all stakeholders involved, from decision-makers to employers to employees.
This is a call for a holistic approach to AI that goes beyond the technology alone: one that includes ethical implementation, user-centric and cross-sectoral policies, and limiting the risks in order to capitalize on the benefits of the new technologies.
Authors: Karolina Mackiewicz, Carina Dantas, European Connected Health Alliance.
The panel discussion took place at the Digital Health Society Summit on 18 November. It was hosted by the ECHAlliance and the Horizon 2020 SmartWork project, which develops a suite of smart services building a Worker-Centric AI System for work ability sustainability. The panel was moderated by Karolina Mackiewicz, Senior International Projects Manager at ECHAlliance.