AI is everywhere, and now it can hear if you’re suffering from Alzheimer’s

28 August 2017
News
Canadian WinterLight Labs has developed a novel AI technology that can quickly and accurately quantify speech and language patterns to help detect and monitor cognitive and mental diseases such as Alzheimer's disease. Just 45 seconds in the company of scientist Frank Rudzicz and his machines is all it takes to determine whether or not you are suffering from Alzheimer's, the British newspaper The Telegraph writes.

In that short timespan, the complex Artificial Intelligence (AI) based algorithms developed by 37-year-old Rudzicz and his team are able to pick apart someone's voice and predict the severity of the disease with an accuracy of, at the moment, around 82 per cent. This percentage will probably rise as the algorithms learn from their mistakes.

How do the algorithms in question do this?

  • First, the actual use of language. Alzheimer’s sufferers tend to leave longer pauses between words, prefer pronouns to nouns (for example, saying “she” rather than a person’s name) and give more simplistic descriptions, such as a “car” rather than the model or make.
  • Second: the “jitter and shimmer” of speech; variations in frequency and amplitude. “These are very difficult for the human ear to pick up but the computer is objective and completely quantifiable,” he says.
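The cues above can be sketched in simplified form. The sketch below is purely illustrative: the function names are invented, and the jitter and shimmer formulas are common textbook definitions, not WinterLight's actual implementation.

```python
# Illustrative feature sketches, assuming POS-tagged tokens and
# pre-extracted pitch periods / peak amplitudes as plain Python lists.

def pronoun_ratio(tagged_tokens):
    """Share of pronouns among pronouns + nouns: a higher ratio mirrors
    the tendency to say "she" rather than a person's name."""
    pronouns = sum(1 for _, pos in tagged_tokens if pos == "PRON")
    nouns = sum(1 for _, pos in tagged_tokens if pos == "NOUN")
    total = pronouns + nouns
    return pronouns / total if total else 0.0

def jitter(periods):
    """Mean absolute difference between consecutive pitch periods,
    relative to the mean period (cycle-to-cycle frequency variation)."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    mean_period = sum(periods) / len(periods)
    return (sum(diffs) / len(diffs)) / mean_period

def shimmer(amplitudes):
    """The same measure applied to peak amplitudes
    (cycle-to-cycle amplitude variation)."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    mean_amp = sum(amplitudes) / len(amplitudes)
    return (sum(diffs) / len(diffs)) / mean_amp
```

For example, `pronoun_ratio([("she", "PRON"), ("car", "NOUN")])` gives 0.5, and a perfectly steady voice (identical amplitudes every cycle) has a shimmer of 0.0.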

Rudzicz is co-founder of WinterLight Labs (Toronto) in the MaRS Discovery District: a cluster of downtown buildings run by a public-private partnership where, according to Yahoo, some of the most ground-breaking AI research in the world is taking place, and from where the Telegraph is reporting for a three-part series on the technologies already changing people's lives - hopefully for the better. The newspaper also refers to the memorandum signed by over 160 tech entrepreneurs expressing grave doubts about the use of autonomous weapon platforms.

Privacy concerns

Apart from the potential danger of AI-based platforms thinking for themselves about whom to kill, there is the more practical question of privacy. Rudzicz, also an assistant professor of computer science at the University of Toronto, acknowledges the complex regulatory issues regarding the extent to which AI machines should be used to diagnose patients.

Currently, his models are being piloted in the largest network of retirement homes in North America, as well as among elderly patients in Edinburgh and Nice, to collect data and train the machines to understand different languages and accents. At present, they are being used only to map cognitive decline within existing patients rather than to diagnose new ones.

More initiatives

Besides the 45-second test, which studies 400 different variables of speech, Rudzicz has built a small robot named Ludwig that runs on so-called machine-learning algorithms, which recognise patterns in data and make predictions. These algorithms enable the robot to engage patients in conversation and assess speech patterns to determine their health. As well as testing for memory and speech impairment, such technology can even predict emotions - and whether or not a patient is at risk of an imminent bout of anxiety or depression.
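To give a flavour of what "recognise patterns in data and make predictions" means, here is a deliberately tiny sketch: a nearest-centroid classifier over made-up speech-feature vectors. The feature values, labels, and three-variable vectors are invented for illustration; the real system draws on some 400 variables and far more sophisticated models.

```python
# Hypothetical sketch: classify a speaker by comparing a feature vector
# [mean pause length (s), pronoun ratio, jitter] to per-class centroids.
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def predict(sample, centroids):
    """Return the label whose centroid lies closest (Euclidean) to sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Invented training data: longer pauses, more pronouns, more jitter
# on the "impaired" side, per the cues described in the article.
healthy = [[0.3, 0.35, 0.05], [0.4, 0.40, 0.06]]
impaired = [[0.9, 0.70, 0.12], [1.1, 0.65, 0.14]]
centroids = {"healthy": centroid(healthy), "impaired": centroid(impaired)}

print(predict([1.0, 0.68, 0.13], centroids))  # prints "impaired"
```

The same "learning from mistakes" idea the article mentions would, in a real system, mean refitting these decision boundaries as labelled recordings accumulate.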