Artificial Intelligence (AI) is a rapidly growing field of research with a promising future. Applications like Vastmindz, which touch every area of human activity, make it possible to improve the quality of healthcare in particular. Indeed, AI is at the heart of today's medical advances and the medicine of the future: assisted operations, remote patient monitoring, intelligent prostheses, and personalized treatments made possible by cross-referencing ever-growing volumes of data (big data).
Researchers are developing multiple approaches and techniques, from language processing and ontology construction to data mining and machine learning. However, the general public needs to understand how these systems work in order to know what they do and, above all, what they don't do. The omniscient robot, which for many symbolizes AI, is not going to happen tomorrow!
Artificial intelligence was born in the 1950s with the goal of having machines replicate human tasks by mimicking human activity. After a series of setbacks, two approaches to AI emerged.
The proponents of so-called strong artificial intelligence aim to design a machine capable of reasoning like a human, with the supposed risk of producing a machine superior to humans and endowed with its own consciousness. This research path is still explored today, even if many AI researchers believe that reaching such a goal is impossible.
On the other hand, the proponents of so-called weak artificial intelligence use all available technologies to design machines capable of helping humans in their tasks. This field of research relies on many disciplines from computer science to cognitive sciences and mathematics.
This approach – which we'll be focusing on in this article – generates all the specialized, high-performance systems that populate our environment today: suggesting possible friends on social networks, identifying dates in texts, helping doctors make decisions, etc. These systems, which vary greatly in complexity, share one key trait: a limited ability to adapt.
AI Systems that Utilize Logic in Health
The oldest approach is based on the idea that we reason by applying rules of logic (deduction, classification, hierarchization, and so on). Systems designed on this principle apply a variety of methods, depending on how they categorize and store facts and relationships: syntactic and linguistic models (automatic language processing) or ontology development (knowledge representation). Logical reasoning systems then use these models to produce new facts.
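The core mechanism, deriving new facts by applying rules to known ones, can be sketched in a few lines. The rules and facts below are invented for illustration and are not from any real medical knowledge base:

```python
# Minimal forward-chaining sketch: rules fire on known facts to derive
# new ones, until no rule can add anything. All rules here are hypothetical.
FACTS = {("patient", "has_symptom", "polyuria"),
         ("patient", "has_symptom", "polydipsia")}

# Each rule: (set of premises, fact to conclude).
RULES = [
    ({("patient", "has_symptom", "polyuria"),
      ("patient", "has_symptom", "polydipsia")},
     ("patient", "suspected_of", "diabetes")),
    ({("patient", "suspected_of", "diabetes")},
     ("patient", "needs_test", "fasting_glucose")),
]

def forward_chain(facts, rules):
    """Apply every rule whose premises hold, repeating until stable."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain(FACTS, RULES)
```

Note the chaining: the second rule only fires because the first one added a new fact, which is how such systems produce conclusions not stated anywhere explicitly.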
Current systems for knowledge management and e-health are more sophisticated. They benefit from better models of reasoning as well as better techniques for describing medical knowledge, patients, and medical procedures. The algorithmic mechanics are broadly the same, but the description languages are more expressive and the machines are more powerful. These systems no longer seek to replace the physician but to support him or her in reasoning based on medical knowledge.
Helping in the management of breast cancer
Teams of engineers use a symbolic AI algorithm to help clinicians in the treatment and follow-up of breast cancer patients. This complex disease often calls for approaches that go beyond traditional protocols.
The AI platform integrates best-practice recommendations through ontology-based reasoning. The system can also learn from cases that have already been resolved, replicating decisions made for cases similar to the clinical case at hand, or reason from experience, reusing decisions that departed from the recommendations when the criteria given to justify that departure apply. The continuous enrichment of the case base allows the system's recommendations to evolve and improve over time.
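The "learn from resolved cases" idea is essentially case retrieval: find the stored case most similar to the new one and reuse its decision. Here is a deliberately tiny sketch of that principle; the feature vectors (say, age, tumor size, a biomarker flag) and decisions are entirely invented, not taken from the platform described above:

```python
# Case-based reasoning sketch: retrieve the most similar past case
# and reuse its decision. All cases and features are hypothetical.
from math import sqrt

# Past cases: (feature vector, decision that was taken).
CASE_BASE = [
    ((62, 2.1, 1), "chemotherapy_A"),
    ((45, 0.8, 0), "surgery_then_radiotherapy"),
    ((58, 1.9, 1), "chemotherapy_A"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(new_case, case_base):
    """Return the decision of the stored case closest to the new one."""
    _, decision = min(case_base, key=lambda c: distance(c[0], new_case))
    return decision

suggestion = retrieve((60, 2.0, 1), CASE_BASE)
```

Once the new case is resolved, appending it (with its final decision) to the case base is what the article calls continuous enrichment: future retrievals can then draw on it.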
AI Algorithms Relying on Learning and Health
In contrast to the symbolic approach, the learning approach relies on data. The system looks for regularities and patterns in the available data to extract knowledge without a pre-established rigid model. This method, which rose to prominence with artificial neural networks in the 1980s, is flourishing today thanks to massive increases in computational power and the accumulation of vast amounts of data.
Deep learning applications already exist in image processing, for example to identify possible melanomas in skin photos or to detect diabetic retinopathy in retinal images. Their development requires large training samples: 50,000 images in the case of melanomas, and 128,000 in the case of retinopathies, were needed to train the algorithm to identify the signs of these pathologies. For each of these images, the algorithm is told whether or not it shows signs of pathology. At the end of training, the algorithm can recognize the disease in new images with excellent accuracy. And we've only touched the tip of the iceberg when it comes to machine learning in healthcare.