The Importance and Applications of AI in Healthcare

Artificial intelligence is becoming more and more popular, and the debate over whether it does more good than harm continues. What remains constant is that AI is being applied across many industries, including healthcare. Applications of AI in healthcare have continued to grow and spread into various healthcare fields and institutions.

Artificial intelligence plays many roles in the healthcare industry, from helping to develop advanced patient population tools to optimizing workflows in both inpatient and outpatient settings. With the advancement of electronic health records (EHRs) and artificial intelligence (AI), healthcare organizations and hospitals have started to revise, enhance, and develop new processes within their care systems. The use of AI has changed, for the better, the way healthcare practitioners (doctors, nurses, and others) treat patients, provide care, communicate, store, access, and interpret data, and function in general within the healthcare framework.

How Intelligent Is AI? (The Imitation Game)

In 1950, the computer science pioneer Alan Turing proposed the Turing Test, or Imitation Game, as a way to measure whether machines can be intelligent. The test goes like this: if a judge interacts with both a human and a machine through a text-only conversation, can the machine trick the judge into thinking it is the one who is human?

He said:

“I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about [1GB], to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

His timing may have been off, but his predictions have certainly come to pass in recent years. Google recently launched an AI program called Google Duplex and demonstrated that it could call a hair salon and a restaurant to make reservations as a human being would. The AI responded to the conversation as a person would, even inserting fillers like "hmm" and "errmmm" to make the person on the other end believe they were actually talking to a human being. There has been backlash that the program is unethical and could be used for malicious purposes, and while those concerns are worth taking seriously, this advancement shows how rapidly AI is being developed and how many more advances the coming decades will bring.

Different Applications of AI

Artificial intelligence is being applied in different industries and in different ways. Let's discuss some of those ways below.

Automated machine learning (AutoML):

Automated machine learning is about creating models without programming every step by hand. Developing machine learning models normally requires a time-consuming, expert-driven workflow that includes data preparation, feature selection, model or technique selection, training, and tuning. AutoML aims to automate this workflow using a number of different statistical and deep learning techniques. This opens the door to tools that democratize AI, enabling business users with only basic knowledge to develop their own machine learning models, and it helps data scientists create models at a faster pace. Think of it like apps or plugins for AI.
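
To make the idea concrete, here is a minimal sketch of AutoML-style selection using scikit-learn's GridSearchCV. The dataset, candidate algorithms, and parameter grids are illustrative choices only; dedicated AutoML platforms automate far more, such as feature engineering and architecture search.

```python
# A minimal sketch of automated model selection: a single search picks
# both the algorithm and its hyperparameters for us.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing plus a placeholder "model" step the search will swap out.
pipeline = Pipeline([("scale", StandardScaler()),
                     ("model", LogisticRegression())])

# The search space covers which algorithm to use and how to tune it.
search_space = [
    {"model": [LogisticRegression(max_iter=5000)],
     "model__C": [0.1, 1.0, 10.0]},
    {"model": [RandomForestClassifier(random_state=0)],
     "model__n_estimators": [100, 300]},
]

search = GridSearchCV(pipeline, search_space, cv=5)
search.fit(X_train, y_train)  # selection, training, and tuning in one call

print("Best configuration:", search.best_params_)
print("Held-out accuracy:", search.score(X_test, y_test))
```

The point is the shape of the workflow: the human describes the search space, and the tooling handles the repetitive trial-and-error.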


Digital twin:

A digital twin is a virtual replica used to facilitate detailed analysis and monitoring of physical or psychological systems, and the idea now reaches well beyond industrial applications. The concept originated in the industrial world, where it has been widely used to analyze and monitor things like windmill farms and industrial systems. Now, using agent-based modeling (computational models for simulating the actions and interactions of autonomous agents) and system dynamics (a computer-aided approach to policy analysis and design), digital twins are being applied to nonphysical objects and processes, including predicting customer behavior. Digital twins can also help predict diagnoses, and they will continue to be used both in physical systems and in consumer choice modeling.
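
As a rough sketch of the idea, the toy example below keeps a simple virtual model of a machine's temperature running alongside hypothetical sensor readings and flags readings that drift away from what the twin predicts. The dynamics, numbers, and alert threshold are invented for illustration.

```python
# A toy digital twin: a virtual temperature model of a machine, compared
# against (made-up) sensor readings to spot behavior the model cannot explain.

COOLING_RATE = 0.1        # assumed model parameters, chosen for illustration
AMBIENT_TEMP = 20.0
HEAT_PER_UNIT_LOAD = 0.5
ALERT_THRESHOLD = 5.0     # flag when prediction and reality diverge this much


def step_twin(temp, load):
    """Advance the virtual model by one time step (simple system dynamics)."""
    heating = HEAT_PER_UNIT_LOAD * load
    cooling = COOLING_RATE * (temp - AMBIENT_TEMP)
    return temp + heating - cooling


def monitor(readings, loads):
    """Predict each next reading with the twin and flag large surprises."""
    for t in range(len(readings) - 1):
        predicted = step_twin(readings[t], loads[t])
        actual = readings[t + 1]
        if abs(actual - predicted) > ALERT_THRESHOLD:
            print(f"t={t + 1}: measured {actual:.1f}, twin expected {predicted:.1f}")


# Made-up data: the real machine heats up faster than the twin near the end.
loads = [10] * 11
readings = [25, 27, 29, 31, 32, 33, 34, 36, 38, 45, 55]
monitor(readings, loads)
```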


Lean and augmented data learning:

These techniques help when the huge volumes of labeled data, otherwise known as big data, that machine learning normally depends on are not available. Using them, we can address a wider variety of problems, especially those with little historical data. The biggest challenge in machine learning is the availability of large volumes of labeled data to train the system. Two broad techniques can help address this: (1) synthesizing new data and (2) transferring a model trained for one task or domain to another. Techniques such as transfer learning (transferring the insights learned from one task or domain to another) and one-shot learning (transfer learning taken to the extreme, with learning occurring from just one or no relevant examples) are "lean data" learning techniques. Similarly, synthesizing new data through simulations or interpolations yields more data, thereby augmenting existing data to improve learning.
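
The augmentation side is simple to illustrate. The toy sketch below synthesizes extra training examples for a small labeled dataset by interpolating between random pairs of existing examples; the data and blending scheme are made up, and real augmentation is usually domain-specific (rotating images, simulating sensor noise, and so on).

```python
# A toy example of augmenting a small labeled dataset by interpolation:
# new examples are convex combinations of random pairs of existing ones.
import numpy as np

rng = np.random.default_rng(0)

# Tiny "historical" dataset: 20 examples, 5 features, numeric labels.
X = rng.normal(size=(20, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=20)


def augment_by_interpolation(X, y, n_new, rng):
    """Synthesize n_new examples by blending random pairs of real ones."""
    i = rng.integers(0, len(X), size=n_new)
    j = rng.integers(0, len(X), size=n_new)
    lam = rng.uniform(0.0, 1.0, size=(n_new, 1))          # blend weights
    X_new = lam * X[i] + (1.0 - lam) * X[j]
    y_new = lam[:, 0] * y[i] + (1.0 - lam[:, 0]) * y[j]   # blend labels too
    return X_new, y_new


X_extra, y_extra = augment_by_interpolation(X, y, n_new=100, rng=rng)
X_train = np.vstack([X, X_extra])        # 120 training examples instead of 20
y_train = np.concatenate([y, y_extra])
print(X_train.shape, y_train.shape)      # (120, 5) (120,)
```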

Probabilistic programming (languages to ease model development):

A probabilistic programming language is a high-level language that makes it easier for a developer to design probability models and then automatically "solve" them. Probabilistic programming languages can accommodate the uncertain and incomplete information that is so common in the business domain, and they are increasingly used in deep learning. They make it possible to reuse model libraries, support interactive modeling and formal verification, and provide the abstraction layer necessary to foster generic, efficient inference in universal model classes.
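
To give a feel for what this looks like, here is a small sketch using PyMC, one probabilistic programming library for Python. The developer declares a probability model for a made-up conversion-rate question, and the library performs the inference automatically.

```python
# A small probabilistic program in PyMC (one of several such libraries):
# declare the model, hand it the data, and let the library do the inference.
import pymc as pm

observed_conversions = 27   # made-up business data: 27 sales ...
observed_visitors = 200     # ... out of 200 site visitors

with pm.Model():
    # Prior belief about the unknown conversion rate (uniform over [0, 1]).
    rate = pm.Beta("rate", alpha=1.0, beta=1.0)

    # Likelihood: how the observed data is assumed to have been generated.
    pm.Binomial("conversions", n=observed_visitors, p=rate,
                observed=observed_conversions)

    # The "solve" step: the library samples from the posterior for us.
    trace = pm.sample(1000, progressbar=False)

# Posterior mean of the conversion rate, with full uncertainty in the trace.
print(trace.posterior["rate"].mean().item())
```

The developer writes down what is uncertain and what was observed; working out the consequences is left to the language's inference machinery.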