Introduction to Artificial Intelligence

Radiologist’s Intro to Machine Learning — 10 Part Series


Artificial intelligence (AI) has been a long-discussed topic ever since programmable computers were developed. Academics and philosophers have questioned the differences between man and machine. Could we program the human brain, with all of its intricacies, into a computer? Would a computer then be able to think?

To date, we have yet to answer these fascinating, mind-bending questions, but we have come closer to making computers smarter. Some may argue, though, that even the smartest computers still have less intelligence than a cockroach. Think about that for a moment.

The smartest computers still can’t do a bunch of tasks at once. Instead, they are very good at doing the one task they are programmed to do.

Before we dig any further, let’s define some key terms. For each, we chose one of the many definitions available online.


The first three are hierarchical: AI is the largest, overarching category, machine learning (ML) is a subset of AI, and deep learning (DL) is a subset of ML.

Artificial intelligence — A computer system able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Machine learning — Arthur Samuel is often credited with describing machine learning as giving computers “the ability to learn without being explicitly programmed.” (A brief code sketch of this idea follows the list below.)

Deep learning — Machine learning carried out with deep neural networks. From MIT News: modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. This is analogous to the synaptic connections between axons and dendrites.

Image recognition — Using machine and deep learning techniques to identify contents within an image.

Architecture — The scaffolding and blueprint of a model: the way its pieces are arranged to predict an outcome.
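
To make Samuel’s definition a little more concrete, here is a minimal sketch in Python. It assumes the scikit-learn library, which is our own choice for illustration and not something this series requires; the dataset and classifier are likewise illustrative, not recommendations. The point is that the program is never told what any digit looks like; it infers the mapping from labeled example images.

```python
# A minimal sketch of "learning without being explicitly programmed":
# no hand-written rules for any digit, only labeled examples.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of handwritten digits, with labels

X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)  # illustrative classifier choice
model.fit(X_train, y_train)                # the "learning" happens here, from examples
print("held-out accuracy:", model.score(X_test, y_test))
```

The same pattern, fit on examples and then evaluate on held-out data, underlies most of the algorithms we will meet later in the series.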

These words will come in handy for the next couple of articles as well! Do some Google searches on them, and keep an eye out for them when we introduce common machine learning algorithms in the next article.


Why now?

As stated earlier, machine learning and artificial intelligence concepts are not new. In fact, they are decades old. However, several factors have recently changed, and together they have contributed significantly to advances in these fields. They are important to remember because they mark a unique point in history.

First, the computational speed of technology is rapidly advancing. Hardware known as GPUs (graphics processing units) has allowed computations to be parallelized. That is, more calculations can be done together at the same time instead of one after the other, which allows for huge gains in efficiency. We can thank the video gaming industry for these advances.
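
To get a feel for what parallelized computation buys us, here is a small illustration in Python using NumPy, an assumption on our part rather than anything this article depends on. NumPy runs on the CPU, but the idea is the same one GPU frameworks push much further: express a whole batch of arithmetic as a single array operation so the hardware can carry out many of the multiplications at the same time.

```python
# Illustrative only: the same sum of products written two ways.
import time
import numpy as np

a = np.random.rand(2_000_000)
b = np.random.rand(2_000_000)

t0 = time.perf_counter()
total_loop = 0.0
for x, y in zip(a, b):      # one multiplication after another
    total_loop += x * y
t1 = time.perf_counter()

total_vec = np.dot(a, b)    # the whole batch expressed as a single operation
t2 = time.perf_counter()

print(f"loop:       {t1 - t0:.3f} s")
print(f"vectorized: {t2 - t1:.4f} s")
```

On a typical laptop the loop takes on the order of a second while the vectorized call finishes in milliseconds, and a GPU widens that gap further for the matrix-heavy arithmetic inside neural networks.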

Second, there have been significant advances in algorithms. Deep learning frameworks and architectures have improved thanks to the likes of Google, Facebook, the research community, and individuals emerging from open-source communities. For example, a class of algorithms widely used today is the neural network. These algorithms are loosely modeled after the brain: information is passed between different layers of neurons as it moves through the network. Over time, the algorithms have become more complex, growing from a few layers to tens and potentially hundreds of layers. This added complexity allows for interesting interactions between variables that we may or may not have thought were important.
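
As a toy illustration of what “layers” mean, the sketch below pushes a 64-number input through three stacked layers of artificial neurons. It is written with plain NumPy under our own assumptions, not with any particular deep learning framework; real frameworks such as TensorFlow or PyTorch add many more layer types, training via automatic differentiation, and GPU support. This only shows how information flows from layer to layer.

```python
# A toy forward pass through a small, untrained neural network.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)  # a common activation function

# Three layers: 64 inputs (e.g., an 8x8 image) -> 32 -> 16 -> 10 output scores.
layer_sizes = [64, 32, 16, 10]
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Each layer combines the previous layer's outputs and passes the result on,
    # loosely analogous to signals crossing synapses between neurons.
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]  # raw scores for 10 classes

scores = forward(rng.random(64))
print("class scores:", np.round(scores, 3))
```

Training would adjust the weight matrices so those output scores become useful; stacking more of these layers is what puts the “deep” in deep learning.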

Last but not least is the exponential increase in data available within industries, on the web, and inside businesses. This area has been growing for years and will continue to grow. Whether it is the influx of social data, the number of images on the internet, or your purchases on Amazon, data is ever-present and will continue to serve as the starting point for many of these machine learning algorithms.


All of this may sound overwhelming, and the scope is certainly enormous. Each of the factors contributing to the emergence of machine learning and artificial intelligence could be teased out in far greater detail. What matters, however, is understanding that together these pieces were the ingredients for the adoption of algorithms that look through data to find stories.

The story we are telling through this series is how machine learning can change the way radiologists do their work. That change, however, will take time, understanding, and communication.


With all of this jargon, it is easy to get discouraged. Don’t worry. We are all on this journey to learn more about how technology and the current state of radiology (and other fields) will change.
