Aleksandra:
Welcome my digital pathology people. I am talking to you from a car, because I’m going back home from a conference. I just gave a talk. Today we don’t have any guests. It’s going to be the talk I just gave. It’s about artificial intelligence in digital pathology. So enjoy.
Intro:
Learn about the newest digital pathology trends in science and industry. Meet the most interesting people in the niche and gain insights relevant to your own projects. Here is where pathology meets computer science. You are listening to the Digital Pathology Podcast with your host, Dr. Aleksandra Zuraw.
Aleksandra:
Good afternoon. Raise your hand if you can hear me. Fantastic. Okay. I barely made it. I had to switch with one of the presenters. So thank you so much for the patience and for coming to my talk about AI in digital pathology. Let me tell you who I am first before we dive in. I’m a veterinary pathologist, as you may know from my slide, but I’m not a diagnostician. The diagnostics part is going to be at the end; I had to do some research for you, but there is not too much. I work in drug development support, for Charles River Laboratories, which supports pharma companies with their non-clinical studies. There is a bunch of veterinary pathologists working in that industry, and I’m one of them. On top of everything that I do, I have a blog about digital pathology, and obviously artificial intelligence is a big part of what’s happening in digital pathology.
And I just consulted with Lynn here, my host, and many of you are radiation oncologists. You do interact with pathologists, which is great. So we’re going to talk about what digital pathology is, a little bit about slide digitization, tissue image analysis, and AI in tissue image analysis. I hope I’m not going to repeat what my previous speakers told you about artificial intelligence. And then we’re going to dive a little bit into the applications. So what is digital pathology? Well, it’s a little bit similar to digital radiology. You don’t have a microscope in radiology, but basically you take what we use for pathology, the microscope, and you add digital photography. The fancy field of digital pathology is actually this at its core. And if you’re in private practice, you can basically use a digital camera or a phone and your microscope and do digital pathology, because you can do slide digitization.
You can basically do slide digitization with your phone through the microscope. These are static images that you can use for consultation; you can even use them for consulting a pathologist. Recently a friend of mine, Kate Baker, if anybody follows veterinary pathology, she has a huge Facebook group, developed an app for static digital pathology where you can download the app, use your phone and do digital pathology from the practice. But obviously there is the other side of digital pathology, which is done with scanners, where you digitize slides and you get this zoomable experience, the digital microscope experience. Even when you replace your microscope, you still have to make the slides. Unfortunately we’re not as cool as radiologists: we still have to have our analog modality, and then we take a special type of picture of it that is zoomable.
So this is where the digitization takes place, in the scanner. We have image acquisition there, basically taking the picture, the scanning. We store those images in this machine or in the computer that belongs to this machine. We edit them. Why do you need to edit them? They are taken at different magnifications, and to make them into the zoomable image, you have to put all the pieces of those different magnifications together; they have to match. Then you can zoom in and zoom out. That’s the editing. And then you have to look at them, you have to view them. So there is that display necessity, and also the viewing software necessity. So there is a little bit more complexity than with the static images, where it’s just a phone and you send it, with an app or without. But this machine, even though it seems a little bit complex, is made of familiar components: it has a light source inside.
And there is also a slide stage, there are objective lenses, and in this case a high-resolution camera. Does this remind you of a certain piece of equipment that you probably are using? Raise your hand if yes. And yes, it does remind you of the microscope, because at its core, digital pathology, what I showed you on the first slide, is basically microscopy plus digital pictures taken in a special way. And our pictures are bigger than the radiology pictures. So not only can’t we digitize immediately without making glass slides, the images are huge and they are difficult to transfer, and there are many IT and logistical problems with that. I was giving a talk at one conference and somebody asked me, “But why are they so huge?” That’s why I always bring this slide to conferences, because that day I forgot the answer.
And somebody from the audience was nice enough to say: because we have the pyramid, the magnification pyramid. I already told you about this zoomable experience where you have to take pictures at multiple levels and put them all together, and that makes the file so much bigger than a radiology image. Scanned with a 40x lens, a full tissue slide can be as big as a two-hour HD movie. So if we have to send ten of them, ten movies, via email, that doesn’t work too well. They’re big. But the fact that we can digitize the whole slide gives us the capability of analyzing this slide with computer algorithms. It gives us the option of tissue image analysis. And tissue image analysis is basically detecting something in the picture, marking it, and quantifying those markups.
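To make the "as big as a two-hour movie" point concrete, here is a back-of-the-envelope size calculation for a whole-slide image with its magnification pyramid. The pixel dimensions are illustrative assumptions, not figures from the talk; real slides vary widely.

```python
# Back-of-the-envelope size of a whole-slide image (WSI).
# Assumed numbers: a 40x scan on the order of 100,000 x 100,000 pixels,
# 3 bytes (8-bit RGB) per pixel, uncompressed.
width_px = 100_000
height_px = 100_000
bytes_per_px = 3

# Level 0 of the pyramid: the full-resolution scan.
base_level = width_px * height_px * bytes_per_px

# Each pyramid level halves both dimensions, so it holds 1/4 of the
# pixels of the level below; the whole pyramid sums to ~4/3 of level 0
# (geometric series 1 + 1/4 + 1/16 + ...).
pyramid_total = base_level * 4 / 3

print(f"base level:   {base_level / 1e9:.0f} GB uncompressed")
print(f"with pyramid: {pyramid_total / 1e9:.0f} GB uncompressed")
```

Compression brings this down substantially in practice, but even compressed files commonly run into the gigabytes, which is why emailing ten of them does not work well.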
On this image is the colon of a mouse from a mouse model of ulcerative colitis: DSS colitis, dextran sulfate sodium colitis. DSS is an agent that induces this disease model. For drug development we use animal models, and this is one of the animal models that we as pathologists score visually. And it’s tough, because this model causes segmental disease that can be very severe in one segment but not so severe in another. The scoring of this model is difficult: it’s hard to be objective throughout a study and hard to be consistent between pathologists. So image analysis that automatically detects the affected regions is the perfect solution, right? We did develop an image analysis solution for this that is actually working. So image analysis is a super powerful tool for us, and actually for anybody in medical imaging. Radiologists, do you use image analysis algorithms?
Raise your hand if you do, if there are any. Yes. So that’s the application. We have image analysis in pathology as well. We’re 20 years behind radiology in the digitization process. Maybe not so much in AI, because AI came after radiology was already digital, so there maybe we are not that far behind. It’s a powerful tool, but it is not magic. What do I mean, that it’s not magic? It’s a one-trick pony: it can only do one thing for you. If you have a simple question, it can answer the simple question fairly well. If you have another, more complex question, then you’ll reach the limitations of the technology. There are two main approaches in image analysis: you can either define the regions that you want to detect in the image with so-called handcrafted features, or you can give examples.
Examples are what these models or algorithms (they are called models) are trained on. Then you don’t have to worry: oh, what’s the width, what’s the intensity of the color, what’s the size of those cells? You just give examples and the model gets trained. These tools are used in regulated and non-regulated environments. In veterinary medicine, the regulated environment is more where I work, in the drug development industry. For you, you can basically build one for yourself and use it; I am not aware of any regulations that would prevent you from doing so. I work in the GLP environment, good laboratory practice, so there we have a little bit more regulation, but it is still less strict than what MD pathologists have. And the super crucial thing, especially with those examples for the AI-based models that we’re going to be talking about, is that the examples have to be of high quality. Because if you’re sloppy with your examples, you’re going to get garbage out. Garbage in, garbage out. Like in life, I would say.
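As a toy illustration of the "handcrafted feature" approach just contrasted with giving examples: here a human picks the detection rule, a hand-chosen intensity threshold, and the computer only marks and quantifies. The image and the threshold value are made up for the sketch.

```python
import numpy as np

# Handcrafted-feature detection in miniature: a human operator defines
# the rule (a pixel counts as "positive" if its intensity exceeds a
# hand-picked threshold); the computer marks and quantifies.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64))  # fake single-channel tile

THRESHOLD = 200  # the "handcrafted" part: a human chose this number

mask = image > THRESHOLD          # mark the pixels that match the rule
positive_fraction = mask.mean()   # quantify the markup

print(f"{positive_fraction:.1%} of pixels flagged as positive")
```

The trainable alternative skips defining `THRESHOLD` by hand and instead learns the decision rule from annotated example regions; the cost is that those annotations, the examples, must be high quality, or it is garbage in, garbage out.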
So nothing new there, but sometimes people overestimate the power of the method, do not feed it quality data, and are then surprised that there’s no quality data coming out. Garbage in, garbage out. So artificial intelligence in tissue image analysis is cool, because we don’t have to think about what parameters of the thing we want to detect we have to define. We give examples. But what is artificial intelligence in general? It can be used for 20 different things, and everybody has it in their cell phone and in their car, probably everywhere. There is speech recognition. There is decision making, for example whether or not to give a loan to an individual: are they qualified? You have a list of parameters that go into an algorithm and it gives an output: give them money, because they will give it back, or no, don’t give them money.
Decision making and language translation. I used to be an interpreter after vet school, I actually was interpreting, and recently I had to translate something with Google Translate, so I compared its performance to back then, which was 2009 when I graduated. They have improved so much. I’m from Poland, and I had to translate something for my husband from English to Polish, and it was really good. Before, a lot of work would have to go into correcting Google Translate; now it’s less. We can divide artificial intelligence based on capability or based on functionality. Based on capability we have narrow AI, general AI and strong AI. Narrow AI is good at one specific task, general AI can perform like a human, and strong AI would be more intelligent than us.
Where do we stand with AI? Who’s for one, raise your hand. Who’s for two? Who is for three? We are at one, ladies and gentlemen. I mentioned it’s a one-trick pony for image analysis: those artificial intelligence models are trained on one task, and that’s narrow AI. We can have a couple of them stuck together to recognize different things or to predict different things, but we are still in the narrow AI phase of the AI revolution. Which brings me to answering the question that you didn’t ask yet: are we going to lose our jobs to AI? I think not, because I can do a lot more than just one thing at a time, and there would have to be many, many models, like an infinite number of models put together, to replace a pathologist or a radiologist. Moving on: based on functionality, we have reactive machines, limited memory, theory of mind and self-awareness.
If you look at the headings, there’s something that’s called “theory”, meaning it’s only available in theory, and that gives us some clues about where we stand. Reactive machines focus on current scenarios and don’t store memories. Limited memory stores past experiences for a short period of time. Theory of mind understands emotions and beliefs and can interact socially. And self-awareness is mega extra everything fantastic, right? We’re not at those last two yet. Let me give you a couple of examples of the AI we do have in real life. We have Siri from Apple and Alexa that we talk to, so they can recognize speech. We have predictive searches in Google and on Amazon: clients who bought this also bought that. And then I end up buying more than I need to buy. Then we have product recommendations, which is basically what those predictive searches are.
And we have self-driving cars, image recognition, face recognition for the iPhone. I don’t have an iPhone; my husband has one, and I cannot switch it on, because it recognizes his face. And we have speech recognition. This is actually important in pathology, because especially in reference labs the pathologists dictate their reports, and you have algorithms that are trained, first generally for speech recognition, then additionally for medical terminology, and additionally for the voice of the person actually speaking to the device. So speech recognition is a big thing, and I have a podcast, so I use this a lot for transcripts. Then we have reactive machines: Deep Blue was an IBM computer that beat Garry Kasparov in chess. Then we have AlphaGo, which beat the Go champion, whose name, due to my ignorance, I never remember, but Garry Kasparov I remember.
So the Go champion was beaten by Google’s program. And then we have self-driving cars. Self-driving cars have to store some memories, because they have to remember the surroundings and they need that information to navigate. But half an hour after passing a certain point, they don’t remember what was there half an hour ago. So this is limited memory. Deep learning is a method within artificial intelligence where you can give those examples that I mentioned. Both classical machine learning and deep learning are used in our field. In classical machine learning, wherever you hear “classical machine learning”, think handcrafted feature thresholds: something a human operator, a human observer, had to define before the computer could recognize that this blue thing is a car. In deep learning, we give examples and we don’t have to define the features. We still have to give examples, but you can streamline giving examples.
So this model here, this convolutional neural network: it’s a network because there are many dots connected, and those blue things are called neurons; that’s why it’s a neural network. Basically, you give it plenty of photos of cars and the network figures out what to classify as car or non-car. This is deep learning. Artificial intelligence is a big part of computer science in pathology and radiology. Computer vision is the part of computer science that’s going to be important. AI is the big concept. Within this concept we have machine learning: wherever a machine can be taught something or programmed, that is machine learning. And deep learning is the part with those examples. It’s called deep learning because there are many layers of those blue neurons; it’s a model architecture. In contrast to there being just a couple of layers, there are many layers now, and that’s why it’s called deep learning.
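The "give examples and the model gets trained" idea can be sketched in a few lines. This is a drastically simplified stand-in for a convolutional neural network: a single artificial neuron (a perceptron) that learns to separate two made-up classes of 2D points purely from labeled examples, with no handcrafted rule. All data are synthetic.

```python
import numpy as np

# Learning from examples instead of handcrafting rules: a single
# artificial neuron adjusts its weights whenever it misclassifies a
# labeled example, until it separates "car" from "not car".
rng = np.random.default_rng(1)
car = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(50, 2))        # "car" examples
not_car = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(50, 2))  # "not car"
X = np.vstack([car, not_car])
y = np.array([1] * 50 + [-1] * 50)

w = np.zeros(2)   # weights start knowing nothing
b = 0.0
for _ in range(20):                    # a few passes over the examples
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:     # misclassified -> nudge the weights
            w += yi * xi
            b += yi

predictions = np.sign(X @ w + b)
accuracy = (predictions == y).mean()
print(f"training accuracy: {accuracy:.0%}")
```

A real deep network stacks many such layers of neurons, and convolutional layers let it discover its own image features, but the core loop is the same: show examples, measure errors, adjust weights.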
And that was what we used for our colitis detection and classification model. All this is part of computer vision. As I already mentioned, computer vision is a branch of computer science that deals with images. And for dealing with those images, especially the big pathology images, you need better computers. The part of the computer that is important in image processing is the GPU, the graphics processing unit. In the end, you can have something that a clinician can use as an aid, and this is called computer-aided diagnosis. And here I’m going to talk again about whether we’re going to lose our jobs. At the beginning of this AI revolution there was, I don’t know if it was a goal or a hope, that computers or algorithms would be able to diagnose, so that the doctors would not have to, and it would be done automatically for high throughput and all that jazz.
That’s not the tendency anymore. That’s not the direction AI is going, because it turned out it didn’t work so well, and also there was a lot of liability involved in something that was not supervised by a human, even if the development was supervised by a human. So now we want to have computer-aided diagnosis. In human medicine there’s actually one algorithm, one model, that is approved by the FDA, for prostate cancer detection. What this algorithm does is analyze the image and highlight the points where it thinks there is malignancy. Then the pathologist comes in and says, “Yes, yes, yes, no,” and in that way it can go a lot faster. And when they did their [inaudible 00:19:27] experiments, the accuracy of pathologists plus AI was higher than pathologists alone. AI alone was lower than pathologists, but the two combined were better and faster. So this is what’s going to happen.
So there’s going to be a lot to supervise, if we remember that those models only do one task. Let’s say cancer diagnostics: one organ, one cancer. There are many organs and each can have cancer, so that’s already a multitude of algorithms. Then what else can you have? There’s differentiating cancer from hyperplasia, and all these different variables that you can combine. So let’s assume we’re good and we have one-organ, one-cancer models. For each of those models, there would have to be supervision. So it’s okay, we’re still going to have enough to do. And this is the part where I had to do research for you guys: applications in veterinary diagnostic pathology, because I do not do diagnostics in the classical companion-animal sense of diagnostics; I do change recognition in animal studies. But I didn’t have to do too much research, because the main application I encountered was mitosis detection. And image analysis people, they have their challenges.
They organized the MIDOG challenge for recognizing mitoses. The nice thing about this is that mitotic figures in animal tissues look the same as in human tissues, so it was actually a joint challenge on animal and human tissues. They develop those algorithms and then they check which one is the best at detecting mitoses. So they met in Singapore in 2022, and the year before they met as well, to detect those mitotic figures. And let’s see if our video is going to work. Let’s start clicking. That’s too bad, because I had two videos for you and one was funny, but that’s okay. Where is it? That’s so sad. That’s okay. Anyway, the normal approach is: you look for mitotic figures in a field of view and you find a couple of them. If you find enough, okay, the mitotic index is high enough to classify the tumor as this or that grade of malignancy. And you do it in a couple of fields of view.
If you use an algorithm, you can do it on a whole slide, and then you can use this data in a more granular way, because you have data for the whole slide. You can later correlate it with different types of data, with survival and all those things that bioinformaticians and medical analysts do. Another thing that can be done is classification of tumors: round cell tumors were classified with artificial intelligence, with deep learning models. And there was necrosis detection. That’s basically what I found for veterinary diagnostics, and these are pretty good publications. Anyway, how can you use it if you wanted to? You probably will not, but maybe a reference lab would. How can they use it? Especially labs that digitize slides, which I know Ibex is, and they have a fleet of digital pathologists all over the world.
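The granularity argument above can be sketched as code: once an algorithm has detected mitotic figures across the whole slide, you can bin the detections into regions and see the hot spots, instead of relying on a handful of manually chosen fields of view. All coordinates and dimensions here are synthetic stand-ins for real detector output.

```python
import numpy as np

# Whole-slide granularity in miniature: bin hypothetical mitotic-figure
# coordinates into a coarse grid of regions and count per region.
rng = np.random.default_rng(2)
slide_w, slide_h = 80_000, 60_000          # slide size in pixels (made up)
mitoses = rng.uniform([0, 0], [slide_w, slide_h], size=(500, 2))  # (x, y) detections

grid = np.zeros((6, 8), dtype=int)         # 6 x 8 regions over the slide
rows = (mitoses[:, 1] / slide_h * grid.shape[0]).astype(int)
cols = (mitoses[:, 0] / slide_w * grid.shape[1]).astype(int)
np.add.at(grid, (rows, cols), 1)           # unbuffered per-region counting

print("mitotic count per region:")
print(grid)
print(f"hottest region: {grid.max()} figures; whole-slide total: {grid.sum()}")
```

A per-region map like this is the kind of slide-wide data that can later be correlated with survival or other outcomes, which a few fields of view cannot support.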
So they can basically run this algorithm on the image before it’s even read by a pathologist, and they get a number and an overlay of those mitotic figures to confirm: did it detect them correctly, yes or no? What’s the number, high enough? Okay, that’s the grade. But if anybody wanted to do it themselves, for example in academia, for animal disease models or any type of study evaluation, there is software available, both open source for free and obviously commercially available, where you can develop those algorithms on your own. You can digitize your slides, give those examples on your own, and in that way generate a reproducible way of evaluating your data. And I had another video before questions and answers that’s not going to play, but Google it. It’s Dr. Glaucomflecken; he’s a medical comedian, actually an ophthalmologist, and he had a super cool video, Academic Conference Q and A. I’m not going to tell you what’s in there, but feel free to ask questions if you have any.
It was so much fun to go to a veterinary conference again. I have not been to a veterinary conference for a long time. I have been to several pathology conferences, but a conference where you actually meet practicing veterinarians is not somewhere you usually go when you are not a diagnostic pathologist. And I feel pretty disconnected from the practice of veterinary medicine. First of all, I finished my vet school and practiced in Poland around 2009, and that was a lot less advanced than what the colleagues here in the US were learning. I had the chance to give a talk after two radiologists and was super impressed with how advanced artificial intelligence is in veterinary radiology. That was fantastic. Before I send you off to listen to the next episodes, or previous ones if you haven’t already, I want to announce something. Book the 4th of November in your calendars, because together with the Radboud University Medical Center in Nijmegen, I am organizing a virtual Computational Pathology summit.
This is going to be an independent digital pathology event and I will have plenty of fantastic guests. They’re researchers who are at the forefront of computational pathology. They’re going to join me for podcast-style lectures where I’m going to be asking them questions. And we’re going to have a day-long virtual event where we’re going to be explaining all the aspects of computational pathology that you need to know to join the digital pathology revolution, start contributing immediately and make a difference in patients’ lives. This event is a result of a sabbatical I was privileged to take from Charles River Laboratories. There is a sabbatical program where employees can take one month off to do a charity or professional development project. So Charles River was kind enough to give me this time off. The Radboud University Medical Center Computational Pathology Group, led by Jeroen van der Laak, Geert Litjens and Francesco Ciompi, was kind enough to host me, and they agreed to be guests at this event. But that’s not it.
The cool thing is that all those lectures are going to be later published under the Creative Commons Attribution license. What does that mean? It means you can have them forever and do whatever you want with them as long as you attribute the authors of this work. So as long as you say who did it in the first place, you can take it and run with it, learn from it, remix it, do whatever. It’s for you. So, my digital pathology people, join me and the computational pathology scientists on the 4th of November. There’s going to be a registration form in the show notes of this episode. And also, do me a favor: share it with someone who you know is going to be interested. That is going to mean so much to me. I want this to go viral. I want this to be accessible for everyone in the digital pathology space.
It doesn’t matter if you’re taking your first steps or you’re already super advanced, you are all invited. No matter where you are in your digital pathology journey, this is for you, and you’re going to find something for you. And this is obviously totally free, forever, for you. Just register so that I know where to send you the invite and later the recordings. We’re going to go live on social media, on LinkedIn, YouTube and Facebook. And later I’m going to be posting this through all the possible channels, and you can just take those lectures and host them, post them, do whatever you want with them if you find this valuable. So I’m so excited to have you there, and talk to you in the next episode.