
How AI is Transforming Veterinary Diagnostics w/ Richard Fox, DVM, Dipl ECVP | Aiforia

In this insightful episode, Dr. Richard Fox, a seasoned veterinary diagnostic pathologist, shares his unique journey from traditional veterinary practice to the cutting-edge world of artificial intelligence in diagnostics. With over two decades of experience, Dr. Fox provides valuable insights into how AI is reshaping the diagnostic landscape in veterinary pathology, enabling faster, more accurate results.

The discussion dives into the complexities of integrating AI into established workflows, the challenges of maintaining quality control, and the growing significance of AI in improving diagnostic precision. Dr. Fox also explores the ethical considerations and future trends in AI-assisted diagnostics, giving professionals a clear roadmap on how to navigate the evolving field of pathology.

Key Topics Discussed:

  • Dr. Richard Fox’s Career Evolution: From clinical practice to pathology and eventually AI.
  • AI’s Role in Veterinary Diagnostics: How AI improves workflow efficiency, reduces time delays, and enhances diagnostic accuracy.
  • Quality Control and Validation in AI: Addressing the critical importance of quality control and retraining AI models for reliable outputs.
  • Challenges of AI Integration: Insights into the obstacles of implementing AI within diagnostic workflows, including time constraints and the need for seamless integration.
  • Future Trends in AI for Pathology: Predictions on how AI will revolutionize on-premises diagnostics, direct-to-digital imaging, and more.


SUPPORT THE SHOW

Be Part of the Pathology Evolution: Stay informed on the latest in digital pathology innovations. Subscribe for more insights, become a member of the Digital Pathology Club, and get your complimentary copy of Digital Pathology 101. Embark on your path to discovery and progress in the fascinating world of pathology.


TRANSCRIPT

Introduction to AI in Veterinary Pathology

Richard: [00:00:00] A lot of the early adoption of AI in vet path now is low-hanging fruit, as it were: the tasks that are easy to implement but very time-consuming. Because that's the other issue with veterinary diagnostics: couriers and postage. The main delay is getting that sample to the lab. So as I see it, a lot of these on-site scanners are going to change the field quite a lot.

If your integration isn't fast or real time, can you offset the results or can you not? Because if you can't offset the reporting of those results, then you need something quicker. So I think it has a major impact on which model you're thinking of designing and how it fits in with your workflow.

And I think that's something quite critical to consider before you go ahead with building a model. To me, integration is critical.

Meet Dr. Richard Fox

Aleks: Welcome, my digital pathology trailblazers. Today, my guest is Dr. Richard Fox, and he [00:01:00] is a veterinary diagnostic pathologist specializing in histopathology and cytopathology. He’s also working as a pathologist for Aiforia, an AI image analysis company that is sponsoring this episode. So thank you so much to Aiforia for being the sponsor of the episode.

And I am always super excited when I have a veterinary pathologist on the show, because we are such a niche pathology specialty. I always get excited when we talk about veterinary pathology, and how the pathology skills that we have are pathology skills that are then applied to veterinary pathology.

And I think not everybody gets that. But welcome to the show, Richard. How are you today?

Richard: I'm very good today. Yeah. I've got quite a few tasks ticked off this week, so I'm feeling a bit more positive about the weekend.

Aleks: Oh good, so you’re in a good [00:02:00] mindset for it and a good space to be here today. So let’s start with you.

Dr. Fox’s Career Journey

Tell the digital pathology trailblazers about yourself and how you got to where you are currently in your career. So, the path towards pathology and veterinary pathology, tell us about that, but also about the transition to Aiforia and getting engaged in the AI space for pathology.

Richard: Sure. Yeah. I qualified as a vet from the Royal Veterinary College in '98. That's a long time ago. So I spent a couple of years as a mixed practitioner doing small and large animals.

Aleks: Me too! 

Richard: Yeah. I think my main decision point was, it was about two o'clock in the morning, lying in farmyard muck in the snow, trying to calve a cow [00:03:00] with a calving jack. The cow went down, and I went, no, I think this is it.

I thought I needed to find something a bit more mentally challenging, or less physically challenging. I had always liked pathology as an undergraduate; I think I was one of the minority that liked pathology. So I liked histopath, and then during my time in clinical work, I actually did all the cytology on premises.

So I developed the lab there doing cyto. I think it was a natural progression; I was obviously interested in the cytology part of it. So I made the decision to do a residency at Liverpool University. I did that for three years, and then a lectureship there, doing veterinary pathology and skin pathology. Then I decided that academia wasn't for me; I didn't have a PhD, and I wasn't a big research person.

I think I still liked helping animals [00:04:00] directly, so diagnostics was the main focus, I guess, really. And then I moved into private diagnostic practice, which I've been in for 22 years now, in various companies. I do mostly small animals, so dogs and cats, but a few other bits and bobs.

So histopath, cytopath, and we're doing histochemistry as well. That's been my main focus for that time. But then your career sort of starts to plateau a bit, the challenge starts to plateau. You think, is there something more? I enjoy doing my diagnostics, but is there something else to challenge me?

Transition to AI and Aiforia

So that's kind of how I got into the AI. But the only trouble with fitting AI into veterinary diagnostics is time.

So I wanted to do more AI, but they didn't have the time to give me. So I decided to split my role between the two so I could get more [00:08:00] experience doing AI work. Essentially, that's how I got into working for Aiforia. I applied for their job and, through many subsequent interviews, I eventually got the position.

So that's how I got into AI, I suppose, from my background. I'm actually a photographer as well. So I do a lot of…

Aleks: Oh really… 

Richard: …landscape. So imagery has been…

Aleks: Actually I did see your website. I did, because you have it linked on LinkedIn, right? 

Richard: Yeah, so I’ve been doing that for 12 years now. So I’ve been doing a lot of landscape photography.

And so imagery and colors have always been part of my life, I suppose. And digital imaging is a major part, because at Finn we became fully digital on histopath eight years ago. So we did it quite early.

Aleks: Oh my goodness, congratulations. 

Richard: Well, so I was employed at Finn. I changed my [00:09:00] jobs, and I was actually employed to help digitize Finn Pathologists, together with two other pathologists.

So we were the guinea pigs.

Aleks: And that was, which year was that?  

Richard: That was, well, that was eight years ago, so. 

Aleks: It’s going to be 2016. 

Richard: Yeah, 2016. That's when, yeah.

Aleks: Congratulations. I want to emphasize that, because that's very early in terms of digital pathology adoption. I sometimes give a presentation where I show the digital pathology timeline, and I always emphasize that before there was an FDA clearance for a human pathology lab, veterinary pathology labs were already digitized, and it was the same system.

So the clearance was for Philips IntelliSite, and IDEXX was using the same system. So I don't know what systems you guys were using.

Richard: So, I think the main reason for us digitizing was [00:10:00] recruitment. I mean, there aren't very many veterinary diagnostic centers around. So if you're a sort of middle-aged pathologist with children, moving from one area to another is not easy, is it?

So I think the main reason, certainly why Finn decided to digitize, was to allow easier recruitment, because you could get offsite workers then. I think that was critical. And we increased our pathologists quite considerably just by going digital, because they didn't have to move.

So we've kind of had a 50-50 split. But since COVID, I think a lot of people have actually enjoyed working away from the office and…

Aleks: Yes. 

Richard: …we have more remote workers now than we had before, because before we had a lot of people on site. It used to come out at about a 50-50 split, really.

Aleks: And that's also, I think, a mental and societal shift, for this to [00:11:00] be accepted and now even required by people who are being approached by recruiters, right?

I know it is my requirement. Okay. If I get a potential job offer, my first question is, is it remote? If not, we don’t need to waste time talking about it. 

Richard: Yeah. 

Aleks: So it is now a huge driver of being competitive in the market, just being digital, for pathology.

Richard: I do miss the interaction with people. It's nicer to interact with people than it is working offsite. But then you get a lot more work done offsite, because you have fewer distractions and less interruption. So it's a real balance, isn't it, really?

Aleks: I moved all my interactions to digital, to Microsoft Teams.

Oh my goodness. Like, I call people, I send them screenshots. I'm interacting with more [00:12:00] people than I used to when I was on site.

Richard: That’s true. 

Aleks: But it's something, you know, that kind of comes with what tools you started working with. And then obviously there are the generations of pathologists before me who are not that digitally native in terms of tools other than just viewing the digital images. And people who are younger than me, they're going to blow our minds.

Richard: The younger pathologists will just be normal.

Aleks: I recently talked to a colleague's daughter. My colleague is a veterinary pathologist as well, and her daughter is 13.

In terms of technology information, you can talk to a 13-year-old like to an adult, and ask questions and get detailed information about technologies, [00:13:00] really better than from people a lot older. So this is an interesting shift as well. Okay, going back to…

Richard: You do most of your learning when you're very young, don't you? You absorb it when you're young. It's not difficult to learn, is it?

Aleks: It was funny. I was talking to her, and I was like, how do you even know that? And she's like, we have internet. Yeah, I have internet too, but I guess I didn't have it when I was 13. So that's the difference. That was an interesting interaction.

Okay, so that's interesting. The first thing that's interesting and relevant to our conversation, I mean, it's a relevant conversation in general, but the first thing: okay, you decided you needed AI to help you in diagnostics, but you didn't have the capacity to do it. So there was an option to outsource it.

And [00:14:00] there was an option to work with a highly specialized team in your specialty, right? With fellow veterinary pathologists, and with a team that's used to doing these projects. This is something that Aiforia offers, and that's an excellent intro to AI for people who actually have to keep doing their job and keep contributing in their current position. But you were so fascinated that you decided to spend more time doing AI.

So you guys decided to outsource, and selected your use case, the Ki-67. Can you explain how important this selection was, or is in general, in veterinary pathology, specifically for diagnostics? Did you need to develop it from scratch, or is there already a wide selection available? How does that [00:15:00] work in the diagnostic space?

Richard: I think we're in a state of flux at the moment, because AI is still at quite a youthful stage. Certainly from my experience at Aiforia, we're developing a lot of models, we're developing a lot of demos and things like that, but there comes a point where you think: can we use these previous annotations and this training, can we transfer-learn into other situations where those models might actually perform quite well, but on different tissues?

But at the moment, certainly in the vet area, I mean, we obviously do Ki-67 quite a lot in the human field, so that's quite established from an AI point of view. From the vet point of view, though, we're still at very preliminary stages. Obviously humans don't really get mast cell tumors very often, whereas dogs [00:16:00] get mast cell tumors quite a lot.

So there weren't really any previously developed models, certainly for mast cell tumors, that had been made by Aiforia. So we had to start from scratch. This was a new model that was developed for us. Well, there were two stages to that piece.

We developed quite a nice model, and it was working nicely, and we were getting everything together.

Quality Control in AI Models

And there was a few months' delay, and this ties into the QC part that I wanted to talk about: you make quite a few mistakes and you learn from them. So, in our wet lab procedures... because we all thought, you know, AI can do everything, AI is brilliant. Well, it's only as good as the data that you put into it.

Well, you know It’s only as good as the data that you put into it.  So our model was working nicely You There was a few months delay, and then we [00:17:00]  restarted the project, and then we realized the model wasn’t working very well. And we were like… 

Aleks: What happened?

Richard: …what the hell’s going on here? You know, what’s changed?

Something's changed. So we looked back: the immunohistochemistry protocols hadn't really changed, sectioning, all that sort of stuff. But then we realized that we'd changed from glass coverslipping to tape, and because the thickness and the refractive index were different, the results of the scanning were different.

So the model was not performing well, and we had to retrain it because we had changed the wet lab procedure. So my experience of QC has been founded on the errors, or oversights, that we made there. And the knock-on effect was that we then realized that maybe our immunohistochemistry wet lab procedures weren't optimized either, because we were using [00:18:00] a pressure cooker, and the Ki-67 results were coming out quite variable, because the pressure cooker wasn't consistent.

So the antigen retrieval wasn't consistent. Sometimes the Ki-67 would come out really strong and sometimes quite weak. So the QC part of the IHC run was suboptimal as well. We finally realized that we really needed to drill down on our wet lab procedures, because Ki-67 is quantitative. It's not subjective.

It's not like immunohistochemistry often is, is it? Is the antigen present? Oh, it's a melanoma. With Ki-67, it's how much of it there is, or how many cells there are. So because it becomes quantitative, your QC becomes extremely important, because it's not binary. It's quantitative. And that's where my background comes in, so I wanted to talk about it, and I've written a white paper on it, [00:19:00] which hopefully we'll link.

Aleks: We’re going to link to this paper in the show notes. 

Richard: One question that people ask is, you know, what should I do? We want to do AI, what do we do? And I think the first step is…

Aleks: Make good slides. 

Richard: Yeah. You need to start thinking about your sort of upstream QC needs.

And I've been speaking to other potential new clients recently, and some are multi-center premises, but they're all doing things differently. They're all sectioning differently; they're all staining slightly differently. And you think, well, maybe before you start selecting your training data, you need to collaborate and establish something common, almost like a ground truth for your wet lab [00:20:00] procedures, because if you don't have that consistency, you get massive variability in your data set.

So training is more difficult. And also, if you're using different scanners and different wet lab procedures, the results from the AI are going to be different unless you train more and make that model more robust, if you see what I mean. It requires more training because you've got such variability. So, from my experience at Finn when we were trying to develop our model: there are an awful lot of things that you can do before you start implementing your AI to make that model perform better.

Aleks: Because, you know, when you go digital just for visual assessment, it doesn't really matter that much, because you're basically digitally reproducing what's on glass, and you can read through a lot [00:21:00] of artifact and still have a diagnostically okay specimen. But the moment you want to automate something with a tool that relies on the image properties, it's different.

And it's interesting, because the different scanner, everybody thinks about that; different IHC, everybody thinks about that. But I think it's the first time I've heard, okay, we did glass coverslipping and then we switched to tape. I mean, it's logical, it's one of the parameters, but it's one that I've just heard about less frequently.

Richard: But it's the same scenario, isn't it?

Aleks: Yeah, right, just like a different point in your workflow. 

Richard: Yeah, and it's about training. The more training you do over a wider variability of your data set, the more robust the model will be. But obviously we didn't train with that new coverslip, so that [00:22:00] model wasn't robust enough to deal with it.

Aleks: Did it take a lot of adjustment to have the model perform again, or just a few examples was enough? 

Richard: We decided to do a new version primarily, I think. 

Aleks: Oh, okay. 

Richard: If it was just that, I think it would probably just have been a tissue annotation. You could probably have put new backgrounds in and things like that.

However, we'd also got new scanners, because the old ones were becoming end of life, and then we changed our IHC. So what we decided to do was actually just pause, sort our wet lab procedures out, sort our scanners out, then resume again. It added a delay, but because so many things had changed, we thought the best thing to do was to keep the same model, same slides, but just retrain it from scratch.

Aleks: But did you use the previous version and then adjust it, or did you go totally from scratch?

Richard: No, because [00:23:00] we felt that so much had changed that it would probably be better to start from scratch. I think if one thing had changed, if it was just the coverslipping, you could probably just have added more training to the previous model, but in this instance, so many things had changed.

We decided not to. But yeah, I think if the change had been minor, you could just add extra training on top of that, and I think it would have been fine.

Aleks: And that works if you know exactly, okay, this is the change in this part of the workflow and this is how it influences things. But before you have your wet lab, or just the lab part, locked down, you would probably have ended up retraining three times instead of just one time once you had everything locked down.

Richard: It would not have been cost-effective either, because obviously we were outsourcing that. So all in all, [00:24:00] that's why we decided to just do it again: wait until we had all of our wet lab procedures, as you say, locked down, and then go forward with it. But if everything had been fine and it was just one change, then just adding more training to the original model would have been absolutely fine.

So it was just to highlight that there are important factors to consider. Because, like you said, when you scan in a slide, the human brain is so adaptable. It can see through all of the artifacts; you've got experience, you know what an incidental lesion is, because you've learned over years and years of training.

Well, it's the same with the AI, but if you've only trained it for a week, it's not going to have the same ability as a pathologist who's been trained for 20 years, is it? So I think performance is [00:25:00] directly proportional to training.

Aleks: Yes. So now it's working, right?

Richard: Yeah. 

Aleks: And you're using it. How do you monitor that it keeps working? Do you have a procedure to keep it in check, so that you catch it immediately when something changes? What's your approach to that?

Monitoring AI Performance

Richard: So that goes on to my next point, which is what we decided to do.

Aleks: Yes. 

Richard: Obviously, what we were trying to do is replicate a human eyeballed test, which is quite subjective and prone to variability, because we all know about inter-pathologist variation.

We were trying to replicate that. And did we have sufficient faith that we were replicating it in the same way? Have we got a bias? Is our scoring higher than the references for Ki-67 scoring, or lower? We decided that maybe we didn't have that confidence. So what we decided to do was an [00:26:00] outcome study.

So we had a cohort of nearly 200 cases of animals that had been treated for mast cell tumors. We analyzed all of those tissues with our new model, and then we looked at all the statistics, which we're actually hoping to present at ESTP this year. We found a very strong correlation with Ki-67 and outcome.

So overall outcome and tumor recurrence. It did actually force us to do a validation of that model, because obviously you would assume that the model was doing the same thing as a human, but can you actually say that? So we basically did regression analysis, and we actually found a very, very strong correlation, perhaps stronger than in the human papers, because of the consistency of the AI; it's not [00:27:00] as subjective. So that was very important for us to do, and it gave us a great deal of faith that we were doing the right thing and getting good results. And that knocks on to: how do we monitor the performance of this model?
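A minimal sketch of the kind of outcome regression described here, assuming a hypothetical per-case table of AI-derived Ki-67 indices and recurrence labels; the file and column names are invented, and the actual study statistics are not public.

```python
# Hypothetical sketch: regressing tumor recurrence on the AI-derived Ki-67
# index for a cohort of cases. File name and column names are invented.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("mct_cohort.csv")        # columns: ki67_index, recurrence (0/1)
X = df[["ki67_index"]].values
y = df["recurrence"].values

model = LogisticRegression().fit(X, y)

# Odds ratio per one-point increase in Ki-67 index, plus discrimination (AUC).
odds_ratio = float(np.exp(model.coef_[0][0]))
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"odds ratio per Ki-67 unit: {odds_ratio:.2f}, AUC: {auc:.2f}")
```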

So I guess we have to think about quality control and implementation like you would with a biochemistry or hematology analyzer. You've got to standardize it, and you've got to monitor it, because it's essentially a machine at the end of the day, isn't it, doing a test. Is it wandering off your acceptable parameters?

So we've had to come up with an idea of how we're going to monitor performance. I think monitoring IHC is quite difficult, because unless you use the same test slide every time, how are you going to monitor your IHC? So that [00:28:00] has always been a bit of a question.

How do you quality control your immunohistochemistry? I don't know how anybody else does that, but that's an important question, and it remains unanswered in my brain. But the other thing you can do is obviously quality control your scanning: you can monitor the output of the model, the platform, and your scanning by using a test slide.

Obviously we have two scanners, and although they are QC'd and standardized, they don't come up with the same results, because they are two physically different scanners. The results are slightly different. So the idea was to use a test slide in a weekly run, look at the metrics you get from the results, and monitor that every week that you're running tests.

So if you're running the Ki-67 every week, you can use [00:29:00] it as a QC.

Aleks: You have to do the slide check if it's…

Richard: So you run your analysis on the slide to get your numbers back. And then, I think, you probably have to establish a baseline, and then see if it's wandering off. Is it within tolerances? If it's within tolerances, that's okay.

But it also tells you something when it goes right off: you think, something's changed. Has the scanner gone out of calibration? Has something happened to your wet lab procedures? So I think those quality control measures are very important, whatever machine you're using, because you have to have faith in the results you're getting, which are directly going to affect the treatment your patients receive.

So if your Ki-67 went sky high, artificially, the patient is potentially going to have a lot more intervention, costly intervention like chemotherapy or further [00:30:00] surgery or further imaging, and that's going to have a bearing on them. So I think you do have a responsibility to quality control those tests that you're doing.
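A minimal sketch of the weekly test-slide check described here, assuming a mean plus or minus three standard deviations tolerance; the baseline values and limits are illustrative, not the lab's actual SOP.

```python
# Sketch of a weekly test-slide QC check: establish a baseline from the
# first runs, then flag any weekly Ki-67 result outside mean +/- 3 SD.
# The tolerance (3 SD) and the example values are illustrative assumptions.
import statistics

baseline_runs = [22.1, 21.8, 22.5, 21.9, 22.3]   # Ki-67 % on the same test slide
mean = statistics.mean(baseline_runs)
sd = statistics.stdev(baseline_runs)
low, high = mean - 3 * sd, mean + 3 * sd

def check_weekly_run(ki67_percent: float) -> None:
    """Print whether this week's test-slide result is within tolerance."""
    if low <= ki67_percent <= high:
        print(f"{ki67_percent:.1f}% within tolerance [{low:.1f}, {high:.1f}]")
    else:
        print(f"{ki67_percent:.1f}% OUT OF TOLERANCE - check scanner calibration "
              "and wet lab procedures before reporting results")

check_weekly_run(22.0)   # a normal week
check_weekly_run(27.4)   # e.g. after an unnoticed coverslipping change
```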

Aleks: But what a great and very straightforward way of doing this. Because the pushback I often hear is, oh, when do you re-version? How many pathologists do you need to annotate? How many more resources do you need to put into maintaining the use of this algorithm that you already put so many resources into developing?

And the thing is, you do the work once, and it's super simple. If something is off, you check: okay, what has changed?

Richard: I think that's the difference with some CRO-type situations. In drug development, you've got one situation where [00:31:00] you've got a cohort of lab animals, and you need a model to run once over that whole cohort.

And that's the one run you expect, which is fine, isn't it? The trouble is when you start going to long-term models: the problems with drift become quite magnified, don't they? The longer you use it, you can have sample drift, something changes in your population of animals, something changes over time.

You've got the possibility of drift in wet lab procedures, scanners getting older, all those things. So I think maintaining long-term models is key. That's much more of a challenge than just the one-off project AI model, isn't it? And that's what we're finding on the diagnostic side, because they want to be using those models long term.

Updating Tumor Classification Models

The other thing is that the [00:32:00] classification of tumors and grading changes. So you end up having to add additional grading criteria to that model. The model needs updating, but then you have to revalidate.

Aleks: So that is true. 

Richard: You’ll have to establish… 

Aleks: You cannot escape it. 

Richard: No.

Challenges in Model Validation

So you have to go back again and revalidate.

And that's the other question: how far do you go back to revalidate that model? Does it always have to comply with version one when you're on version 10?

Software Development Analogies

So I think it's a bit like software, isn't it? After a while, software companies abandon support for version 3, because they just can't.

Aleks: Yes, for old versions. And that's natural in the software development world; you're not going to have support for Windows 98 anymore, because we're way past 2000. What do we have now, Windows 10, 11? Everybody's fearing changing to 11 because [00:33:00] they're already used to Windows 10.

But yes, I guess it's, let's say, not a philosophical but an industry question: how do you choose to do it?

Continuous Model Monitoring

And you're totally right. For one-off projects, you don't continuously use the model, unlike our diagnostics example, where you use it continuously.

I don't know how many samples you run through it a week, but basically it makes sense to have this one slide every week and check if it's working. For one-off projects, you would have to do it each time.

Richard: Yeah. I mean, the only thing about IHC slides is they do fade over time. So at some point we'll probably have to refresh that test slide.

But the other thing is what happens if you take a break in running that model. If you're running it every week, the monitoring is quite easy, because you're [00:34:00] checking it and checking it and checking it. Whereas if you think, oh, we haven't used that model for three months, and we didn't do a QC check every week because we didn't get any samples in, that's when you start risking something unexpected happening, I suppose.

Aleks: Would you still do the check every week, even if you didn’t have samples? 

Richard: I think you should. If you're…

Aleks: Just to avoid this like, oh, now seven different things changed and it’s not working after three months and we didn’t check. 

Richard: I think if you want… 

Aleks: I guess it’s like maintenance of all your lab equipment.

Richard: That’s right. 

Aleks: You have to do it periodically. You have to have it in your SOPs: how are you doing this, how often are you doing it? And if you don't do it often enough, then you're not compliant, and that's no good, right?

Richard: Yeah, and I think it's the same. I would say from our experience that making the model was actually quite easy; it's maintaining the model and making sure it's [00:35:00] running well that's the work. You know, having to write all the SOPs and the whole workflow from start to finish, and making sure you follow them.

Aleks: Because you're inventing this, basically. There's not really anything existing that you can adopt. You have to search and look.

Richard: It's great. It's just that there are a lot of things that maybe you overlook or don't think about when you start with AI. And then after a while you say, oh yeah, I think we need to do that. Yeah, I think we need to do that.

Yeah, I think we need to do that. And if, you know, I mean, some centers are experienced, but I guess as a fairly diagnostic company, not having not done AI. Right. That was something that we’ve learned, you know, through and through developing the model.

Integration of AI in Diagnostics

Aleks: So, how difficult was it integrating it into your workflow?

Richard: I think that's another critical aspect. It goes back to when people say, what model should I do? And I always ask, [00:36:00] how are you integrated? Obviously for something like cytology, the turnaround time is quite short; they really want it the same day. So if you're spending eight hours trying to upload whole slide images to the cloud, that's going to delay your turnaround time.

Certainly for us, our integration with Philips wasn't perfect. So we chose a model where we would normally expect a two or three week turnaround time, because we sent it away. That was our choice, because we knew that our integration wasn't quick. In an ideal situation, if you had something that was very quick, and basically once scanned it went straight to the cloud overnight…

…and the pathologist was reading it in the morning, you could have a model that would satisfy your short turnaround time. So something like a mitotic counter, or [00:37:00] quantification of other objects, or a grader, something where you needed a fast turnaround time.

If you're scanning, say, the day before, and your integration is fast and good, and maybe that sample is triaged, tagged, and has already gone to the cloud for analysis, or on-prem analysis, then your turnaround time can be really short, which changes your idea of what model you can run, if that makes sense.

It's kind of chicken and egg.
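A rough back-of-envelope illustrates why upload speed constrains which models are feasible; every number below is an assumption for illustration.

```python
# Back-of-envelope: why a slow upload path rules out same-day AI models.
# All numbers are illustrative assumptions, not measurements from the episode.
slide_gb = 2.0        # rough size of one compressed 40x whole-slide image
slides_per_day = 50   # a modest daily scanning batch
uplink_mbps = 100     # the lab's upload bandwidth

total_megabits = slide_gb * slides_per_day * 8_000   # GB -> Mbit
hours = total_megabits / uplink_mbps / 3600
print(f"~{hours:.1f} hours just to upload the day's scans")   # ~2.2 h here
```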

Aleks: You need the results when you look at the slide. 

Richard: That’s right. 

Aleks: So then you work backwards. Okay, like you say, cytology: if I'm reading it digitally, the analysis would basically have to happen at the same time as it's being scanned. And depending on the company and the scanners, there are different integrations, and sometimes you can actually have this run on the slide while it's being scanned.

This is definitely a question to ask [00:38:00] yourself: do you need that? Can you have that? If you can, fantastic; how are you going to integrate? If you can't, what's the other way you can still leverage this?

Richard: Yeah. So, if your integration isn't fast or real time, can you offset the results or can you not? Because if you can't offset the reporting of those results, then you need something quicker. So I think it has a major impact on which model you're thinking of designing and how it fits in with your workflow. And I think that's something quite critical to consider before you go ahead with a model, because you could say, okay, we want a melanoma model that grades for us, but we can't get the results for 24 hours.

Well, your clients aren't going to be very happy, are they? So, I [00:39:00] think, to me, integration is critical. And in some instances it could be improved, you know, having better integration between AI platforms or image viewers. The market's quite heterogeneous at the moment, isn't it?

With different scanner manufacturers and different viewing platforms. There are various applications where AI models can be integrated into viewing platforms as well. So at the moment, as I see it, it's quite an immature market, isn't it? There are a lot of players, but there's not a lot of commonality.

I think it's coming, isn't it? But there are a lot of choices around: yes, I can use that model on that platform to do that one, but on this platform I can't use that one. And [00:40:00] I think pathologists just want everything in one spot, don't they? They want to click a button and it's there. They don't want to have to go into…

Aleks: Yes, right? And then you basically choose what you need for the case. Actually, no: ideally it should only show you what you need for the case; you should not need to click anything unnecessary. It should know your workflow. If it's Ki-67, that should be there and nothing else.

Richard: So in an ideal world, right?

Aleks: Yeah. 

Richard: We're not quite there, maybe. Yeah. It would be great.

Aleks: Yeah. And there's a lot of workflow analysis, and a lot of questions that you have to ask yourself, to get something that's in line with how the pathologists work, that doesn't slow them down and still satisfies the turnaround times [00:41:00] that you're expected to meet.

So do you see any trends in integration, but also in the work of veterinary pathologists? Do you see any common direction, regardless of the heterogeneity of tools that everybody is using? And I think it's okay to be able to choose the tools you want for the reasons you want, but it's never going to be the only tool in your lab.

So the integration is critical, like whichever one you choose, it has to work with the equipment that you already have. 

Richard: That's true. So I think the flexibility of integration is important. Certainly in my work with Aiforia, we have quite a lot of different points of integration, because the systems are so varied.

We have an integrations development department as well, and [00:42:00] I link with them quite a lot, because I obviously work with them on the Finn side. And there are quite a few different tools: you've got APIs, you've got bridge connectors. So there are very different ways of getting those very large whole-slide image files up to a cloud platform. Obviously you can run it locally, but that also has issues, doesn't it, with infrastructure updates and running it locally and all that sort of stuff. Whereas cloud is kind of run for you, unless you're in the US and Azure goes down, like it did recently. That was quite catastrophic for many companies, wasn't it?

Aleks: Yes, that was actually this week. Yeah, it brought the computers down. So, you know, that was interesting. That was the day the computers stopped working.

Richard: Yes.

Trends in Veterinary Pathology

The day the world ended. [00:43:00] But yeah, in terms of trends that I've seen, I would say in vet diagnostic pathology, rather than preclinical ToxPath and things like that, because they've been using this for quite a while, I still think it's at quite a preliminary stage in veterinary diagnostics.

And I'm not sure…

Aleks: What would you use it for, or like, what would you recommend using AI for? 

Richard: I think certainly the way the market's gone, as I see it, is that a lot of the early adoption of AI in vet path is low-hanging fruit, as it were: the tasks that are easy enough to implement but very time-consuming.

So I think a lot of it is things like fluid counting. Urinalysis has gone AI; companies are doing on-prem AI analysis with [00:44:00] urine analyzers now that actually scan on site. Because that's the other issue with veterinary diagnostics: couriers and postage. The main delay is getting that sample to the lab. So as I see it, a lot of these on-site scanners are going to change the field quite a lot. I've seen things like urine cytology and ear cytology going AI on-prem as well. And from what I understand, you can either get a pathologist to look at it, or you can get AI to look at it, or both.

Aleks: So basically you scan wherever the sample is taken, and then it's the digital image that travels; you don't need to send the sample anymore.

Richard: That's right. So I suspect that's going to be the way we reduce turnaround times, and also, I guess, costs long term, [00:45:00] because you're not going to have to employ the postal system, where it gets lost or damaged, or couriers. You're adding, I guess, at least four to 24 hours for that sample to even reach the external laboratory.

AI in Routine Diagnostics

So I think on-prem scanning, and then implementing AI on top of that, is probably the major direction that vet path is going to go in. And that opens up the possibility of cytology AI, because obviously that doesn't need any special equipment.

But also I’m involved in another group… 

Aleks: Yes, you can do it in the clinic.

Richard: There's another group I'm involved in called MUSE Microscopy, and they're using UV light for tissue fluorescence. So obviously that…

Aleks: Yes, you know who I had on the podcast? I had [00:46:00] Dr. Richard Levenson, and this technology came out of his lab.

Yeah. And I was like, oh, this is so cool. And now I know that SmartHealth DX is actually validating it with veterinary pathologists, and you're one of the pathologists. That is exciting. So do you like it?

Richard: Yeah, I can’t. I have an NDA, so I can’t talk. 

Aleks: Okay, let's not talk about it. But basically, it's direct to digital.

So once you have a digital image, you can do whatever you would do with a digital image anyway, right?

Richard: Yeah. So, no, it's an on-prem scanner. Obviously it uses UV light to fluoresce the tissues, and then you gain those digital images from that without having to go through the normal histopath process: removing the water and replacing it with paraffin wax, formalin fixation, and things like [00:47:00] that. You're obviously skipping all that.

Aleks: This is at least an eight-hour process, right? I remember during my residency, we had to switch the processor on on the day we were leaving, in the afternoon.

So that the histotech could start working on it in the morning. And I remember a couple of times forgetting to do this, and then I had to drive back to the lab in the middle of the night, and then send an email: hi, I need to be there at 6 AM.

Richard: So you haven't got that. You've cut the processing time, and you've cut the courier delivery time.

If that's a valid option, that could be a direction where you've got on-prem scanning of…

Aleks: The direct to digital scanning. 

Richard: And then you can do AI. 

Aleks: Then you don't need the equipment. I mean, for some samples you probably still need it; there is [00:48:00] going to be a transition period. But this is the main disadvantage, or pushback, that I hear from people who say, you know what, we're not going to do digital pathology, because it's basically like scanning your printed-out photos.

That's adding the step of digitization on top of analog. When it's direct to digital, then we're going to consider it. So that's why I was so excited to have Richard Levenson on the podcast, where he talked about different direct-to-digital technologies. But this particular one is mature enough that they already have a device, and there are studies going on. They are starting with veterinary medicine, and they want to expand to human medicine as well.

Richard: Yes. And it also brings up the topic of AI-based staining. So you can use fluorescence to give you your image, but then you're [00:49:00] using AI to actually make an H&E for you out of that, instead of having to go through the whole process of staining for H&E.

So I think those sorts of ideas are going to progress, and that's going to have a bearing on AI. Certainly, doing some multi-channel fluorescence work and developing models for that, you think, God, I wish I had multichannel immunofluorescence in my diagnostics.

Because I could say, that's a CD8-positive T cell. It would be fantastic. I know it's slow and it's extremely costly, but those sorts of things would be absolutely fantastic in routine diagnostics. So I can only see those sorts of technologies progressing quite nicely.

And that allows for [00:50:00] faster turnaround times. I suspect the adoption of AI following those improvements will be even greater.

Addressing Common AI Concerns

Aleks: So when people hear that you're doing AI and you start talking about AI, what are the three most frequently asked questions? And what are the three questions that they should actually be asking?

Richard: Number one, and I know this has come up in your previous webinars, is: "Is it going to take over pathologists' jobs?" That's always the one. And I had that when we were trying to roll out the AI at Finn; pathologists were obviously worried, like, well, are you doing this so that you can get rid of us?

And it's a legitimate question. I think the thing with pathology is that the amount of work we're expected to do is going up, and the number of pathologists actually available to do the work is going down. And I think [00:51:00] that's permanent, across the board, isn't it?

We’re not… 

Aleks: Yeah… I think it's all medical specialties. You basically don't have enough doctors. And in the U.S. you have this tiered system of healthcare professionals, where you have registered nurses and assistants, and you kind of build up, so that it's not only the scarcest person, the doctor, who can actually help patients.

There are different things that different people can help with. So it's not only doctors, it's healthcare professionals in general. It's a tough profession.

Richard: So I guess that's one. The other main question is: we want to do AI, but we don't know what model to do. And the answer is usually, what's going to be most useful to you, and how does it fit in with your workflow? What's going to improve your pathologists' [00:52:00] workflow?

What's going to help them make a decision more quickly and easily? You need to look at what your pathologists are doing and how you can improve it. Maybe choose the low-hanging fruit first; don't go for some mega model that's highly complex. Choose the things that are going to have the most impact and the most improvement.

And the other thing people ask is, can you use this model across species: this works in a dog, can I use it in a cat? And you think, well, potentially, yes, you can. It just depends on how similar that lesion or that tissue is. That goes back to transfer learning. If you've got a mitotic figure detector that you've modeled for human pathology, there's probably no reason why you can't transfer that to an animal model.

Because as long as the staining and the tissues are very similar, mitoses look like mitoses, don't they, whether they're in human [00:53:00] or animal tissues.
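A hypothetical sketch of that cross-species transfer-learning idea, in PyTorch: start from a mitosis classifier assumed to be pre-trained on human tissue and fine-tune it on a small labeled canine set. The weights file, backbone, and freezing choices are illustrative, not Aiforia's actual pipeline.

```python
# Hypothetical sketch of cross-species transfer learning: start from a
# mitosis classifier trained on human tissue and fine-tune on canine patches.
# The weights file, backbone, and frozen layers are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)          # mitosis vs. not
model.load_state_dict(torch.load("human_mitosis.pt"))  # human-trained weights

# Freeze the early layers (generic texture/morphology); retrain only the
# last block and the classification head on the dog data.
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
loss_fn = nn.CrossEntropyLoss()
# for images, labels in canine_loader:   # a small labeled canine patch set
#     optimizer.zero_grad()
#     loss = loss_fn(model(images), labels)
#     loss.backward()
#     optimizer.step()
```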

Aleks: And that has been done in the literature. There are always those image analysis challenges, like the CAMELYON challenge, which was about detecting metastases in lymph nodes.

There was also one called MIDOG, and it was mitosis detection. And they had a data set with both animal and human tissues, exactly for that reason: because a mitosis is a mitosis, and it doesn't really matter which cell it is in, it looks similar.

Richard: So yeah, those would probably be the biggest questions. As for the other questions you should be asking, just to reiterate…

Aleks: So what should they be asking instead?

Richard: What do I need to do before I start doing the AI? So again, just to cover it: you need to get your workflow [00:54:00] nailed down before you do the AI, and you need to get the quality control of your material good.

And if you can get everything nice and consistent, the performance of that model will be much better, and you maybe won't have to train as much or spend so much time annotating. So that would be one. And also the limitations of AI. Everybody thinks it's a magic bullet, but there are times when you're modeling and you can't quite get it to work well because of certain constraints.

So it's not perfect. And some people go, oh, can you make me a grader? And you think, pathologists can do this quite easily, but actually implementing it in AI is more of a challenge. It's not as easy as you think. And again, with some of these mega models with so many criteria, it becomes more laborious and more difficult.

And I think with some of these things, it's keep it simple, [00:55:00] stupid. You know, have a workflow where you say, is this a tumor or is it not? And then you have one model that does tumor and one that does inflammation, because you've separated the two, rather than having it all built into one massive mega model.

That's quite complex and difficult to maintain. You can have more of a workflow-based approach, so you have sequential models running rather than one big one, you know? So I always think "keep it simple, stupid" is a good principle; in those sorts of things you can sometimes overcomplicate, trying to make one model do the job of ten, that sort of thing. So that would be the other one.
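A toy sketch of that sequential, workflow-based design: each single-purpose model below is a stub standing in for a real trained network, and only the orchestration logic is the point.

```python
# Sketch of the "keep it simple, stupid" workflow: a chain of small,
# single-purpose models rather than one mega model. Every model here is a
# stub standing in for a real trained network.
def tissue_segmenter(wsi): return wsi              # stub: crop to tissue
def tumor_detector(tissue): return True            # stub: binary model
def inflammation_detector(tissue): return False    # stub: binary model
def tumor_grader(tissue): return "grade II"        # stub: grading model

def analyze_slide(wsi):
    """Run single-purpose models in sequence; each is easy to retrain alone."""
    tissue = tissue_segmenter(wsi)
    if tumor_detector(tissue):
        return {"finding": "tumor", "grade": tumor_grader(tissue)}
    if inflammation_detector(tissue):
        return {"finding": "inflammation"}
    return {"finding": "no significant lesion"}

print(analyze_slide("slide_001.svs"))
```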

And obviously people are very concerned about privacy and security. And I think the beauty of AI here is that a lot of the data is de-identified. I mean, you can't identify someone from a digitized image of a biopsy. [00:56:00]

If I took a piece of a biopsy from your skin and processed it and digitized it, I couldn't tell it was yours, because I don't have the genetic data. And if there's no metadata in there, there are no identifiers, and it's de-identified tissue, then there's no link, is there? The only link is the identifier that links your LIMS data to the image.

So in that sense, it's quite secure.
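A minimal sketch of that de-identification pattern, assuming a salted-hash pseudonym scheme where the only re-identification table stays on-prem with the LIMS; the scheme and the case ID format are invented for illustration.

```python
# Sketch of the de-identification described here: the image carries no
# identifiers; the only link back to the case is a lookup table kept with
# the LIMS. The salted-hash scheme and case ID format are assumptions.
import hashlib
import secrets

SALT = secrets.token_hex(16)   # kept inside the lab, never shipped with images

def pseudonymize(case_id: str) -> str:
    """Derive a stable, non-reversible image ID from the LIMS case ID."""
    return hashlib.sha256((SALT + case_id).encode()).hexdigest()[:12]

link_table = {}                          # stays on-prem, next to the LIMS
image_id = pseudonymize("FINN-2024-10234")
link_table[image_id] = "FINN-2024-10234"
print(image_id)                          # all the cloud platform ever sees
```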

Ownership and Privacy Issues

I would say it's more secure than a slide, which could even have the barcode or the name on it. So privacy and security is always a concern, especially when data is stored on the cloud, and it's a legitimate one. But as long as you've de-identified the data, then you've done all you can, especially going back to your previous material on GDPR and things like that; that sort [00:57:00] of ties in, doesn't it? And the other topic that comes up is who owns the WSI, because it's a digital image.

It's not tissue. It's an image taken of the tissue. So I'm not sure that's really been clarified, particularly in terms of ownership.

Aleks: Yes, so in the US, it depends on the state. I don't know what the situation is in veterinary diagnostics in your company.

Richard: It's relatively unregulated for us. I think regulation is going to come, from what I can see, but at the moment it's quite unregulated. So it's unclear as to ownership of a digital image. I mean, the ownership of the tissue is another factor that we've been through [00:58:00] previously. There was an issue quite a few years ago about post-mortem materials; that came up when there were problems with human material being stored, and the families weren't aware of pieces of tissue being stored for science and research.

And that changed things quite a lot, and it certainly bounced into the veterinary industry as well, in terms of ownership of tissues and things like that. But then the next factor is: who owns the digital image, the digital photograph of that tissue? That's another issue, I suspect.

Aleks: Yeah, there has been a debate. And I think the more of these tools based on those images are implemented, the bigger the necessity to clarify what [00:59:00] the legal and ethical guidelines for using them are, right? But it's across the digital health spectrum, across all things digital health, because digital data is being generated that was not there before, so nobody would have asked this question. Like you say, tissue is a physical thing that people already ask questions about. But what about the other things? Okay. So, anything else that you would want to add, or anything that I didn't ask that I should have asked?

Richard: I think I've probably waffled on for long enough.

Aleks: We covered quite some topics.

Getting Started with AI Integration

So, if the listeners want to start integrating AI image analysis [01:00:00] algorithms into their diagnostic workflows, what's the best way to start? What would you recommend?

Richard: I think, yeah, look at how this is going to integrate into your workflow, and how you are digitized, because obviously it depends on that. That would be the first step. The second step is: do you think you can integrate with the company that you want to choose to use, and how easy is that? Then, obviously, in house you need to decide what's going to be most useful to implement in an AI fashion.

So which models initially, not necessarily the stuff that looks really cool, but what's going to be most useful. What's going to reduce your turnaround time and pathologist fatigue, and add more consistency? [01:01:00] I don't think it's about replacing staff; it's more about helping, or it should be about helping, staff by taking over the more mundane things that AI can do very easily.

And then the human is left to do the more difficult and very subjective things that require a much more informed, experience-based decision. So those are the things I would recommend thinking about before you approach AI. Also, don't be scared of AI. It's not a magic bullet, but it's extremely useful, and it can do most things. I think it's about working together with your company…

…to develop a model that's going to be most useful to you.

Conclusion and Contact Information

Aleks: And if anybody wants to see how the Aiforia platform works, [01:02:00] what kind of models you can do in collaboration with them, or, if you have the capacity, do it on your own, there is an option to do that as well. So I'm going to link to a form to contact the Aiforia team.

Richard, is it okay if I include your email as well in the show notes? Yeah. Okay. So if anybody wants to directly talk to Richard about it, there’s going to be his email in the show notes in the description, wherever you’re going to be consuming this podcast. Thank you so much for joining me, Richard. It was a pleasure to have you on the show.

Richard: It was great to chat.

Aleks: And I wish you a wonderful day. 

Richard: Thank you. 

Aleks: Thank you so much for staying till the end. It means you are a real digital pathology trailblazer. Aiforia has been with Digital Pathology Place since the very beginning. They have sponsored a lot of educational videos, and I've interviewed amazing guests who used this platform for really cutting-edge research; I'm going to link to [01:03:00] all this content below. And if you would like to learn more about Aiforia and see if it is a good fit for your AI needs, be sure to fill out the form below, and then I or somebody from the Aiforia team will get in touch with you, walk you through the platform, and check if this is the best way for you to leverage AI.

And I'll talk to you in the next episode.