What are all the regulators and policy makers actually trying to achieve when they’re doing this? Is it more than technical? Is there an ethical and social aspect to this that they would like to address? Are they trying to build trust with the end users and the patients [00:24:00] who will benefit from this?
And so I think, I’m glad that the article turned out to be, you know, readable and usable, because we really looked at regulations around the world and conveyed that theme. And Dr. Rashidi is right, we assembled a really amazing team of people who have years of experience in regulation, in the lab environment, you know, with LDTs and compliance and the FDA.
It actually came together quicker and easier than we had anticipated.
Aleks: Yeah, I was really impressed. Also, for the other ones, who was the main driver for the ethical considerations?
Hooman: Matthew. Matthew did that. Dr. Hanna.
Liron: Matthew is the most ethical among us, so that’s why we assigned it to him.
Aleks: He got the task. I loved both. The bias one was bias and ethics together, no?
Hooman: Yes. Right.
Aleks: Yeah, because [00:25:00] I think this is an interesting topic, not only because, you know, we wanna be ethical, but because it has this common meaning that everybody understands, and it has a very specific meaning in the medical field, and it’s actually a field of science.
Bioethics, biomedical ethics. And the same with biases: you have the common meaning of the word, oh, something is biased, somebody is biased, but then you have this super specific meaning, the significance of, okay, why is the AI biased? What kinds of sources of bias are contributing to that, and what does it mean?
And I also loved in that one that you always have, like, a mitigation strategy. Okay, this is where it comes from, this is what the risk is, and this is the mitigation strategy, [00:26:00] which I think is also one of the common threads in how regulators are looking at this topic. But Matt, tell me how did you, because I felt the transition was clear.
How did you struggle with these two, the common meaning and the scientific meaning? Did you, like, consciously transition from one to the other? And how was the experience of writing that one or leading that one?
Matthew: No, it’s fantastic, and I’m glad it came across clearly. You know, I think ethics and bias in AI aren’t just theoretical issues.
They’re very real, especially in medicine. When we deploy these AI tools in healthcare, we’re not just automating tasks; they’re making decisions that directly affect people’s lives. So what we wanted to ask, in writing the paper, is, you know, how do we build a model?
What data is it trained on? Where do ethics and bias come in? And more importantly, where might it fail? And so, just [00:27:00] from the perspective of education, trying to put two and two together and saying, okay, well, let’s educate and say where there is bias, and then how can you mitigate it? Bias can often enter AI systems through historical data, as we mentioned in the article, data that may reflect different diagnosis rates or even how pathology slides are captured and labeled.
And I mean, it could be pervasive through a lot of what we do. So if we don’t address those upfront, we risk really encoding all of those biases within our data, which will affect all of the outputs. And so at CPAiCE and in our broader work, we’re really focused on transparency, accountability, and inclusion.
And so we wanna make sure we understand how those models work, who they may benefit, or who may be at risk. And you know, ethics isn’t an afterthought; it’s part of responsible AI development. And hopefully we included a lot of that in the paper, from what people read.[00:28:00]
So we just wanna keep clinicians, patients, and these communities engaged in the conversation and educated about it, because I think trust in AI is earned; we really shouldn’t assume it.
Aleks: Yeah. Keywords: educate, and the trust is earned. And I think this particular one, like, de-demonized the bias through education, through this particular month. You basically make people aware of how it works, and when you know how it works, especially in the aspects people are most afraid of, because the thing people are most afraid of is that it’s gonna hurt somebody, and it can hurt through, you know, one-sided data, different… different options.
When you know how it works, you know how to mitigate the risks that come with the AI tools, like with any other tools, right? [00:29:00] So I think you very much achieved this goal of education and earning trust in AI, because now it’s so much clearer how it works and where you need to pay attention. Fantastic.
I’m just impressed with this series. I love it. I’m, like, building a course on it as well and spreading the word. So what’s next, guys? For each one of you, do you have, like, next publications you’re gonna write, next tasks that you’re responsible for at CPAiCE, or, in general, what do you wanna do next in your digital pathology
and computational pathology life at UPMC and CPAiCE?
Hooman: So I’ll take that. I think that’s a really good question. And this is kind of in tune with what both Matthew and Liron had, you know, mentioned [00:30:00] earlier, which was: we’ve built this as, if you will, a building block of various things that we have within CPAiCE and, you know, the University of Pittsburgh.
In terms of our mission, in terms of how we incorporate computational pathology and AI within our framework. And so that actually expedites research, innovation, and education, so it checks off the academic missions, like what Liron was saying. But just as importantly, you know, it’s our duty as practitioners and scientists to also propagate the knowledge to the next generation that’s gonna be affected by these technologies. So this series is just a building block among various other tools that we are building, and coalitions that we’re building with other centers, that enable us to pass [00:31:00] on the knowledge that we’re gaining from, you know,
the content, but also about the center building that’s required for typical pathology departments. That’s one thing that I think all three of us are a hundred percent aligned on, which is we feel a hundred percent strongly that the need for computational pathology and AI is not just, you know,
a nice want. It’s an absolute need. And the more these AI tools are being incorporated, the more different departments need to carve out resources to set up centers similar to what we’ve set up at CPAiCE. So this would be our way to leave a mark, hopefully, to, you know, enable more pathology and lab medicine departments to.