Aleksandra: [00:00:00] Is ChatGPT good or bad for pathology? Can it even be used in pathology? It doesn’t really have anything to do with images. How can we leverage it? These and other questions roll through the pathology community and I decided to research this topic and deliver the answers to you. This is a part of a webinar that I held on this topic and it’s gonna clarify a couple of things.
So let’s dive into it.
Welcome to the first lunch and learn hosted by me and the Digital Pathology Place. Today’s topic is the ChatGPT conundrum. Why do I say conundrum? Because, like all [00:01:00] the AI applications, it’s being depicted in a polarizing way: either as a fantastic tool, or as a villain that is stealing your data.
And this is a lot of how the news works, right? So for us as scientists, as people involved in medicine, drug development, and digital pathology, we just need to be in the know to be able to make an informed decision. What is this? Is it good? Bad? What can we use it for or not? Let me tell you what I want to cover today.
I hope I can cover everything. We’re going to do an intro to natural language processing and an overview of large language models, LLMs. Then we’re going to talk about how they work, what the practical applications are, and a little bit about current research and reviews. A fun fact about this: when I was looking at the reference lists of those papers that I looked up for you, they’re a lot shorter than the reference lists that we’re used to.
So that is good. I mean, good, meaning people are [00:02:00] starting to write about it, but there is not too much research yet. We’re going to, of course, talk about the concerns and ethical considerations. What is there that we should be afraid of? Or maybe we don’t have to be afraid of anything? Or what is it? And what’s going to be the future of large language models in medicine and pathology specifically?
There’s going to be some homework and there’s going to be a special surprise for you at the end. Those who have been on my email list for some time might know what that surprise is. And we have a special guest: Richard is here, and he is a co-author of one of the papers that we’re going to be mentioning.
So this is fantastic. So why did I even decide to pick this topic, when everybody’s talking about it? Well, my sister sent me an Instagram message. And isn’t Instagram where we all go for our scientific information? No, probably some of us go to TikTok, it’s more dynamic, right? Of course, I’m kidding. But when my sister sends me something, I pay attention, and she sent me this Instagram post.
The patient heard: [00:03:00] “look for a message from my wife saying that she had changed her mind and that she was coming back.” The AI decoded: “to see her for some reason, I thought she would come to me and say she misses me.” And I’m like, what? So it turns out there’s more to this post. It’s actually based on a publication, and the publication is Semantic Reconstruction of Continuous Language from Non-Invasive Brain Recordings, in Nature Neuroscience.
So what happened in that publication was that the scientists did so-called fMRI, functional MRI, which is a non-invasive imaging method, and they played podcasts to people. They used this fMRI and were mapping the fMRI images from brain regions to the text, to the speech of the podcasts that those people were listening to.
And then later, they were training, making pairs: brain images and the text that the people were listening to, [00:04:00] matching them, and later predicting from the brain images what the text was that the person was hearing or thinking. So here’s this example: this is what the patient was listening to, and the AI said something pretty similar, out of context.
So yeah, that’s what happened. My sister told me that and I’m like, oh my goodness, AI is gonna read our minds. But it is person-specific, so it’s not generalizable. For now at least, it was trained per person, so it cannot really read everybody’s mind. It was published in May 2023, and most of our publications today are pretty recent.
So that’s good. And that paper was called Semantic Reconstruction of Continuous Language from Non-Invasive Brain Recordings. For those who don’t know me yet, just a quick intro to myself. I’m Aleksandra Zuraw. I’m a veterinary pathologist, did my PhD in Germany, and I’m board-certified by the American College of Veterinary Pathologists.
I have over a [00:05:00] decade of digital pathology experience. It started in residency, working with digital slides, and then it took various forms. I started speaking at conferences and giving webinars about this topic in 2019, which coincided with when I started my blog. I was a speaker at the European Society of Toxicologic Pathology congress, the STP (Society of Toxicologic Pathology), and the American College of Veterinary Pathologists, and I gave webinars for the Digital Pathology Association.
And recently I spoke in New York at the Digital Pathology and AI Congress organized by Global Engage. And I am the CEO, and if you’re on my email list, you will know that that means Chief Digital Pathology Trailblazer, at Digital Pathology Place, which is my company, my platform to spread digital pathology information.
And I am an online course creator, digital pathology podcast host, and I’m super proud to say that this podcast is in the top 10 of pathology podcasts. And yes, there are more than 10 of those podcasts, so [00:06:00] it is an achievement. I’m a YouTuber, blogger, scientific influencer. I like to call myself that way.
And I’m a veterinary pathologist. So, let’s talk about natural language processing. In pathology, we’re more used to AI, artificial intelligence, connected to images. We are more familiar with the concepts of computer vision: with the algorithms that were approved for HER2 scoring, developed for PD-L1 scoring, and for scoring of different IHC markers.
Recently, there was a 510(k) clearance filed for an image analysis model for prostate cancer. This 510(k) was filed by Paige AI. So we are more familiar with the image analysis side of things. This is not new to us. But natural language processing is something that is, not newer in science, but newer to pathologists.
We were not exposed to this part of artificial intelligence. [00:07:00] So let me give you a quick overview of what it is. It is a branch of AI that empowers computers to understand language. Instead of interpreting images, it’s doing the same thing with language, trying to interpret how we speak, how we write. And there’s always this interesting translation involved, like the translation from human vision to computer vision.
We realize, okay, we’re not dealing with abstract concepts that we can generate in our brains; there we are dealing with pixels. Here, in this case, we are dealing with words, or tokens. We’re going to talk about tokens later. So where can it be applied, this natural language processing, and what are its components? It’s computational linguistics and machine learning, and this is used to analyze and derive meaning from text or speech.
So let’s pay attention to the words here again: analyze and derive meaning. Not really understand. It’s not abstract concepts. Again, [00:08:00] it’s mathematical operations, and we’re going to talk a little bit about what kind of operations these are. Computers don’t understand. They just mimic. They learn patterns and they mimic.
So what can we use it for? For syntax analysis: understanding the arrangement of words in sentences and the relationships between them. So, arrangement and relationships. Again, not meaning. Semantic analysis: interpreting the meaning of sentences, understanding the context, and decoding the inferred meanings.
This has something to do with sequence-to-sequence training and translation, rather than word by word. We can do pragmatic analysis: understanding dialogue, considering real-world knowledge, and identifying the intended effect of language. Discourse analysis: larger text structures like paragraphs, connections between them, dialogues. And speech recognition.
So Siri, Alexa, and all the other devices that we talk to, this is speech recognition. And this is where NLP can be used. And [00:09:00] what are the day-to-day applications? Google Translate. This is one of the ones that I started with. My personal story with translation: after vet school, I was fascinated with veterinary conferences, and there were always foreign speakers coming.
And in Poland, at least the older generation of veterinarians who would come to the conferences didn’t speak fluent English or German, and I think I was translating Spanish as well. And then from those conferences, I started also getting jobs from publishers like Elsevier to translate books from English or German into Polish, to have them published in Polish, right?
That was 2009. Machine translation back then: I could translate from Polish to English, but if I tried doing this from English to Polish, the sentence structure would be totally butchered. We have seven cases, singular and plural, in Polish, so nothing would match. You cannot just go word by word. And then at some point [00:10:00] recently, I had to quickly translate something, I just fed in English to get Polish out, and it was so much better.
So machine translation, I’m going to tell you later how it became so much better. Anyway, machine translation is something that I am personally connected to, something I used to do. Then Grammarly. I hope I will not show you too many typos in this presentation. I have Grammarly installed on my computer for spell check and grammar.
I’m not dyslexic or anything like that, but I’m not a native English speaker, so I very much use Grammarly for spell check and grammar check. Then chatbots, which are the protagonist of our presentation today. Virtual assistants: Alexa or Siri or other apps where you can give voice commands. And something called sentiment analysis.
So, analyzing texts. This is very much used in customer satisfaction evaluation. The NLP model analyzes the text and derives whether the customers were [00:11:00] satisfied, dissatisfied, or what they thought about things; basically, what the sentiment is, how they interact with the brand, and whether they like it or not.
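As a toy illustration of the sentiment analysis idea just described, here is a minimal sketch in Python. The word lists and the counting rule are my own simplification; real sentiment models learn from labeled data rather than using a hand-made lexicon.

```python
# Toy lexicon-based sentiment scorer. This is a minimal sketch of the idea
# (deriving sentiment from text), not how modern NLP models work --
# those learn weights from data instead of using a hand-made word list.
POSITIVE = {"great", "love", "excellent", "satisfied", "good"}
NEGATIVE = {"bad", "hate", "poor", "dissatisfied", "terrible"}

def sentiment(text: str) -> str:
    """Label text by counting positive vs. negative lexicon words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this brand, excellent support!"))    # -> positive
print(sentiment("Terrible experience, very dissatisfied."))  # -> negative
```

A real customer-feedback pipeline would swap the lexicon for a trained classifier, but the input/output shape is the same: text in, sentiment label out.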
So, on to what those language models are. They are deep learning models, and here’s a quick recap of deep learning. AI is a domain of computer science; within it we have machine learning, and a part of machine learning is deep learning. For image analysis, we’re familiar with deep learning through convolutional neural networks; large language models are also part of deep learning, just not for image analysis, but for language processing.
They intersect with generative AI, which is also a subset of deep learning. Generative AI means it generates something. And what can it generate? New content: it can be text, it can be images, it can be audio. All of this together is called synthetic data; generative AI synthesizes new data. And [00:12:00] those large language models are large, and they are trained on a lot of data.
Petabytes of data, to solve common language problems. And I had to look up petabytes again because, I mean, I know what a petabyte is, but I needed a visualization. How much is that? How big is that? That amount of data would be like two and a half years of nonstop streaming of 4K movies. So not HD movies anymore, 4K movies.
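For the curious, the streaming comparison can be sanity-checked with a few lines of arithmetic. The 100 Mbit/s bitrate below is an assumption on my part (a high-end 4K stream; typical services use less, which would stretch the time out even further), and with it one petabyte does come out to roughly two and a half years of nonstop streaming.

```python
# Rough sanity check of the "one petabyte ~ 2.5 years of 4K streaming" picture.
# The 100 Mbit/s figure is an assumed high-end 4K bitrate.
PETABYTE = 10**15      # bytes
BITRATE = 100e6 / 8    # 100 Mbit/s expressed in bytes per second

seconds = PETABYTE / BITRATE
years = seconds / (365 * 24 * 3600)
print(f"1 PB at 100 Mbit/s lasts about {years:.1f} years")  # ~2.5 years
```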
So not HD movies anymore, 4k movies. So they can be used for text classification, question answering, document summarization, and text generation. And those models they’re trained on like image analysis models, the pre trained models. they are for image analysis, right? They are trained on natural images.
What are natural images like dogs, cats, like pictures that we take, right? Not a specific pathology slide. So those models can be adjusted to specific domains. [00:13:00] And the thing with the difference between image analysis trained on natural images and image analysis applied to pathology is that there is very little in common.
Whereas in language models, we’re using the same building blocks, right? In image analysis, yes, we are using pixels, but the things that the pixels create in pathology images are totally different from what we see in natural images. Whereas here, we use the same words. If it’s English, it’s English, right?
90 percent of those words are gonna be the same as other people use, and then there’s going to be pathology jargon, pathology-specific terminology. Those models can be adjusted to those fields with relatively less effort than image analysis. If you’re working in the image analysis space, you know how much time it takes to train those models. And it’s not even training from scratch, we all use pre-trained networks for this, and it still takes so much time to adjust those models to pathology images. And yet, you know, [00:14:00] we just have a handful that are working. So language models have, I don’t want to say better potential, but faster potential.
So, characteristics of those models: they’re large. They’re LLMs, large language models, trained on petabytes of data. Again, a petabyte: two and a half years of 4K movies. This is a huge amount of data, even more than we are used to with pathology images. And they’re general purpose. Like I mentioned, this is due to the commonality of human language. So, go grammar!
Because we have grammar, because we have rules that are, I mean, they differ per language, but they are very common within language groups, and we have only a limited number of languages. So it is so cool that we actually have grammar rules that can be used to derive logic from the words. And another characteristic: they require a lot of resources for training. Such a large model requires a lot of [00:15:00] resources. This is why institutions like OpenAI, which is the company that ChatGPT comes from, train them to be general purpose, so that they can later give them away, sell them, or just make them available to people, to institutions, to somebody who can use them with a little adjustment.
Because they require a lot of resources, they’re being trained in a non-niche way. They are pre-trained and fine-tuned. Pre-trained is the general-purpose part, and later they can be fine-tuned with much smaller amounts of data. So a pre-trained model doesn’t need petabytes of pathology data to be fine-tuned to pathology, which is a super cool advantage, because we don’t have that much data; even if we wanted to, there are fewer pathology reports than other kinds of text in the world. So how do they work? This is the thing: they do not understand. How do [00:16:00] they work? They predict the next word or subword.
And here comes the word token, which is a unit of language. Usually those tokens are words, but they can also be larger, like sentences, or they can be subwords, parts of a word. Working with those tokens lets the model derive logic from the huge amount of text it was trained on.
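The token and next-word ideas just described can be sketched in a few lines of Python. This toy bigram counter is, of course, vastly simpler than an LLM, but it shares the same objective: predict the next token purely from patterns in the training text, with no understanding involved. The tiny corpus is made up for illustration.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which token follows which (a bigram model),
# then predict the most frequent continuation. Same objective as an LLM
# (predict the next token), minus the transformer and the petabytes.
corpus = (
    "the slide shows tumor cells . "
    "the slide shows normal tissue . "
    "the slide shows tumor necrosis ."
)
tokens = corpus.split()  # real models use subword tokenizers, not whitespace

following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent continuation seen in training."""
    return following[token].most_common(1)[0][0]

print(predict_next("shows"))  # -> 'tumor' (seen twice vs. 'normal' once)
```

Note that the prediction reflects counts, not meaning: the model "knows" that "tumor" usually follows "shows" only because that pattern dominates its training text.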
So, from a lot of text, and without knowing the meaning, they can recognize patterns. Patterns, not meaning; I’ve mentioned that several times. And the architecture that revolutionized large language models is the transformer. A transformer transforms something into something else, for example, text in French into text in English.
I’m gonna show you a super simplified view of the architecture. For those who are familiar with image analysis, you know that the architecture that revolutionized image analysis, and that was [00:17:00] outperforming everything else during the CAMELYON challenge in 2017, was the convolutional neural network. That was the buzzword, the keyword, for image analysis. Here, it is the transformer. The transformer is the architecture that revolutionized large language models. It doesn’t go word by word anymore. The predecessors, the models that came before, were RNNs (recurrent neural networks) and LSTMs (long short-term memory models).
They were working with single words, and the problem was that when the text was too long, the model would forget what was at the beginning of the text. Here we’re working sequence to sequence, not on single words. We’re working with attention: attention to what is relevant in the sentence, rather than deciphering it word by word.
And this attention gives context. In the example of translation, often [00:18:00] the word order is reversed between English and Polish, or English and French, and in German as well. So the order of the words doesn’t match, but when you work with sequences, the meaning matches. The model pays attention to what is relevant in the sentence in order to translate.
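The attention mechanism just described can be sketched numerically. This is a minimal scaled dot-product attention over made-up toy vectors: each query scores every key, the scores become weights that sum to 1, and the output is a weighted blend of the values, so the position most similar to the query contributes the most.

```python
import math

# Minimal scaled dot-product attention over toy 2-D vectors: the query is
# compared against every key, similarities become weights (summing to 1),
# and the output blends the values by those weights.
def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(query, keys, values):
    """Blend the values, weighted by query-key similarity."""
    d = len(query)
    scores = [dot(query, k) / math.sqrt(d) for k in keys]
    weights = softmax(scores)
    dim = len(values[0])
    out = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return out, weights

# The query resembles the first key, so the first value dominates the output.
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out, weights = attention([1.0, 0.0], keys, values)
print(weights)  # first weight is larger than the second
```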
Or do other things. And this is the super, super simplified architecture of a transformer. My example is by no means the only one, but: we have an input in one language. These are actually two networks; the encoder and decoder are two separate networks. Input comes in, the encoder encodes it and gives an output, there is an internal state, things happening in between those networks.
Then a decoder decodes and gives output, let’s say in another language. You’re going to come across more complicated versions of this, but basically: input, something encodes, stuff happens in the middle, then there’s a decoder and an output. That’s a transformer. So [00:19:00] what about our friend ChatGPT?
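The input, encoder, internal state, decoder, output flow just described can be sketched as a toy pipeline. Everything here is hard-coded for illustration: a real transformer learns these mappings and, as discussed above, uses attention to reorder words rather than mapping them one by one.

```python
# Schematic of the input -> encoder -> internal state -> decoder -> output
# flow, shrunk to a toy lookup "translator". The vocabularies are invented;
# a real model learns them, and does not translate word by word like this.
EN_TO_ID = {"the": 0, "cat": 1, "sleeps": 2}  # assumed toy vocabulary
ID_TO_FR = {0: "le", 1: "chat", 2: "dort"}

def encoder(sentence):
    """Encode English words into an internal numeric state."""
    return [EN_TO_ID[word] for word in sentence.split()]

def decoder(state):
    """Decode the internal state into French words."""
    return " ".join(ID_TO_FR[i] for i in state)

state = encoder("the cat sleeps")
print(state)           # the internal state between the two networks
print(decoder(state))  # -> le chat dort
```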
ChatGPT is a chatbot created by OpenAI. OpenAI is the company that trained this model and decided that that’s what they’re going to do. They recently got 11 billion in funding from investors to keep developing ChatGPT, so they did a really good job with this one. And ChatGPT stands for, and I asked ChatGPT what ChatGPT is: Generative, we talked about generative AI; Pre-trained, we talked about how they have to be pre-trained; Transformer, which is the architecture. So GPT, this is what it stands for. Let me show you what the chat said about itself: that it is an AI-powered language model developed by OpenAI, and the architecture is the generative pre-trained transformer. That was ChatGPT 3.5; now we have 4, which is even better. It can be used for a variety of applications, such as answering questions, providing explanations, [00:20:00] generating creative text, and assisting users in various tasks. And several of us might have received a communication from our employer, or seen in the news, that some entire countries would ban ChatGPT, like Italy, though they have since unbanned it. Corporations either ban it or embrace it.
So my sister works for Coca-Cola, and she told me that they use ChatGPT all the time. I checked if that’s publicly available information; it is, so she was not revealing anything sensitive. And I told you, when my sister tells me something, I listen, so I start paying attention. So yeah, why do some embrace it, why do some ban it? Well, it depends what kind of environment you’re working in, what kind of regulations and everything. But let’s talk about some myths and facts. So: ChatGPT is accurate and reliable. I don’t think that’s just a myth, it’s a flat-out lie. And we all know this was like the first lesson: no, we need to fact-check.
It doesn’t understand meaning. And [00:21:00] this ChatGPT 3.5 is only trained on data up until 2021, so it’s not going to tell you anything accurate about anything after that, because it was not trained on that data. And it’s going to derive stories from what it was trained on, like, you know, whole paragraphs.
One time I was writing an outline for my book that’s going to come out soon, and it gave me case studies in the outline. I’m like, oh, can you elaborate on those case studies? And it elaborates on two case studies. My next question is: are those case studies real? And ChatGPT says no, they are fabricated. Anyway.
So basically, the more you use it, the more you know where you need to fact-check it, like with any source of information. Next myth: ChatGPT has access to real-time or personal data. There are two sides to this. The model itself, after training, is frozen. What does that mean? It means that it cannot update itself anymore on its own. But the information in the chat, [00:22:00] in this chat window itself, is stored. And this is one of the reasons why companies do not let employees interact with this model via the web browser: when you open an account with OpenAI, this is an interface, an app provided by a company, and this company says, oh yes, the data in the chat is stored and can be used for training and for optimizing the model.
So while you are doing your chats, the model itself is not being updated, but OpenAI has access to the data stored in your chat history, so the developers, the AI trainers, can take it and train the model on it. And there were several cases where sensitive information was pasted into the chat. I was doing some research yesterday for you.
Employees of Samsung would paste very sensitive data. We already said Italy banned it because of privacy concerns. Basically, do not reveal your secrets to ChatGPT, [00:23:00] because they are being stored there in the chat, even though the model is not updating itself in real time. Next myth: ChatGPT has opinions. It doesn’t have opinions, but it sure shows bias.
That is due to the bias present in the training data. Models in all of AI, not only language models but also image analysis, are only as good as the data they’re trained on, and somebody picks this data for training. It’s being said that, oh, it’s trained on the whole content of the internet. It’s not. It’s trained on petabytes of data, and somebody chose those petabytes. One of the training data sources was Wikipedia, plus some blog pages. I think it was initially also being trained on tweets, but Elon Musk said no way, if you want Twitter data, you have to pay for it. So that was cut off, even though he was one of the co-founders of OpenAI.
Anyway, it has bias because the data that went into it was biased. What can we do about it? Feed it unbiased data in the [00:24:00] future. And here’s an important slide: ChatGPT, even though it’s going to answer a lot of questions correctly, is not Google. What do I mean by that? It does not retrieve the information from a source. When you ask, it draws on all the sentences it has ever seen, and the question prompts it to put together a cohesive response. If the question is specific enough, it prompts it to give you a specific enough answer based on the patterns it has seen, which is amazing, but it is not access to factual information.
So that kind of explains: should you trust ChatGPT with detailed information? No. It’s not accessing things in real time; it’s not a search engine. Google is, and there you access the information inside its source when it ranks well on the Google results page. Basically, even though there’s a lot of overlap, and sometimes it’s easier to ask ChatGPT for simple things than to Google and sort through different sources, it’s not the same and doesn’t work the same.
There’s a lot of overlap, but it’s not the same. What happened here is that ChatGPT was asked to pass the medical exam, the USMLE. And did it pass? Let’s see. Well, it kind of did. It depends, because there were open-ended questions and multiple-choice, single-answer questions. So here is a little bit of the distribution that we are talking about.
Open-ended questions are easier for ChatGPT, because it can work out the dependencies between different tokens and put together a coherent answer that most of the time is accurate. Here we can see the accurate answers in dark blue. The USMLE has three parts, and this was the performance for each of them: the first part, not so good, but the second and third, pretty good. And here, for the multiple-choice, single-answer questions, the answers were rated accurate, inaccurate, or indeterminate.[00:26:00]
It could do better, but for a non-person trained on general, non-medical data, I’d say it did pretty well. And there was another study, from a different publication, that was assessing the applicability of ChatGPT in assisting to solve higher-order problems in pathology. I’m like, okay, what are higher-order problems?
What do you mean by higher order? And how did you assess it? They had a question bank in this department of pathology, and they selected 100 questions from the department’s question bank, then had a conversation with ChatGPT, collected the different responses and answers, and evaluated them on a scale of zero to five and also with a categorical evaluation.
So, two different scoring systems. And I was like, okay, what are higher-order questions? What did you ask this ChatGPT? So, for example, they would ask: explain why fine-needle aspiration cytology examination of the thyroid may not be useful in diagnosing many of the thyroid lesions. [00:27:00] And I can confirm that one. It’s not that useful, because I had thyroid cancer, and I don’t know how many fine-needle biopsies they did before they said, oh, it’s not diagnostic, at least three or four. Anyway.
And ChatGPT says: sampling error, or limited diagnostic yield. So that’s fantastic, right? What else did they ask the chat? Explain why transfusion-related diseases are avoidable. And ChatGPT says: screening of donors and testing of donated blood. So it can answer those questions pretty nicely. There are a couple of other publications that talk about where it can be applied.
And most of them, in medical journals and in pathology journals, have the words “potential applications” in the title, because we are not applying it yet. We’re going to talk about that in a second, but: potential applications. So if anybody after this presentation wants to write a paper for their particular journal, for me that would be veterinary pathology, on the potential [00:28:00] applications of ChatGPT, it’s a good time, because as I told you, the reference lists for those papers are short. We can still be among the first to talk about ChatGPT in our domain. And there was another paper, not really a publication, on Authorea. Here, one of the authors, the first author, was the chatbot. So I read the abstract: okay, possible benefits, challenges, and pitfalls of ChatGPT.
Right. And then I was looking for the rest of the paper, and the rest of the paper is just screenshots of the chat’s answers. They basically prompted it with this very title, the possible benefits, and this is what the chat gave them; that was the content of the paper. So yeah, in response to this, and I don’t know how reliable this source is, but I found it in many other publications: the journal Science banned listing ChatGPT as a co-author on papers, and Science actually banned using it, while [00:29:00] Nature didn’t ban using it. Yes, and there’s a question whether there’s going to be a link to those publications. I’m gonna put them all together later. I don’t have a reference page, my bad, but I’m gonna do this later for you and put up the links to all of those publications.
So, anyway, Science banned it, Nature said not to list it as an author, some publishers don’t care, and some say you have to indicate which content was generated by it. So it’s basically up to us to check what the journal tells us. Of course, there are concerns and ethical considerations. And here, our guest Richard: most of the papers that I’m showing you are open access, but Richard sent me a copy of this paper.
And this is not language model specific, but it’s in general AI and pathology, what could possibly go wrong? And they have a super cool system. They have a table that explains the challenges, then the impact of the challenge [00:30:00] and the mitigation strategy. So a risk based approach to what could possibly go wrong.
And ChatGPT is one of the examples there, but this also applies to image analysis. So what are the sources of errors? For example, data, and this is only from the challenge column; I didn’t put the solutions here. For data, what can happen in pathology, for image analysis? Slide preparation can be off. Imaging can be off.
Many centers are not doing digital pathology; there is limited diversity. So all of this can go wrong. What will the impact be? You have to ask yourself. I come from a good laboratory practice environment where this is the approach to validation: you assess risks, because some things you cannot solve.
You have to assess the risk and see what your mitigation strategy is going to be. So in data, all of this can go wrong. Model development: how do you assess whether the model is okay? And what about interpretability? Here comes the explainable AI question: why does the model say what it’s saying?
There are different [00:31:00] methods to figure it out. But the same question comes up when we are using language models: why is it giving this answer? When the answer is correct, we ask why less often, but if it’s incorrect, or if it’s something that could be detrimental when you apply it, then we need to know. We have to know. And then, what about model deployment? What are the protocols? Are there protocols? Well, not necessarily. What about the computing resources? Can anybody do this? What about reimbursement in medicine? Are image analysis algorithms going to be reimbursed? How?
Digitization? How? There are some codes for reimbursement, but no money following the codes yet. We’re in the exploratory part of this. But because there can be significant impact, there are significant challenges, and we need to know the mitigation strategies, there is an imperative for regulatory oversight.
And here we’re going back to large language models, or generative AI in general, in healthcare. Here the [00:32:00] authors are a fellow YouTuber, Bertalan Mesko, who is an MD from Hungary, and Eric Topol, the author of the book Deep Medicine. I highly recommend reading that one.
And they have, I don’t want to just say an online presence, but a big presence in the digital health space. Eric Topol has a research institute, and Bertalan also has his own research institute; both of them are probably professors. Richard says that his paper ends with a poem. I didn’t show the poem.
I didn’t show the poem. But basically, the poem is about how to be cautious with AI in the style of Alan Edgar Poe. So you can say, you can tell Chad GPT to write something in the style of someone and if it was trained on their data, it can generate something in the style of this person. So that was the highlight of Richard’s paper right here.
Those two medical futurists wrote the paper arguing that it has to be regulated. Yes, it [00:33:00] does have to be regulated. How fast will the regulators be? It’s always a funny situation, and anybody working in digital pathology knows how much back and forth it took to get buy-in even for digital reads, and now buy-in and clearance for image analysis algorithms and things like that.
And it’s not a blanket approval; it’s case by case. So yeah, there will need to be regulations, and the concerns are data privacy and security, accuracy and reliability, bias, transparency and explainability, and accountability. These are the main concerns. So when we think about the future of large language models in pathology, where can they be used?
And this is from Bertalan's and Eric's paper. They didn't put pathology in, only radiology interpretation. I think they're biased; I think Eric is a radiologist. But anyway, there is a division into uses for medical professionals and for [00:34:00] patients.
For patients, there's going to be a lot. I mean, everybody knows Dr. Google, right? Everybody Googles their symptoms. This is going to be the next level of Dr. Google, so it's going to have even more impact. For professionals: clinical documentation and creating discharge summaries, which are easy tasks for this model, and generating clinical notes from dictation.
I do this a lot for creating my video content: I dictate an idea and then ask ChatGPT to structure it and add some missing points. So for the model, that's easy. Insurance pre-authorization: okay, money and health decisions based on AI always raise an alarm bell. Summarizing research papers, radiology interpretation, suggesting treatment options, designing treatment plans, diagnostic
assistance, and medical triage. I don't know if you know the story of IBM Watson. It was not really a model like these; it was an earlier language [00:35:00] processing program that won Jeopardy in 2011 and was supposed to pull things out of the literature and be of assistance to medicine.
Like with all those applications here, but it was just an inferior model; ChatGPT and its successors are superior. Still, IBM Watson had a bug, and it was deployed at some point in a hospital. The bug would make it recommend palliative treatment instead of aggressive treatment, something with real consequences for patient health, because of inaccuracies in a tool that people were relying on.
So here we have the same problem: super great potential, but we have to be super cautious. And for patients, this can help a lot as well: analyzing lab results, disease descriptions, interpreting physician notes, health recommendations, symptom assessment (Dr. Google), analyzing data from watches and wearables, and many other things.
So these are applications that I don't see as being super specific to medicine; [00:36:00] it's just leveraging the technology. But we are professionals who work in an environment that has to be HIPAA compliant in the U.S. or GDPR compliant in Europe, and data privacy in general is super, super important.
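Since that privacy point is the crux of HIPAA and GDPR compliance, one practical safeguard is scrubbing obvious identifiers from a note before any text leaves your controlled environment. A minimal sketch, with toy regex patterns of my own; real de-identification requires a validated tool, not three regexes:

```python
import re

# Illustrative patterns only: a real de-identification pipeline must cover
# names, addresses, and many more identifier classes than shown here.
PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+(?:\.[\w-]+)+\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient seen 03/14/2023, MRN: 123456, contact jane.doe@example.com."
print(redact(note))
# -> Patient seen [DATE], [MRN], contact [EMAIL].
```

The design choice here is to redact before the text ever reaches a third-party API, rather than trusting the vendor's data-handling terms alone.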
We cannot take the use of those tools lightly for anything that has to do with patients. But for helping us with tasks that don't involve any sensitive data, I like the tool a lot, and you now have enough information to know how to use it. And before we go, I want to give you homework.
Have you already used ChatGPT? The homework is to go to the OpenAI website and sign up for ChatGPT. You will be taken to the OpenAI page, where you can sign up for ChatGPT; there is a different product for generating images, and another for developers who want to build on ChatGPT.
So the homework is to register, play with it, and see what it can do [00:37:00] for you. Create an account and see what it can help you with. It's very helpful for literature research, for organizing thoughts, for outlines of presentations; this is what I use it for. And with that, thank you so much.
And I hope this was a useful lunch and learn. There's going to be more, so stay tuned. Thank you so much for staying till the end. This was not the full webinar, so a couple of questions might still need answering, and I want to give you the chance to access the full recording. The full recording is in our membership platform, the Digital Pathology Club.
What is the Digital Pathology Club? It is a library of all our courses. Everything that the Digital Pathology Place and myself have created is there in our membership site, including the full webinar as part of the AI in Pathology course. And we are building a fantastic community in there as well.
So if you want to be part of it, I would love to [00:38:00] give you a sneak peek into what it is and offer you a free trial for our membership site. So if you want to see what it's like to be part of the Digital Pathology Club, go ahead, click the link below, and grab your free trial, and I'll see you in the next episode.