Table 1.

Major themes and illustrative quotes identified from thematic analysis of transcripts of interviews with informaticians to explore the potential of AI for VTE prevention and management

Theme / Example

Perceived strengths of AI

- Reduces clinician burden: "...it can be a labor-saving device. So, it can replace a lot of the time that physicians spend pouring through patient notes and radiologist reports and like medical guidelines...(AI) can do a lot of that work for you and help you narrow your focus to just the nuggets of information that are relevant to the task at hand." (P7)
- Increases efficiency: "Does it improve efficiency? Yes, because that person’s workload probably reduced from taking 1 minute to look at one image to like a matter of seconds..." (P5)
- Increases accuracy: "We’ve used NLP…and I can say the accuracy is pretty good. We’re getting AUC’s around 0.95." (P17); "I think the new models have a very high performance in images, like convolutional networks." (P14)
- Supports decision-making: "I see potential use or usefulness for predicting or like telling you which kind of persons or patients are high risk, suffering VTE. There I see potential use and then perhaps also like checking orders. So, making sure that people who are at high risk have prophylaxis. Or perhaps patients who are low risk don’t get prophylaxis and vice versa" (P13)
- Makes medical information more accessible to patients: "I know that patients also don’t like sometimes how their providers take a while to get back to them or they’re busy or they can’t seem to reach them. Having this, a synchronous tool that can be accessible whenever they’re ready or whenever they have a question" (P10)

Perceived barriers to implementing AI

- Quality of training data: "...if you put (bad) data in you get bad results out, so, the quality of the data set, which is very difficult in healthcare settings because all of the data is not structured" (P20)
- Ethical concerns: "...you want to be fair, so you want to represent everyone. You know and otherwise you want to protect patient privacy. That’s very important, so, you don’t want any data to be leaked outside" (P2)
- Model inaccuracy: "There are actually multiple examples now of what some of the people in the technology logs are calling ‘hallucinations’. It’s basically where the Chat GPT is making stuff up" (P15)
- Clinician unfamiliarity: "If they’re not familiar, it can be very skeptical or also having a lot of unrealistic expectations (about AI)…" (P18)
- Patient preference for human communication over AI: "(Patients may think) well, here’s one more thing that’s between me and my provider. I got to deal with the insurance. I got to deal with the phone menu, I got to deal with this, that, and the other. Now I can’t even talk to a human. They’re having me talk to some Chatbot." (P9)
AUC, area under the curve.
