
AI in Periodontal Education with Dr. Saynur Vardar

S1 E3 · Probing Perio

In this episode, Dr. Effie Ioannidou, Editor-in-Chief, speaks with Dr. Saynur Vardar on the use of artificial intelligence in dental education. How are dental residents using AI? How accurate is the output of AI generators? How can we adopt AI use while also maintaining academic integrity? Listen in to hear the full discussion.

Read the full article here. https://aap.onlinelibrary.wiley.com/doi/10.1002/JPER.23-0514

This podcast is produced by the American Academy of Periodontology (AAP). To learn more, visit perio.org.

The views expressed in this podcast episode are those of the participants and not necessarily those of the AAP.

Transcript

Introduction to AI and Periodontal Education

00:00:00
Speaker
Whether you're in training, in practice, or in research, the Journal of Periodontology and Clinical Advances in Periodontics have something new for you.

Podcast Purpose and AI's Role in Periodontology

00:00:35
Speaker
Hello, everybody. Today we are here discussing how artificial intelligence is used in dental education, particularly in periodontal education. I'm Dr. Effie Ioannidou, the editor-in-chief of the Journal of Periodontology and Clinical Advances in Periodontics. I'm here with you at Probing Perio,
00:00:57
Speaker
this new podcast that explores and dives into the clinical and translational work in periodontology and implant dentistry. So how successful is AI's use in periodontal education? What does it offer? Today, we are here to discuss one of the Journal of Periodontology's papers on artificial intelligence in dental education: it's on ChatGPT's performance on the periodontics in-service examination, published in August of 2024. And we are going to kick off today's podcast with a special guest, a good friend, and the senior study author, Dr.

Dr. Vardar's Background and Health Connection

00:01:40
Speaker
Saynur Vardar.
00:01:42
Speaker
So Saynur, welcome. Welcome to Probing Perio. Tell us a little bit about yourself and a little bit about the work you do, that amazing work you do at Nova. I was there visiting a few months ago and I loved it. Yeah. Hi, Effie. It's so nice to be here with you. It's just so much fun. When Mary Rose reached out to me, I was like, this is so fun. Let's do it.
00:02:07
Speaker
Thank you for having me. And yeah, you're such a great friend and one of our leaders, especially in periodontology. So, I'm the chair of the Department of Periodontology at Nova Southeastern University in Fort Lauderdale, Florida.

Integrative Approach to Treating Periodontitis

00:02:24
Speaker
When you visited us a few months ago, it was such an honor to have you. All my residents were so excited that the JOP's editor-in-chief was here this year, so it was so nice.
00:02:35
Speaker
My focus and my work is mostly on restoring oral health

AI in Dental Risk Assessment Projects

00:02:43
Speaker
from a full body approach.
00:02:47
Speaker
Because I'm really interested in looking at the person who has periodontitis, who the patient is, why this patient showed up with a disease in the mouth, what's happening with this person. So I trained at the Institute for Functional Medicine,
00:03:06
Speaker
where they look at chronic diseases from the perspective of the root cause of the disease, and they look at all the systems in the body when treating a single disease. And I learned that chronic diseases are lifestyle diseases.

AI's Educational Enhancements

00:03:23
Speaker
And it may manifest as periodontitis, one of the most common diseases in the mouth, but all the other systems are affected because it's an inflammatory disease in the end.
00:03:36
Speaker
So the way we approach periodontitis is not just locally, to treat the disease in the mouth, but to look at the person overall and say, okay, we need to restore health by restoring the microbiome, by restoring gut health,
00:03:54
Speaker
by maybe addressing nutritional deficiencies, replacing the deficiencies, and also looking at what is going on in this person's life, so we can actually help restore health instead of just focusing on the disease. So that's my passion in my practice as well as in my studies: to really connect the mouth with overall health and restore health.
00:04:19
Speaker
So coming to AI, it's very interesting.

Ethical and Misinformation Challenges of AI

00:04:23
Speaker
But I will stop you right there, because I think that is so important. Maybe we should have you again in another podcast to talk about the oral connections with systemic health and well-being. You touched upon so many factors that can play together into the presentation of oral diseases, periodontitis being one of them.
00:04:48
Speaker
You made so many great points. I don't want to shift the weight from what we're discussing today, but I loved your introduction, and we should definitely have you back to discuss the approach that you developed at Nova.
00:05:04
Speaker
But yes, going to AI now, right? Exactly. So AI, you know how it came up about two years ago. I'm doing an MBA right now, and I was focusing on a startup project. And we were working with one of my residents, actually, who is one of the authors of this study, Arsalan Dhanesh, a brilliant, brilliant perio resident,
00:05:26
Speaker
on how we can use AI to create risk assessment tools for periodontitis, and how we can actually make this easy to get to patients, maybe by creating an application or so. That was our project. And then one day he comes up; he has a brother who was studying for an exam, very smart, really smart people. And ChatGPT came around that time, and all the students were using it for studying for any exam, and there were a lot of discussions: is it safe, is it cheating, what is going on? But I'm one of those people like,
00:06:07
Speaker
I get excited about these things like what is the opportunity

AI as a Creative Partner in Learning

00:06:11
Speaker
here, before I look at the risks. Of course we need to know the limitations and the risks of it, but it's also exciting, it's a new tool. So then with Arsalan we decided, okay, let's test it. Instead of having the fear of, oh, it's inaccurate, what does it do? Let's test it. And how are we going to test it? The easiest, best way was the perio, you know, AAP in-service exam questions, 2023. So that's how we tested it. It's a great idea. And, you know, I wanted to talk a little bit more about this.
00:06:48
Speaker
I'm really with you. I think that we should not have the fear of the unknown. I think it's important to make sure that we embrace the opportunities and what artificial intelligence offers us, right? So I know that in the paper, and it was a very interesting paper, I read it again a few days ago just to refresh my memory, you are assessing the performance of the chat... what is it? Chat Generative Pre-trained Transformer, right? Which I learned about in your paper, what it exactly stands for.
00:07:32
Speaker
But anyway, so you tested the performance of the 3.5 and the 4, and you compared their performance. And it seems to me from the paper, and correct me if I'm wrong, that the 4 seems to have a somewhat improved performance, right? But both of them, both the 3.5 and the 4, have some limitations in some areas of the in-service exam. Tell us a little bit about this, and then specifically about the limitations. What did you find?
00:08:01
Speaker
Yes, that's correct. When the first version came out publicly, it was ChatGPT 3.5. And then very soon after that, like a few months later, the first version of ChatGPT 4 came out. And, you know, it's improving, it's constantly, constantly evolving. Now, since this paper was published, we actually had a new version. We tested it. Incredible improvement comes with every new version. And

Transparency and Policy in AI Academic Use

00:08:34
Speaker
there are reasons for that. And I can explain the reasons why.
00:08:38
Speaker
But what we saw was that what ChatGPT struggled with the most was diagnosis, etiology, treatment planning. Those were the things in the first versions. I think it's very understandable, because those need more critical thinking and complexity, right? ChatGPT was very accurate, up to like 90 to 100%, when it comes to biochemistry.
00:09:04
Speaker
You know, it's very easy for it to find that answer because it's clear knowledge. But when it comes to diagnosis, when it comes to treatment planning and etiology, it needs to put a lot of variables together. However, to our surprise, Effie, etiology was 33% in the first version,
00:09:27
Speaker
and then it really improved, like double, to 75 percent in the first version of 4. And we checked it again with the version which was updated around December 2023; it went up to 80, 90 percent. Okay, so this means that the algorithms are more improved, or is it actually the data that is fed into it? Well, because in the first version, the data only went up to September 2021. And then when you put in 2023 questions, of course, it's missing a lot of information. The other new feature, which I was really happy to see, came around, I think, November, December, around that time:
00:10:24
Speaker
now it actually opens itself to external sources. What does that mean? In the latest versions, you can ask ChatGPT, and you can say, make your answer based on scholarly articles indexed in PubMed.
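For listeners who want to try this kind of prompt programmatically rather than in the ChatGPT interface, here is a minimal sketch, assuming Python and the official openai client; the model name, prompt wording, and example question are illustrative only, not from the study, and whether the model can actually consult external sources depends on the version and tooling in use, so any citations it returns still need to be verified.

# A minimal sketch of the prompt style described above, assuming the openai Python client.
# The model name and example question are placeholders, not part of the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "What is the primary etiologic factor in periodontitis?"  # illustrative question

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whichever version you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "Base your answer only on scholarly articles indexed in PubMed, "
                "and cite each source in the form Author et al., year, journal."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
# Any citations in the output should still be checked against PubMed itself,
# since language models can fabricate plausible-looking references.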
00:10:44
Speaker
Oh, wow. Meaning that it goes and searches PubMed and gives you the article; it says Vardar

Podcast Conclusion and Future of AI in Healthcare

00:10:52
Speaker
et al. 2021 found that boom, boom, boom, boom, boom.
00:10:56
Speaker
And this is the answer based on this information. So what happened with the developments is that it opened itself not only to its own knowledge, but also to external sources like peer-reviewed articles and journals. Yeah, it's getting better because it's more accurate. This is great. This is good to know. You know, you remember back in the day, back in the 90s, we had this book of all the summaries of the classic literature reviews in perio, right? It was published by the AAP, I think, in the mid-90s. I mean, that was an amazing help for residents when they wanted to organize their, you know, literature review summaries
00:11:46
Speaker
by theme or something. And I'm sure that, as time goes by, we will see more applications of this, right? And now, I mean, these are the opportunities, and I know the opportunities are endless, but as you look ahead, what types of risk do you identify with the use of artificial intelligence in education,
00:12:12
Speaker
specifically based on your findings? It seems like the performance is improving. It's not 100% perfect, but what are the risks that you find? The risks are misinformation, because it's like human beings. It's actually acting, interestingly, like human beings, in the sense that if it doesn't know the information, it really makes up information.
00:12:34
Speaker
Okay, it makes it up, right. So it actually says, based on my knowledge, I can come up with this answer, and it can justify it so well. For example, we found that even for incorrect answers, ChatGPT gives you an incredible justification. Yeah.
00:13:00
Speaker
Yeah, so it can justify it, and it can convince you that it is the right answer. Yeah, and I think that's a big misinformation risk that we have to be careful about; we have to cross-check it. Because otherwise it can sometimes even make up the article. You know, we found that not all the articles it cites are real articles. So that's why there are risks of misinformation, and there are risks of really justifying the information as if it's the truth, which may not be so. Those are the risks and limitations that I see. And also, this is for, you know, the
00:13:43
Speaker
information out there; it actually extracts from that information. How about when it comes to clinical settings? Clinical scenarios are much more complex, as we gave the example of, let's say, diagnosis or treatment planning, right? Yeah. So those are much more complex situations, and if you want to go that route,
00:14:03
Speaker
then it's going to be even more difficult to rely on the information that it provides. So we have to keep cross-checking that. I'm glad that you brought up the term misinformation. As I was preparing for this interview, I tried to read a little bit more about misinformation and disinformation. I think it's very relevant in every aspect of science, in the humanities, and in the biomedical sciences, anywhere, right? And it's really timely. Everybody's talking about this.
00:14:46
Speaker
It's very relevant in journalism, even. So it seems like the American Psychological Association defines misinformation as false and inaccurate information, as opposed to disinformation, which is the same thing but deliberately intended to mislead. Right, so I like that you used the term misinformation, because, you know, one would wonder,
00:15:13
Speaker
I mean, and again, it's a philosophical question, I assume the best faith, the good faith, of every response AI gives us, right? So I would agree with you, and I would classify it as misinformation. But should we be skeptical? Could the answers and directions that AI gives you, especially as they relate to health,
00:15:42
Speaker
could they be disinformation? I don't know. I don't have an answer for this; I was just thinking about it. And how can we really protect ourselves? How can we protect our learners? What are the safeguards that we can put in place as our learners use AI? And they will use it; it would be silly for us to think that our residents, the new generation, are not going to go to these resources. They will definitely use them.
00:16:11
Speaker
What are the safeguards that we should put there for them? As programs, graduate programs, how can we protect them? Okay, I think you're right. Is there a risk for disinformation? Yes, there is a risk for disinformation; it's intentionally put there. At what level may that come? It may come where the algorithms are, you know, written and defined, and what you put in is what you take out. So I think there is a risk there, of course. But I think the way we can protect the learners is this: we have to
00:16:49
Speaker
think of it this way, because this is an easy way: think of generative AI like a personalized tutor. I mean, I have an incredible wealth of knowledge in front of me, and based on my questions, the prompts I give, it responds to my needs and it's teaching me toward my needs. This is an incredible tool that we never had, right? It's personalized. You can ask a faculty member, you can read the journals, you can read articles, but it's giving you direct personalized tutoring. That's why it's easy.
00:17:25
Speaker
It's timely and it's very fast. It gives you everything in seconds. However, we always have to be cautious from a scientific perspective: we need to verify it. And we need to verify it to the point that, let's say, I use ChatGPT in my daily life, in my studies, everywhere. But when I see an article
00:17:51
Speaker
that it cites, I go to the article. That's my second step. It can give you beautiful overall information. You can align your thinking. You can actually see where you're missing things, your weaknesses. When it generates some information, you read it. You see, like, oh, I didn't know that. But then you have to find that article, go to it, read some of the important points, and see that that really is the case. You cannot rely on it 100% with the version it is right now.
00:18:25
Speaker
And we know it's all evolving. We know it's only going to get better. But at this point, we really cannot rely on it. We have to cross-check it. That's what I can suggest to them, and that's what I do myself, too: go to the article, check it, and make sure it's the correct information.
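The cross-checking step described here can also be partly scripted: a quick query against PubMed confirms whether a cited title exists at all before you sit down to read the article. Below is a minimal sketch, assuming Python with the requests library and NCBI's public E-utilities esearch endpoint; the example title is hypothetical, and a zero-hit result only flags a citation for manual review.

# Minimal sketch: check whether a cited title can be found in PubMed at all.
# Assumes the public NCBI E-utilities esearch endpoint and the requests library.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_hits(citation_title: str) -> int:
    """Return the number of PubMed records whose title matches the quoted phrase."""
    params = {
        "db": "pubmed",
        "term": f'"{citation_title}"[Title]',
        "retmode": "json",
    }
    resp = requests.get(EUTILS, params=params, timeout=10)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

# Hypothetical example: a reference pulled from a ChatGPT answer.
title = "Performance of a large language model on a periodontics examination"
if pubmed_hits(title) == 0:
    print("No PubMed record matches this title; verify the citation manually.")
else:
    print("At least one PubMed record matches; now go read the article itself.")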
00:18:42
Speaker
I think you're absolutely right, because if we, or our learners, our residents, our students, adopt and keep repeating the wrong output, right, then there is an enhanced circle of this misinformation, and we are putting, I guess,
00:19:10
Speaker
high frequency and high weight on something that might not be accurate. And as we recycle this inaccuracy online, it becomes stronger and stronger and more pronounced, and many people can take it as true. So you're absolutely right. I think it's really important for us to leverage sources that we trust, right? So you see a paper, go to PubMed, make sure that it actually exists.
00:19:38
Speaker
And not only that it exists, but that the summary is exactly what the AI source claims it is. So I think we have to do some kind of validation and make sure that what we hear is what it really is. But you're right, it's very helpful, it's very fast, and it's very tempting.
00:20:08
Speaker
And you know, Effie, there is actually one benefit that I see in the early stages of, let's say, planning a project or a presentation, or when you want to learn something new. It's a really great tool in the sense that you can co-create with it, meaning that it really enhances my creativity.
00:20:31
Speaker
So I read something or it makes me think differently. And then I put another prompt and it expands it. So we are in this co-creative process.
00:20:43
Speaker
I feel like it's really enhancing my abilities. And I think that's the most beautiful part of the beginning process. But once you are moving forward, you need to do your work to get to the accurate information, so that you are right in your next steps. I like what you're saying about co-creativity. I really like this term very much.
00:21:07
Speaker
And it's something that I want to expand on now, because you opened a very important topic, questions that we very much get at the journal related to what extent AI is allowed to be used by authors.
00:21:29
Speaker
Right, so this is another big philosophical question. Who is the author? Is it Effie? Is it Saynur? Or is it the GPT? How much can an author utilize AI resources for a paper?
00:21:53
Speaker
And I don't think that there are policies and certain, you know, regulations as to how you write a paper, right? You write the paper; the goal is for you to present the best writing of your work and the best reporting of your work. But I think there should be, and we are thinking about this and about how we are going to develop our instructions to authors in the future. I think that people should be transparent and open.
00:22:26
Speaker
Like, "I used ChatGPT to edit my manuscript." Perfectly fine; we all use English language editing, right? I mean, high school students use Grammarly, everybody uses language editing, so that's perfectly fine. Now, that is very different from, you know, "I threw in my data and I asked the source to write the entire paper for me." That's very different than just editing. So, your thoughts on this? Where are you with this?
00:23:03
Speaker
I think you're totally correct in the sense that we have to create policies, and we have to promote being transparent about it instead of policing it. You know, policing it comes from fear, and on a basis of fear, nothing expands. We have to say that we know the risks, we know the opportunities, here are the policies, and we want you to be transparent about them. Then the author will say, yes, these are the prompts that I used.
00:23:31
Speaker
Because, look, ChatGPT can have all the information in the world from the last, you know, thousand years. But ChatGPT is not going to give you the information that you want to extract unless you ask it the right question. You see, the creative source is still me, because the creative source is my prompt, my question directed in the right way, which helps me to extract the right information from that big body of data. So I think the co-creation piece comes in there. If I am transparent and I say, in this paper I used these services from the generative AI, using these prompts, these questions, so I give you a summary of my questions to it, then you can transparently evaluate how much contribution was made by the author in a creative way versus how much contribution was made by the generative AI.
00:24:36
Speaker
So based on that, you can actually judge it and create policies, saying that we expect at minimum this much contribution for this paper to be authored by this person, and disclose that generative AI was used in this capacity. And I think we have to create those policies and we have to be very open to it, as you said, like editing. This is like an editing tool. For example, it has, let's say,
00:25:06
Speaker
all the articles; maybe there are 200 articles on my topic. It has everything. I'm not going to be able to hold all 200 of them in my mind, but I need to know enough to ask it, can you look at all those 200 articles and extract for me this relationship between this and this, and explain to me the reason why and the rationale. And it comes, and I take it and twist it. So that's the co-creative process that I think is valuable. Another piece, Effie: I think it helps me evolve.
00:25:46
Speaker
You see, because the input it gives to me helps me to ask more creative questions. So actually, it helps me to get better and better and better. And that's the piece. But we have to find exactly the right policies and know enough about it. Very good point, too. I think the point that you made in terms of the interaction between the human and the AI source, and how you kind of evolve in your questioning, right, and your exploration and curiosity. But also, of course, given that the human is curious, because there are people that will take an output and rest on it: that's enough, I don't push my limits.
00:26:37
Speaker
I think if you want to push your limits and you want to remain curious, this interaction certainly helps you improve and enhances your knowledge. You're absolutely right. And I agree about the policing. Absolutely true. We are not here to police anybody. It would be a mistake for us to police and limit the resources that AI offers us. I think the most important thing is to be transparent,
00:27:05
Speaker
and to declare exactly to what extent, to what magnitude, we used the resource. I think this is where scholarly publishing is going. I think it would be naive to think that, you know, we can just forbid it.
00:27:19
Speaker
We would really be shooting ourselves in the foot if we did something like this, and we would alienate the younger generation of authors.
00:27:33
Speaker
What an amazing discussion. We could sit here and keep discussing. So tell me, is there anything that we didn't cover in relation to the paper? Anything that you would like to summarize, or anything that you would like to cover and kind of put some bottom-line weight on?
00:27:57
Speaker
So one thing I want to emphasize is the proficiency on the exam. With the first version, and I want to really emphasize this, the first version, 3.5, was accurate 57.9% of the time, I mean proficient on the correct answers. ChatGPT 4, the early version of 4, was 73.6%.
00:28:23
Speaker
That's the way they all answered: the correct answers versus the wrong answers. And then the version that we used, the newer December 2023 version, went up to 91.7% correct answers.
00:28:42
Speaker
Yeah, I think that's the thing that is improving, because it's bringing in more tools, like the external sources and peer-reviewed articles we mentioned earlier. So I think that's a great improvement. On the other hand, as I mentioned, even if it's incorrect, it gives a lot of amazing explanations
00:29:05
Speaker
emphasizing the wrong answers. And it's still not there for making good clinical scenarios, treatment plans, or diagnoses. It's really, really not there yet. But is it going to get there? Probably it will. I can see a future where we as clinicians are going to put into generative AI tools the patient, all the variables,
00:29:28
Speaker
and then give some prompts saying, I want to create a fixed solution with implants, can you give me some scenarios? And it's going to generate those, too. So I think we are not there yet, but it will come to a point.
00:29:50
Speaker
Even now we are using some tools for risk assessments, where we put in some data; even people on their apps are going to put in data, find out risk assessments, and what to do for future prevention. And then it's even giving some suggestions, recommendations on how to make lifestyle changes, maybe even to change their diet, and it's already out there. What type of supplements should they take, or what should they do? So it's going to be very, very personalized
00:30:30
Speaker
approaches toward prevention of disease as well as treatment of disease in the whole human body, not just the mouth, everywhere. So we showed in the article how this is also used in other areas of medical education in different specialties, and it's out there as well. So we have to be prepared: generative AI, with the human creative co-creation process, is going to be very much involved in health care and prevention and also treatment, in a very personalized medicine approach. Yeah, this is a very good point, and I think you're absolutely right, this is where the future is, and there will be a point where
00:31:14
Speaker
we might be able to run risk assessments, to have easily accessible risk assessment tools and treatment planning tools that offer a more holistic type of approach to the way we treatment-plan patients, and it will be easily accessible for all of us.
00:31:38
Speaker
Yeah, and everything will be linked, like your HbA1c, your vitamin D levels, your insulin levels; everything will be linked in one platform. That is more integrated and will be much easier, for sure. And that's not sci-fi anymore. We are almost there. We are almost there.
00:32:03
Speaker
So tell our listeners how they can find out more about you, how they can follow you, maybe on social media, your work, and anything that you would like to share with us and the public.
00:32:17
Speaker
Oh yeah, sure. I'm very active on Instagram. My Instagram account name is Dr. Saynur Vardar. Yeah, I follow you. Yes, I know. We follow each other and we reshare each other's stories and so on. I enjoy that. It's my personal account, but anything that I am passionate about in life and work and these new developments, I share there.
00:32:46
Speaker
So anybody, students, the public, anyone who wants to reach out to me, that's the best way. You can DM me. I'm really active all the time, so I will answer. I would be happy to connect with everyone.
00:32:59
Speaker
I'm looking forward to seeing you at the AAP meeting in San Diego. And I think those discussions, especially in periodontal education related to the application of AI, are very timely and very relevant. I would love to continue this conversation over there with other program directors, too, and other clinicians that are in periodontal education. I think this is great.
00:33:23
Speaker
So thank you so much. Thank you for joining us. Thank you for making the time to review the paper with us, to philosophize with us, and to talk about the exciting future in periodontology. I really enjoyed it.
00:33:44
Speaker
Thank you very much, Effie. It's always nice to have these conversations with you. It's fun. Thank you very much. I appreciate it. It's fun, and I love the curiosity and I love the opportunities that you highlighted with the paper in our conversation. And to our listeners, if you liked this episode, if you like
00:34:06
Speaker
Probing Perio, share the episode with your friends, subscribe to the podcast so you know all the latest work on whatever platform you are listening on, and try to rate us and leave a review. We like comments, we like feedback, we listen to you, and we would like to improve. Thank you, everybody, and thank you, Saynur. Thank you. Bye bye. Bye bye.