Background: Generative artificial intelligence (AI) is rapidly influencing medical education by offering instant explanations, summarization, and interactive academic support. However, student experiences with these tools remain context-dependent, and qualitative evidence from small-island medical education settings is limited.
Objectives: To explore medical students' experiences with generative AI in learning, identify perceived benefits and concerns, and examine differences between preclinical and clinical learners.
Methods: A qualitative descriptive study was conducted at All Saints University School of Medicine, Dominica, from August 2025 to January 2026. A total of 120 medical students participated. Data were collected using a semistructured questionnaire containing demographic items and open-ended questions on patterns of use, educational value, reliability, ethical concerns, and institutional expectations. Responses were analyzed using inductive thematic analysis.
Results: Five major themes emerged. Students commonly described generative AI as an accessible academic support tool and a means of improving learning efficiency and self-directed study. At the same time, they expressed concerns regarding inaccurate or oversimplified information, dependency, erosion of critical thinking, and uncertainty about acceptable academic use. Preclinical students emphasized concept clarification and revision support, whereas clinical students more strongly highlighted contextual reliability, professionalism, and patient-safety implications.
Conclusion: Students viewed generative AI as a valuable adjunct to medical learning but not as a substitute for standard resources, faculty guidance, or independent reasoning. Medical curricula should incorporate structured AI literacy, critical appraisal of AI outputs, and explicit ethical guidance for responsible educational use.
Generative artificial intelligence (AI), particularly large language model-based tools, has rapidly entered medical education and is now being discussed as both a pedagogic opportunity and a source of significant academic disruption [1,2]. These tools can generate summaries, answer questions in conversational language, assist with case formulation, draft educational content, and provide immediate academic support at the point of need. Their speed, accessibility, and interactive format make them especially attractive to medical students who are expected to process large volumes of information across the preclinical and clinical phases of training [3,4].
Current literature suggests that large language models can support explanation of difficult concepts, personalize revision, stimulate self-directed learning, and expand access to learning resources [5]. Reviews in medical education have highlighted their potential to assist with tutoring, assessment preparation, feedback generation, curriculum adaptation, and learner engagement [4]. At the same time, these tools are not neutral educational instruments. Their usefulness depends on the quality of prompts, the learner's prior knowledge, and the ability of users to recognize incomplete, fabricated, or contextually inappropriate outputs [4].
Concerns about generative AI in medical education are equally prominent. Published studies have drawn attention to hallucinated content, inconsistency, weak contextual reasoning, academic integrity concerns, and the possibility that overreliance could weaken reflective learning and critical thinking [7]. Ethical questions are particularly relevant in professional education, where the boundaries between legitimate academic assistance and inappropriate outsourcing of cognition remain insufficiently defined. Students are also entering clinical environments in which inaccurate information, poor source transparency, and uncritical use of AI-generated responses could create downstream consequences for professionalism and patient safety [7,10].
Empirical work among medical and health-professions students shows a broadly similar pattern: high awareness and growing use of generative AI, combined with a cautious assessment of trustworthiness and ethics [6]. Cross-sectional and mixed-methods studies have reported that students value AI for efficiency, accessibility, and rapid clarification, but they continue to express concern about accuracy, misuse in assignments, and the need for formal curricular guidance [6-10]. Qualitative studies have further shown that student perceptions are shaped by training stage, educational context, and expectations regarding responsible use [8-10]. Despite this expanding evidence base, context-specific qualitative data from Caribbean medical schools remain limited.
Against this background, the present study was undertaken at All Saints University School of Medicine, Dominica, to explore how medical students experience generative AI within their academic learning environment. The objectives of the study were to examine patterns of student use, identify perceived academic benefits, explore concerns related to accuracy, dependency, and ethics, and compare broad differences in perspectives between preclinical and clinical students.
METHODOLOGY
Study design and setting
This study used a qualitative descriptive design to obtain a practice-oriented understanding of students' experiences with generative AI in medical education. A qualitative descriptive approach was selected because the study sought to capture students' views in language close to their own experiences, rather than to generate a formal theory. The study was conducted at All Saints University School of Medicine, Dominica, over a six-month period from August 2025 to January 2026. The reporting and organization of the qualitative methods were informed by established guidance for transparent qualitative research and thematic analysis [11-13].
Participants and sampling
The study population comprised undergraduate medical students enrolled in the preclinical and clinical phases of training. Students who were currently registered and willing to provide informed consent were eligible to participate. Interns, graduates, and incomplete responses were excluded from the final analysis. Recruitment was carried out through institutional student communication channels using a voluntary response approach. A broad invitation strategy was adopted to ensure that students with differing levels of prior exposure to generative AI could be represented. A total of 120 valid responses were included in the final dataset, including 64 preclinical students and 56 clinical students.
Data collection
Data were collected using a semistructured questionnaire administered in electronic form. The first section captured participant characteristics, including phase of study, sex, and prior exposure to generative AI for academic purposes. The second section contained open-ended questions designed to elicit students' experiences regarding how they used generative AI, situations in which they found it helpful, perceived limitations, concerns about reliability, fears of overdependence, ethical uncertainties, and suggestions for how medical schools should respond. Open-ended written responses were selected to permit inclusion of a relatively large number of participants while preserving qualitative depth across recurring experiences. Participation was anonymous, and no identifying personal data were collected within the analytical dataset.
Data analysis
All responses were reviewed in full before coding. Inductive thematic analysis was then undertaken following familiarization with the dataset, generation of initial codes, grouping of related codes into candidate categories, theme development, theme review, and refinement of thematic definitions [12,13]. Two reviewers independently examined the responses and developed an initial coding framework, after which differences in coding and interpretation were resolved through discussion and consensus. Theme frequencies were calculated to indicate the breadth of endorsement across participants, recognizing that individual respondents could contribute to more than one theme. Data collection and analysis proceeded iteratively, and thematic sufficiency was considered reached when later responses did not add substantively new codes or conceptual insights, consistent with published guidance on qualitative saturation [14].
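As one way of making the frequency step concrete, the non-mutually-exclusive tally described above can be sketched in a few lines of Python. The theme labels and coded responses below are hypothetical illustrations, not study data; any real analysis would tally the full coded dataset.

```python
from collections import Counter

# Hypothetical coded dataset: each participant's response is the set of
# themes it was coded to. A participant can contribute to more than one
# theme, so frequencies are not mutually exclusive and do not sum to N.
coded_responses = [
    {"accessible_support", "efficiency"},
    {"accessible_support", "accuracy_concerns", "ethics"},
    {"dependency_fear"},
]

# Count how many participants endorsed each theme.
counts = Counter(theme for themes in coded_responses for theme in themes)
n = len(coded_responses)

# Report breadth of endorsement as n (%) of the total sample.
for theme, k in counts.most_common():
    print(f"{theme}: {k}/{n} ({100 * k / n:.1f}%)")
```

Because each participant is stored as a set of themes, the same respondent is counted at most once per theme but may appear under several themes, which matches the reporting convention used in the tables below.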
Ethical considerations
This study was conducted after obtaining institutional permission from All Saints University School of Medicine, Dominica. Electronic informed consent was obtained from all participants prior to response submission. Participation was voluntary, and students were assured that refusal to participate would not affect their academic standing in any way. Participant anonymity and confidentiality were maintained throughout the study, and all data were analyzed and reported in aggregate form for research purposes.
RESULTS
A total of 120 medical students from All Saints University School of Medicine participated in the study. The sample included representation from both the preclinical and clinical phases of training, enabling comparison of experiences across stages of medical education. Of the 120 participants, 64 (53.3%) were preclinical students and 56 (46.7%) were clinical students. Female students accounted for 68 (56.7%) participants and male students for 52 (43.3%). Prior exposure to generative AI for academic use was common, with 98 (81.7%) students reporting at least occasional use and 22 (18.3%) reporting minimal or no regular use (Table 1).
Table 1. Participant characteristics of the qualitative study sample (N = 120)

| Variable | Category | n | % |
|---|---|---|---|
| Phase of study | Preclinical students | 64 | 53.3 |
| | Clinical students | 56 | 46.7 |
| Sex | Female | 68 | 56.7 |
| | Male | 52 | 43.3 |
| Prior exposure to generative AI for academic use | At least occasional use | 98 | 81.7 |
| | Minimal or no regular use | 22 | 18.3 |
Figure 1: Participant characteristics of the qualitative study sample
Inductive thematic analysis identified five major themes. The most frequently endorsed theme was generative AI as an accessible academic support tool (102/120, 85.0%), followed by improved efficiency, understanding, and self-directed learning (91/120, 75.8%). Concerns regarding accuracy, reliability, and incomplete medical context were reported by 84 (70.0%) participants. Fear of dependency and erosion of critical thinking was identified in 76 (63.3%), while 88 (73.3%) expressed ethical uncertainty and the need for institutional guidance (Table 2).
Table 2. Major themes identified from qualitative analysis

| Theme No. | Major theme | Participants endorsing theme (n) | % of total sample |
|---|---|---|---|
| 1 | Generative AI as an accessible academic support tool | 102 | 85.0 |
| 2 | Improved efficiency, understanding, and self-directed learning | 91 | 75.8 |
| 3 | Concerns regarding accuracy, reliability, and incomplete medical context | 84 | 70.0 |
| 4 | Fear of dependency and erosion of critical thinking | 76 | 63.3 |
| 5 | Ethical uncertainty and the need for institutional guidance | 88 | 73.3 |
Figure 2: Major themes identified from qualitative analysis
Students most often described generative AI as a readily available supplementary academic assistant. Participants reported using these tools to simplify difficult topics, summarize large reading loads, generate plain-language explanations, and support revision before examinations. This pattern was especially prominent among preclinical students, who frequently referred to anatomy, physiology, and biochemistry as areas in which conversational AI helped them understand foundational concepts more quickly.
A second major pattern was the perception that generative AI improved academic efficiency and supported self-directed learning. Students described saving time in searching for information, organizing notes, preparing outlines, generating mnemonics, and creating practice questions or case-based prompts. Many participants felt that AI tools adapted explanations to their level of understanding and increased confidence when approaching unfamiliar topics. Together, the subthemes and interpretive summary indicate that students perceived generative AI as both a productivity aid and an individualized support resource (Table 3).
Table 3. Theme-wise subthemes and interpretive summary

| Major theme | Key subthemes | Interpretive summary |
|---|---|---|
| Generative AI as an accessible academic support tool | Simplification of complex topics; summarization of lengthy material; plain-language explanations; revision support; immediate clarification | Students viewed generative AI as a readily available supplementary academic assistant, especially useful when faculty or peers were not immediately accessible. |
| Improved efficiency, understanding, and self-directed learning | Time-saving; note organization; individualized explanations; mnemonics; practice questions; case scenarios; assignment support | Participants felt that AI improved study efficiency and promoted self-directed learning by tailoring explanations to their level of understanding. |
| Concerns regarding accuracy, reliability, and incomplete medical context | Incorrect or outdated responses; oversimplification; lack of contextual depth; need for verification from textbooks, literature, and faculty | Students acknowledged the usefulness of AI but emphasized that it could not replace standard medical resources because of concerns about trustworthiness. |
| Fear of dependency and erosion of critical thinking | Passive learning; reduced textbook reading; weaker reflective reasoning; overreliance on ready-made answers | Participants worried that excessive use of AI could reduce active engagement and impair the development of analytical and critical thinking skills. |
| Ethical uncertainty and the need for institutional guidance | Unclear boundaries for assignments; concerns about academic misconduct; lack of policy; need for workshops and formal instruction | Students expressed uncertainty regarding the ethical use of AI in medical education and strongly favored institutional guidance for responsible use. |
Despite these benefits, students repeatedly emphasized concerns regarding the trustworthiness of AI-generated information. Participants stated that some outputs were vague, incomplete, outdated, or overly confident despite being inaccurate. Clinical students were especially cautious about relying on AI for disease interpretation, differential diagnosis, treatment-related learning, and patient-centered reasoning. Across responses, students consistently indicated that generative AI could support learning, but could not replace standard textbooks, peer-reviewed sources, or faculty instruction.
Fear of dependency constituted another important theme. Students worried that easy access to summaries and ready-made answers could encourage passive learning, reduce deep reading, and weaken the development of critical thinking. Several respondents explicitly distinguished between guided use and substitutive use, describing AI as beneficial when it supported their own reasoning but problematic when it displaced independent intellectual effort.
Ethical uncertainty emerged as a strong cross-cutting concern. Students were often unsure whether using generative AI for assignments, reflective writing, case discussions, or assessment preparation should be considered legitimate academic support or a form of misconduct. Many participants therefore called for formal institutional guidance, including explicit policy statements, workshops on responsible use, and curricular teaching on verification of AI outputs, citation practice, professionalism, and patient safety.
Comparative observations further showed that preclinical and clinical students used generative AI for different educational purposes. Preclinical students more often emphasized concept clarification, simplification of foundational sciences, and examination support. In contrast, clinical students more frequently highlighted contextual reliability, responsible interpretation, and the risks of applying AI-generated content to clinical reasoning without supervision. Clinical respondents also demonstrated greater awareness of professionalism and patient-safety issues related to inappropriate AI use (Table 4).
Table 4. Comparative observations between preclinical and clinical students

| Domain | Preclinical students | Clinical students |
|---|---|---|
| Primary academic use | Concept clarification in anatomy, physiology, and biochemistry | Review of disease mechanisms, differential diagnosis, and treatment principles |
| Perceived benefit | Simplification of foundational concepts and examination support | Integration of theoretical knowledge with patient-oriented learning |
| Main concern | Potential for dependence on simplified material | Accuracy, contextual depth, and inappropriate application to clinical reasoning |
| View on reliability | Useful for introductory understanding and summaries | Useful only as a supplementary aid, not for independent clinical decision-making |
| Ethical and professional awareness | Focused more on academic utility | Showed greater concern regarding professionalism, patient safety, and responsible use |
Note on thematic frequencies (Table 2): In qualitative studies, a participant can contribute to more than one theme. Therefore, thematic frequencies are not mutually exclusive and should not be summed across themes.
Overall, the results depict a balanced student perspective. Generative AI was viewed as useful, accessible, and educationally efficient, yet students clearly resisted the idea that it could function as an autonomous or definitive learning authority. Across all major themes, participants favored supervised, critical, and ethically guided integration of generative AI into medical education.
DISCUSSION
This study found that medical students at All Saints University School of Medicine held a broadly receptive yet cautious view of generative AI. The dominant pattern in the data was not simple enthusiasm or outright resistance, but conditional acceptance. Students recognized that these tools could improve access to explanations, accelerate routine study tasks, and support independent learning, but they simultaneously identified substantial limitations related to trustworthiness, critical thinking, and responsible use. This overall balance aligns with contemporary reviews and conceptual papers that describe large language models as promising educational adjuncts whose value depends on careful integration rather than uncritical adoption [5].
The strongest positive findings in the present study related to accessibility and efficiency. Participants viewed generative AI as an always-available assistant that could translate difficult concepts into more understandable language and reduce the time required to organize study materials. Similar benefits have been described in cross-sectional and mixed-methods studies among medical and health-professions students, in which learners valued AI for rapid clarification, convenience, and individualized academic support [6-8]. The present study extends that work by showing that such advantages are also strongly perceived in a Caribbean medical school setting, suggesting that the appeal of generative AI crosses geographical and curricular contexts.
At the same time, students in this study demonstrated a mature awareness of the limitations of AI-generated content. Concerns regarding hallucination, oversimplification, and lack of clinical nuance closely mirror published observations that large language models can produce fluent but unreliable educational outputs [9,10]. The more cautious stance among clinical students is particularly important. As learners move closer to patient-facing responsibilities, the requirement for contextual reasoning, source verification, and professional judgment becomes more immediate. The stronger emphasis by clinical students on reliability, professionalism, and patient safety is therefore educationally meaningful and consistent with literature calling for stage-sensitive AI integration in medical curricula [7,10].
The themes of dependency and ethical uncertainty also deserve particular attention. Students were not merely concerned about cheating in a narrow procedural sense; they were concerned that habitual reliance on AI could weaken active engagement with learning, limit reflective reasoning, and blur the boundary between assistance and substitution. Similar concerns have been reported in prior student studies addressing academic integrity, responsible authorship, and the need for explicit ethical guidance in medical education [10]. These findings indicate that institutional silence is itself a problem. When students do not know what constitutes acceptable use, practice becomes inconsistent and ethically vulnerable.
Taken together, the results support a curricular response that is proactive rather than prohibitive. Medical schools should teach students how to use generative AI critically, how to verify outputs against standard references, how to disclose AI assistance transparently where required, and how to understand the professional consequences of inaccurate or inappropriate use. Faculty development is equally important, because students are unlikely to adopt responsible practices in the absence of coherent institutional modeling and assessment policy [5,7,10]. In this sense, the present findings support structured AI literacy, source appraisal, and professionalism training as integral components of contemporary medical education.
LIMITATIONS
This study was conducted in a single medical school and therefore reflects one institutional context. Participation was voluntary, introducing self-selection bias. Experiences were self-reported and were not triangulated with observed use or academic performance. Data were collected through open-ended survey responses rather than in-depth interviews, which limited probing of nuanced perspectives. Theme frequencies were descriptive and should not be interpreted as quantitative effect estimates.
CONCLUSION
In conclusion, medical students in this study perceived generative AI as a useful supplementary learning tool that improved accessibility, efficiency, and support for self-directed study across both preclinical and clinical training. However, they also identified clear concerns regarding factual accuracy, contextual depth, overdependence, and ethical ambiguity. The findings indicate that students do not seek unrestricted AI use; instead, they prefer guided and accountable integration. Medical schools should respond by embedding AI literacy, source verification, academic integrity guidance, and professionalism-oriented discussion into the curriculum. Such an approach can help students benefit from generative AI while preserving critical thinking, responsible scholarship, academic honesty, and the standards expected in future clinical practice.
REFERENCES