AI, Ethics, and the New Science Classroom
Artificial Intelligence (AI) is simultaneously one of the most promising tools for advancing science education and one of its greatest challenges, a tension highlighted by the fact that nearly 80% of undergraduates already use it for their studies (Chegg, 2025). The rapid integration of AI into daily life since late 2022 (Figure 1) has created a conflict for educators. AI offers tools that can personalize learning and make complex science accessible; however, it also poses a significant threat to academic integrity by challenging traditional methods of student assessment.

Figure 1. The rise in worldwide search interest for the terms “AI” and “ChatGPT” since 2022. The sharp spike beginning in late 2022 visually represents the rapid integration of AI into the public consciousness.
In science courses, AI is already proving to be a powerful pedagogical tool. For example, intelligent tutoring systems (ITS) can provide students with personalized support, offering real-time guidance on difficult concepts at their own pace, much like a dedicated personal tutor. If concepts such as stoichiometry in chemistry or the cell cycle in biology are not making sense, an ITS can walk the student through the underlying process, adjust the difficulty of the next problem, and keep the student engaged. Platforms such as Carnegie Learning and Squirrel AI use this adaptive technology to identify a student’s areas of weakness and personalize their learning, thereby ensuring a solid grasp of each concept. Additionally, AI is making science courses more accessible to students with learning disabilities by offering tools that provide real-time transcription of lectures, summaries of long texts, and audio descriptions of visual information. This ensures that all students have an equal opportunity to engage with and master the material.
Beyond personalized tutoring, AI is also transforming hands-on education through virtual labs. These platforms allow students to conduct experiments that would otherwise be too dangerous, expensive, or time-consuming for a traditional classroom. Platforms such as Labster let students explore AI-guided experiments virtually, from chemical reactions to CRISPR gene editing, creating fun, safe, and engaging learning experiences.

Figure 2. A survey of common academic uses for AI reported by undergraduate students. While tasks such as brainstorming and summarizing are prevalent, the use of AI to complete assignments and generate text raises significant academic integrity concerns (Ravšelj et al., 2025).
The same generative power behind these learning tools also fuels a challenge to academic integrity. It has become increasingly difficult to distinguish between human- and AI-generated work as students turn to AI to write papers, generate code, or solve complex math problems (Figure 2). This challenge is heightened by the unreliability of AI detection software. As illustrated in Figure 3, AI detection software can incorrectly flag authentic student work. This risks creating a climate of suspicion in which students with distinctive writing styles or non-native English speakers are unfairly penalized; an over-reliance on such software is therefore flawed. This reality renders traditional assessments, such as take-home essays and problem sets, increasingly vulnerable.

Figure 3. High rate of misclassification of student work by AI detection software. The graph shows how a detector scored known student-written and AI-generated texts. A significant portion of authentic work (yellow) was incorrectly flagged as “Probably AI” or “Definitely AI”, highlighting the unreliability of these tools (Revell et al., 2024).
This contrast places educators in a challenging position between embracing innovation and maintaining academic honesty. The central conflict is no longer merely a matter of preventing plagiarism; it represents a fundamental pedagogical challenge. Banning AI tools outright appears counterproductive, as it would deny students access to a technology that is changing the scientific landscape. However, if we ignore how these tools are misused, we risk neglecting fundamental aspects of education, such as developing critical thinking skills and achieving true understanding of the material.
In response to the challenges created by AI, institutions are actively developing and implementing a range of strategies. These proposed solutions move beyond simple detection, focusing instead on adapting educational frameworks through new policies, emphasizing AI literacy, and fundamentally redesigning student assessment.
A primary institutional response has been to establish clear policies that go beyond requiring AI citations and take a deeper ethical approach. Such policies are designed to address more nuanced challenges, encouraging discussion of the limitations and biases of AI models. These models are usually trained on public datasets that can perpetuate societal biases, a major concern for science education. Furthermore, these policies confront growing inequities in access to premium AI tools, which create a new divide by giving students with greater financial resources a distinct advantage. Effective institutional policies must therefore not only call for proper citation but also guide faculty in developing equitable assessments that account for these complex challenges (Chan, 2023).
Recognizing that outright bans are often ineffective, many educators are integrating critical AI literacy into their curricula. The goal of this approach is to transform students from passive consumers of AI-generated content into critical users. One such strategy is the Peer and AI Review and Reflection (PAIRR) model, in which students receive feedback from both AI and peers. In this model, students critique the AI’s feedback, a process that improves their writing while simultaneously teaching them to recognize the strengths and limitations of AI-generated suggestions (Sperber et al., 2025). Additionally, institutions such as Princeton University and UCLA are encouraging faculty to design assignments in which students are explicitly tasked with using an AI tool and then fact-checking, analyzing, and critiquing its response (Mangan, 2024).
Lastly, as traditional essays and exams become increasingly vulnerable to AI, educators are adopting forms of assessment that are more resilient to misuse because they target skills that AI cannot easily replicate. For instance, Davey et al. (2025) piloted an interactive oral assessment in an undergraduate bioscience course. Their study found that this method not only improved student performance, raising the average final grade from 68% to 75%, but also reduced formally investigated cases of academic misconduct from 27 to 1. This interest in oral examinations is a direct response to the need for assessments that promote deeper understanding and real-time reasoning while protecting academic integrity.
The rise of AI presents a transformative moment in science education, requiring a new approach to teaching and learning. Rather than banning a tool that is becoming vital to scientific discovery, the scientific community should embrace the challenge it poses. A combination of clear ethical policies, AI literacy training, and assessment redesign provides a strong framework for this new approach. These strategies should not be viewed as a final solution, however, but as evolving responses. Ultimately, the future of science education is not about teaching students to evade AI detection; it is about creating a new generation of scientists who know how to use these tools ethically, critically, and wisely. Successful integration will ensure that future scientists are equipped not only to use AI but also to challenge it.
Note: Eukaryon is published by students at Lake Forest College, who are solely responsible for its content. The views expressed in Eukaryon do not necessarily reflect those of the College. Articles published within Eukaryon should not be cited in bibliographies. Material contained herein should be treated as personal communication and should be cited as such only with the consent of the author.
References
Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20, 38. https://doi.org/10.1186/s41239-023-00408-3
Chegg. (2025). How many students in higher ed use generative AI? Chegg Institutions. https://institutions.chegg.com/blog/how-many-students-in-higher-ed-use-generative-ai
Bittle, K., & El-Gayar, O. (2025). Generative AI and Academic Integrity in Higher Education: A Systematic Review and Research Agenda. Information, 16(4), 296. https://doi.org/10.3390/info16040296
Davey, S. K., Birbeck, D., Nallaya, S., Sallows, G., & Della Vedova, C. B. (2025). Utilising one-on-one interactive oral assessments as the major final assessment within a bioscience course. Higher Education Research & Development. https://doi.org/10.1080/07294360.2023.2289947
Google. (2025). Gemini (Large language model). https://gemini.google.com
Mangan, T. (2024, July 3). How to craft a generative AI use policy in higher education. EdTech Magazine. https://edtechmagazine.com/higher/article/2024/07/how-craft-generative-ai-use-policy-higher-education-perfcon
Ravšelj et al. (2025). Higher education students’ perceptions of ChatGPT: A global study of early reactions. PLOS ONE, 20(2), e0315011. https://doi.org/10.1371/journal.pone.0315011
Revell, T., Yeadon, W., Cahilly-Bretzin, G., et al. (2024). ChatGPT versus human essayists: An exploration of the impact of artificial intelligence for authorship and academic integrity in the humanities. International Journal for Educational Integrity, 20, 18. https://doi.org/10.1007/s40979-024-00161-8
Sperber, L., MacArthur, M., Minnillo, S., Stillman, N., & Whithaus, C. (2025). Peer and AI Review + Reflection (PAIRR): A human-centered approach to formative assessment. Computers and Composition, 76, 102921. https://doi.org/10.1016/j.compcom.2025.102921