The use of AI tools in academia is a complex and nuanced issue that differs from more straightforward cases of academic dishonesty, such as plagiarism. While plagiarism involves the deliberate act of copying someone else's work without attribution, the use of AI tools often falls into a grey area where the line between original work and AI-generated content is blurry at best.
The difference between crossing the academic integrity line and staying within acceptable boundaries can be as simple as prompting an AI tool to do just one more task or using it for feedback on a final essay instead of a practice one. This raises the question: why is getting feedback and suggestions from AI prohibited when seeking the same assistance from a human tutor would be considered acceptable? The subjective nature of these boundaries can lead to confusion and frustration among students trying to navigate the ethical use of AI tools in their academic work.
As students seek to adopt strategies that remain ethical and maximise the benefits of studying alongside AI, the lack of clear guidelines often leads them to give up trying to do the right thing and instead focus on not getting caught. The increasing sophistication of AI tools, and the growing confidence that AI detection is either ineffective or easily side-stepped, further complicates the issue. A perfect storm is brewing: students are not adequately guided on how to use AI in ways that promote authentic learning, and there are no realistic consequences for those who cross the ethical line.
Josh Tuohy
Universities need to pursue policies that benefit the cohort of students seeking to get the most from their education, rather than adopting a position in response only to those who would use AI to circumvent learning. These two aims need not be mutually exclusive. The solution lies in a shift in mindset that sees educators design learning experiences and assessments with AI in mind, rather than attempting regulation.
Educators should aim to create assessments that effectively measure students' knowledge and skills, regardless of their use of AI tools. By designing assessments that prioritise the demonstration of higher-order thinking skills and the application of knowledge in novel contexts, universities can allow students to use these tools freely without compromising the integrity of the assessment process. The specific nature of these assessments will depend on the unique needs and goals of each institution, but the overarching objective should be to create an evaluation framework that is resilient to the increasing presence of AI in academia.
The age of AI presents significant challenges for students navigating academic integrity policies, but it also offers tremendous opportunities for enhancing learning and skill development. By designing learning experiences and assessments that acknowledge and accommodate the presence of AI tools, universities can foster a culture of responsible and innovative AI use that has the potential to enrich all participants in the teaching and learning ecosystem. Ultimately, embracing the existence of AI in education while maintaining the integrity of the learning process will benefit students and educators alike, preparing them for the challenges and opportunities of the future.
MadeWithData supports leaders and educators to realise the potential benefits of Gen AI technologies both inside and outside the classroom. Reach out to learn more about our AI Programs for Educators, designed to help you effectively integrate AI into your teaching practices and shape the future of learning.
Connect with us to discover how our business leverages AI every day
If you have questions, need training, or want someone to help you implement the use of AI in your business, contact us for a complimentary 'AI Splash' and start the journey.