Integrity

Artificial intelligence technologies invite us to reconsider how their use might be aligned with essential integrity principles in an academic setting. Below are articles covering this topic.

We believe the emergence of ChatGPT creates an opportunity for schools and post-secondary institutions to reform traditional approaches to assessing students that rely heavily on testing and written tasks focused on recall and basic synthesis of content.


Introduces ChatGPT, discusses the academic integrity implications and concerns raised by researchers and educational institutions, and underscores the importance of rethinking learning assessment approaches and strategies.


Note: After you access this article a few times, the Globe and Mail website will ask you to register for free to read more.


More worrisome are the effects of ChatGPT on writing scientific papers. In a recent study, abstracts created by ChatGPT were submitted to academic reviewers, who caught only 63% of these fakes. That's a lot of AI-generated text that could soon find its way into the literature.


Few current issues in education have provoked more interest – or sounded more alarms – than artificial intelligence (AI) technology. While the issue has simmered for some time, it burst to the forefront of debate following OpenAI's public release of ChatGPT.


This study evaluated the ability of ChatGPT, a recently developed artificial intelligence (AI) agent, to perform high-level cognitive tasks and produce text that is indistinguishable from human-generated text.


The release of ChatGPT has everyone abuzz about artificial intelligence. I’ve been getting lots of questions about our research project Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing Technologies.


This post is part of an ongoing series addressing factors that may lead to academic dishonesty and strategies for combating it.