FAQ for professors
There is also an AI FAQ for students on the Academic Integrity for Students webpage (last section of the page).
While many generative AI systems have recently become available, ChatGPT is currently the most prominent, garnering worldwide media attention. This AI tool uses predictive technology to create or revise texts, including essays, computer code, lesson plans, poems, reports and letters. The products created are generally of good quality, although they may contain inaccuracies. We encourage you to try the system to test its capabilities and limitations.
In this FAQ, ChatGPT refers to the free, online AI chat system that uses OpenAI GPT technology. It is just one of several generative AI tools currently available.
How can I test out ChatGPT to see its capability?
Instructors are welcome and encouraged to test ChatGPT, which is currently free to use upon registration. You can also test other, similar AI tools to assess their capabilities, for instance to see whether they can respond to the assignments used in your courses, or how well they improve the readability and grammar of a paragraph. Experimentation is also useful for assessing the limits of a tool.
Please note that due to high demand, access to ChatGPT is at times unavailable.
Is ChatGPT accurate and reliable?
Large Language Models, like ChatGPT, are trained to predict the next word in a sentence, given the text that has already been written. Early attempts at addressing this task (such as the next-word prediction on a smartphone keyboard) are only coherent within a few words, but as the sentence continues, these earlier systems quickly digress. A major innovation of models such as GPT is their ability to pay attention to words and phrases which were written much earlier in the text, allowing them to maintain context for much longer and in a sense remember the topic of conversation. This capacity is combined with a training phase that involves looking at billions of pages of text. As a result, models like ChatGPT, and its underlying technology GPT-3, are good at predicting what words are most likely to come next in a sentence, which results in generally coherent text.
One area where generative AI tools often fail is in repeating facts or quotations. To a model trained to sound convincing, the only important aspect of a fact is that it sounds like a fact. This means that models like GPT-3 frequently generate claims that sound real, but to an expert are clearly wrong.
Related areas where ChatGPT seems to struggle include citations and discussion of any event or concept that has received relatively little attention in online discourse. To assess these limitations, you could try asking the system to generate your biography. Unless there are numerous accurate biographies of yourself online, ChatGPT is unlikely to generate a comprehensively correct biography.
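The next-word prediction described above can be illustrated with a toy bigram model, which simply counts which word most often follows each word in a training text. This is a drastically simplified sketch for intuition only; it is not how GPT works internally, and the corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows each other word in the text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent continuation seen in training, or None."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Train on a tiny corpus, then predict greedily.
corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
prediction = predict_next(model, "the")  # "cat" ("the cat" occurs twice, "the mat" once)
```

Unlike this toy model, which only looks one word back, transformer models such as GPT weigh words from much earlier in the text, which is why they can stay on topic over long passages.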
What are the ethical considerations regarding the use of generative AI systems?
This is a threshold question that instructors may want to consider. Mainstream media has been covering this issue extensively, and alternate viewpoints are widely available.
Given that ChatGPT is trained on materials that are available online, it is possible that it will repeat biases present online. OpenAI has invested substantial effort into addressing this problem, but it remains a danger with these types of systems. You may also want to familiarize yourself with questions about how the technology was developed and trained (e.g., who were the people who trained it?), how we use the responses it provides, and the long-term impacts of these technologies on the world.
The Provost is consulting with faculty and staff experts on these larger questions involving ChatGPT and other generative AI systems, and welcomes debate and discussion on these issues.
Can I use generative AI tools for pedagogical purposes in my classroom?
Yes. Some instructors may wish to use the technology to demonstrate how it can be used productively or what its limitations are. The TLSS is developing more information and advice about how you might use generative AI as part of your learning experience design.
Remember that asking or requiring your students to access these tools is complicated because the University has not vetted them for privacy or security. The University generally discourages using such systems for instruction until we are assured that the system protects personal data (e.g., the email address used to register on the system). If, as media reports have suggested, a version of GPT becomes part of the Office365 suite, it may become part of the software suite available to all students and faculty. However, until the University formally approves the use of specific generative AI software, these tools should be considered with the same caution as other third-party applications that ingest personal data.
If you decide to ask or encourage students to use this or other AI systems in your courses, there are a few issues to consider before you do so:
- There may be some students who are opposed to using AI tools. Instructors should consider offering alternative forms of assessment for those students who might object to using the tools, assuming that AI is not a core part of the course.
- Instructors should consider indicating on their syllabus that AI tools may be used in the course and, as relevant, identify restrictions to this usage regarding learning outcomes and assessments.
- Be aware that not everything that generative AI technology produces is correct. You may wish to experiment with ChatGPT to see what errors it generates; citations are often fabricated, and inaccurate prompts are sometimes taken as fact.
- There is a risk that ChatGPT may produce plagiarized text or perpetuate biases inherent in the material on which it was trained.
Are students permitted to use AI tools to complete assessments?
The University expects students to complete assignments independently, without any outside assistance, unless otherwise specified. Instructors are strongly encouraged to speak to their students about what tools are permitted to complete assessments. Written assignment instructions should indicate what tools are permitted; vague references to ‘the internet’ will generally not suffice today.
If an instructor indicates that AI tools are not permitted on an assessment, and a student is later found to have used such a tool, the instructor should consider meeting with the student as the first step of a process under the Code of Behaviour on Academic Matters.
Some students may ask if they can create their assignment outline or first draft using ChatGPT and then edit the generated text; decide what your response to this question will be before discussing the assignment with your students, and consider addressing it proactively.
You may wish to consider some of the tips for assessment design below. You might also consider meeting with or attending a workshop at your local Teaching Centre to get more information about assignment design. Consider your learning goals for the assignment and how you can best achieve them in light of this new technology.
Would the University classify use of generative AI systems as an academic offence?
If an instructor specified that no outside assistance was permitted on an assignment, the University would typically consider use of ChatGPT and other such tools to be use of an “unauthorized aid” under the Academic regulation I-14, or as “any other form of cheating”.
It is also vital to note that because generative AIs are trained on existing data, they are at risk of ‘generating’ text that was in fact written by a real person in the past. This can result in a student unintentionally plagiarizing a source on which the model was trained.
Can I or should I use one of the new AI-detectors such as GPTZero?
The University discourages the use of AI-detectors on student work. The quality of such detectors has not yet been confirmed, and AI technology is developing at a swift enough pace that the detectors are unlikely to keep up with the technology itself. For instance, some detectors base their assessment of whether a piece of writing was AI-generated on its level of sophistication. Assuming that a relatively plainly written assignment is the work of an AI tool would have significant negative impacts on students. Instead, consider some of the tips below on assessment design. You may also wish to submit your assessment topic to ChatGPT to see what type of answer comes out.
Sharing your students’ work with these detectors without their permission also raises a range of privacy and ethical concerns. The University has noted that companies like Turnitin (which owns Ouriginal) are working on their own versions of detectors.
How can I prevent students from using ChatGPT or similar tools on my assignments?
Talking to students about ChatGPT and its limitations will let students know that you are well aware of the technology and will likely generate interesting discussion and help to set guidelines for students. Let students know clearly, both verbally and in assignment instructions, what tools may or may not be used to complete the assignment. Advise students of the limitations of the technology and its propensity to generate erroneous content.
If you choose not to allow the use of AI tools on your assignments, here are some tips for designing assignments that generative AI systems will have difficulty answering:
- ask students to respond to a specific reading, particularly one from the last year that may not be on the internet or may not have generated much commentary online. Generative systems struggle to produce accurate responses to prompts for which there is little or no information on the internet.
- ask students to create a video or recording that explains or expands on their work.
- use a flipped classroom approach and/or assign group work to be completed in class, with each member contributing.
- ask students to create a first draft of an assignment, or an entire assignment, by hand in class. (Consider the accessibility needs of students who may require accommodations.)
- call on students in class to explain or justify elements of their work.
- ask students to use ChatGPT to generate material, and then ask them to critique GPT’s response.
- request citations in all written assignments, and if feasible, spot check them—the accuracy of ChatGPT’s citations is one of its gravest shortcomings.
- talk to your colleagues about ideas for your discipline. Different disciplines, such as computer science, history, language studies and visual studies may be developing new norms of pedagogy.
Can generative AI systems respond to multiple-choice or short answer questions?
Yes. If you use multiple-choice quizzes/tests, assume that generative AI systems will be able to answer the questions unless they pertain to highly specific subjects; new knowledge; or the specifics of classroom discussions, the content of which cannot be found on the internet. Some instructors may wish to test this by using their multiple-choice/short answer assessments as prompts, and reviewing ChatGPT’s responses.
Can I use ChatGPT or other AI tools to assess (i.e., grade) student work?
The University asks that you not submit student work to any third-party software system for grading, or any other purpose, unless the software is approved by the University. A completed assignment is the student’s intellectual property (IP), and should be treated with care.
If a student is permitted to use ChatGPT on my assignment, how should they cite it?
This question is still being actively debated by the global academic community. We expect to see standards of practice emerge in the coming months. The MLA has some guidance on the general question of citing AI output here. However, this guidance predates ChatGPT, and may become obsolete as these new tools take on a greater presence in academic writing.
What recommendations should I offer my students for creating a ChatGPT account (or any other account)?
When creating an account (ChatGPT or otherwise), websites may ask for personal information such as phone number, address, email, etc. If this is the case with any applications you wish to use with your students, you must first consult uOttawa IT to request a security assessment.
In the case of ChatGPT, only an email address and password are required, so such a request is unnecessary. However, we recommend that you advise your students not to use their uoAccess credentials. The email address and password are stored on the provider's servers, which could be breached; creating a separate account for each application limits an attacker's access to other systems if one account is compromised.
Is ChatGPT the only generative AI system that my students might be using?
No. Large Language Model (LLM) technology is at the heart of a variety of generative AI products that are currently available, including writing assistant programs (e.g., Jasper, Writer, Moonbeam), image creation programs (e.g., DALL-E 2, Midjourney, Stable Diffusion) and programs to assist people who are creating computer code (e.g., GitHub Copilot).
It is also possible to build your own system on top of this underlying technology (GPT-3 or another LLM) if you are interested in doing so. It is also worth noting that a variety of products (online and mobile apps) have appeared that use GPT-3 or other LLM technology and require paid subscriptions. Some add features such as editing tools and templates; others, however, offer nothing more than the free version does and are designed to fool people into paying for a service that is currently free.
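For those curious about what "building on the underlying technology" involves, the sketch below assembles (but does not send) the JSON body and headers of a request to OpenAI's public text-completions API. This is an illustrative sketch only: the endpoint URL, the model name `text-davinci-003`, and the parameter names reflect OpenAI's documented API at the time of writing and may change; the function name is invented for the example.

```python
import json
import os

# Assumed endpoint for OpenAI's completions API at the time of writing.
API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt, max_tokens=100):
    """Assemble the JSON body and headers for a text-completion request."""
    body = {
        "model": "text-davinci-003",   # one of the GPT-3 family of models
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,            # higher values give more varied output
    }
    headers = {
        "Content-Type": "application/json",
        # The key is read from the environment; never hard-code credentials.
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    }
    return json.dumps(body), headers

payload, headers = build_completion_request("Summarize the causes of World War I.")
# An HTTP POST of `payload` to API_URL with `headers` would return generated text.
```

Paid apps built on GPT-3 are often little more than a thin interface around a request like this one, which is worth keeping in mind when evaluating subscription products.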