This guidance material has been prepared by the ‘AI Task Force’, established by the Vice-President Academic and Provost, to help address a need for clarity in course outlines around the expectations regarding use of generative AI (GenAI) by students.
This guidance is not a university policy document. A policy document is binding on members of the university community and requires formal approval through university governance. This material is offered as guidance: it describes a general approach to teaching and learning activity, is not binding or enforceable, and has not been formally approved by Senate or the Board of Governors. To read UNBC's "Guidance on the acceptability of using generative AI in coursework", go to: https://www.unbc.ca/provost/guidance-acceptability-using-generative-ai-coursework.
If you have any ideas for how UNBC can enhance its guidance and communication around AI in education, please share them at aitaskforce@unbc.ca. This content will be reviewed and revised as appropriate to reflect current guidance.
Generative Artificial Intelligence (GenAI) is transforming our work, learning, and daily lives. To harness this technology effectively and responsibly, though, we must understand what it is, how it works, and how to use it ethically.
These considerations are especially important in an academic setting. This guide explores the academic applications of GenAI tools, offering insights into how they can enhance research, teaching, and learning. It also examines the challenges and limitations of AI integration while emphasizing ethical and responsible use.
What is Generative Artificial Intelligence?
Ask ChatGPT what AI is, and it provides a coherent response:
AI stands for Artificial Intelligence. It refers to the simulation of human intelligence in machines that are programmed to think and learn like humans and mimic their actions. AI involves the development of algorithms and systems that can perform tasks that would typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.
ChatGPT is an example of a large language model (LLM) within the category of Generative Artificial Intelligence (GenAI). Unlike predictive or analytic AI models, generative AI models produce or generate content by analyzing patterns and relationships in input data. In a process called machine learning, these models use complex algorithms to recognize patterns in vast amounts of training data (datasets), allowing them to create coherent human-sounding outputs. These outputs allow for a wide range of applications, from drafting articles and composing music to generating unique artwork and even producing realistic human images.
Despite these capabilities, Generative AI has limitations. It relies heavily on the quality and quantity of the data it is trained on, which may contain inherent biases and inaccuracies. Furthermore, the use of GenAI without human oversight raises ethical concerns, such as plagiarism, the creation of deepfake content, and the erosion of trust in digital media.
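To make "recognizing patterns in training data" concrete, the toy sketch below (plain Python, emphatically not a real LLM) learns which word tends to follow which in a tiny sample text, then generates new text by repeatedly sampling a plausible next word. Real models work with billions of parameters rather than simple word counts, but the core idea of predicting likely continuations from observed patterns is the same.

```python
import random
from collections import defaultdict

# A tiny "training dataset" — real models use vast corpora instead.
training_text = ("the model learns patterns in data and "
                 "the model generates text from patterns in data")

# Record which words have been observed to follow each word.
transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=8):
    """Generate text by sampling an observed next word at each step."""
    out = [start]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:
            break  # no observed continuation — stop generating
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

Note that the sketch can only ever emit words it has seen, and it has no notion of truth, only of what typically follows what. This is the small-scale analogue of the limitations described above: output quality depends entirely on the training data.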
Suggested videos: "AI in a Snap - A 2-Minute Intro for Novices" and "GenAI in a Nutshell - A 20-Minute Guide for Beginners".
The Importance of AI Literacy
Engaging responsibly with Generative AI requires both human oversight and AI literacy. This balanced approach—combining knowledge of AI capabilities and limitations with ethical supervision and regulation—ensures AI technologies are used responsibly and effectively. It allows society to harness the potential of generative AI, mitigate associated risks, and promote responsible and sustainable integration into our personal, academic, and professional lives.
AI literacy focuses on understanding and interacting with AI technologies and can typically be grouped into two main categories:
Basic AI Literacy emphasizes the practical skills and knowledge needed to interact with AI responsibly. It includes understanding fundamental AI concepts, recognizing its potential benefits and risks, and evaluating its societal impact. It also involves effectively prompting AI and assessing the AI outputs. These basic skills enable the confident, safe, ethical, and effective use of AI technologies.

Critical AI Literacy involves a deeper examination or critical analysis of the ethical, cultural, and societal implications of AI. This includes understanding issues related to bias, privacy, accountability, and the potential impacts of AI on social structures, equity, and individual rights. Critical AI literacy enables individuals to navigate complex AI-related challenges thoughtfully, participate in informed discussions, and contribute to policies and regulations that promote fairness, transparency, and social good.
Teaching assistants should be familiar with the course outline/syllabus and assignment guidelines, and they should clarify with course instructors expectations around the use of GenAI.
“Artificial intelligence” is a catch-all term that encompasses a wide range of machine learning technologies that use large data sets – collections of information – to make predictions or conclusions. “Generative AI” is the class of tools where the AI doesn’t make decisions or predictions but instead appears to create – or generate! – something like an image, a paragraph, a video, or a sound file. Below are a few of the most frequently asked questions.
Is it cheating if my students use generative technologies?
This will depend on the parameters of the assignment and the learning objectives of the course. Some instructors may wish to engage with these technologies in their course activities either throughout their course, or within specific assignments. Some instructors may wish to prohibit their use.
It is important to discuss the potential uses of these technologies with your students and clearly communicate where use is acceptable or unacceptable.
Some instructors are concerned about students using generative technologies to, for example, write essays or other written assessments for them. There are strategies for designing assessments that are more resistant to generative technologies.
It’s also important to keep open lines of communication with students about these tools. Consider exploring the limitations of these technologies with your students by, for example, asking a GenAI tool to create a bibliography for an assignment and then checking whether the sources it provides are reliable, or even exist at all.
Can I detect when my students have used generative technologies?

In essence, no. The existing tools for detecting generative outputs have extremely high false positive rates and have not been extensively tested by independent parties. There are also alarming issues with these detection technologies impacting diversity, equity, and inclusion. Given the speed with which these technologies develop and change, seeking a technological solution means entering an arms race that we cannot win. Be wary of claims made by technology companies in unsolicited emails and marketing campaigns. Revising our pedagogies with strategies that make for more meaningful learning is a better approach.
Ensure you discuss your expectations regarding these technologies with your students. Consider updating your course outline to be more student-centered (see Zinn 2021 template). Discuss why academic integrity is essential to their learning process.
Information copied and adapted with gratitude from:
UNBC CTLT. (2024). A Student guide to learning @ UNBC. BCCampus. https://pressbooks.bccampus.ca/unbcstudents/front-matter/introduction/
General Guidelines for Use from the CTLT
If you want to explore using such an application, keep these guidelines in mind:
Question: What is an online writing assistant powered by an artificial intelligence application?
Writing assistants driven by machine learning and artificial intelligence developments are tools that use natural language processing techniques to respond to user-generated prompts. The user can pose a question or provide a prompt, and the assistant will reply using natural language. These writing assistants, or “bots”, are quite versatile and can use prompts to produce letters, essays, poetry, lesson plans, scripts or screenplays, computer code, and even draft quizzes, practice exams, or outlines. They can also analyze text and make suggestions.
Such applications can be useful tools for brainstorming or exploring an idea, for helping to generate an outline, and possibly for proofreading, but students should understand what the bot is doing before attempting to use it extensively.
The bot can only write about the information that has been fed into it, and the purpose of academic writing is, ultimately, to communicate NEW research and perspectives. Furthermore, when asked to write about something it has not been fed information about, the bot creates information to fill gaps in its knowledge base, up to and including fabricating references in the reference list, so students cannot blindly trust the output. See the section on Trustworthiness.
Question: Can I use it for my coursework at UNBC?
If your instructor has not clearly communicated their expectations for acceptable use of these technologies in your course(s), it is very important that you confirm those expectations. Some instructors may choose to allow their use in the course or in specific assignments; others may prohibit their use. If an instructor has communicated that these technologies are not acceptable in a course or a specific assignment, you risk being held responsible for an academic misconduct violation if you are found to be using them.
Question: Are there any issues relating to privacy that I should take into consideration?
These tools require the user to submit data, which is then processed by the AI. In many cases, you will not have the option to request deletion of data that is submitted. That data might then be used for other purposes or in other contexts, quite possibly in ways that you do not want or that are not allowed within the ethical data practices of your project. Remember that the application generates responses based on the information fed into it: your queries become part of its knowledge base too, which means your information may become part of the output it gives to another user. If your information is sensitive, there could be serious ramifications to your data being used in this way.
Before using any such application, make sure you review the privacy policy and confirm that your data will be handled securely and within the ethics boundaries of your project. Think very carefully before feeding any research information into one of these assistants, and do not feed any queries that contain patient, client, or participant information into one of these services, even if the service seems to have a robust privacy policy.
Contact UNBC’s Privacy Officer if you have any questions about how to best handle research data.
Question: Are these tools considered trustworthy?
These applications are trained on uploaded datasets, many of which were pulled from the internet and not necessarily curated. These bots can only write about information that was already uploaded into them, meaning a prompt about a topic that was not included in the training data, or about something recent that has happened since the bot was last updated, may result in inaccuracies in the output. Everything the application produces must be checked carefully for inaccurate or misleading information (Take the ROBOT Test). Google itself was caught off guard by not properly fact-checking its application's output!
The FAQs for ChatGPT (one of the first applications to reach mass use and popularization) address this by stating that ChatGPT “has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content” (Natalie, para. 4).
Because the responses are based on the data that is fed into them, they may reflect the biases of the humans who wrote the training data. A deliberately curated dataset could shape the responses generated by the application, perhaps in dangerous ways. One should never blindly trust that the responses provided by these applications are factual or benign.
Keep in mind that these AI tools are language generation tools, not search engines!
When these applications run into a gap in their knowledge base, they invent information to fill it. At times they may even invent an entire reference list that appears complete and well-formatted, but the sources listed do not exist. See “How to Talk to ChatGPT, the Uncanny New AI-Fueled Chatbot That Makes a Lot of Stuff Up” (Ropek, 2022).
All of the output provided by these applications must be fact-checked for accuracy, or students run the risk of presenting misleading information that could cost them marks or result in a plagiarism charge.
Information copied and adapted with gratitude from:
UNBC CTLT. (2024). A Student guide to learning @ UNBC. BCCampus. https://pressbooks.bccampus.ca/unbcstudents/front-matter/introduction/