
AI Literacy: UNBC Guidance

UNBC Guidance on the acceptability of using generative AI in coursework

 


This guidance material has been prepared by the ‘AI Task Force’, established by the Vice-President Academic and Provost, to help address the need for clarity in course outlines around expectations for students’ use of generative AI (GenAI).

This guidance is not a university policy document. A policy document is binding on members of the university community and requires formal approval through university governance. This material is for guidance only: it describes a general approach to teaching and learning activity. It is not in itself binding or enforceable, and it has not been formally approved by Senate or the Board of Governors. To read UNBC's "Guidance on the acceptability of using generative AI in coursework", go to: https://www.unbc.ca/provost/guidance-acceptability-using-generative-ai-coursework.

Further Support:

If you have any ideas for how UNBC can enhance its guidance and communication around AI in education, please share them at aitaskforce@unbc.ca. This content will be reviewed and revised as appropriate to reflect current guidance.

Information for Students and Instructors

Information copied and adapted with gratitude from: University of Saskatchewan Library. GenAI University Library Guide (2025). https://libguides.usask.ca/gen_ai/understanding Licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

 

Welcome to the UNBC Library Guide on Generative Artificial Intelligence.

Generative Artificial Intelligence (GenAI) is transforming our work, learning, and daily lives. To harness this technology effectively and responsibly, though, we must understand what it is, how it works, and how to use it ethically.

In an academic setting, we should consider:

  1. How can GenAI enhance research, teaching, and learning?
  2. What challenges does it present and what are its limitations?
  3. How can we use it ethically and responsibly?

This guide explores the academic applications of GenAI tools, offering insights into how they can enhance research, teaching, and learning. It also examines the challenges and limitations of AI integration while emphasizing ethical and responsible use.


What is Generative Artificial Intelligence?

Ask ChatGPT what AI is, and it provides a coherent response:

AI stands for Artificial Intelligence. It refers to the simulation of human intelligence in machines that are programmed to think and learn like humans and mimic their actions. AI involves the development of algorithms and systems that can perform tasks that would typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.


ChatGPT is an example of a large language model (LLM) within the category of Generative Artificial Intelligence (GenAI). Unlike predictive or analytic AI models, generative AI models produce or generate content by analyzing patterns and relationships in input data. In a process called machine learning, these models use complex algorithms to recognize patterns in vast amounts of training data (datasets), allowing them to create coherent human-sounding outputs. These outputs allow for a wide range of applications, from drafting articles and composing music to generating unique artwork and even producing realistic human images.
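The pattern-learning idea described above can be illustrated with a deliberately tiny sketch. The toy bigram model below is a hypothetical teaching example, nothing like the scale or architecture of a real LLM: it "learns" which word tends to follow which in a small sample text, then generates new text from those observed patterns.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram model that learns which word
# tends to follow which, then generates text from those patterns.

def train_bigrams(text):
    """Record, for each word, every word observed to follow it."""
    words = text.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def generate(following, start, length=8, seed=0):
    """Generate text by repeatedly sampling an observed next word."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    out = [start]
    for _ in range(length - 1):
        options = following.get(out[-1])
        if not options:
            break  # no observed continuation for this word
        out.append(rng.choice(options))
    return " ".join(out)

corpus = ("the model learns patterns in data and "
          "the model generates text from patterns in data")
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Real LLMs replace these simple word counts with neural networks trained on vastly larger datasets, but the core idea is the same: the output is assembled from statistical patterns in the training data, not retrieved from a store of verified facts.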

Despite these capabilities, Generative AI has limitations. It relies heavily on the quality and quantity of the data it is trained on, which may contain inherent biases and inaccuracies. Furthermore, the use of GenAI without human oversight raises ethical concerns, such as plagiarism, the creation of deepfake content, and the erosion of trust in digital media.

  • AI in a Snap - A 2-Minute Intro for Novices
  • GenAI in a Nutshell - A 20-Minute Guide for Beginners

The Importance of AI Literacy

Engaging responsibly with Generative AI requires both human oversight and AI literacy. This balanced approach—combining knowledge of AI capabilities and limitations with ethical supervision and regulation—ensures AI technologies are used responsibly and effectively. It allows society to harness the potential of generative AI, mitigate associated risks, and promote responsible and sustainable integration into our personal, academic, and professional lives.

AI literacy focuses on understanding and interacting with AI technologies and can typically be grouped into two main categories:

  • Basic AI Literacy
  • Critical AI Literacy

Basic AI Literacy emphasizes the practical skills and knowledge needed to interact with AI responsibly. It includes understanding fundamental AI concepts, recognizing its potential benefits and risks, and evaluating its societal impact. It also involves effectively prompting AI and assessing the AI outputs. These basic skills enable the confident, safe, ethical, and effective use of AI technologies.

Critical AI Literacy involves a deeper examination or critical analysis of the ethical, cultural, and societal implications of AI. This includes understanding issues related to bias, privacy, accountability, and the potential impacts of AI on social structures, equity, and individual rights. Critical AI literacy enables individuals to navigate complex AI-related challenges thoughtfully, participate in informed discussions, and contribute to policies and regulations that promote fairness, transparency, and social good.

Disclaimer: Portions of this guide were developed with the assistance of GenAI. Specifically, Pi.ai, ChatGPT and Perplexity were used to generate draft content, brainstorm ideas, and assist in organizing and formatting tables, decision trees, and other structured elements. All AI-generated content was thoroughly reviewed, edited, and verified by Library employees to ensure accuracy, relevance, and coherence and, where necessary, supplemented with additional research and expertise. The content provided is for educational purposes only and should not be considered a substitute for policy, advice, guidance or requirements around AI use from instructors or the institution. The guide is only intended to promote an understanding of generative AI and its ethical implications in educational settings. Users should exercise critical thinking and independent judgment while engaging with the content.

Information for Instructors

https://www.unbc.ca/provost/guidance-acceptability-using-generative-ai-coursework

Instructors

  1. As an instructor, you have the freedom to choose when and how GenAI is used in your teaching.
  2. Instructors should explicitly convey to students, via course outline/syllabus, in-class discussions, and assignment guidelines, whether and to what extent the use of GenAI is permissible within the course.
  3. If your course outline/syllabus or assignment guidelines are silent on the permissibility (or not) of GenAI within the course, students might reasonably assume its use is not restricted.
  4. Instructors might reasonably require the use of GenAI tools if those tools have been authorized through UNBC’s Privacy Impact Assessment (PIA) process.
  5. As an instructor, if you are encouraging or requiring students to use GenAI, and the student is not able to do so (e.g., due to accessibility issues), then you might reasonably be expected to offer, upon request, an alternative assignment or assignment method.
  6. Instructors are responsible for informing students at the beginning of each course of any specific criteria (such as on the use of GenAI) related to Academic Honesty or Integrity that may be pertinent.
  7. UNBC does not endorse the use of AI detectors.

Teaching assistants should be familiar with the course outline/syllabus and assignment guidelines, and they should clarify with course instructors expectations around the use of GenAI. 

AI, Machine Learning, and Generative Technologies

“Artificial intelligence” is a catch-all term that encompasses a wide range of machine learning technologies that use large data sets – collections of information – to make predictions or conclusions. “Generative AI” is the class of tools where the AI doesn’t make decisions or predictions but instead appears to create – or generate! – something like an image, a paragraph, a video, or a sound file. Below are a few of the most frequently asked questions.

 

Is it cheating if my students use generative technologies?

This will depend on the parameters of the assignment and the learning objectives of the course.  Some instructors may wish to engage with these technologies in their course activities either throughout their course, or within specific assignments.  Some instructors may wish to prohibit their use.

It is important to discuss the potential uses of these technologies with your students and clearly communicate where use is acceptable or unacceptable.

 

What can I do to encourage students to not use generative technologies in my courses?

Some instructors are concerned about students using generative technologies to, for example, write essays or other written assessments for them. There are strategies for designing assessments that are more resistant to generative technologies:

  1. Clearly outline expectations.  If you wish to allow or prohibit use of these platforms, clearly explain why in your course outline and discuss these expectations during class.
  2. Evaluate students on process, not only on the final product. You might want to collect outlines or research proposals for evaluation and place less weight on a final paper assignment.
  3. Include components of self-reflection (including reflection on prior learning or the student’s own life or work contexts) in assessments.
  4. Reflect on your desired learning outcomes for your assessments. Consider whether an essay/paper assignment as an evaluative form reflects the learning objectives in your class. Could you explore project-based learning or “unessay” instead?
  5. Ask students to complete certain work during class time. For example, use pre- and post-class polls to capture student reflections on material learned. This strategy can also help students prepare for activities that build upon this, such as group work or discussions.

It’s also important to keep open lines of communication with students about these tools. Consider exploring the limitations of these technologies with your students by, for example, asking a GenAI tool to create a bibliography for an assignment and then checking whether the sources it provides are reliable, or whether they have been fabricated.

Is there a technology that can “catch” usage of these technologies?

In essence, no. The existing tools for detecting generative outputs have extremely high false positive rates and have not been extensively or independently tested. There are also alarming issues with these detection technologies negatively impacting diversity, equity, and inclusion. Given the speed with which these technologies develop and change, seeking a technological solution means entering an arms race that we cannot win. Be wary of claims made by technology companies in unsolicited emails and marketing campaigns. Revising our pedagogies with strategies that make for more meaningful learning is a better approach.

  • Update your syllabus.

    • Teaching, Learning, and AI Technologies is a self-enrol workshop space maintained by the CTLT.  It has a collection of suggested syllabus language you may choose to use in your course outlines.
    • This crowdsourced collection of syllabus policies was created to help instructors see the range of approaches being used by post-secondary educators as they develop their own policies for navigating generative technologies.

Are there strategies I can use to talk with students about academic integrity?

Ensure you discuss your expectations regarding these technologies with your students. Consider updating your academic integrity statement to be more student-centered (see the Zinn 2021 template). Discuss why academic integrity is essential in their learning process.

Be transparent about assignments.

Reconsider your approach to grading.

  • “Research shows three reliable effects when students are graded: They tend to think less deeply, avoid taking risks, and lose interest in the learning itself” (Kohn, 2006, para. 4).
  • Try ungrading.

Shift from extrinsic to intrinsic motivation.

  • Students are more likely to cheat when “the class reinforces extrinsic (i.e., grades), not intrinsic (i.e. learning), goals.” (UC San Diego, 2020, para. 6).
  • Consider how you might increase intrinsic motivation by giving students autonomy, independence, freedom, opportunities to learn through play, and/or activities that pique their interest based on their experiences and cultures.
  • Learn more about motivational theories in education from Dr. Jackie Gerstein.

Use these technologies as educational tools.

  • Before you ask students to use any of these tools for an assignment, please ensure you understand the potential privacy impacts of the platform. Teaching, Learning, and AI Technologies is a self-enrol workshop space maintained by the CTLT. It outlines the privacy considerations relevant within the BC post-secondary context under FIPPA regulations.
  • Engage students in critiquing and improving generative outputs:
    • Pre-service teachers might critique how a generated lesson plan integrates technologies using the Triple E Rubric or examine whether it features learning activities that support diversity, equity, accessibility, and inclusivity.
    • Computer science students might identify potential ways to revise generated code to reduce errors and improve output.
    • Analyze how generated text impacts different audiences.
  • Help students build their information literacy skills:
    • Ask students to conduct an Internet search to see if they can find the original sources of text used to generate output.
    • Have students generate prompts and compare and contrast the outputs.

 

Information copied and adapted with gratitude from:

UNBC CTLT. (2024). A Student guide to learning @ UNBC. BCCampus. https://pressbooks.bccampus.ca/unbcstudents/front-matter/introduction/

Information for Students

https://www.unbc.ca/provost/guidance-acceptability-using-generative-ai-coursework

Students

  1. Unless otherwise stated, students should assume use of GenAI might be restricted.
  2. If students are ever unsure about the use of GenAI within their course, they should re-read the course outline/syllabus, and then reach out to the course instructor for further clarification.
  3. Students should always be mindful of the privacy implications around the sharing of information on digital platforms.
  4. Students are responsible for ensuring that they are familiar with and apply the general standards and requirements of Academic Honesty and Academic Integrity, including the requirement to declare/cite sources.  
  5. Students should review the course outline/syllabus and familiarize themselves with the course expectations before continuing with the course.
  6. Students can reasonably expect an instructor to provide, upon request, an alternative to an assignment requiring GenAI use only if they present a clear and reasonable rationale for why they are not able to use the tool: e.g., access issues that would put them at a disadvantage relative to other students.

 

General Guidelines for Use from the CTLT

If you want to explore using such an application, keep these guidelines in mind:

  • Use the application as a tool to assist you in your research and writing, but not as a replacement for critical thinking and analysis.
  • Confirm with your instructor whether the use of the tool is acceptable for your assignment. Some instructors may have a zero-tolerance policy, and you could face serious academic penalties for use of an unauthorized tool. Others may allow you to use these tools as long as you clearly indicate where content has been generated. Always check with your instructor before proceeding.
  • Ensure that you appropriately cite and reference any output generated by such an application. Make sure you double-check that those references exist, and that they say what the application claims they say!
  • Be aware of the UNBC academic integrity policies and ensure that your usage of the application is not in violation of any part of those policies.
  • Make sure the final product is your work, and not just copied from the application’s output. Use the output as inspiration, guidance, or quality control—not to do the work for you. 

Question: What is an online writing assistant powered by an artificial intelligence application?

 

Writing assistants driven by machine learning and artificial intelligence developments are tools that use natural language processing techniques to respond to user-generated prompts. The user can pose a question or provide a prompt, and the assistant will reply using natural language. These writing assistants, or “bots”, are quite versatile and can use prompts to produce letters, essays, poetry, lesson plans, scripts or screenplays, computer code, and even draft quizzes, practice exams, or outlines. They can also analyze text and make suggestions.

Such applications can be a useful tool for brainstorming or exploring an idea, and, with more development, will be excellent tools for helping to generate an outline, and possibly for proofreading purposes, but students should understand what the bot is doing before attempting to use it extensively.

The bot can only write about the information that has been fed into it, and the purpose of academic writing is, ultimately, to communicate NEW research and perspectives. Furthermore, when asked to write about something it has not been fed information about, the bot creates information to fill gaps in its knowledge base, up to and including fabricating references in the reference list, so students cannot blindly trust the output. See the section on Trustworthiness.

Question: Can I use it for my coursework at UNBC?

If your instructor has not clearly communicated their expectations for acceptable use of these technologies in your course(s), it is very important that you confirm those expectations. Some instructors may choose to allow their use in the course or in specific assignments; some may prohibit their use. If an instructor has communicated that these technologies are not acceptable in a course or a specific assignment, you risk being held responsible for an academic misconduct violation if you are found to be using them.

Question: Are there any issues relating to privacy that I should take into consideration?

These tools require the user to submit data, which is then processed by the AI. In many cases, you will not have the option to request deletion of data that is submitted. That data might then be used for other purposes or in other contexts, quite possibly in ways that you do not want, or that are not allowed within the ethical data practices of your project. Remember that the application generates responses based on the information that is fed into it: your queries may be retained and used to train future versions of the model, which means your information may become part of the output it gives to another user! If your information is sensitive, there could be serious ramifications to your data being used in this way.

Before using any such application, make sure you review the privacy policy and confirm that your data will be handled securely and within the ethics boundaries of your project. Think very carefully before feeding any research information into one of these assistants, and do not feed any queries that contain patient, client, or participant information into one of these services, even if the service seems to have a robust privacy policy.

Contact UNBC’s Privacy Officer if you have any questions about how to best handle research data.

Question: Are these tools considered trustworthy?

These applications are trained on uploaded datasets, many of which were pulled from the internet and not necessarily curated. These bots can only write about information that was already uploaded into them, so asking about a topic that was not included in that information, or about something recent that has happened since the bot was last updated, may result in inaccuracies in the output. Everything the application produces must be checked carefully for accuracy or misleading information (Take the ROBOT Test). Google itself was caught off guard by not properly fact-checking its application’s output!

The FAQs for ChatGPT (one of the first applications to reach mass use and popularization) address this by stating that ChatGPT “has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content” (Natalie, para. 4).

Because the responses are based on the data that is fed into them, they may reflect the biases of the humans who wrote the training data. A deliberately curated dataset could shape the responses generated by the application, perhaps in dangerous ways. One should never blindly trust that the responses provided by these applications are factual or benign.

Keep in mind that these AI tools are language generation tools, not search engines!

When these applications run into a gap in their knowledge-base, they invent information in order to fill the gaps. At times they may even invent an entire reference list that appears to be complete and well-formatted, but the sources listed do not exist. See “How to Talk to ChatGPT, the Uncanny New AI-Fueled Chatbot That Makes a Lot of Stuff Up” (Ropek, 2022). 

All of the output provided by these applications must be fact-checked for accuracy, or students run the risk of presenting misleading information that could cost them marks, or get them into major trouble with a plagiarism charge.

Information copied and adapted with gratitude from:

UNBC CTLT. (2024). A Student guide to learning @ UNBC. BCCampus. https://pressbooks.bccampus.ca/unbcstudents/front-matter/introduction/