General Guidelines for Use from the CTLT
If you want to explore using an AI-powered writing assistant, keep these guidelines in mind:
What is an online writing assistant powered by an artificial intelligence application?
Can I use it for my coursework at UNBC?
Are there any issues relating to privacy that I should take into consideration?
Are these tools considered trustworthy?
Question: What is an online writing assistant powered by an artificial intelligence application?
Writing assistants driven by machine learning and artificial intelligence are tools that use natural language processing to respond to user-generated prompts. The user can pose a question or provide a prompt, and the assistant will reply in natural language. These writing assistants, or “bots”, are quite versatile: they can use prompts to produce letters, essays, poetry, lesson plans, scripts or screenplays, and computer code, and can even draft quizzes, practice exams, or outlines. They can also analyze text and make suggestions.
Such applications can be useful for brainstorming or exploring an idea and, with further development, may become excellent tools for generating outlines and possibly for proofreading. Students should, however, understand what the bot is doing before attempting to use it extensively.
The bot can only write about information that has been fed into it, while the purpose of academic writing is, ultimately, to communicate new research and perspectives. Furthermore, when asked to write about something it has no information on, the bot invents material to fill gaps in its knowledge base, up to and including fabricating references in the reference list, so students cannot blindly trust the output. See the section on Trustworthiness below.
Question: Can I use it for my coursework at UNBC?
If your instructor has not clearly communicated their expectations for acceptable use of these technologies in your course(s), it is very important that you confirm those expectations. Some instructors may allow their use in a course or in specific assignments; others may prohibit it. If an instructor has communicated that these technologies are not acceptable in a course or a specific assignment, you risk an academic misconduct violation if you are found to be using them.
Question: Are there any issues relating to privacy that I should take into consideration?
These tools require the user to submit data, which is then processed by the AI. In many cases, you will not have the option to request deletion of the data you submit. That data might then be used for other purposes or in other contexts, quite possibly in ways that you do not want, or that are not allowed within the ethical data practices of your project. Remember that the application generates responses based on the information fed into it: your queries become part of its knowledge base too, which means your information may become part of the output it gives to another user. If your information is sensitive, there could be serious ramifications to your data being used in this way.
Before using any such application, review the privacy policy and confirm that your data will be handled securely and within the ethical boundaries of your project. Think very carefully before feeding any research information into one of these assistants, and never submit queries that contain patient, client, or participant information, even if the service seems to have a robust privacy policy.
Contact UNBC’s Privacy Officer if you have any questions about how to best handle research data.
Question: Are these tools considered trustworthy?
These applications are trained on uploaded datasets, many of which were pulled from the internet and not necessarily curated. A bot can only write about information it was trained on, so a topic that was not included in its training data, or a recent event that occurred after the bot was last updated, may result in inaccuracies in the output. Everything the application produces must be checked carefully for accuracy and misleading information (take the ROBOT Test). Even Google was caught off guard by failing to properly fact-check its own application’s output!
The FAQs for ChatGPT (one of the first applications to reach mass use and popularization) address this by stating that ChatGPT “has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content” (Natalie, para. 4).
Because responses are based on the data fed into the application, they may reflect the biases of the humans who wrote the training data. A deliberately curated dataset could shape the responses generated by the application, perhaps in dangerous ways. One should never blindly trust that the responses provided by these applications are factual or benign.
Keep in mind that these AI tools are language generation tools, not search engines!
When these applications run into a gap in their knowledge base, they invent information to fill it. At times they may even invent an entire reference list that appears complete and well formatted, yet none of the sources listed exist. See “How to Talk to ChatGPT, the Uncanny New AI-Fueled Chatbot That Makes a Lot of Stuff Up” (Ropek, 2022).
All of the output provided by these applications must be fact-checked, or students risk submitting misleading information that could cost them marks or, worse, lead to a plagiarism charge.
Information copied and adapted with gratitude from:
UNBC CTLT. (2024). A student guide to learning @ UNBC. BCcampus. https://pressbooks.bccampus.ca/unbcstudents/front-matter/introduction/