Student Guide to AI

A guide to using ChatGPT and other Generative AI in education.

Fact-checking is always needed

AI "hallucination"

The official term in the field of AI is "hallucination": generative AI sometimes "makes stuff up." This happens because these systems are probabilistic, not deterministic. In other words, generative AI is designed to predict which words will probably come next in a sentence, not to state facts based on evidence.
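To make the "guessing the next word" idea concrete, here is a minimal toy sketch in Python. It is not how any real model works internally; the word list and probabilities are invented purely for illustration. The point is that the next word is chosen by probability, not by checking evidence, so a plausible-sounding but wrong continuation can still come out.

    # Toy illustration only: a language model assigns probabilities to
    # possible next words and samples one of them.
    import random

    # Hypothetical probabilities for the word after "The capital of France is"
    next_word_probs = {"Paris": 0.90, "Lyon": 0.06, "beautiful": 0.04}

    words = list(next_word_probs)
    weights = list(next_word_probs.values())

    # The next word is picked by probability, not by consulting facts,
    # so a low-probability (and possibly wrong) continuation can still appear.
    print(random.choices(words, weights=weights)[0])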

Which models are less prone to this?

GPT-4 (the more capable model behind ChatGPT Plus and Bing Chat) is less prone to hallucination than earlier models. According to OpenAI, it is "40% more likely to produce factual responses than GPT-3.5 on our internal evaluations." But it is still not perfect, so you still need to verify its output.

ChatGPT often makes up fictional sources

One area where ChatGPT often gives fictional answers is when it is asked to create a list of sources. Read Techopedia's definition of "AI Hallucination" in their TechDictionary. You can also ask a librarian for help finding actual sources.

I can’t find the citations that ChatGPT gave me. What should I do?: Review this FAQ from the University of Arizona Libraries.

There is progress in making these models more truthful

There is progress, though, in making these systems more truthful by grounding them in external sources of knowledge. Examples include Bing Chat and Perplexity AI, which use internet search results to ground their answers. The internet sources used could still contain misinformation or disinformation, but at least Bing Chat and Perplexity link to the sources they used, so you have a starting point for verification.
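The sketch below shows the general idea of grounding in simplified form. Every name and snippet in it is hypothetical; this is not the actual Bing Chat or Perplexity implementation, just a rough picture of how retrieved web snippets can be placed in a prompt so the answer can point back to checkable sources.

    # Minimal sketch of "grounding" with made-up data: retrieved web
    # snippets are numbered and included in the prompt so the model's
    # answer can cite sources a reader could verify.

    def build_grounded_prompt(question: str, snippets: list) -> str:
        """Combine a question with numbered source snippets for the model."""
        sources = "\n".join(
            f"[{i + 1}] {s['url']}: {s['text']}" for i, s in enumerate(snippets)
        )
        return (
            "Answer the question using only the sources below, "
            "and cite them by number.\n\n"
            f"Sources:\n{sources}\n\nQuestion: {question}"
        )

    # Example with invented snippets; a real system would get these from a search engine.
    snippets = [
        {"url": "https://example.org/a", "text": "Fact A from a web page."},
        {"url": "https://example.org/b", "text": "Fact B from another page."},
    ]
    print(build_grounded_prompt("What is fact A?", snippets))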

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License unless noted otherwise.