Overview
AI is many things, but for finding research materials, "AI" refers to Large Language Models (LLMs) like ChatGPT, Claude, Perplexity, and dozens of others that appear to answer the questions you pose.
In reality, the LLMs are searching a snapshot of the web, sometimes the live web, and, increasingly, private sites licensed from content providers. That snapshot is the LLM's training data. The LLM is scooping up material from its training data (which may be somewhat dated) that appears to match the search terms in your question, then using text prediction to compose a response in narrative form.
Put another way, LLMs are not "thinking" about the meaning of your question, and they don't "understand" context in a human way. They are simply doing a very fast search, based on your "prompts" (how you ask the question), similar to how a search engine parses your question (applying an algorithm to the terms you enter and producing results based on that algorithm). The LLM is finding resources, but it is not evaluating the materials it finds.
Anthropic publishes the 'system prompts' that make Claude tick (TechCrunch)
In composing the narrative response, LLMs will also fill in gaps by creating text that is not supported by the underlying information. This results in mistakes, sometimes fairly serious ones. AI engineers call these mistakes "hallucinations," but the rest of us just call it "making stuff up." The developers of LLMs are working on ways to reduce this problem, but they cannot eliminate it completely.
The Fever Dream of Imminent Super Intelligence is finally breaking (NYT op-ed, 9/3/2025)
Should you use AI Large Language Models in your research?
Even the creators of these tools, when they are being honest, say that users should fact-check the output of any response. In other words, to use these tools you must search again and corroborate every statement, every reference, and every conclusion. This is not a time saver!
AI Search Has a Citation Problem (March 2025)
AI LLMs that regularly provide references are a step in the right direction, but you will still need to verify the references and read the material yourself.
Finally, in most cases, the AI algorithm is making choices (scooping up content) that may not be the choices you would make. The material you find without AI tools, and the conclusions you draw from it, will be better than machine-generated output because they will be uniquely yours. You will be able to describe your process and defend your conclusions.
OpenAI, the maker of ChatGPT, puts it this way (Does ChatGPT tell the truth?):
- Sometimes, ChatGPT sounds convincing, but it might give you incorrect or misleading information (often called a “hallucination” in the literature).
- It can even make up things like quotes or citations, so don't use it as your only source for research.
- Sometimes it might say there's only one answer to a question when there's more to it, or misrepresent different sides of an argument, mistakenly giving each side equal weight.
 
There are other tools, and other search strategies, that don't rely on AI-generated responses. Searching is the fun part! In addition to today's instruction session, I'm happy to meet with you anytime to help you find resources you can rely on.
Avoiding AI glut in your search results
For better or worse, most of the major search engines are now incorporating AI LLMs into their results. In some search engines there is a way to avoid the AI content in your search results. Whether you choose to avoid AI results or not, it's useful to compare a search with, and without, AI-generated content.
Google search engine:
After composing your search, look under "More" to find "Web." That runs the plain web search, without AI results.
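If you search Google often, there is a shortcut worth knowing (accurate at the time of writing; Google may change it): adding the parameter udm=14 to a Google search URL opens the "Web" view directly. For example: https://www.google.com/search?q=your+search+terms&udm=14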

Duck Duck Go search engine:
Select the settings cog, then uncheck the AI-related tools.
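At the time of writing, DuckDuckGo also offers a version of its search page with AI features switched off by default: https://noai.duckduckgo.com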
