What is AI?

AI is many things, but for finding research materials, "AI" refers to Large Language Models (LLMs) like ChatGPT, Claude, Perplexity, and dozens of others that can appear to answer the questions you pose.

In reality, the LLMs are searching a snapshot of the web, sometimes the live web, and, increasingly, private sites (through licensing deals with content providers, and sometimes without permission), and composing the results in narrative form. That narrative is created through a highly complex, lightning-fast "text prediction" system. In essence, although this is an oversimplification of the process, the LLM scoops up material from its training data (sometimes a little dated) that appears to match the search terms used in your question, then uses text prediction to compose a response in narrative form.
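The "text prediction" idea can be sketched with a toy example. The snippet below is a deliberate oversimplification (a tiny word-pair model, nothing like a real LLM's scale or training): it counts which word follows which in a small corpus, then generates text by repeatedly picking a likely next word.

```python
import random
from collections import defaultdict

# Tiny corpus standing in for "training data"
corpus = "the cat sat on the mat the cat ate the fish".split()

# Record which words follow which (a bigram model)
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5):
    """Generate text by repeatedly predicting the next word."""
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break  # no known continuation; stop
        # Real LLMs weight choices by learned probabilities over huge contexts
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The model has no idea what a "cat" is; it only knows which words tend to follow which, which is the core of why fluent-sounding output can still be wrong.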

On the other hand, LLMs are not "thinking" about the meaning of your question, and they don't "understand" context in a human way. They are simply doing a very fast search based on your "prompts" (how you ask the question), similar to how a search engine parses your query: applying an algorithm to the terms you enter and producing results based on that algorithm. The LLM finds resources, but it does not evaluate the materials it finds (see "Anthropic publishes the 'system prompts' that make Claude tick," TechCrunch).
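The search-engine comparison can be made concrete with a toy keyword-matching sketch. The documents and scoring rule below are invented for illustration; real search engines and LLM retrieval systems are far more sophisticated, but the principle of matching terms rather than understanding meaning is the same.

```python
# Hypothetical mini-index of three documents
docs = {
    "doc1": "climate change effects on agriculture",
    "doc2": "history of agriculture in europe",
    "doc3": "climate policy debates",
}

def score(query: str, text: str) -> int:
    """Count how many query terms appear in the document."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def rank(query: str) -> list:
    """Order documents by term overlap, best match first."""
    return sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)

print(rank("climate agriculture"))  # doc1 matches both terms, so it ranks first
```

Note that the top result wins on word overlap alone; nothing here judges whether the document is accurate or relevant in a deeper sense.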

In attempting to compose the narrative response, LLMs will also fill in gaps by generating text that is not supported by the underlying information. This results in mistakes, sometimes fairly serious ones. AI engineers call these mistakes "hallucinations," but the rest of us just call it "making stuff up." The developers of LLMs are working on ways to reduce this problem but cannot eliminate it completely.


Should you use AI Large Language Models in your research?

Even the creators of these tools, if they are being honest, say that users should fact-check the output of any response. In other words, to use these tools you must search again and corroborate every statement, every reference, and every conclusion. This is not a time saver!

LLMs that regularly provide references are a step in the right direction, but you will still need to verify the references and read the material.

Finally, in most cases, the material you find without AI tools, and the meanings (conclusions) you make from the material, will be better than machine-generated output because they will be uniquely yours. You will be able to describe your reasoning and defend your conclusions.

OpenAI, the maker of ChatGPT, puts it this way in "Does ChatGPT tell the truth?":

  • Sometimes, ChatGPT sounds convincing, but it might give you incorrect or misleading information (often called a “hallucination” in the literature).
  • It can even make up things like quotes or citations, so don't use it as your only source for research.
  • Sometimes it might say there's only one answer to a question when there's more to it, or misrepresent different sides of an argument, mistakenly giving each side equal weight.

Avoiding AI glut in your search results

For better or worse, most of the major search engines now incorporate AI LLMs into their results, but there is a way to eliminate the AI content.

The following instructions are adapted from an Ars Technica article by Ron Amadeo.

  1. In Firefox, you'll first need to enable custom search engines: type "about:config" into the address bar, hit Enter, paste in browser.urlbar.update2.engineAliasRefresh, and hit the "plus" button. (In Chrome, skip this step; just right-click the address bar and select "Manage Search Engines.")
  2. Next, go to Settings -> Search, scroll down to the search engine section, and hit "Add."
  3. Create a new search shortcut: call it "Google Web," give it a shortcut (alias) of gw, and use https://www.google.com/search?q=%s&udm=14 as the URL. (Note: checked on 10/7/2024; this still works.)
  4. Whenever you wish to avoid AI-generated content, begin your search with "gw" and hit the spacebar. You will now be searching only the web, AI free.
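The shortcut above works because the udm=14 query parameter asks Google for its "Web" results view. If you want to build such a URL yourself (for a bookmark or a script), a minimal sketch looks like this; the function name is mine, and only the q and udm parameters come from the instructions above.

```python
from urllib.parse import quote_plus

def google_web_url(query: str) -> str:
    """Build a Google search URL with udm=14, the 'Web' results
    filter that returns classic link results without the AI Overview."""
    return f"https://www.google.com/search?q={quote_plus(query)}&udm=14"

print(google_web_url("library research guides"))
# https://www.google.com/search?q=library+research+guides&udm=14
```

quote_plus handles spaces and special characters in the query, which is what the %s placeholder does in the browser's shortcut template.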