The Simple Explanation
Imagine someone who has read every book, article, and website ever published. They did not memorize any of it word for word, but they absorbed deep patterns about how language works -- how sentences are structured, how ideas connect, how topics relate to one another, and how different kinds of writing sound.
When you ask this person a question, they do not look up the answer. They generate a response based on everything they absorbed. That is essentially what a large language model does. It learned patterns from text. When you give it a prompt, it produces the most relevant continuation based on those patterns.
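The idea of "producing the most likely continuation based on patterns" can be sketched in miniature. The toy model below simply counts which word tends to follow which in a tiny corpus, then predicts by picking the most frequent follower. This is a radical simplification -- real models use neural networks over tokens, not word counts -- but the principle is the same: predict, do not look up.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" -- a stand-in for the web-scale text real models learn from.
corpus = (
    "the cat sat on the mat . "
    "the cat chased a mouse . "
    "the dog sat on the rug ."
).split()

# "Training": count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most likely continuation of `word` based on learned counts."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat", the word that most often follows "the" here
```

Notice that the model never stores the sentences themselves, only statistics about them -- which is why it can generate fluent continuations without being able to "look up" any fact.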
What It Does Not Do
An LLM does not search the internet when you ask it a question. It does not have a database of facts it looks up. It does not reason the way humans reason. It predicts text based on learned patterns. This distinction matters because it explains both why LLMs are incredibly useful and why they sometimes produce incorrect information.
When an LLM generates a confident-sounding answer that is factually wrong, that is called hallucination. The model predicted that those words were the most likely continuation of your prompt -- but likely is not the same as accurate. This is why you should treat LLMs as thinking partners, not fact databases.
Use an LLM to think through problems, draft content, and generate ideas -- and verify critical facts independently. This is the same standard you would apply to advice from any smart colleague.
Why This Matters for Builders
You do not need to understand how an LLM works internally any more than you need to understand how a car engine works to drive. But knowing the basic concept helps you use AI more effectively.
When you understand that LLMs predict based on patterns, you understand why context matters so much. Generic input triggers generic patterns. Specific context triggers specific patterns. This is why a Business Context Document transforms AI output -- it changes what patterns the model draws from.
You also understand why prompt structure matters. Clear, well-organized prompts trigger clearer, more useful patterns. Vague prompts trigger vague patterns. The quality of your input directly shapes the quality of the output.
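One simple way to put this into practice is to assemble prompts from labeled sections rather than a single vague sentence. The sketch below is a hypothetical template, not any specific tool's API -- the function name and section labels are illustrative assumptions -- but it shows the kind of structure that tends to trigger more specific patterns.

```python
def build_prompt(role, context, task, output_format):
    """Assemble a clear, well-organized prompt from labeled sections.
    (Hypothetical helper for illustration -- not part of any LLM library.)"""
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Format: {output_format}"
    )

# Vague input triggers generic patterns:
vague = "write something about our product"

# Specific, structured input triggers specific patterns:
specific = build_prompt(
    role="You are a marketing copywriter.",
    context="We sell project-management software to small construction firms.",
    task="Draft three subject lines for a launch email.",
    output_format="A numbered list, under ten words each.",
)
print(specific)
```

The template itself is nothing special; what matters is that every section narrows the patterns the model draws from.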
The Models Behind the Tools
Claude is built on Anthropic's large language model. ChatGPT is built on OpenAI's GPT model. Gemini is built on Google's model. The tools you use -- the chat interface, the Projects feature, the plugins -- are interfaces built on top of these underlying models.
Different models have different strengths because they were trained differently. Claude's training emphasizes helpful, harmless, and honest output. GPT's training emphasizes versatility. Gemini is designed to integrate with Google's product and data ecosystem. Same technology category, different implementations.