The Tool That Changed the Conversation
ChatGPT launched on November 30, 2022, and reached one million users in five days. For context, it took Netflix 3.5 years to reach that milestone. The speed of adoption wasn’t driven by marketing — it was driven by something more fundamental: people encountered it, tried it, and immediately had an opinion. It made the abstract concrete in a way most technological milestones don’t.
What made it feel different from previous AI demonstrations wasn’t just the capability — it was the interface. A simple chat window where you could ask anything, in plain English, and receive a coherent answer. No commands, no syntax, no technical knowledge required. For the first time, a significant slice of the general public had direct access to something that had previously existed only in research papers and technology previews.
What It Actually Is, Without the Hype
ChatGPT is a large language model (LLM) — a type of artificial intelligence trained to predict what text should come next given what has come before. That sounds reductive, but the scale at which this prediction operates produces something that looks and feels remarkably like understanding. The model has processed a significant fraction of human writing and has developed internal representations of language, facts, reasoning patterns, and style that allow it to generate text that is contextually appropriate, grammatically correct, and often genuinely useful.
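The prediction at the heart of this can be caricatured with a toy next-word model. The sketch below is an illustration only — real LLMs use neural networks with attention over enormous corpora, not word-pair counts — but the objective is the same in spirit: given what came before, score the candidates for what comes next and pick a likely one. The tiny corpus here is invented for the example.

```python
from collections import Counter, defaultdict

# Invented toy corpus standing in for training data.
corpus = (
    "the cat sat on the mat . "
    "the cat ran away . "
    "the dog sat down ."
).split()

# Count which word follows which (a "bigram" model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it followed "the" more often than any other word
```

An LLM does this at vastly greater scale, conditioning on the entire preceding context rather than one word — which is what turns bare statistics into something that reads like understanding.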
It does not “think” in the way humans think. It has no goals, no consciousness, no understanding of the world in a philosophical sense. What it has is a very sophisticated statistical model of how language is used to describe the world, which it applies to generate responses. This distinction matters more in some contexts (medical advice, factual queries, legal questions) than others (writing assistance, brainstorming, explanation), but it matters everywhere.
Why It Confidently Gets Things Wrong
The phenomenon researchers call “hallucination” — ChatGPT generating plausible-sounding but factually incorrect content — is one of the most important things to understand about the technology. Because the model is optimised for generating coherent, contextually appropriate text rather than factual accuracy, it will produce confident-sounding errors when it lacks information or when the pattern-matching that drives its generation leads it astray.
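The same toy next-word model makes the failure mode concrete. It has no notion of truth, only of pattern: ask it about something outside its training data and it will still complete the sentence, confidently, with whatever fit the pattern best. The corpus and the fictional "atlantis" prompt below are invented for illustration.

```python
from collections import Counter, defaultdict

# Invented training data: a few true "facts" in a fixed pattern.
corpus = (
    "the capital of france is paris . "
    "the capital of germany is berlin . "
    "the capital of france is paris ."
).split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# "atlantis" never appeared in training, but the "... is <city>" pattern
# still fires — the model emits a fluent, confident, wrong answer.
prompt = "the capital of atlantis is".split()
print(" ".join(prompt + [predict_next(prompt[-1])]))
# → "the capital of atlantis is paris"
```

The model isn't lying or malfunctioning; it is doing exactly what it was built to do — continue text plausibly — on an input where plausible and true come apart. Real LLMs fail the same way, just less crudely.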
The practical implication is significant: ChatGPT is very good at tasks where you can evaluate the output yourself (writing, code, explanation, brainstorming), and risky for tasks where you’re relying on it to know something you don’t (specific facts, citations, legal or medical specifics). The tool is not a search engine. Treating it as one produces unreliable results.
The Opinions and Why They’re All Partly Right
The range of positions on ChatGPT — “it’s going to replace all knowledge workers” to “it’s just autocomplete with good PR” — is unusually wide even by the standards of new technology discourse. Both extremes contain partial truths. The “just autocomplete” framing underestimates how useful very good pattern matching over vast knowledge is for practical tasks. The “it will replace everything” framing overestimates the tool’s reliability, judgement, and ability to operate in novel situations beyond its training distribution.
The more useful frame is augmentation: ChatGPT makes people faster at tasks that require drafting, explaining, restructuring, and generating options. It doesn’t replace the judgement required to evaluate, select, and refine what it produces. The people who will benefit most are those who develop the skill of working with it effectively, not those who either dismiss it or outsource their thinking to it entirely.
What It’s Actually Good At
- First drafts: Getting words on a page quickly that you then edit
- Explanation: Simplifying complex concepts in accessible language
- Code generation: Writing functional code for well-defined problems
- Brainstorming: Generating options, angles, and possibilities to react to
- Summarisation: Condensing long texts into key points
- Tone adjustment: Rewriting content for different audiences or registers
Key Takeaways
- ChatGPT is a large language model — it predicts text, it does not “think” or “know” in a human sense
- Hallucination is a feature of how the model works, not a bug that will be fully fixed
- It’s excellent for tasks where you can evaluate the output; risky for factual queries where you can’t
- The most accurate frame is augmentation, not replacement — it makes skilled people faster
- The people who benefit most are those who learn to work with it effectively, not those who outsource their thinking entirely
Sources
- Brown, T. et al. (2020). Language Models are Few-Shot Learners. NeurIPS.
- Bender, E. et al. (2021). On the Dangers of Stochastic Parrots. FAccT.
- Mollick, E. (2023). Co-Intelligence. Portfolio/Penguin.