The discourse around AI and human cognition tends to focus on differences. But what happens when we invert the question and use LLM terminology to explore the similarities instead? This post examines parallels between AI cognitive architecture and human thinking—context windows, training data bias, tokenisation, temperature, hallucination, and pattern matching—not to claim that humans are language models, but to ask what these parallels reveal about our own cognitive processes, and why we are so invested in denying them.