What do Large Language Models tell us about ourselves?
I was so proud of myself. I had the foresight to request the very trendy Elon Musk biography from my local library even before it was released. But now that my turn has finally arrived at the head of the queue, I am strangely shunning it for the very unfashionable 2011 book by James Gleick called The Information.
I was fully expecting The Information to be about computers and IT, but instead I got sucked into a fascinating recounting of the history of Writing. It seems apropos now that I am in the midst of Write of Passage again.
“Frequently they speak voicelessly the utterances of the absent.”
I was vaguely aware of Socrates’ skepticism about writing, and of the irony that it was Plato’s written account that made Socrates famous to this day. Now, we don’t know if Socrates actually said the words above. But we do know his worry went beyond writing killing the oral tradition: he feared that in embracing writing we would lose the spontaneous back-and-forth of Socratic dialogue that often yields deeper insights, and that our memories would shrivel through lack of use.
This ambivalence, even reluctance, toward what we now universally regard as such a seminal and powerful invention makes me wonder about our current age, with its own foreboding and ambivalence toward the very recent invention of Large Language Models such as ChatGPT.
I don’t have a crystal ball (I am guiltily returning the Musk biography unread) as to what Large Language Models may portend or how they may shape the future. But perhaps looking at the history of Writing, and how it has transformed us, may offer some perspective.
Before writing, we did not need words like “beginning” and “ending”. We started categorizing and differentiating things only after we started writing. Just as a fish may not grok that it is in water, writing is so intrinsic to our culture that it is hard for us to fathom its transformative influence.
“It is a twisting journey from things to words, from words to categories, from categories to metaphor and logic”
Because of writing, we are able to think in complex multiple sequences. This brought forth logic, and then the precision language of Math. When a fellow writer talks about how writing improves his thinking, I now realize it is true for us all.
And yet, in its most basic form, writing is just putting one word after another. How we string words together, and how we imbue them with meaning, is a wondrous, beautiful form of mass hallucination.
Large Language Models (LLMs) are also, simplistically, about putting one word after another. In fact, through the billions of dollars spent on training LLMs, tech companies are essentially teaching their models that one particular word is more appropriate as the next word than some other word. We just reinforce this “lesson” trillions of times, each time hopefully improving the model by a minuscule fraction.
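The idea of “learning which word follows which” can be sketched in a few lines. This is only a toy illustration, not how real LLMs work: actual models use neural networks with billions of parameters, while this sketch merely counts word pairs in a tiny made-up corpus (all names and data here are illustrative).

```python
# Toy sketch of next-word prediction: count which word follows which
# in a small corpus (a "bigram" model). Real LLMs learn far richer
# statistics with neural networks, but the core objective is similar.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" follows "the" twice, more than any other word
```

Training an LLM is, very loosely, doing this kind of counting-and-adjusting at an unimaginable scale, nudging weights rather than tallying counts.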
And yet, we see how literate and fluent LLMs are, even in these nascent days.
It makes me wonder, what can Large Language Models tell us about our own intelligence?
When my teenager is struggling through a tough calculus problem, my standard advice is to think through the problem step by step. It turns out this advice is also directly applicable to LLMs! You may have heard that ChatGPT is very bad at math. But just by telling it to think through the problem step by step, it becomes much more capable.
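This trick is often called “chain-of-thought” prompting, and in its simplest form it amounts to appending one sentence to the question. The sketch below only builds the two prompt strings; actually sending them to a model through some API is left out, and the example question is purely illustrative.

```python
# Sketch of chain-of-thought prompting: the same question, asked plainly
# and with a nudge to reason step by step. (Illustrative example only.)
question = (
    "A bat and a ball cost $1.10 in total. "
    "The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)

# Asked directly, models (like people) often blurt the intuitive wrong answer.
plain_prompt = question

# The step-by-step nudge tends to elicit intermediate reasoning first,
# which markedly improves accuracy on problems like this.
cot_prompt = question + "\nLet's think step by step."

print(cot_prompt)
```

That a one-line nudge changes the answer so much says something interesting: the model already “knows” how to reason step by step; it just has to be asked to show its work.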
Did we also evolve to become intelligent because our survivability improved when we predicted the next occurrence, the next event, even the next word as we developed language? Can our intelligence be viewed as just a bunch of neurons whose weights and activations have been shaped by millions of years of evolution and millennia of culture? Or is there something more magical? How much of the sum of human knowledge is basically accretions from step-by-step thinking?
As we learn to live with LLMs, how will they change us humans? What new concepts will develop akin to “beginning”, “ending”, logic and math?
@PranavPiyush suggests we look to science fiction. Would it be like a mentor/teacher, as in Neal Stephenson’s The Diamond Age: Or, a Young Lady’s Illustrated Primer? Would it change the way we view relationships, as depicted in the movie Her? My only guess is that it will be something no one has yet imagined, just as no one could have imagined how Writing would totally transform us when it took hold a few millennia ago.

Someone charitably described my musings as “tantalizing”. The word captures my feelings. I am awed by these strange developments in artificial intelligence, but feel too dazzled to understand them. Fortunately, I’m attending AI.Engineer this week and TED AI next week. As incongruous as it may seem, San Francisco in 2023, amidst the free fentanyl and free-flowing poop, is like Athens at the time of Socrates and Plato, or the coffeehouses of London where Newton presided. Hopefully, these events will leave me a little less mesmerized and a little more illuminated, one small step at a time. If so, I will have lots to report - stay tuned!
Thank you, @monicaqfm, for your generous and helpful feedback!