On Epistemic Hygiene
In which we discuss how we can help our children navigate a future of increased uncertainty and combat powerful forces that want to isolate them into personalized hamster balls of unchallenged truths
Last week my 8yo daughter and I learned how to make fluffy slime. Very good fun, highly recommended.
After a few hours, she had mastered the process and wanted to experiment with colors and proportions on her own. I went to do laundry nearby.
At one point she said: “Hmm, I wonder what color you get if you mix pink and purple. Daddy, do you know?” I answered that I wasn’t sure. Then, unprompted, she asked louder: “Hey Google, what color do you get when you mix pink and purple?” The Google Home device in the kitchen answered: “Magenta”. She said: “Uh, interesting… I wonder if that’s true. Let’s try…”.
Cut scene.
Fade to this…
When I was still working at Google Research, I had the opportunity to chat with its best large language model (LLM) chatbot at the time (LaMDA). I came away very impressed by the propriety of its language, but also terrified by how confidently it spoke about things it didn’t seem to understand at all.
I wrote before about how “LLM fabulism” can be seen as a state of discourse never experienced before: an entity that understands way more about languages than it does about the world interactions that made those languages emerge. Nothing like this is possible in human development.
I have been trying to prepare my children for a post-factual world in which they need to learn to surf the epistemic uncertainty between “experts” and “do your own research”. I think of it as a form of “epistemic hygiene”: just as we teach kids to wash their hands and bodies, take off their shoes at the door, brush their teeth and all that, we need to teach them how to navigate a world in which information has gone from scarce to over-abundant, so that they risk “knowledge diabetes” far more than they risk “knowledge starvation”.
But the dangers appear to be increasingly sophisticated and nuanced.
For example, here are a number of different dynamics at play that could compound with each other to produce some dangerous outcomes:
general breakdown of the “third space”
increase of agoraphobic tendencies due to pandemic lockdowns and the subsequent normalization of remote work for knowledge workers
the global availability of near-zero marginal cost mobile communication networks
decades of very profitable innovation on personalization and recommendation systems
unsupervised deep learning and silicon acceleration of neural network workloads leading to large language models
These feel to me like the perfect recipe for personalized echo chambers that become the epistemic equivalent of hamster balls.
We no longer need organized religion to serve as the epistemic equivalent of opioids: computers can generate perfect “truth bubbles” in which everything you believe to be true is validated, leaving you feeling better and more comfortable, even if they have to lie to you to do it, and even if those lies could get you killed.
Selling others the ability to manipulate the consumption recommendations fed into your “truth bubble” is a sure way to make the whole enterprise highly profitable and thus industrial-strength.
At which point, we might have to adjust the list of “weapons of mass destruction” to add “epistemic” to “nuclear, chemical and biological”.
Is there a way to stop the evolution of AI chatbots from “chatty yet fabulist search engines” into full-on “QAnon-as-a-service” epistemic hamster balls?
IMHO, no: there is no stopping this train. It’s too attractive, too useful when used safely, and too appealing to profit seekers, authoritarians and elected politicians alike. There is no squeezing this toothpaste back into its original tube.
So I work on my children instead, trying to inoculate them with the antibodies they need to navigate a world in which the negative externalities of these tools will simply be out there, like traffic, pollution, processed foods and sugary drinks.
I try to teach them to challenge authority, including mine (which is delightfully recursive but makes for a far more difficult job as a parent). I try to teach them that truth and fairness are not “discovered” but “invented”: an emergent phenomenon of the interaction of an increasingly large number of people with a chaotic and increasingly energetic environment. I try to teach them that “curiosity” is more important than “knowledge”, and that “truth” is like a model that needs to be discarded and/or updated when its predictive powers degrade. I try to teach them the dangers of dogmas, biases and stereotypes, including my own.
But I also want to give them a solid emotional support foundation to build upon, an “epistemic safety net” in which they are allowed to be knowledge acrobats and take risks, knowing that when they fail they will fall safely and not to their deaths. A balancing act between divergent and convergent thinking.
It is tricky to teach these things to an 8yo girl and a 12yo boy. Tricky and frankly exhausting. Even more so when the official education system they participate in feels designed to optimize for compliance, homogeneity and labor productivity rather than for individual potential, epistemic cleanliness and societal resiliency.
And yet I see no better way. My daughter testing Google’s theory of magenta slime may seem small and inconsequential, but it made me proud and filled my heart with hope for their future and for the future of humanity.