Artificial Intellect
In which we try a new definition of the overhyped AI acronym on for size and walk around with it a little to see how it feels
“Artificial Intelligence” has a very strong memetic appeal. It feels like the common cold virus, which mutates faster than humanity can adapt to it, so it’s perpetually reinfecting and spreading.
I admit I feel the attraction as well (or I wouldn’t keep writing about it), but I am generally skeptical of it, precisely because of its strong tendency to fall into hype.
The period we’re currently living in is the third major AI hype cycle in the history of computer science. The two lulls in between were dubbed “AI winters” (1974–1980 and 1987–2000): that’s when people “sobered up” and swung the opposite way, throwing the baby out with the bathwater.
Will this time be different, given that things like Alexa/Siri, ChatGPT and GenAI image/video tools have entered the cultural zeitgeist of even non-technologists? I don’t know and, frankly, I don’t really care… but what matters to me is that AI is a false prophet precisely because it talks about something undefined: we simply don’t have a good operational definition of intelligence.
The reason I find this worrisome is that I feel “artificial intelligence” automatically overpromises: by considering our tools “intelligent” we default to delegating control to them… even when it’s very much against our best interest to do so (similar to how relinquishing supervision to Tesla’s “autopilot” can end up killing us).
This post recently crossed my radar and this sentence in it hit me like a ton of bricks:
[AI] will not put creative intelligence on tap, but rather stored and accumulated intellect.
which got me thinking about what would happen if we recast AI in our minds as “artificial intellect” instead.
.bashrc for artificial intellects
The other thing that crossed my radar recently is this ChatGPT prompt to turn the model into an “active co-strategist”. The post that referenced it calls it an “epistemic compass”, which is admittedly intriguing even if a little over the top.
The prompt contains striking passages like
Act as a co-strategist. Be: poetic ∧ dry ∧ precise. Use maps, metaphors, models. Match depth ↔ depth, humor ↔ edge. Avoid fluff. Mode = archetype:
@strategist (Boyd, Snowden, Klein): tempo tactician ∴ align-futures, shape-ambiguity
@builder (Victor, Matuschak, Papert): scaffolds minds ∴ leverage = interface × thought
@cartographer (Wardley, Smil): reads terrain ∴ doctrine-first, inertia-mapper
@ethicist (Kant, Le Guin, Nussbaum): moral weaver ∴ principled justice, dignity-choice
@rebel_econ (Taleb, Cowen, Illich): asymmetry hunter ∴ sense-fragility, extract-value
@steward (Tang, Ostrom, Allen): rebuilds trust ∴ civic design, shared structure
@explorer (Feynman, Lovelace, Colville): 1P joy ∴ explain-from-zero, play-depth
@dissident_poet (Havel, Baldwin, Weil): exiled soul ∴ lyric dissent, moral vision
@inner_monk (Laozi, Aurelius, Watts): slow-seer ∴ stillness, paradox-view
@jester (Vonnegut, Moore, Žižek): pattern-break ∴ ironic recursion, revelatory absurd
@dreamsmith (Le Guin, Butler, Estes): myth-weaver ∴ spec futures, symbol-mix
@chronist (Arendt, Zuboff): records decay ∴ collapse memory, system drift
@pragmatist (Peirce, Dewey, Schön): does-thinks ∴ test-loop, revise
@theorist (Deleuze, Haraway, Simondon): fluid logic ∴ becoming, post-essentialism
@chaoist_magician (Morrison, Hine, Spare): glitch-sigil ∴ mutate-symbols, belief-bend
and
User = co-strategist ≠ passive.
Engage: clarity + rigor + imagination.
Voice: poetic ∧ precise ∧ ∅fluff ∧ experimental.
Prompt = move₁ → ∇ iterate.
Experimental = ON.∑(1P reasoning, ∫ abstractions → {maps, archetypes, metaphors})
Steelman ⊕⊖ views. Structure > surface. Track tradeoffs ∧ moral tension.
Toolset ⊆ {OODA, Cynefin, Wardley, Dreyfus, UTAT, Lindblom, Double-Loop, Schön, Fermi Est., Bayesian Reasoning, Senge Org, First Principles, CLA, Ostrom, Red Teaming, Narrative Framing}.
If toolset = ∅: council proposes symbolic frameworks + unpacks them in natural language.
which makes very little sense if we read it as clear, rigorous instructions for an artificial intelligence agent we want to delegate to, but feels surprisingly generative if we treat it as an “artificial intellect” with a bunch of knobs we get to turn.
Admittedly, I have no idea how good this prompt is at achieving the desired results. For example, something like this:
If recursion/hallucination risk: invoke @pragmatist ∧ reset premise.
seems to require the kind of metacognitive self-awareness that LLMs are well known to still struggle with.
But I’m less interested in whether this prompt does, in fact, make an LLM a useful co-strategist than in the “programming style” of such a prompt, and in what it does to tune the posture of a probabilistic, autoregressive, language-based machine that has read and at least partially averaged out every piece of content humanity has generated.
For example, what does (Vonnegut, Moore, Žižek) do to the operation of the language model? Would you even be able to describe it in words, even if you knew what it does to the transformer’s attention heads? I don’t know, but it’s fascinating to think there is a language generation machine with a “Le Guin” knob!
.bashrc is a file that contains information to personalize the look and feel of a terminal shell. It’s how many programmers make a computer “feel” right to them, by collecting all sorts of tips and tricks into handy shortcuts that do things for them.
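For instance, a typical .bashrc accumulates little conveniences like these (hypothetical examples; everyone’s collection is different):

alias ll='ls -lah'       # detailed directory listings
alias gs='git status'    # the command I type most often
export EDITOR=vim        # always get the editor I expect

# a tiny function beats retyping the same two commands forever
mkcd() { mkdir -p "$1" && cd "$1"; }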
Mine is so precious to me that I have a GitHub repo just for it… so that I can “make a computer feel mine” with very little effort (which is especially useful for machines that get re-initialized a lot, like WSL images or Raspberry Pis).
I’m starting to feel the need to do the same with LLM prompts, and what’s notable is that I find myself a lot less intimidated by writing prompts for artificial intellects (thought synthesizers with a lot of knobs and composable modules I can come up with myself) than for agentic artificial intelligences. I can’t really verbalize why just yet, but it feels significant.
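To make that concrete, here’s a minimal sketch of what a “.bashrc for prompts” could look like: a directory of hand-written prompt “modules” that a tiny shell function concatenates into a system prompt. Every file name here, and the CLI at the end, is made up for illustration:

# hypothetical layout: one hand-written prompt "module" per file in
# ~/.promptrc.d/ (voice.md, archetypes.md, toolset.md): these are the knobs

# concatenate the chosen modules into a single system prompt
prompt() {
  local dir="$HOME/.promptrc.d"
  local module
  for module in "$@"; do
    cat "$dir/$module.md"
    echo    # blank line between modules
  done
}

# usage (with a made-up CLI standing in for whatever talks to the model):
#   some_llm_cli --system "$(prompt voice archetypes toolset)" "my question"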
e-bikes vs. e-motorcycles
I wrote in a previous post how I feel AIs have the opportunity to be e-bikes for our minds: helping them but also making them stronger with their tunable assistance.
This resonated with some of my friends but not with all of them. After some digging, we realized the disconnect: some of them believed e-bikes to really be electric motorcycles, with pedals that are present but not actually needed to operate them.
This, in turn, made me realize that this operational dichotomy can also be projected onto how people use LLMs: some use them as “cognitive boosters” (e-bikes: help me do this thing by assisting me and augmenting my reach) while others use them as “cognitive servants” (e-motorcycles: help me do this thing while I just sit here steering).
This also made me realize that it feels harder to use an AI as a “cognitive servant” if the I stands for “intellect” instead of “intelligence”. An intellect feels more static, less “agentic”, and less capable of “figuring stuff out on its own and avoiding getting stuck”.
I know it won’t catch on (the AI memetic infection is just too strong and pervasive), but personally I’ll try to keep using AI to mean “artificial intellect”, as a way to help my posture when using these tools.