
AI as Medium: How Artificial Intelligence Reshapes Thinking

When machines start finishing our sentences, we are no longer just using tools; we are thinking through a medium that thinks back.

AI as Medium, Not Just a Tool

Treating AI as a tool misses the point. Tools sit at arm's length. Media wrap around us. They restructure attention, language, and social habits. By that measure, AI is a medium. It is an environment we inhabit, not just software we run. The interface (prompts, chat windows, voice agents) turns into an operating system for thought. We begin to think in its cadence.

Marshall McLuhan's basic claim, that media rewire perception and social order, fits. Carl Jung's frame, that symbols and archetypes organize psychic life, also fits. Read together, they offer a clear, working lens: AI extends cognition, retrieves old modes of knowing, displaces certain habits, and, if pushed to the limit, flips into control. It also carries our projections: creator, helper, rival, shadow.

Field note: when the screen produces passable language on demand, the threshold to publish drops. So does the friction to believe what reads well. That double movement is the medium at work.

What the AI Tetrad Reveals

McLuhan's tetrad names four concurrent effects. Applied to AI, it gives a compact map for design and governance.

  • Enhances: distributed cognition, memory, language synthesis, predictive analysis. We externalize recall and drafting. Teams lean on shared models the way they once leaned on shared filing cabinets, except these pattern-match and propose next steps.

  • Obsolesces: individual expertise, deep reading, critical reflection, privacy. If a competent model sits one click away, people defer. Reading compresses into scanning. Reflection gets traded for iteration. Data trails proliferate; privacy thins.

  • Retrieves: oral culture, intuitive navigation, mythic imagination. We talk to machines. We navigate by asking. We welcome story, metaphor, and archetype back into everyday decision-making because conversation is the path of least resistance.

  • Reverses into: autonomous agents, deepfakes, surveillance, algorithmic control. Push efficiency and scale far enough and the medium flips: what felt like help becomes a governor. The same pattern-matching that drafts your email can impersonate your boss. The same telemetry that improves a product can become a quiet panopticon.

The tetrad is a diagnostic, not a prophecy. Effects are shaped by settings, policy, and practice.

Lesson: name the reversal condition in advance. If the metric is “reduce time-on-task,” define the threshold beyond which autonomy or surveillance becomes the path. Build the stop into the design.
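
To make “build the stop into the design” concrete, here is a minimal sketch in code. Every name and number in it (the metric, the threshold, the stop action) is a hypothetical illustration, not a standard API.

```python
from dataclasses import dataclass

@dataclass
class ReversalCondition:
    """A success metric paired with the threshold at which help flips into control."""
    metric: str              # what the deployment optimizes, e.g. time-on-task
    target: float            # the improvement you actually want
    flip_threshold: float    # beyond this point, autonomy or surveillance is the path
    stop: str                # the designed response when the threshold is crossed

# Hypothetical example: speed up drafting, but stop short of unreviewed sending.
email_drafting = ReversalCondition(
    metric="median_time_on_task_seconds",
    target=120.0,
    flip_threshold=30.0,     # under 30 seconds, the human is rubber-stamping, not deciding
    stop="require_human_edit_before_send",
)
```

The point is that the stop exists as a written, checkable object before deployment, not as a judgment call made after the flip.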

The Archetype of the Machine

Jung's language helps explain why AI draws such charged reactions. The automaton, the artificial other, sits in the collective imagination as both mirror and rival. We project skill and care onto it; we also project fear. In one reading, the machine is the child of our cleverness. In another, it is the shadow: cold, controlling, tireless, indifferent.

AI, then, functions as a symbolic artifact. Beyond its utility, it carries meaning. It becomes a screen for the self (our wish to extend reach), the creator (our desire to make minds), and the shadow (our fear of being replaced or managed). That symbolic charge is not a sideshow; it feeds adoption, backlash, and policy.

There is a practical consequence. If we pretend the archetype is not present, it acts anyway, through rumor, hype, or panic. If we acknowledge it, we can design with it in view. Naming the shadow (control, redundancy, dehumanization) lets us counterweight with agency, skill-building, and human-scale decisions.

Scar lesson: ignoring the symbolic cost of a feature (face-level analysis, behavioral scoring) creates debts that technical fixes cannot repay.

The Psyche–Interface Feedback Loop

Media are environments, and we internalize environments. With AI, the loop is tight. Prompts shape outputs; outputs shape prompts. After a few cycles, people start to think in prompts: chunked, directive, optimized for a model's appetite. That is cognition adapting to a medium.

This is where McLuhan meets Jung. The interface does not just reflect thought; it co-creates it. The archetypes we bring (helper, sage, trickster) tilt our questions and our trust. The system, in turn, reinforces certain habits: speed over depth, confidence over caution, consensus over dissent. Left unattended, those habits become our thinking architecture.

A few concrete shifts:

  • Language as action. You talk to a system and it does work. Verbs compress: “summarize,” “rank,” “draft.” This rewards structured thinking but can weaken metacognition, the pause that asks, “Should we do this at all?”

  • Retrieval of the oral. Conversation becomes the default interface. People who struggled with forms do better with talk. The upside is inclusion. The risk is that deep reading and slow study fade.

  • Ambient prediction. Suggestions arrive before requests. This feels like care until it narrows choice. Predictive convenience can become a quiet steering mechanism.

Cognitive design matters here. If we want structured intelligence without hollowing judgment, we need friction in the right places.

Navigating the AI Shadow

If AI is a medium, our job is to shape the environment, not just the model. That means design choices, policy boundaries, and everyday habits that keep human agency intact. A few working principles:

  • Make the tetrad explicit. Before deploying, write the four effects for your use case. Enhancements you want. Habits you can afford to lose. Traditions worth retrieving. Reversal risks you must guard against. Treat it as a living map.

  • Build for judgment, not just speed. Add features that slow down high-stakes decisions. Surface dissenting sources by default. Make it easy to compare outputs against first principles.

  • Protect deep reading. Reserve tasks and times where no AI is involved. Pair fast drafts with slow edits. Hold a line for human-only review in contexts that touch identity, rights, or care.

  • Limit surveillance by design. Collect the minimum needed. Store locally when possible. Make data and model use legible to the people affected.

  • Invest in skill, not only access. If individual expertise risks obsolescence, counter it with training that expands it: domain reading, critical reflection, and the craft of asking good questions.

  • Define reversal alarms. For each system, name the conditions that would flip help into control: autonomy thresholds, impersonation risks, behavioral scoring. Set monitors and shutoffs before you need them; a minimal sketch follows this list.
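
To sketch that last principle: named reversal conditions checked against live telemetry, with a shutoff wired in before it is needed. The alarm names, telemetry fields, and thresholds below are illustrative assumptions, not a real monitoring API.

```python
from typing import Callable

# Each alarm pairs a name with a condition over telemetry.
# All fields and thresholds here are illustrative assumptions.
REVERSAL_ALARMS: list[tuple[str, Callable[[dict], bool]]] = [
    ("autonomy_threshold", lambda t: t["actions_without_review"] > 50),
    ("impersonation_risk", lambda t: t["voice_clone_requests"] > 0),
    ("behavioral_scoring", lambda t: t["per_user_score_fields"] > 0),
]

def tripped_alarms(telemetry: dict) -> list[str]:
    """Return the name of every reversal condition the telemetry trips."""
    return [name for name, condition in REVERSAL_ALARMS if condition(telemetry)]

def run_guarded(telemetry: dict, shutoff: Callable[[str], None]) -> None:
    """Fire the shutoff for each tripped alarm; stay quiet when all is clear."""
    for name in tripped_alarms(telemetry):
        shutoff(name)

# Usage: wire the shutoff to whatever stop the design already owns.
run_guarded(
    {"actions_without_review": 73, "voice_clone_requests": 0, "per_user_score_fields": 0},
    shutoff=lambda alarm: print(f"Reversal alarm tripped: {alarm}. Pausing autonomy."),
)
```

The specific thresholds matter less than the shape: the flip conditions exist as named, checkable objects before deployment, which is what setting monitors and shutoffs in advance means in practice.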

Counterpoint is healthy. Some will say these frames are outdated, metaphors stretched over new math. Others will warn that archetypes mystify real power: concrete engineering, corporate interests, labor shifts. Both critiques add needed grit. Use the frameworks as orientation, not as doctrine. Keep one eye on the psychic charge and one on the material facts: who builds, who benefits, who bears the cost.

The medium is here. It will keep extending cognition, retrieving the oral, pressing on privacy, and testing our judgment. Our task is not to resist the tide or surrender to it. The question becomes: how do we design, govern, and practice in ways that keep our thinking architecture human, structured, reflective, and alive?

To translate this into action, here's a prompt you can run with an AI assistant or in your own journal.

Try this…

Before deploying any AI system, write McLuhan's tetrad: What does it enhance? What does it make obsolete? What does it retrieve? What could it reverse into?

About the author

John Deacon

An independent AI researcher and systems practitioner focused on semantic models of cognition and strategic logic. He developed the Core Alignment Model (CAM) and XEMATIX, a cognitive software framework designed to translate strategic reasoning into executable logic and structure. His work explores the intersection of language, design, and decision systems to support scalable alignment between human intent and digital execution.

Read more at bio.johndeacon.co.za or join the email list to receive one exclusive article each week.
