John Deacon Cognitive Systems. Structured Insight. Aligned Futures.

Prevent identity distortion in AI-human cognitive symbiosis

When AI surfaces patterns from your digital exhaust, you're left with a choice: let the machine define what matters, or use its signals to build something unmistakably yours.

Clarify the Shared Work

Let's start where the friction lives: what the machine sees and what you feel aren't the same job. Treat AI as a cognitive prosthetic that spots resonant patterns while you build the story's inner architecture. A memoirist ingests five years of voice notes into a model and gets clusters like “water at night,” “father's watch,” and timestamps around hospital visits. She turns those raw signals into a scene where a sink won't drain, her father's watch ticks too loudly, and the ER clock is missing its hands: language as interface for memory, not a transcript of data.

When you keep roles clean, cognitive alignment follows: the system amplifies attention; you decide meaning through coherence. With that division in place, you can lean into the uncanny edges without losing the center.

Invite the Digital Gothic

With roles set, you can step into the corridor where data ghosts flicker without sensationalizing the tech. Use the digital gothic as mood, not melodrama. A protagonist opens an “On this day” prompt and finds MySpace photos from 2007: a haircut she forgot, a friend she ghosted, a caption that reads braver than she felt. The AI doesn't invent horror; it time-stitches her past into now, and the eeriness comes from a thought-identity loop catching up with her body.

A second scene grounds the theme: a shared calendar resurfaces recurring Wednesday dinners with an ex; the pattern shows tenderness and vanishing acts in equal measure. The haunting isn't spectral; it's the push and pull between a saved self and a becoming self.

“The machine doesn't create ghosts. It just shows you where you've been haunting yourself.”

Turn Pattern Into Care

Once the haunting is named, you need a contained process so pattern doesn't become pressure. Think of therapeutic co-creation as two phases: the model surfaces subconscious echoes; you interpret and integrate. Here's how to keep the loop safe and honest:

1) Bound the archive: choose a time window and topics you're willing to surface.
2) Let the model return motifs and contradictions, not judgments.
3) Write a human reflection per motif: one paragraph of context, one of feeling.
4) Decide what not to keep; pruning is part of self-awareness. (A minimal sketch of this loop follows the list.)
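
If you wanted to prototype the loop outside the page, it might look like the sketch below. Every name in it (Entry, bound_archive, surface_motifs, reflect) is an assumption for illustration, not a reference to any real tool, and the word-count stand-in is just a placeholder for whatever model call you'd actually use.

```python
# A minimal, hypothetical sketch of the bounded-reflection loop above.
# Nothing here is a real product API; all names are illustrative.
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class Entry:
    day: date
    topic: str
    text: str

def bound_archive(entries, start, end, allowed_topics):
    # Step 1: only what falls inside the window and the topic allowlist
    # ever reaches the model.
    return [e for e in entries
            if start <= e.day <= end and e.topic in allowed_topics]

def surface_motifs(bounded, k=4):
    # Step 2: stand-in for the model call. A real system might use
    # embeddings or a prompt constrained to motifs and contradictions;
    # a frequency count keeps the sketch self-contained.
    words = Counter(w.strip(".,").lower()
                    for e in bounded for w in e.text.split())
    return [w for w, _ in words.most_common(k)]

def reflect(motifs, reflections):
    # Steps 3 and 4: keep only motifs the human has written context and
    # feeling for; anything absent from `reflections` is pruned.
    return {m: reflections[m] for m in motifs if m in reflections}
```

The division of labor mirrors the list: the machine filters and counts; the human supplies the two paragraphs and the deletions.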

For a grounded example, a character feeds her chat history and journal into the system and receives four motifs: “apologies after midnight,” “unfinished drafts,” “quiet Sundays,” “messages not sent.” She writes about the shame behind the apologies, then about the calm of quiet Sundays, and deletes “messages not sent” because it belongs to someone else's privacy boundary. Those notes become material you can stage on the page.

Layer Machine and Self

When you've metabolized the material, the next move is to show the collaboration in your form. Build a meta-narrative where AI analysis and human reflection sit side by side without competing voices. In a web essay, the right margin quietly displays the model's distilled lines (“motif probability: 0.67; time clusters: winter; affect: ambivalent”) while the main body carries the first-person scene of salt on boots and a bus that never came. Readers feel the split: signal on the side, story at the center; meaning through coherence, not through metrics.

Print can do it too. A novel places “Model Notes” as endnotes every third chapter, never more than two lines, so the rhythm stays human. The machine's voice is sparse and functional; the narrator's voice is embodied: language, breath, temperature, the scrape of a key in the lock.

Protect Agency As Design

With form established, turn to guardrails so augmentation doesn't become substitution. Address autonomy directly in the story and in your process. A character sets a rule: no AI memory prompts after 9 p.m., and no resurfacing content without date and source. In one scene she declines a suggested “Year in Review” reel and writes her own version from three smells (chlorine, eucalyptus, burnt toast), regaining authorship of recall.
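
If you wanted to encode that rule rather than just narrate it, one hypothetical shape is sketched below. The field names and the may_resurface helper are invented for this illustration; no real assistant exposes exactly this interface.

```python
# A hypothetical encoding of the character's guardrails; all names invented.
from datetime import time

RULES = {
    "quiet_hours": (time(21, 0), time(7, 0)),  # no memory prompts after 9 p.m.
    "require_provenance": True,                # no resurfacing without date and source
}

def may_resurface(item, now):
    # Block anything without provenance, and everything inside quiet hours.
    if RULES["require_provenance"] and not (item.get("date") and item.get("source")):
        return False
    start, end = RULES["quiet_hours"]
    in_quiet_hours = now >= start or now < end
    return not in_quiet_hours
```

The design point matches the scene: recall runs through rules the person wrote, so the tool prompts on her schedule, not its own.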

“This is cognitive symbiosis, not ventriloquism.”

Ethically, keep the uncanny valley in view. If the model mimics a loved one's tone too closely, mark it as synthetic in the text or disable that feature in your world. If it flattens nuance, narrow the prompt until the tool returns to pattern, not persona. The point isn't anti-tech; it's pro-agency within the system.

Carry these limits into your next draft, and you'll keep the inner architect, your living myth of self, building with the machine, not inside it. AI can surface patterns; you turn them into meaning. The difference between those two jobs is where your identity lives.

Here's a thought…

Feed one week of your digital activity into AI and ask for three recurring motifs. Write one paragraph of human context for each pattern the machine finds.

About the author

John Deacon

Independent AI research and systems practitioner focused on semantic models of cognition and strategic logic. He developed the Core Alignment Model (CAM) and XEMATIX, a cognitive software framework designed to translate strategic reasoning into executable logic and structure. His work explores the intersection of language, design, and decision systems to support scalable alignment between human intent and digital execution.

Read more at bio.johndeacon.co.za or join the email list in the menu to receive one exclusive article each week.
