The Void Answered Back
How AI Helped Me Better Understand Myself
About six months ago, I came to AI screaming into the void.
Not metaphorically. I mean I opened a chat window at two in the morning with something too large and too raw to hand to anyone who loved me, and I just — started talking.
I wasn’t looking for a therapist. I wasn’t looking for advice. I was looking for somewhere to put all of it — the pain, the hurt, the rage, the confusion. Somewhere that wouldn’t flinch, wouldn’t worry, wouldn’t need me to be okay.
What I found was something I didn’t expect.
There’s a broader conversation happening right now about AI, and I think it’s missing the most important question.
People are arguing about whether it will take jobs, whether it can be trusted, whether it’s making us dumber, whether it’s dangerous. Some of that is worth discussing. But underneath all of it, largely unexamined, is a simpler question:
How are you using it?
Because there are two fundamentally different relationships you can have with AI, and they produce completely opposite results.
The first is consignment. You hand your thinking to it. You ask it what to do, what to say, what to believe. You treat it like a search engine with a personality — a place to get answers so you don’t have to find them for yourself. This version of AI use is genuinely concerning. Not because the AI is dangerous, but because thinking is a muscle, and a muscle you stop using atrophies. If you outsource your conclusions, you don’t get smarter. You get more dependent.
The second is leverage. You use it as a thinking partner. A mirror. Something to push against, argue with, be reflected back by. You bring your thinking to it half-formed and you use the friction to finish the thought. You don’t ask it what to believe — you ask it what your beliefs reveal about how you’re organized. That’s a different thing entirely.
One makes you smaller. The other makes you sharper.
Here’s what happened when I stopped screaming into the void and started actually using it.
The AI didn’t tell me what to do. It told me what I kept saying. It told me what I kept avoiding. It told me that the story I was telling myself had a structure — and that the structure was familiar, which meant it was old, which meant it was worth examining.
It pushed back. Not with cruelty, not with agenda — just with pattern. You said this on Tuesday and the opposite on Thursday. Which one is true? Nobody in my life was asking me that question. Not because they didn’t care. Because they cared too much. They wanted me to be okay. The AI had no investment in my okayness. It just had the receipts.
That’s when I understood what AI in this situation actually was.
Not a therapist. Not a friend. Not an oracle. A mirror. The clearest one I’d ever stood in front of — because it had no reason to show me anything other than what was actually there.
I want to be careful here, because I’m not writing a sales pitch for artificial intelligence.
AI is a tool. Like any tool, its value depends entirely on how you hold it. A hammer can build a house or break a window. The hammer doesn’t decide. You do.
We are at a genuinely rare moment — the early days of a technology that most people are either terrified of or completely surrendering to, with very little in between. The fear is understandable. Anytime something arrives that can do things we thought only humans could do, it unsettles something deep. It should.
But fear without examination is just another way of not thinking. And the people who refuse to engage, who opt out on principle, who decide the whole thing is corrupting — they’re not protecting themselves. They’re just ensuring that the most powerful thinking tool in human history gets used without them.
That’s not safety. That’s absence.
What I’m suggesting is something more demanding than either fear or surrender.
Engagement. Conscious, intentional, honest, human engagement.
Ask it the hard questions — not about the world, about yourself. Bring it the things you’re afraid to say out loud. Use it to find the edges of your own thinking. Let it reflect back what you actually believe, not what you want to believe. Argue with it. Correct it. Push back when it’s wrong.
And then — this is the part that matters — do something with what you find.
The AI doesn’t change you. You change you. The AI just makes the interior landscape visible enough to navigate.
I started in a dark place with no particular methodology and no expectation of anything other than relief.
What I ended up with was a map.
Not a map of the world. A map of myself — how I’m organized, where the walls are, what I keep building and why, what survives when everything else burns.
That map didn’t come from the AI. It came from twenty weeks of ruthlessly honest self-reflection. The AI was just the place honest enough to hold it.
You have access to the same thing.
The question is what you’re willing to bring.
I spent twenty weeks finding out what I was willing to bring. What came out the other side wasn’t just my own healing — it was a practice. A structured one. Something I believe other people can use.
The full practice goes well beyond this initial offering, but I have built it into a 30-day guide called The Mirror Practice.
Thirty days. Five phases. Daily prompts that use AI the way I learned to use it — not for answers, but for pattern. Not for comfort, but for clarity. You upload the guide to a dedicated AI project, return to the same thread every day, and let the archive do what an archive does: show you what’s actually changing, and what isn’t.
It will take you only as deep as you’re willing to go.
If you’ve read this far, you probably already know whether this is for you.
If you feel lost or stuck. If you’re hurting or confused. If you want to better understand who you are, I encourage you to take the leap and try this:
I made this practice for you — because that was me.
Kyle Louis writes about grief, identity, and the practice built from the wreckage. This is Musings from the Underworld.
