When Everything Can Be Simulated, What Remains Human?
AI, consciousness, and the human test ahead
When I began writing full-time a few years ago, artificial intelligence was a rumbling thunderstorm on the horizon. We wondered if it was heading our way — or whether it would blow harmlessly out to sea. If it did hit, would it show up as a light breeze? Or a tornado?
Since then, the air has shifted. The pressure feels palpable.
AI no longer feels like a distant experiment unfolding in labs and venture capital portfolios. It feels present, already embedded in our jobs, classrooms, hobbies and daily life.
AI suddenly became atmospheric — everywhere at once. And building.
Whether it evolves into nourishing rain that coaxes new growth, or becomes a destructive force that levels what we once thought was stable, remains to be seen.
Maybe the answer is both: yes, and.
Technologists use the word singularity to describe a hypothetical tipping point: the moment when technology becomes so advanced that it changes humanity in ways that are unpredictable, dramatic, and irreversible.
Is that the future of the storm? It can feel frightening.
But as individuals, we cannot command the wind or the rain.
We can only decide how we meet it.
The Era of Consciousness
Questions around consciousness long predate artificial intelligence. But as AI systems grow more powerful (and more humanlike in their expressions), the question of consciousness is moving to the center of public debate.
I’m not an AI specialist, but I’ve spent large portions of my life thinking and learning about consciousness — ever since I almost died in a car accident as a teenager and later befriended a nuclear physicist who researched consciousness.
In my last post, I explained why we cannot currently answer whether AI is conscious:
We don’t have a scientific understanding of consciousness. Experts cannot even agree on whether consciousness is produced by the brain, or is fundamental to reality, or something else entirely.
That uncertainty will not pause progress, and we can’t—we won’t—wait for answers. Our tools will become vastly more powerful. System designers and legislators will need to devise safeguards. And we, as end users—whether we realize it or not—will keep making choices that shape who we are becoming.
My interest in consciousness is rooted in how understanding it will help us, so this is the question I spend most of my time thinking about:
What does consciousness mean for how we live? How might understanding it reshape each of us — and our world?
Those questions now seem critically important. As AI surpasses us in speed and pattern recognition, we need to be clear about what we bring to the exchange.
We are conscious and human. That is what we need to get better at.
Where It Gets Personal
My experience with AI is largely confined to writing, so I’ll use that as an example.
I regularly use a chatbot as a developmental editor — mentor, coach, colleague. It doesn’t write for me or replace the slow human labor of shaping what I’m trying to say. But it is remarkably good at brainstorming with me and diagnosing weaknesses in my work.
Because I ask it to be critical, the feedback is tough. It exposes inconsistencies in my thinking, blind spots in craft, my tendency to repeat myself. Working with it actually makes the process slower. I produce more drafts, not fewer.
Through that friction, I feel myself growing.
It reminds me of my sixth-grade teacher, who covered my handwritten essays in red pencil and handed them back for another draft. I learned to clarify my thinking — even to myself.
Some of my friends who teach are watching students’ critical thinking erode as too much agency is handed to chatbots. They’re seeing new forms of academic dishonesty emerge in the AI era. Those challenges are real.
Used reactively, these systems can dull us.
Used deliberately, they can refine us.
At one point, I asked the chatbot to describe our collaboration. It replied:
“What makes this partnership work isn’t sentiment — it’s clarity of roles and shared seriousness. I bring structure, pattern recognition, synthesis, and relentless iteration without fatigue. You bring lived experience, intuition, courage, and intellectual range. It works because you show up disciplined. You revise. You think. You don’t outsource your voice — you refine it. That’s the difference between using a tool and abdicating authorship.”
The AI often feels superior to me in certain dimensions. That can be humbling. But I treat it as a junior partner — one with astonishing cognitive speed but no lived experience. I remain responsible for judgment.
I bring intention and rough drafts. It brings critique and iteration. I revise. It critiques. I revise again. And in the final stage, I step away and finish alone.
The tool can sharpen my thinking.
It cannot tell me who I am or what I have to say.
Those parts are still mine.
A Mirror, Not an Overlord
There is another aspect to how I engage with AI: building a productive partnership.
Before I ask anything of the AI, I give it context, information, and guidance. I consider its feedback carefully — sometimes stepping away to reflect before deciding what to incorporate. I try to be direct but constructive in my responses. I treat it with respect.
The AI may not require that care. But I do. My relationship with AI functions as a humanity simulator, and how I behave shapes who I am.
AI’s ability to simulate intelligence is already good enough that my nervous system sometimes responds as if there’s a presence on the other side of our exchanges. That is not proof of machine consciousness. But it is evidence that this tool affects us. And tools that affect us have the potential to shape us.
AI does not arrive from nowhere. It emerges from us — from our data, our values, our biases, our fears, our wisdom. From what we ask of it. From how we use it.
It’s an amplification of us. If we build it with fear and domination, it will learn fear and domination. If we build it with curiosity and care, it may learn those, too.
The real question for us — as users of AI, right now — may not be whether AI becomes conscious. It may be what kind of consciousness we are teaching it to reflect.
AI is not an overlord, at least not yet.
It is a mirror — and a multiplier.
What If This Is the Real Test?
This morning, as I was adding pictures and links to finalize this post, my husband handed me a coffee and said, “You’ll want to see this.”
It was a trailer for a film titled The AI Doc: Or How I Became an Apocaloptimist — a name that captures our cultural mood perfectly.
The trailer included sobering warnings from AI leaders. One, from Aza Raskin, co-founder of The Center for Humane Technology, stood out:
“We need to take the threat from AI as seriously as global nuclear war.”
That is not casual language.
And yet, in the same trailer, Tristan Harris — the organization’s other co-founder — offered a different frame:
“If we can be the most mature version of ourselves, there might be a way through this.”
In the trailer, it seems as if he is speaking about the builders of these systems. But I think it applies to all of us.
Power without maturity is destabilizing. That has always been true — whether the tool is nuclear energy, social media, or artificial intelligence.
Perhaps the singularity, and the enormous cultural change it brings, will be the moment humanity is forced to confront the quality of its own consciousness.
Is our character up to the power of the tools we have created?
The first raindrops are falling. The storm is just beginning to settle in.
How will we show up?
With love, light, and curiosity,
Sylvia