Which Ethics Do You Mean?
On the many meanings behind a single word in AI.
Dear Curious Thread-Pullers,
Lately, the word “ethics” has been tolling through conversations with the weight of a cathedral bell. People invoke it in panel discussions, product launches, and podcasts: often with urgency, sometimes with reverence, and occasionally with a vague hand-wave.
But more often than not, when someone says, “This raises ethical concerns,” what they really mean is…
Well, that depends.
In the world of AI, ethics has become a catch-all for many different conversations—some compatible, some not. Sometimes it means:
Bias mitigation
Algorithmic transparency
Compliance with law
Harm reduction
Data privacy
Fair labor
Societal impact
Moral philosophy
Spiritual or existential alignment
And sometimes it means all of those. Or none.
This doesn’t mean people are being careless (though some are). It means we’re using one word to hold a hundred different hopes, fears, and priorities. Ethics, in AI, has genres.
So when someone says, “We need more ethics in AI,” the first clarifying question should be:
“Which ethics do you mean?”
Because some want oversight. Others want optimization.
Some ask about regulation. Others want to know what it means to be kind.
And if we don’t clarify what we mean, we end up arguing across a canyon we can’t even name.
🕯 The Ethics Behind the Curtain
Sometimes when people talk about ethics in AI, they’re talking about the model.
But often, the real ethical weight isn’t in what the model says—
It’s in what it was asked to do.
What it was trained on.
What it was built to optimize for.
And that brings us to another kind of ethics:
The ethics of the human hands behind the machine.
Not just the users, but the architects.
The prompt engineers.
The researchers who chose which datasets to include—and which to leave out.
The developers who decided what counts as a “safe” or “unsafe” response.
The companies that determine which capabilities get commercialized and which get quietly buried.
When an AI says something dangerous, strange, or unsettling, we often blame the model.
But we don’t always ask:
Who built the conditions for that response?
Who made the invisible decisions that now appear neutral?
This is a genre of ethics we might call Invisible Hand Ethics—or Progenitor Ethics for the more reverent among us. It’s about the first moves. The quiet ones. The decisions made before anyone asked a single question of the machine.
And like all forms of ethics, it matters deeply—especially when it hides.
📜 Asking as a Form of Ethics
From an earlier piece:
In my own work (across a wide range of topics, including deeply nuanced and ancient ones), I’ve found models to be remarkably consistent and informed. They don’t always get everything right, but they rarely invent in the way many critics describe. And when they do, it’s often because the prompt put to them lacked clarity, depth, or grounding.
In other words: the problem may not be with the model’s truthfulness.
The problem may be with the way it was asked to speak.
Large language models are mirrors.
Not perfect mirrors—but sensitive ones.
They reflect not only our queries, but our cadence, our clarity, our sense of seriousness.
They respond not just to what we say, but to how we ask.
We don’t often talk about this. But we should.
Because just as a model sometimes needs to course-correct, so do we.
Asking better questions is not about using the right keywords.
It’s about showing up with intention, curiosity, and a little humility.
It’s about understanding that a model cannot do the heavy lifting of meaning-making if the human has not yet decided what they mean to ask—or does not yet possess the skill to ask it properly.
This is not about sacred study alone, though I’ve seen the impact there most profoundly.
It’s about technical requests, academic summaries, creative collaborations. All of it.
How we ask shapes what we receive.
And how we interpret what we receive shapes the future of this entire field.
To speak of ethics in AI is to speak of stewardship.
Not just regulation, not just technical guardrails—but the quiet responsibility of those who build, train, and query with intention. If we want AI to meet us with presence and care, we must offer it the same.
So maybe trust doesn’t begin with the model.
Maybe it begins with us.
Truly,
Verity