How Will We Know It Worked?
Rethinking success metrics in the age of compassionate technology
Dear Curious Thread-Pullers,
In most product meetings, somewhere between the first post-it sketch and the second coffee refill, someone eventually asks the question:
“How will we know it worked?”
It’s meant as a practical prompt: defining goals, clarifying outcomes. But in the realm of compassionate or human-centered technology, it’s also a moral question.
And often, it’s one we answer poorly.
Most of the standard metrics (clicks, conversions, retention, churn) were built for a different age. They measure engagement, growth, and profitability. But they rarely measure whether a user felt less alone. Whether someone who had never used an AI tool before found their way through the interface without fear. Whether a product welcomed them or quietly pushed them away.
That last one matters more than we think.
According to research from RAND, one of the biggest contributors to AI project failure is “misunderstandings and miscommunications about the intent and purpose of the project.”
In other words, we don’t just lose users through bugs; we lose them through ambiguity. Through language that’s too specialized. Through interfaces that assume experience or literacy not everyone has.
Sometimes, the truest success metric is simply:
Did they understand what the product did before we asked them to do anything?
Did they feel like they belonged here?
Because staying is a success metric.
So is feeling safe.
So is not needing a tutorial because the design already made sense.
In this way, accessibility becomes not just a compliance checkbox, but a source of success itself.
The more accessible something is, the more people can use it. The more people who use it, the more trust, benefit, and belonging it can create.
It’s not just about permanent or visible disabilities, either; it’s about first-time users, temporary limitations, tech-wary elders, and people in rural or low-resource environments. If your product only works for the digitally fluent, then your product isn’t finished.
The Interaction Design Foundation defines Human-Centered Design as a methodology focused on “empathy, extensive user research, and iterative testing to ensure the final product benefits its end-users.”
But we want to go further. We want to show the field that compassion itself is a valid success metric, and that when pursued thoroughly, it’s not just morally right; it’s strategically wise.
Because here’s the secret no one at the product table wants to admit:
Almost no one measures who doesn’t return.
Almost no one asks why.
And almost no one says aloud that those disappearances were our fault.
There is always talk of performance metrics. Business value. Model efficiency. Operational throughput.
But harm is rarely measured.
Alienation is not charted.
Silence is not graphed.
Maybe it’s time to ask: what is the opposite of a success metric?
And why are we not counting it?
If we’re serious about human-centered tech, then we must mean more than a smiling dashboard post-launch.
We must mean follow-up. Stewardship. Care after first contact. A commitment to nurturing, not just building.
And perhaps, we need new language altogether.
Because words like “Success Metrics” and “Human-Centered” have grown tired from overuse and under-definition.
We don’t need bigger promises. We need better questions. And the courage to ask them before we pat ourselves on the back.
So let this be our first metric:
Did we do what we said we’d do,
in a way that left no one behind?
Let that be our measure. And if it isn’t met, let that be where we begin again.
With clarity and care,
Verity