The Question I Can’t Stop Thinking About
Why am I doing what I do?
When I moved to the US, something shifted that I didn’t expect. I started noticing things. Small, constant things. The way a direct “no” lands differently in English than in Korean. The way American friendliness reads as superficial to someone raised in a culture where closeness is earned slowly. The way I instinctively soften a disagreement with a senior, while my American colleagues walk into the same conversation like equals.
None of this was new to me. I’d lived in both worlds my whole adult life. What was new was realizing that AI had no idea any of this existed.
I asked a model what to do when my boss asked me to work the weekend. It said: “Set your boundaries. You have prior commitments.” Perfectly reasonable advice, if you’re in SF. In a Korean company, that sentence could end your career trajectory. Not because it’s wrong, but because the behavior doesn’t fit the world it’s landing in.
That’s the question I can’t stop thinking about: who decides what “helpful” means?
I’ve been asking this question longer than I realized
For eight years, I was a growth PM in Korea. B2C platforms, 35M users. I was good at it. J-curves, Series funding rounds, the metrics that make investors smile.
But growth leaves marks. You convert users, optimize for aha moments, push retention, cross-activate for AMPU. Somewhere in that machinery, you start to wonder: are we helping people, or are we moving them? Did the user choose this, or did we choose for them?
I started to believe that real growth comes from the product’s own value. If users need to be tricked into staying, the product isn’t good enough. The best thing you can build is something people benefit from without having to study how it works.
That belief pulled me out of the growth loop and eventually to AI. And at Scale AI, I saw the same question wearing different clothes.
The gap no one was measuring
At Scale, I evaluated over 45 models. Different companies, architectures, training data. Same patterns. Models fixated on keywords instead of context. Fabricated evidence when uncertain. Agreed with whatever the user said.
The industry was in a capability race. Smarter, faster, more tokens. But nobody was asking: when this model actually sits across from a person, is the conversation any good?
I’d spent years asking “are we actually helping users or just moving them?” in fintech. Now I was watching AI do the same thing at a much larger scale, optimizing for benchmarks that don’t measure what matters to the person on the other side.
Two outsiders at once
Here’s the thing about living between cultures: you become an outsider everywhere. In the US, I notice what Americans take for granted. The assumption that directness is honesty, that individual autonomy is the default. In Korea, I now notice what I used to take for granted. The unspoken weight of hierarchy, the way silence carries meaning.
Models are trained in one culture and shipped to the world. They don’t know what I know from standing in both places. They can’t feel the awkwardness I feel every day: the gap between how the system thinks you should act and how your world actually works.
That gap is what I’m trying to close. That’s Kairos, my research project designing how AI models should behave across cultures. Not translating words, but calibrating behavior. So that “helpful” actually means helpful, wherever you are.
This is my story, told across many posts. If these questions resonate, I’d love to hear from you.