Why I stopped asking "what do users want?"

By Muchammad Ersan Ramadhan · April 2026

Early in my career, I thought user research simply meant asking users what they wanted.

I would sit across from a teacher at Cakap, a student trying to complete an exercise, or a zoo visitor navigating an unfamiliar app, and I would ask some version of the same question: "What would make this better for you?"

They would answer. I would write it down. I would take it back to the team and say, "users want X." We would build X. Sometimes it worked. Often it didn't.

It took me a few years to understand why.

People are very good at describing their current frustrations. They are not good at designing their own solutions.

This is not a criticism; it's just how cognition works. Users experience friction. They feel it viscerally. But the gap between "this is frustrating" and "this specific feature would fix it" is huge, and it requires a kind of systems thinking that most people don't apply to products they use casually.

When I was researching voice-based chatbots for Melbourne Zoo as part of my master's program, I interviewed 11 participants about their zoo visit experiences. I showed them a video of an existing chatbot and asked for their reactions. What I got back was a mix of insight and wishful thinking. Participants would say things like "I would want it to know everything about every animal instantly," which is a user feeling (I want complete information) dressed up as a feature request (complete database).

The insight that actually shaped the design didn't come from any direct answer. It came from noticing that participants with educational agendas (like students) and families with young children had fundamentally different expectations than visitors there purely for recreation. One group wanted accuracy and depth. The other wanted interaction and surprise. The same product, different mental models. No participant ever said that explicitly. It was the result of listening across many conversations and looking for patterns.

That experience changed how I run discovery.

I stopped asking "What do you want?" Here is what I ask instead.

"Walk me through the last time you tried to do this. What happened?" This question targets actual behavior, not imagined behavior. People remember specific moments more accurately than they describe general preferences.

"What did you do when it didn't work?" The workaround is often more revealing than the complaint. A teacher who screenshotted a lesson plan and sent it over WhatsApp because the platform's sharing function was too slow is telling me the entire product story in one sentence.

"What would make you feel like this was a waste of your time?" Asking about failure conditions surfaces the things users care most about protecting. It's a faster path to core values than asking what they want.

"If you had to explain this to a friend in one sentence, what would you say?" This is a comprehension check and a positioning test at the same time. If users can't explain a feature simply, there is either a communication problem or a product clarity problem, and both are the PM's responsibility to fix.

Data confirms. Research explains.

I have worked in environments where the instinct is to solve every product question with a dashboard. Track the metric, see the number go up or down, make a decision. Data is essential, but it only tells you that something is happening. It doesn't tell you why.

The PM's job isn't to give users what they ask for

Henry Ford's famous line captures something real: if he had asked people what they wanted, they would have asked for a faster horse. The point isn't that users are wrong. The point is that users live inside their current mental model, and a useful product sometimes requires expanding that model rather than optimizing within it.

The PM's job is to hold the user's frustration in one hand and the system's constraints in the other

That requires listening. Not just to the answers, but to what's behind them.