One of the best moments as a product manager is watching a feature you’ve built start to gain traction in the market. Whether you measure success in unique downloads, clicks or signed contracts, there’s nothing quite like the excitement that comes with knowing that users love your product.
This was part of the motivation for creating Wizeline: we wanted to help companies everywhere build products that their customers love. And we believed that, with the right methodology, it’s possible to make wiser product decisions by applying user engagement data in a structured, automated fashion.
Which brings me to the picture below, which I took in a hotel elevator in Mexico City. (I’m known for my breathtaking travel photography.)
Being the type whose mind wanders while being conveyed up and down, I started wondering about these two buttons. You can see that the “close” button has been pressed a good deal, its black paint wearing off at a faster rate than that of the “open” button.
After several rides I wondered, “Does this mean it’s a more successful feature?”
The evidence suggests so. An engineer or product manager could be forgiven for recommending a bigger “close” button be included in elevator 2.0.
But then I got to thinking: there’s another method for achieving the same outcome as the “open” button — namely by jutting your arm (or some other appendage) into the closing doors.
Lesson #1: When assessing the success of a particular feature, consider it within the context of the entire product. Just because one feature is under-used doesn't necessarily mean users find it useless; they may simply be achieving their desired outcome by other means.
Needless to say, I was feeling pretty good about myself. I had turned a few otherwise unproductive elevator rides into an entire blog post…
But then on my fifth or sixth ride, something new occurred to me. Compared to most elevators, this particular lift was perceptibly slower. Not so much that you'd notice on a ride or two, but after several trips up and down the building, it became clear that the doors were taking 1-2 seconds longer to start closing. Should I revise my original user hypothesis? Were patrons simply responding to a slow user experience, impatiently pushing the close button to get on with their ride?
This was my second observation: Just as a metric showing heavy usage could be indicative of a feature’s success, it can just as easily be symptomatic of a flawed product. An easy example is average time spent on page. My high numbers mean I’ve created an awesomely sticky user experience, right?!
If you’re looking only at time spent on page, this is the natural conclusion. But your high engagement numbers might correspond to a spike in latency or page-loading times. Users may just be sitting around, waiting for your app to load.
Lesson #2: To get an accurate picture of reality, look at multiple key performance indicators (KPIs) in conjunction.
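To make the time-on-page example concrete, here's a minimal sketch (with made-up session data and field names, purely illustrative) of how pairing the engagement metric with a latency metric changes the story:

```python
# Hypothetical session log: total seconds on page, plus seconds the user
# spent waiting for the page to load. One slow session inflates "engagement."
from statistics import mean

sessions = [
    {"time_on_page": 45, "load_latency": 1.2},
    {"time_on_page": 50, "load_latency": 1.0},
    {"time_on_page": 48, "load_latency": 14.0},  # mostly waiting, not engaging
]

avg_time = mean(s["time_on_page"] for s in sessions)
# Subtract waiting time to approximate genuine engagement.
avg_engaged = mean(s["time_on_page"] - s["load_latency"] for s in sessions)

print(f"avg time on page: {avg_time:.1f}s, avg engaged time: {avg_engaged:.1f}s")
```

Looked at alone, average time on page says users are sticking around; read alongside latency, a chunk of that "stickiness" turns out to be a loading spinner.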
Clearly, this was the most productive time I'd ever spent on an elevator. But it wouldn't be a blog post with just two lessons. I needed a third to round things out…
Lesson #3: Ensure your product analysis is repeatable and — most important — make it happen automatically. Failing to do so will result in ad-hoc, backward-looking analysis that doesn’t yield consistently useful insights.
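One way to make that repeatable, sketched below with hypothetical names and thresholds: wrap the KPI check in a function that a scheduler (cron, a CI job, etc.) can run unattended, instead of re-deriving the numbers by hand each time:

```python
from statistics import mean

def kpi_report(sessions, latency_threshold=5.0):
    """Compute engagement KPIs together and flag latency-dominated results.

    Meant to run on a schedule so the analysis is repeatable rather than
    ad hoc. Field names and the threshold are illustrative, not prescriptive.
    """
    avg_time = mean(s["time_on_page"] for s in sessions)
    avg_latency = mean(s["load_latency"] for s in sessions)
    return {
        "avg_time_on_page": avg_time,
        "avg_load_latency": avg_latency,
        # High "engagement" is suspect when users spent it waiting.
        "latency_suspect": avg_latency > latency_threshold,
    }

report = kpi_report([
    {"time_on_page": 45, "load_latency": 1.2},
    {"time_on_page": 48, "load_latency": 14.0},
])
print(report)
```

The point isn't the particular metrics; it's that the same check runs the same way every time, so insights come from trends rather than one-off, backward-looking queries.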
Anyway, you’ve made it through my somewhat random musings about elevator buttons and feature engagement. If you want to learn more about how we’re working to solve these and similar challenges, give us a shout.