What Happens When Your User Study Throws You a Curveball?
We’ve recently run usability studies on an application for a financial product where, after a user inputs a couple of screens’ worth of data, a tailored product solution appears. We had a nice prototype, and all was well, except for one thing.
During the usability study, we received the following feedback: the recommendation arrived too fast. Wait, what? Yes, the results arrived too quickly after the user hit send, which made people think they weren’t real. Some of the feedback we heard: “there’s no way it could’ve looked at my data in such a short amount of time”; “I don’t think this is real, because it hasn’t spent any time thinking about it”; “I wouldn’t trust this product, it made me fill in screens of data but then ignored it all when giving me the recommendation”. In some cases, people expected to continue filling in more data and didn’t entirely believe that the screen they were looking at was the solution rather than just another screen requiring more information.
So, in the next iteration, we added a page full of little loading spinners. They took their time, giving the user some insight into what was happening in the background. Users were happy that the application was giving their answers a serious look, and all was well with the world.
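The pattern itself is simple to implement. Here is a minimal sketch of the idea, not our production code: the function names, the status messages, and the minimum-delay value are all illustrative assumptions. The real work runs in parallel with an artificial floor on response time, while staged messages give the wait meaning.

```typescript
// Sketch of a "minimum perceived processing time" wrapper.
// All names and durations here are hypothetical examples.

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Resolve with the real result, but never sooner than minMs after starting.
async function withMinimumDelay<T>(work: Promise<T>, minMs: number): Promise<T> {
  const [result] = await Promise.all([work, sleep(minMs)]);
  return result;
}

// While waiting, cycle through status messages so the delay reads as
// genuine analysis rather than dead air. In a real UI these would update
// a spinner label instead of logging to the console.
async function showProgress(messages: string[], stepMs: number): Promise<void> {
  for (const message of messages) {
    console.log(message);
    await sleep(stepMs);
  }
}
```

In use, you would kick off `showProgress` with messages like “Reviewing your details…”, “Comparing products…” timed so the sequence finishes roughly when the delay floor elapses, then render the result returned by `withMinimumDelay`.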
Even though the app took longer to give people what they wanted, the result was a smoother, more pleasant user experience and increased trust in the recommendations provided.
In our roles as creators and users of digital tools, we spend considerable time and effort optimising everything: we want pages to load faster, apps to respond quicker, and productivity to increase overall. The last thing any of us wants is to look at yet another dreaded progress bar, spinner or, gasp, another hourglass. 99% of the time we strive to avoid these during the design and development of an application. So why was this situation so different? Why did people insist on wanting something slower?
One aspect of the problem is the equation of perceived complexity with perceived value. ‘Good things come to those who wait’: if it takes longer, then it must be a more serious task carrying more value. If I, as the user, spent a precious 10-minute chunk of my time filling out the information, then the system should take its time and look at it properly. Anything else would mean the system is insulting me and my time investment by not even looking at every little detail I just typed in. Right? It seems logical enough once we think about it.
Another aspect has to do with pre-existing mental models. In this case, it’s the traditional process of applying for a financial product. This typically happens as follows: I walk into a bank, I fill out a large stack of papers, and they take it away to debate whether or not I am worthy of the loan or credit card in question. It’s a time-consuming process that requires a lot of investment from me, the user, in the hope of a high reward.
Even though I’m not physically walking into a bank, I carry the same mental model into the online transaction as well. I still need to answer a considerable number of questions, and I still need to fill in pages of data. So why would it be any different on the institution’s side? If they are too snappy with the outcome, it tells me they are just trying to force me into one of their pre-existing solutions instead of coming up with one tailored to suit my specific needs.
That said, I’m not recommending that everyone go and add extra loading time to their application just to increase perceived value and user satisfaction. In this particular case it worked, and it worked for the reasons mentioned above. It led us to a pleasant UX solution that matched people’s expectations more closely.
Aside from the why, it’s also crucial to note how we came to these findings. No person in their right mind would’ve said “I like apps that take time to think before giving me results” if we had asked them directly. We needed to test, and test with people who aren’t us.
We, as specialists, are in the “must-optimise-everything” mindset, while our users aren’t necessarily. Sure, we would’ve eventually noticed our metrics underperforming after launch, but at a much, much higher cost.
The most cost-effective and reliable way to uncover what these expectations and pre-existing mental models might be was to test, and re-test those ideas with actual people.