Before my time at BlueDot, the company had developed a consumer-facing application aimed at informing the general public about the spread and effects of infectious diseases, especially in relation to travel.
Before starting my sprint, I conducted market research and competitive analysis on similar applications in the market. Research done by a previous intern had focused on self-diagnostic tools with a tinge of travel on the side; my main area of focus was the intersection of travel and health.
From my research, I discovered that a variety of immunization coaches and travel guides already exist. Health applications enjoyed very large user bases, and travel helpers were similarly popular, but applications combining the two tended to fall short. Users seemed more comfortable keeping travel guides separate from their health and immunization history.
I embarked on a sprint to gather user insights on what could be improved for the next iteration of the product. Using the Google Ventures Sprint process (documented in 2016 by Jake Knapp, John Zeratsky, and Braden Kowitz) as a guideline, I created a prototype of a mobile application, which I named Pip.
I was very idealistic in my sprint, in the sense that I believed I could change societal perception of infectious diseases by instilling reciprocal values and expectations. I felt that if users could see that the company cared about them, it would feel natural for them to care about the topic we were promoting in return. I approached this concept through chatbots, because I felt their dynamism could translate into a friendlier experience. (Spoiler: this ultimately was not a success.)
Long-term goal: to make users care about their health when they travel, the way we care about them.
According to Jakob Nielsen, five users are enough to surface most usability problems and reveal user trends. At BlueDot, I recruited seven testers who answered a pre-testing questionnaire, tested the application, and then answered a set of behavioural questions designed to either support or disprove the feasibility of my goal.
Because my main focus was on chatbots, questions and feedback centred on this concept. All of my users had heard of or used chatbots before, but none were frequent users. They felt that an application trying to do everything was untrustworthy (quantity vs. quality), and they disliked the nontraditional input method, which is why many had difficulty navigating the application. Lastly, the concept of a machine trying to be human was off-putting to most.
To be blunt, I would consider this sprint a failed experiment. Not a failure in the sense that I regret doing it, but I failed to validate my personal hypothesis (I was too idealistic), and my inexperience in user research and testing showed.
However, there were some interesting takeaways from it. Unbeknownst to me, a previous intern at BlueDot had also tried to implement a chatbot for her design sprint, and I had unknowingly followed the same path. With both of our prototypes failing to resonate with users, I realized that chatbots were not the way to introduce interactivity to our audience. I couldn't expect that I alone would change the general public's perspective on infectious diseases, and I felt almost foolish for thinking I could. But it was a lesson learned, and I'm glad I had the chance to figure this out through a weeklong sprint; that leeway and freedom to hypothesize and fail is the reason these sprints exist in the first place.
I later had the chance to conduct a second sprint, in which I applied many of the lessons learned from this experiment.