Your personal health and travel assistant.


My first major project during my internship at BlueDot involved leading and conducting a weeklong design sprint (modelled on GV Sprints) for their first consumer application. The company's mission is to educate the general public about the spread and effects of infectious diseases, especially in the travel space. As of 2017, BlueDot's first consumer application is on the market; to understand what users want and need, and to attempt to hit that "sweet spot", I embarked on a sprint.

Market Research

Before my sprint, I conducted market research and competitive analysis on similar applications in the market. Research done by a previous intern had focused on self-diagnostic tools with a tinge of travel on the side. My main areas of focus were travel, health, and the intersection of the two.

From my research, I discovered that a variety of immunization coaches and travel guides exist. While health applications enjoyed very large user bases and travel helpers were also very popular, combined travel-and-health applications tended to fall short. Users seemed more comfortable keeping travel guides separate from their health and immunization history.

Assignment Goal

Because the company's mission is so personal, my goal for this sprint was to try to change users' perceptions about health, from our standpoint.
"How can we make users care about their health when they travel, the way that we do?"

Google Ventures Sprint

In 2016, Jake Knapp, John Zeratsky, and Braden Kowitz of Google Ventures wrote a book about their design sprint process, which they regularly use to solve problems and test ideas in just one week. I mainly followed the steps outlined in their book to create a prototype of a mobile application, named Pip.

  • Day 1: Start at the end and form a long term goal.

  • Day 2: Understand the problem and choose a target.

  • Day 3: Choose the solution.

  • Day 4: Prototype.

  • Day 5: Test.


Planning & Conceptualization

I was interested in developing a chatbot, because my personal hypothesis hinged on reciprocity of care and on interactivity.

Mockups & Prototype

User Testing & Results

User Testing

According to Jakob Nielsen, testing with 5 users is already enough to uncover most usability problems and understand user trends. At BlueDot, I managed to recruit 7 testers, who were asked to answer a pre-testing questionnaire, test the application, and answer a set of behavioural questions to either support or disprove my personal hypothesis. From their results, I was able to gain insights into attitudes towards health and travel, and towards the mode of delivery.


Among the 7 users I tested, reactions were mixed. My testers:

  • Ranged from moderately knowledgeable about chatbots to suspicious and uncomfortable with them; none were frequent chatbot users.

  • Included a third who travelled more than 3 times per year.

  • Were fairly diverse, with no more than two testers drawn from any one department.

General comments (positive and negative):

  • “I would rather use another application to check the weather (something more trustworthy)”

  • “I am not used to chatbots; I don’t trust them/seems gimmicky/more used to traditional input methods.”

  • “Conversational & friendly” vs. “machine trying to be human”.

  • Directives were not very clear – users were still prone to selecting "view x" options and searching for manual entry options despite having played with the chat feature.


To be blunt, I would probably consider this sprint a failed experiment. Not a failure in the sense that I regret doing it, but I failed to validate my personal hypothesis (I was too idealistic), and my inexperience in user research and testing showed.

However, there were some interesting takeaways. Unbeknownst to me, a previous intern at BlueDot had also tried to implement a chatbot for her design sprint, and I unknowingly followed the same path. With both of our prototypes failing to resonate with consumers, I realized that chatbots were not the way to introduce interactivity to our audience. I can't go around expecting that "I" will be able to change the general public's perspective on infectious diseases, and I felt almost foolish for thinking I could. But it was a lesson learned, and I'm glad I had the chance to figure this out via a weeklong sprint (which is why these sprints exist in the first place: the leeway and freedom to hypothesize and fail).

I later had a chance to conduct a second sprint, where I applied much of what I learned from this experiment.