Experiment to validate risky assumptions

When in doubt, test it!

For about a year, I got to work on an incredibly challenging and fascinating product—an au pair and host family matching platform. I jumped into the product team at a transitional time for the program. The company was overhauling its legacy online platform, which host families and au pairs used to match with each other. When I began, I split my time between the marketing team (website UX/UI design) and the product team (both incremental UX updates to the old platform and new features for the overhaul platform). Towards the end, I took on the unofficial role of Product UX Researcher (with a side of design) to fill a crucial gap I saw in the product process.


  • User research
  • Rapid prototyping
  • User testing
  • Workshop facilitation
Some quick au pair personas I created early on, based on interviews with au pairs and au pair recruitment staff. These didn’t directly inform the experiments I mention below, but they built my foundational knowledge of au pairs.

Collaborating across disciplines

When faced with particularly meaty problems, I facilitated several design thinking workshops with stakeholders from all parts of the business. I even traveled to Switzerland to involve the overseas offices (with an international product, I needed their input too). For many months, the most impactful work I did was research, rather than what I was hired to do: UI design.

As more and more of my colleagues, none of whom worked in product, approached me with ideas to test or research topics to look into, I realized I had sparked something of a research revolution in the company (well, I like to call it that). We prioritized the ideas, or “experiments,” to test based on the outcomes of the workshops.

Action shots from a couple of the many ideation sessions we ran to crack the toughest problems
A closer look at my research process

  1. Start with research goals or assumptions to validate or disprove. Work with the company to identify the most widely held assumptions to research. Unfocused research, while helpful for building a basic foundation, might miss the mark on actually solving the problem.
  2. Gather participants (5–10 per round). I often worked with customer-facing staff to recruit the exact demographic I needed. In the past, for new products, I used my own networks or sites like UserTesting to find participants.
  3. Prepare a folder for documentation. Set up a Dropbox, Drive, or wiki folder to track participant tests in a spreadsheet, keep interview recordings, and take notes. Make it widely available and well labeled so that anyone in the organization can find it in the future.
  4. Write a quick interview/testing script. This includes an introduction to set the context, pre-test questions, in-test tasks, and post-test questions (or just interview questions if there is nothing to test). Know that you can stray from the script if needed.
  5. Conduct the interviews, and don’t forget to record them and take notes. It’s preferable to have someone else take notes so you can focus on the interview. Revisit your goals and assumptions after every interview and update the testing script as needed.
  6. Synthesize the interviews and look for patterns and themes. Summarize those themes, relate them back to each assumption or goal, and determine whether each assumption is valid, invalid, or requires further exploration.
  7. Share, share, share! Invite colleagues from different disciplines to engage with the research results. I enjoyed presenting in a keynote presentation and including sound bites from the interviews to emphasize the results.

Rapid prototyping, rapid validation

I worked with several stakeholders to prioritize three ideas to test each week. I created rough prototypes using Sketch and InVision, and checked in with the developers about feasibility at a high level so I wouldn’t present anything to stakeholders that couldn’t be built. For a few months, my life was a sea of never-ending user tests. Each week, I presented my findings to senior management, and we discussed how best to move forward.

It was both frustrating and humbling to keep mocking up prototypes on the legacy platform for the many experiments we ran. Since the overhaul system was still far from MVP-ready, any urgent change had to be made on the old system, even though it was clearly not ideal. However, knowing I couldn’t really change the UI or architecture of the system allowed me to stay laser-focused on solving specific problems.

I can't show any actual product screenshots, but here's a glimpse into an early stage in my prototyping process.

Small wins and big strides

Through this method of rapid research sprints, we were able to make several changes to address the many pain points of the process. We didn’t solve every single problem, but we made an impact on both sides (au pair and host family). The au pair experience is more complicated than I ever would have guessed from the outside.

I learned so much about planning and executing research, communicating with a wide group of stakeholders, and using research to make impactful (and occasionally risky) business decisions. At the very least, I’m so happy to have encouraged an excitement about user research in my colleagues.

Want to learn even more about my research process? Check out this article I published recently.
