Experiment to validate risky assumptions
When in doubt, test it!
For about a year, I got to work on an incredibly challenging and fascinating product—an au pair and host family matching platform. I jumped into the product team at a transitional time for the program. The company was overhauling its legacy online platform, which host families and au pairs used to match with each other. When I began, I split my time between the marketing team (website UX/UI design) and the product team (both incremental UX updates to the old platform and new features for the overhaul platform). Towards the end, I took on the unofficial role of Product UX Researcher (with a side of design) to fill a crucial gap I saw in the product process.
- User research
- Rapid prototyping
- User testing
- Workshop facilitation
Collaborating across disciplines
When faced with particularly meaty problems, I facilitated several design thinking workshops with stakeholders from all parts of the business. I even traveled to Switzerland to involve the overseas offices (with an international product, I needed their input too). For many months, the most impactful work I did was research, rather than what I was hired to do—UI design.
As more and more of my colleagues, none of whom worked in product, approached me with ideas to test or research topics to look into, I realized I had sparked something of a research revolution in the company—well, I like to call it that. We prioritized the ideas, or “experiments,” to test based on the outcomes of the workshops.
A closer look at my research process
- Start with research goals or assumptions to validate or disprove. Work with the company to come up with the most widely held assumptions to research. Unfocused research, while helpful for a basic foundation, might miss the mark on actually solving the problem.
- Gather participants (5–10 per round). I often worked with customer-facing staff to recruit the exact demographic I needed. In the past, for new products, I would use my own networks or sites like UserTesting to find participants.
- Prepare a folder for documentation. Set up a Dropbox, Drive, or wiki folder to track participant tests in a spreadsheet, keep interview recordings, and take notes. Make it widely available and well-labeled so that anyone in the organization can find it in the future.
- Write a quick interview/testing script. This includes an introduction to set the context, pre-test questions, in-test tasks, and post-test questions (or just interview questions if there is nothing to test). Know that you can stray from the script if needed.
- Conduct the interviews, and don’t forget to record them and take notes. It’s preferable to have someone else take notes so you can focus on the interview. Revisit your goals and assumptions after every interview and update the testing script as needed.
- Synthesize the interviews, looking for patterns and themes. Summarize those themes, relate them back to each assumption or goal, and determine whether each assumption is valid, invalid, or requires further exploration.
- Share, share, share! Invite colleagues from different disciplines to engage with the research results. I enjoyed presenting the findings in a keynote presentation and including sound bites from the interviews to bring the results to life.
Rapid prototyping, rapid validation
I worked with several stakeholders to prioritize three ideas to test each week. I created rough prototypes using Sketch and InVision, and checked in with the developers about feasibility at a high level so I wouldn’t present anything to stakeholders that couldn’t be built. For a few months, my life was a sea of never-ending user tests. Each week, I presented my findings to senior management, and we discussed how best to move forward.
It was both frustrating and humbling to keep working off the legacy platform to mock up prototypes for our many experiments. Since the overhaul system was still far from MVP-ready, any urgent change had to be made on the old system, even though that was clearly not ideal. However, knowing I couldn’t really change the UI or architecture of the system allowed me to stay laser-focused on solving specific problems.
Small wins and big strides
Through this method of rapid research sprints, we were able to make several changes that addressed the many pain points of the matching process. We didn’t solve every single problem, but we made an impact on both sides (au pair and host family). The au pair experience turned out to be more complicated than I ever would have guessed from the outside.
I learned so much about planning and executing research, communicating with a wide group of stakeholders, and using research to make impactful (and occasionally risky) business decisions. At the very least, I’m happy to have sparked an excitement about user research in my colleagues.
Want to learn even more about my research process? Check out this article I published recently.