Learnings with A/B/n Testing as a Product Manager

In today's era of continuous product development and optimization, you can never really consider your product the best. It might be the best today, but will it stay that way? Increasing competition and changing customer needs keep companies iterating and testing new ideas to make their products stand out in the market. If you don't grow your product, you lose market share, which in turn hurts business growth. There is never really a full stop to improvement unless you take a back seat or pivot.

As a Product Manager focused on growth and user experience, I have learned a few things that stood out to me while experimenting with new product ideas. Shipping fast is important, but knowing how to make your test variants succeed is the real key. I would love to share my key learnings and tips for success with experimentation. They could be helpful to anyone new to A/B testing or interested in learning more: PMs, PMMs, marketing folks, and aspiring PMs alike. In the next section, you will find these learnings along with a chart that puts my thoughts into a continuous cycle.

Firstly, keep your hypothesis simple and targeted at the user problem you are trying to solve. The clearer and more focused your hypothesis, the more useful the results. Results will rarely (maybe 90% of the time) match your expectations, but if you know what you were trying to solve, you will be able to break down the results data and gather useful insights. The key is to assess the results and learn, so that you have both a hypothesis and its validation to plan your next iteration. Moreover, directional data can prompt you to start thinking about v2 instead of waiting for the test to be called.

Secondly, maintain clarity on your target metrics. It's fine to track more than one KPI depending on the product and the changes in the flow, but you should know which is primary versus secondary, tertiary, and so on. If you don't prioritize your metrics before planning and starting the test, the results might confuse you and impair your decision-making. You might be inclined to launch something that performs well on a secondary KPI while it's actually losing on the primary one, and overall that can hurt product growth. Moreover, it's recommended to set significance thresholds up front to decide whether something is really making an impact, positive or negative, or whether the results are skewed by too few participants or other relevant factors.
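For a concrete sense of what a significance threshold looks like in practice, here is a minimal sketch of a two-proportion z-test on conversion counts. All numbers are hypothetical, and in a real setup your experimentation platform or a stats library would run this for you:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_* = converted users, n_* = total participants per variant.
    Returns (z statistic, two-sided p-value). Illustrative sketch only.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of "no difference"
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf, doubled for a two-sided test
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: control converts 500/10,000, variant 560/10,000
z, p = two_proportion_z_test(500, 10_000, 560, 10_000)
significant = p < 0.05  # pick your threshold before the test starts
```

With these made-up numbers the variant looks directionally better but the p-value lands just above 0.05, which is exactly the situation where a pre-agreed threshold keeps you honest.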

Speaking of interfering factors: never consider your test variant results final until you have done a sanity check on other factors that could be skewing them. There can be many, for example: another test running in a similar product area and targeting the same user group at the same time, uneven participant distribution between the variants, quality issues with the variant experience on the production site, and other situation-specific ones. If you really don't find anything buggy, it has always helped me to simply accept that it didn't work! And that's fine, as long as you capture learnings on why it didn't work and start planning v2.
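One of those sanity checks, the participant-count distribution, can even be automated as a Sample Ratio Mismatch (SRM) check. Below is a rough sketch for a 50/50 split using a chi-square goodness-of-fit test with one degree of freedom; the counts and the strict alpha of 0.001 (a common SRM convention) are illustrative assumptions, not a prescription:

```python
from math import erf, sqrt

def srm_check(n_a, n_b, expected_ratio=0.5, alpha=0.001):
    """Sample Ratio Mismatch check for a two-variant test (sketch).

    Compares observed participant counts against the expected split
    using a chi-square goodness-of-fit test with 1 degree of freedom.
    Returns (split_looks_healthy, p_value).
    """
    total = n_a + n_b
    exp_a = total * expected_ratio
    exp_b = total * (1 - expected_ratio)
    chi2 = (n_a - exp_a) ** 2 / exp_a + (n_b - exp_b) ** 2 / exp_b
    # For df=1, the chi-square tail equals a two-sided normal tail
    p_value = 2 * (1 - 0.5 * (1 + erf(sqrt(chi2) / sqrt(2))))
    return p_value >= alpha, p_value

# Hypothetical participant counts per variant
ok, p_srm = srm_check(50_320, 49_680)       # mild imbalance
bad, _ = srm_check(52_000, 48_000)          # clearly broken split
```

A failed SRM check usually means the randomization or tracking is broken, so the conversion numbers should not be trusted no matter how good they look.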

Lastly, minimize the number of variables within one variant. When you are testing multiple things on the same page or across different pages, try to minimize the untested variables within each variant. Running multiple variations against the control is often a better approach than bundling variables into one. Sometimes we don't realize that even a small change, like a CTA label tweak, can significantly impact conversion or click-through rate. Keeping variables to a minimum helps in synthesizing the results and finding the reasons for success or failure, so you can differentiate between what works and what doesn't.
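As a toy illustration of the one-variable-per-variant idea, suppose each arm of an A/B/n test isolates exactly one change. Comparing each arm against the control separately makes the driver of any lift unambiguous; the variant names and numbers below are made up:

```python
# Hypothetical A/B/n results: each variant changes exactly one thing
control = {"conversions": 500, "visitors": 10_000}
variants = {
    "cta_label": {"conversions": 545, "visitors": 10_000},
    "hero_image": {"conversions": 510, "visitors": 10_000},
}

base_rate = control["conversions"] / control["visitors"]
# Relative lift of each isolated change versus the control
lifts = {
    name: (v["conversions"] / v["visitors"] - base_rate) / base_rate
    for name, v in variants.items()
}
for name, lift in lifts.items():
    print(f"{name}: {lift:+.1%} relative lift vs control")
```

Had both changes been bundled into a single variant, a combined +11% lift would tell you nothing about which change did the work, or whether one change was actually dragging the other down.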

I hope you found this useful, and that you enjoy experimenting, learning, iterating, and growing as much as I do! I would also love to hear tips from your own experience, and any feedback that helps me keep learning and improving.