BinHong's Newsletter - A/B Testing

Basic guide to running a good A/B test

This is a basic introduction to running a good A/B test. A/B testing is a method where your user pool is segmented into multiple groups, letting you test different product interactions and measure how those changes affect user behavior. For any metric- or data-driven team, A/B testing is a critical tool for measuring success.
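As a rough sketch of what that segmentation can look like in practice, here is a hash-based bucketing function. The user_id and experiment names and the even split across variants are illustrative assumptions, not a prescription for how your system should do it.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into an experiment variant.

    Hashing (experiment, user_id) means the same user always lands in the
    same group, and different experiments bucket independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Example: the same user always gets the same variant for this experiment
print(assign_variant("user_42", "promo_discount_test"))
```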

This is part of a series (The Opinionated Engineer) where I share my strong opinions on engineering practices.

Goal + Guardrail metrics

“When a measure becomes a target, it ceases to be a good measure” - Goodhart’s Law

Goal metrics serve as the target: the specific thing you want to change or improve. While I largely agree with Goodhart’s Law, I also believe that a sufficiently comprehensive set of metrics mitigates the downside (or at least minimizes it), keeping the target relatively “a good measure”. This is where guardrail metrics come in: they ensure we aren’t just “sacrificing x to boost y”, especially in an unsustainable manner.

As an example, if you want to increase usage of your app, you could offer generous discounts (or even pay users outright), but that’s obviously unsustainable. So you should set a clear budget for how much you can spend on these promotions and / or a maximum cost per new active user acquired that you’ll allow for such a project to ship. Here your goal metric would be “new active users acquired” while your guardrail would be cost (lost revenue). Both need to look good in the experiment before you decide to ship the product change.
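To make that ship / no-ship check concrete, here is a minimal sketch. The metric names, thresholds, and significance cutoff are made-up placeholders for this hypothetical promotion experiment, not a prescribed framework.

```python
def should_ship(goal_lift: float, goal_p_value: float,
                cost_per_new_user: float, max_cost_per_new_user: float,
                alpha: float = 0.05) -> bool:
    """Ship only if the goal metric improved significantly AND the
    guardrail (acquisition cost) stayed within the agreed budget."""
    goal_ok = goal_lift > 0 and goal_p_value < alpha
    guardrail_ok = cost_per_new_user <= max_cost_per_new_user
    return goal_ok and guardrail_ok

# Example: +8% new active users (p = 0.01) at $3.20 per user vs a $5.00 budget
print(should_ship(goal_lift=0.08, goal_p_value=0.01,
                  cost_per_new_user=3.20, max_cost_per_new_user=5.00))
```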

. . .

Read the rest in the blog!
