I find A/B testing to be a misdirection much of the time.
Testing is good, but A/B testing is problematic.
For a good A/B test, you need a good control group and a test group.
Because you likely have so much else going on, it's hard to get a control group (a set of users for which nothing else changed) for long enough to run an effective test.
You also need a clear path or CTA to actually test.
It's really hard to attribute meaningful downstream metrics, like sign-ups or conversions, to a new design upstream, because so many other factors along the way influence the flow.
For an A/B test, you need sheer numbers, i.e. *traffic*.
It's easy to get basic results from an A/B test, but really hard to get 'statistical significance'. What often ends up happening is that A beats B 55/45, then B beats A 52/48, and it keeps flipping back and forth.
The two designs usually aren't different enough to move the numbers in a meaningful way.
You might get there if you ran the test on a large enough audience, or managed to hold a control group steady for long enough, but it's usually not worthwhile; the sketch below shows the sample sizes involved.
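To make the traffic problem concrete, here's a minimal sketch of the standard two-proportion sample-size calculation (assuming a two-sided z-test at 95% confidence and 80% power; the 5% baseline conversion rate and 10% relative lift are hypothetical numbers, picked just to show the order of magnitude):

```python
import math
from scipy.stats import norm

def sample_size_per_variant(p_control, p_variant, alpha=0.05, power=0.80):
    """Approximate users needed per variant to detect the difference
    between two conversion rates with a two-sided z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_power = norm.ppf(power)          # 0.84 for 80% power
    p_bar = (p_control + p_variant) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p_control * (1 - p_control)
                                       + p_variant * (1 - p_variant)))
    return math.ceil(numerator ** 2 / (p_control - p_variant) ** 2)

# Hypothetical example: 5% baseline conversion, hoping for a 10% relative lift.
print(sample_size_per_variant(0.05, 0.055))  # ~31,000 users per variant
```

At those rates you'd need roughly 31,000 users *per variant* before a result means anything, which is exactly why a 55/45 split on a few hundred visitors flips the other way the next week.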
A/B testing was popularized by the tech behemoths, like Amazon or http://Booking.com, which have TONS of traffic and whose goal is to optimize specific flows connected to revenue (usually shopping cart/checkout flows).
Their habits have spilled over to smaller tech companies that want the same sort of results, but have neither the same reasons to run such tests nor the resources.
A/B testing is often used as a way to settle disagreements within teams, but it's time-consuming and costly! https://twitter.com/davatron5000/status/979383471452942336
For most companies, you're better off with qualitative tests, or simply iterating fast and measuring what's working and what's not.
