A/B testing is an optimization process that can easily get trapped in suboptimal results. 1/6 https://twitter.com/greglinden/status/1334997236225171458
For example, if you measure advertising success as clicks on ads and ad revenue, you can win A/B tests by adding more ads to the page. After several of these A/B tests, your site will be filled with ads, causing people to leave in frustration. 2/6
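Here's a toy sketch of that trap (all numbers invented, nothing here is real data): each test compares clicks at the *current* audience size, so one more ad always wins the short test window, while the churn it causes only shows up after the change ships.

```python
# Toy model of the trap: each added ad wins the short-horizon A/B test
# (more clicks per visitor), but every ad also drives away some of the
# audience, so long-run revenue eventually falls. Numbers are made up.

def test_window_revenue(ads: int, visitors: float) -> float:
    """Revenue as measured inside a typical two-week test window."""
    return visitors * ads * 0.01  # each ad earns a little per visitor

def retained_visitors(ads: int) -> float:
    """Audience left once ad-weary users churn (only visible long-term)."""
    return 1000 * (0.7 ** ads)  # assume each ad drives away ~30% of users

visitors = 1000.0
for ads in range(1, 8):
    # The test compares variants at the current audience size, so one
    # more ad always measures as a win...
    wins = test_window_revenue(ads, visitors) > test_window_revenue(ads - 1, visitors)
    # ...but the churn it causes lands after the test has already shipped.
    visitors = retained_visitors(ads)
    print(f"{ads} ads: test says {'WIN' if wins else 'LOSS'}, "
          f"long-run revenue = {test_window_revenue(ads, visitors):.1f}")
```

Every step measures as a win, yet long-run revenue peaks and then falls: the better state (fewer ads, bigger audience) is never reachable one greedy test at a time.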
Many news sites experiment with adding clickbait ("You won't believe what happens next!"), which wins A/B tests on most engagement metrics but, over the long term, repels readers looking for actual news. 3/6
In both cases, run that optimization long enough and you'll filter your users down to the few willing to tolerate your crappy site for whatever reason. Everyone else is long gone, and your user base and business dwindle. 4/6
A/B testing is an incremental process. It struggles with non-incremental changes like major interface overhauls, new designs, new products, or removing scams. Novelty and learning effects kick in, often over long time periods, and it's hard to get the metrics right for these. 5/6
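To see why novelty effects bite, consider a hypothetical redesign whose engagement spikes at launch and then settles below the old baseline (again, made-up numbers): a two-week test reads it as a win, the steady state is a regression.

```python
import math

# Hypothetical novelty effect: a redesign gets an engagement bump that
# decays over weeks, then settles *below* the old baseline.

BASELINE = 100.0  # engagement of the old design, arbitrary units

def redesign_engagement(day: int) -> float:
    """Decaying novelty bonus on top of a lower steady state."""
    return 95.0 + 20.0 * math.exp(-day / 10.0)

def mean_over(days: range) -> float:
    return sum(redesign_engagement(d) for d in days) / len(days)

# A two-week test sees the spike and calls the redesign a win;
# by day 90 the novelty is gone and it's actually worse than before.
print(f"days 1-14:   {mean_over(range(1, 15)):.1f} vs baseline {BASELINE}")
print(f"days 90-104: {mean_over(range(90, 105)):.1f} vs baseline {BASELINE}")
```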
Websites should treat removing disinformation as a long-term investment. With clickbait and scams gone, some of your users will engage less (the clickbait is gone, after all). But you're investing in eventually winning back the people your crap drove away. 6/6