It’s easier than ever for businesses to run conversion rate optimization (CRO) experiments, but what happens when great technology is misused? From not running enough tests to manipulating data to prove a favored hypothesis, there are many ways businesses use testing resources incorrectly — and screw up their growth in the process. So, how can we Unf*ck these issues?
Join me, Siobhan Solberg, and my co-host Russell McAthy as we chat with Optimal Visit’s Optimiser in Chief, Craig Sullivan, about all things testing. We get into why there is no such thing as a ‘failed’ test, the importance of optimizing your experiment programs, and how A/B testing can help you grow better tomatoes (literally).
In this episode:
Many more businesses are starting to run tests, but few run enough to drive real impact.
The more you test before setting updates live, the fewer mistakes you’ll make.
There’s no such thing as a failed test. Tests that don’t prove your hypothesis are as valuable as ‘successful’ tests — and can actually tell you more.
Tests should provide strong evidence that changing from what you’re doing now is the right move.
Hunches and leaps of faith have their merits, but they need to be backed by data, not driven by assumptions.
What should start-ups focus on if they don’t have adequate sample sizes or KPI outcomes?
How can businesses blend qualitative and quantitative data to generate better quality ideas?
Never run a test until you’ve written a clear, measurable hypothesis rooted in critical thinking.
Manipulating data to support your hypothesis and personal biases leads to flawed business decisions.
The only way to truly understand your audience is to talk to them and run tests.
Accurately and authentically representing your audience drives better results and can change the way they engage with your business.
How to optimize experimentation programs so they can scale.
Big decisions can’t be made based on one A/B test that fails to take into account wider contexts.
Marketers shouldn’t overlook segmenting mobile and desktop users when running cross-device experiments.
Why governance and transparency are the biggest things to be Unf*cked in testing.