In a recent blog post I talked about what it’s like when you have what seems like unlimited testing resources and the level of success possible. I also showed how hard it is to have a big success without a careful strategy to avoid a testing program based on hippos and rats (and if you don’t remember what hippos and rats have to do with testing, give that last post another look).
In this post, we’ll talk about one of four key ways to take control of your program and deliver big wins: Having a good plan, and prioritizing it.
Build a roadmap by looking at your whole conversion funnel
When we work with our customers, we do a deep-dive analysis of their website to bubble up overall performance trends and visitor behavior.
We look at different data sources, including:
- Voice of the customer
- Competitor sites
Let me give you a real-life example.
First, we took a look at their site metrics by device type. In this case, it was really important for planning their roadmap.
In the example above, desktop visitors make up the majority of overall traffic. They also have the highest reservation rate and the highest average order value compared to mobile and tablet devices. This data suggests the most immediate opportunity lies with desktop visitors.
This is a key insight. If you’re anything like me, you read 10 stories a day about how mobile is taking over the world, think mobile first, desktop is dead, etc. And I don’t disagree that this is a general trend and often true. But it wasn’t true for this customer, and it might not be true for every one of your sites. So check before you accidentally ignore the most important part of your business in your testing plan!
Second, after learning that desktop needed to be an important part of the testing plan, we also looked at the top actions users took by device, and found that while reservations were always the most important, different users had different needs after that. Mobile (and tablet) users were more likely to be on-the-go business travelers who needed to log in, while desktop visitors were usually looking for deals.
So we made sure that our test plan considered designing specific tests for tablet login, desktop deals, etc.
Third, we took a look (see below) at where in the funnel visitors tend to drop off, and whether that is different by traffic source.
As you can see, visitors tend to exit the funnel between selection and upsell, and between upsell and payment, so it makes sense to emphasize those steps in the testing plan.
Additionally, we noticed that organic and referral traffic tend to exit the funnel at higher rates than paid search and direct traffic. We’ve normalized these four traffic sources to 100 at step 1 here to protect the innocent, but if these traffic sources differed widely in order value and cost, it would be worth factoring that into where to apply the most testing effort.
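If you want to run the same kind of comparison on your own data, the normalization is straightforward: index every funnel step to the first step = 100 for each traffic source. Here’s a minimal sketch; the step names and counts are made-up placeholders, not this customer’s data.

```python
# Hypothetical funnel counts per traffic source (illustration only).
funnels = {
    "organic":  {"landing": 8200, "selection": 3100, "upsell": 1400, "payment": 610},
    "referral": {"landing": 2400, "selection": 900,  "upsell": 380,  "payment": 150},
    "paid":     {"landing": 5100, "selection": 2300, "upsell": 1250, "payment": 640},
    "direct":   {"landing": 3900, "selection": 1850, "upsell": 1020, "payment": 540},
}

def normalize_to_100(funnel):
    """Index every step to the first step = 100 so sources are comparable."""
    base = next(iter(funnel.values()))
    return {step: round(count / base * 100, 1) for step, count in funnel.items()}

for source, funnel in funnels.items():
    print(source, normalize_to_100(funnel))
```

With each source starting from the same 100, the drop between any two steps is directly comparable across sources regardless of how much raw traffic each one sends.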
So, don’t just test based on what’s in vogue, or what someone in your business is keen on, or what your boss may suggest. Do a little digging in the data (more is better) and figure out where you can move the needle for your business!
Ok, so let’s assume you’ve put together a plan that’s focused on delivering the most leverage for your organization. What next?
Prioritize testing ideas
In many of the organizations we work with, there is no shortage of testing ideas that make sense for the business. Sometimes hundreds. How do you pick and choose which test to run today, tomorrow, next week, next month?
To help our customers with this challenge, we developed a proprietary prioritization algorithm called SELECT. SELECT prioritizes test execution based on several factors, including:
- Length to run
- Custom factors (e.g., customer survey score)
- And more
Moreover, because every customer has different needs and different stakeholders, we made SELECT 100% customizable so customers can adjust the weighting to place more or less emphasis on any factors they choose.
Using SELECT allows customers to compare all their test ideas on an apples-to-apples basis, and make smart choices about what they test and when.
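SELECT itself is proprietary, so purely as an illustration of the general idea, here is a minimal weighted-sum sketch: each idea gets a 1–10 score per factor, and adjustable weights determine how much each factor counts. The factor names, weights, and scores below are all assumptions, not the actual algorithm.

```python
# Illustrative weights only -- in a real system these would be tuned
# per organization, which is the point of making them adjustable.
DEFAULT_WEIGHTS = {
    "revenue_potential": 0.4,  # estimated revenue impact, scored 1-10
    "length_to_run":     0.2,  # shorter tests score higher, 1-10
    "effort":            0.2,  # lower build effort scores higher, 1-10
    "custom":            0.2,  # e.g. customer survey score, 1-10
}

def priority_index(scores, weights=DEFAULT_WEIGHTS):
    """Weighted sum of 1-10 factor scores; weights are fully adjustable."""
    return round(sum(weights[f] * scores[f] for f in weights), 2)

# Hypothetical test ideas with hypothetical scores.
ideas = {
    "Payment Page 1":    {"revenue_potential": 9, "length_to_run": 6, "effort": 5, "custom": 7},
    "Advanced Search 2": {"revenue_potential": 6, "length_to_run": 8, "effort": 7, "custom": 4},
}

ranked = sorted(ideas, key=lambda name: priority_index(ideas[name]), reverse=True)
print(ranked)
```

Because every idea lands on the same 1–10 scales and one index, the comparison stays apples-to-apples even across very different kinds of tests, and changing the weights instantly re-ranks the whole backlog.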
One popular view from running SELECT is the overall index score measured against revenue potential. As you can see in the example further down in this post, you can compare a large number of potential tests in two ways:
- Looking horizontally in order to identify tests of similar priority for your business and prioritize those with the greatest revenue potential (e.g. Ancillaries 1 is better than Advanced Search 2, which is better than Loyalty Registration)
- Looking vertically in order to identify tests with similar revenue potential and prioritize those that have a better overall score (e.g. Payment Page 1 is better than Payment Page 2)
In general, tests toward the top right should receive a higher priority than those in the bottom left.
Another popular view we use from SELECT plots complexity against success potential.
If you’re just getting started in building internal support for your program, you might prioritize quick wins over tests that also have high potential but require a lot of effort. Where revenue potential is larger (bubble size), the extra effort may be justified, if you can get the support.
Finally, while prioritization is really valuable for figuring out how to order a long list of potential ideas, it’s also a powerful tool to push back on stakeholders like HIPPOs or other departments who bring new ideas to the table without much vetting or concern for how those ideas will slow down or displace work already on the plan.
If you aren’t an Optimost customer today and don’t have access to SELECT, I’d encourage you to come up with your own prioritization system. Estimate things like effort and potential value. A basic system is a lot better than no system at all, and will keep RAT syndrome away!
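A basic system really can be that basic: estimate value and effort for each idea, divide, and sort. The ideas and numbers below are purely hypothetical, just to show the shape of it.

```python
# A deliberately simple DIY prioritization: estimated value per unit of
# effort. All names and numbers are hypothetical.
ideas = [
    # (name, estimated annual value in $, effort in weeks)
    ("Checkout copy test", 50_000, 2),
    ("Homepage redesign", 120_000, 10),
    ("Search filters",     30_000, 1),
]

ranked = sorted(ideas, key=lambda idea: idea[1] / idea[2], reverse=True)
for name, value, effort in ranked:
    print(f"{name}: {value / effort:,.0f} value per week of effort")
```

Even rough estimates expose the trade-offs: the biggest-value idea isn’t automatically first once effort is in the denominator, which is exactly the conversation a prioritization system is supposed to force.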
In another blog post soon, I’ll cover the second big step you can take toward delivering outsized results: starting every test with the best possible hypothesis.