The goal of testing is to gather information about how visitors behave on a website so that our customers can improve the user experience. We want to find out what visitors like and what they dislike. OpenText™ Optimost is an A/B and Multi-variable testing (MVT) platform that allows customers to continually improve their website's Return on Investment (ROI) based on a simple idea – instead of making decisions solely on your design team's intuition, why not let your visitors help you make those decisions?
The genesis of A/B and MVT testing
Although best suited to the internet, A/B and MVT testing predates it and was used across many industries during the 20th century. Farmers have run tests varying fertilizer and water for decades. In the 1950s, scientists ran clinical trials to evaluate new treatments, drugs, and devices. In the 1960s, companies began testing different content in direct-marketing campaigns. All of these examples required considerable expense to set up and took months, often years, to reach statistical significance, making them niche endeavors.
With the advent of the internet, we suddenly had a greenfield for testing, with much lower costs and the ability to reach statistical significance far sooner. The traffic available to many websites meant that, unlike our 20th-century examples, results could be reached in weeks, not months or years. With that in mind, the founders of Optimost began developing the first A/B and MVT tool in late 2001 and deployed the first experiments in 2003.
While initially hesitant, customers soon realized testing was an invaluable tool for increasing their ROI by allowing them to develop the best possible web presence. The era of the HiPPO – the Highest Paid Person's Opinion – driving all the important decisions gave way to a data-driven approach to decision-making through testing.
The hows and whys of testing
We have various ways to test for information. We may test visitors as one heterogeneous group – what we call the All Visitors “Persona” – or we may separate the traffic into homogeneous groups – different “Personas” based on geography, demographics, how visitors reached the website (organic search versus clicking on an advertisement), mobile versus desktop, or some other classification.
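At its core, Persona assignment is a classification rule over visitor attributes. Here is a minimal sketch of the idea – the field names, rule order, and Persona labels are hypothetical, not Optimost's actual segmentation logic:

```python
def classify_persona(visitor):
    """Assign a visitor to a Persona.

    The rules and visitor fields here are illustrative only: a real
    deployment would use whatever attributes the customer cares about
    (geography, demographics, referrer, device, etc.).
    """
    if visitor.get("device") == "mobile":
        return "mobile"
    # Hypothetical convention: paid-ad referrers are tagged "ad:..."
    if visitor.get("referrer", "").startswith("ad:"):
        return "paid-traffic desktop"
    return "organic desktop"

print(classify_persona({"device": "mobile"}))                              # mobile
print(classify_persona({"device": "desktop", "referrer": "ad:promo"}))     # paid-traffic desktop
print(classify_persona({"device": "desktop", "referrer": "google"}))       # organic desktop
```

Each Persona then receives its own waves, so the rules should be mutually exclusive – every visitor lands in exactly one group.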
When we test a page, we pick a Key Performance Indicator (KPI) as the success point and see how visitors who land on our test page “perform” relative to each other, based on that KPI. The KPI may differ based on what the customer considers success – it may be clicking on an offer, enrolling in a newsletter, or buying a product.
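How one creative “performs” against another on a KPI is typically judged by comparing conversion rates statistically. A minimal sketch using a standard two-proportion z-test – the function and the traffic numbers are illustrative, not part of Optimost's API:

```python
import math

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing a challenger (b) against
    the control (a), given KPI conversions and visitor counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control: 200 conversions from 10,000 visitors (2.0%)
# Challenger: 260 conversions from 10,000 visitors (2.6%)
z = z_score(200, 10_000, 260, 10_000)
# |z| > 1.96 corresponds to roughly 95% confidence, two-sided
print(round(z, 2), abs(z) > 1.96)   # 2.83 True
```

Here the challenger's lift clears the ~95% confidence bar; with fewer visitors, the same observed rates might not.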
In running our experiments, a key requirement for accurate measurement is confidence in our results. Confidence comes from having enough visitors who complete the KPI action – sufficient traffic to support those decisions. Before deploying a test, we ask the customer how much traffic they expect to reach the KPI. This allows us to design our experiments with the right number of creatives for the planned run time – ideally at least 2 weeks.
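As a rough illustration of this traffic planning, the textbook two-proportion sample-size formula estimates how many visitors each creative needs before a given lift can be detected with confidence. This is a back-of-the-envelope sketch with hypothetical numbers, not Optimost's internal methodology:

```python
import math

def visitors_per_creative(base_rate, min_lift, z_alpha=1.96, z_beta=0.84):
    """Rough visitors needed per creative to detect a relative lift in
    conversion rate at ~95% confidence and ~80% power (standard
    two-proportion approximation)."""
    p1 = base_rate
    p2 = base_rate * (1 + min_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Hypothetical page: 2% baseline conversion, detect a 10% relative lift
n = visitors_per_creative(0.02, 0.10)
daily_traffic = 5_000            # visitors/day the customer expects
wave_days = 4 * n / daily_traffic  # a 4-creative A/B wave shares the traffic
print(n, round(wave_days, 1))
```

Small lifts on low baseline rates demand surprisingly large visitor counts, which is exactly why the expected traffic has to be known before the wave is designed.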
The amount of traffic may also control the type of test you run. An A/B test with a control and 3 challenger values splits your traffic evenly across 4 versions of your page, or “creatives”. If you decided instead to test more areas of a page – say 4 variables, each with a control and 3 challenger values – you are now looking at 4 x 4 x 4 x 4, or 256, different creatives, which requires a lot of traffic to test. Each test run on a Persona is called a “wave”, and a wave will normally have 2 or more creatives.
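The 4 x 4 x 4 x 4 arithmetic above is just the Cartesian product of the variables' values. A quick sketch – the variable names are hypothetical:

```python
from itertools import product

# Each variable has a control plus 3 challengers: 4 values apiece.
variables = {
    "headline":     ["control", "h1", "h2", "h3"],
    "hero_image":   ["control", "img1", "img2", "img3"],
    "button_text":  ["control", "b1", "b2", "b3"],
    "button_color": ["control", "c1", "c2", "c3"],
}

# A full-factorial MVT runs every combination of values as a creative.
creatives = list(product(*variables.values()))
print(len(creatives))   # 4 x 4 x 4 x 4 = 256
```

Adding one more 4-value variable would quadruple the count again, which is why full-factorial waves quickly become impractical.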
For these larger MVT experiments, Optimost uses “Optimal Design” MVTs, which significantly reduce the size of experimental waves. In the case of a 4-variable, 4-value experiment, the wave can be reduced to 32 creatives, which requires one-eighth the traffic of a 256-creative wave. This ability to test large experiments efficiently is a key selling point for MVTs.
While the amount of traffic may constrain the size of our experimental waves, we must also consider it when we decide to use Personas to test more homogeneous groups. Not only do Personas reduce the traffic going to each group, but that traffic is often unevenly distributed between groups. We do a lot of desktop/mobile testing of our customers' websites, and it is rarely a 50/50 split. Running waves on both Personas and expecting them to finish at about the same time may require waves with different numbers of creatives – or, alternatively, running the same waves but waiting longer for the lower-traffic Persona to reach statistically significant results.
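The trade-off between wave size and wave duration can be seen with some simple arithmetic. The traffic split and the visitor target per creative below are hypothetical:

```python
def wave_days(daily_traffic, num_creatives, visitors_per_creative):
    """Days for every creative in a wave to reach its visitor target,
    assuming traffic is split evenly across the wave's creatives."""
    return num_creatives * visitors_per_creative / daily_traffic

# Hypothetical 70/30 desktop/mobile split of 10,000 daily visitors,
# with each creative needing 20,000 visitors for confident results.
desktop      = wave_days(7_000, 8, 20_000)   # 8-creative wave on desktop
mobile_same  = wave_days(3_000, 8, 20_000)   # same wave on mobile
mobile_small = wave_days(3_000, 4, 20_000)   # smaller mobile wave instead
print(round(desktop, 1), round(mobile_same, 1), round(mobile_small, 1))
# 22.9 53.3 26.7
```

Running the identical 8-creative wave on the mobile Persona would take more than twice as long as on desktop; shrinking the mobile wave to 4 creatives brings the two durations back into rough alignment.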
Once the wave is done, we analyze the results. For A/B tests and simpler MVTs, we declare a winner and will often run a “verification wave”, an extra wave testing the control against the winner. For larger MVTs, it may not be possible to test all the variables in a single wave, and further waves may be needed. Even then, there is a chance that we never actually “ran” the winning creative – the variable-level data may show that the best combination of values never ran. In these cases, we run a “champions wave” that includes the projected winner – what we call a “super Creative” – along with other high-performing creatives.
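Conceptually, projecting the winner from variable-level data amounts to taking the best-performing value of each variable independently. A minimal sketch with made-up rates – an illustration of the idea, not Optimost's actual analysis:

```python
# Hypothetical variable-level conversion rates observed across a wave.
level_rates = {
    "headline":    {"control": 0.020, "h1": 0.024, "h2": 0.021},
    "hero_image":  {"control": 0.019, "img1": 0.022, "img2": 0.025},
    "button_text": {"control": 0.023, "b1": 0.020, "b2": 0.021},
}

# The projected winner ("super Creative") combines the best value of
# each variable, even if that exact combination never ran in the wave.
super_creative = {
    var: max(rates, key=rates.get) for var, rates in level_rates.items()
}
print(super_creative)
# {'headline': 'h1', 'hero_image': 'img2', 'button_text': 'control'}
```

Because this projection assumes the variables' effects combine independently, the champions wave exists precisely to check that the super Creative really does outperform the creatives that actually ran.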
Once the testing results are verified, our best practice is for the customer to update the test page to implement the winner. On some occasions they will have to wait for a code push based on their product release cycle, so instead we will “serve the winner” in a 1-creative wave containing just the winning creative. Optimost then acts as a sort of “temporary content management system”, giving the customer the ability to reap the benefits of the test results without disturbing their normal release schedule.
Start testing today
With testing done, what's next? We believe that testing is not just a way to find the pages that need to be improved, but a principle to integrate into your web ecosystem. The old paradigm of deciding on the “best” design and rolling it out is replaced with testing your best ideas – whether it's a site-wide rollout or an improvement to the add-to-cart functionality – and letting your customers decide.
If you would like to increase your ROI and start testing today, or simply want to learn more, get in touch with us via the contact form on this page. We'd love to start a conversation with you.