
18 questions to ask if you have started a conversion optimisation programme

By Expert commentator, 21 Sep 2016

Get the right strategy in place before you start testing

Starting and running a conversion programme is all about asking the right questions: the questions you want to ask of the tests themselves, but also the questions you need to ask yourself and your team to get the most out of the programme.

Here are our 18 questions, in chronological order, to ask yourself and your team if you have recently started a conversion optimisation programme.

Before you test

1. Are a broad range of specialisms feeding into the optimisation programme?

The most effective optimisation programmes need input from strategists, user-centred designers, front-end developers, copywriters and analysts.

2. What sources are you using to feed into the ideation process?

Typically, the highest-quality tests (i.e. the ones that will deliver the biggest uplifts and/or provide the greatest insight into the business) make use of several of the following techniques and sources: persuasion/emotion/trust (PET), heuristics, user research (remote or moderated) and surveys.


3. Do you speak to departments other than your own to find out what they know about your customers?

Think beyond your immediate team; departments like customer services/support can provide brilliant, first-hand insight into the challenges your customers face.

4. Are your hypotheses highly evidenced?

Each hypothesis should ideally have both qualitative and quantitative sources that support it; for example, something that’s been observed in both user research and analytics.

5. Do you follow a set methodology?

To define the methodology for your testing programme, start with the ideation process and decide which forms of qualitative and quantitative research will be most useful for you. Then ensure you have a set process that runs from ideation through to designing, building, running and analysing tests.

6. Are you using customer research to feed into the testing process?

Ultimately, it’s the customer you want to influence with your findings, so you need to ensure insights about their on-site behaviour feed into the process. These insights can cover ‘what’ customers are doing (usually observed via data collection or usability research) or ‘why’ they’re doing it (which can be uncovered through in-depth moderated user testing).

7. Have you run workshops with multiple business stakeholders to come up with ideas for website improvements?

Running workshops can be the best way to come up with a big bank of ideas to test, and inviting stakeholders from around the business to participate means that you’ll get a variety of perspectives on the right things to try.

8. How do you prioritise which tests to do first?

It’s a good idea to define a set process for prioritising tests to help you logically decide which you are going to tackle first; it can be as simple as weighing up effort (i.e. the work required to build the test) versus potential (the likelihood that the change will deliver an uplift), as in the sketch below.
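
To make that scoring concrete, here is a minimal sketch in Python of how an effort-versus-potential score could be calculated and used to rank ideas. The idea names, 1–5 scales and the simple potential-divided-by-effort formula are all illustrative assumptions, not a prescribed framework; use whatever scoring model suits your team.

```python
# Illustrative sketch only: a simple effort-vs-potential score for ranking test ideas.
# The idea names, scales and weighting below are hypothetical placeholders.

def priority_score(potential: int, effort: int) -> float:
    """Both inputs are scored 1-5 by the team; higher potential and lower effort rank first."""
    return potential / effort

ideas = [
    {"name": "Rewrite checkout CTA copy", "potential": 4, "effort": 1},
    {"name": "Redesign product page layout", "potential": 5, "effort": 4},
    {"name": "Add trust messaging to basket", "potential": 3, "effort": 2},
]

# Rank the backlog: highest score first.
for idea in sorted(ideas, key=lambda i: priority_score(i["potential"], i["effort"]), reverse=True):
    score = priority_score(idea["potential"], idea["effort"])
    print(f'{idea["name"]}: score {score:.2f}')
```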

9. Do your hypotheses follow a thorough format?

Formulating a standardised format for your hypotheses makes them more usable, for example: “Because we observed X, we believe that changing Y will result in Z. We will know this is true when we see XX.”
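
As a rough illustration, that fill-in-the-blanks template could be captured in a small structure like the one below, so every hypothesis in your backlog reads the same way. The class and field names are purely hypothetical.

```python
# Illustrative sketch: one way to keep hypotheses in a consistent, fill-in-the-blanks format.
# The class and field names are hypothetical, not a standard.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    observation: str      # "because we observed X"
    change: str           # "changing Y"
    outcome: str          # "will result in Z"
    success_metric: str   # "this is true when we see XX"

    def __str__(self) -> str:
        return (f"Because we observed {self.observation}, we believe that "
                f"changing {self.change} will result in {self.outcome}. "
                f"We will know this is true when we see {self.success_metric}.")

print(Hypothesis(
    observation="high drop-off on the delivery options step",
    change="the order in which delivery options are listed",
    outcome="more visitors completing checkout",
    success_metric="a significant increase in checkout completion rate",
))
```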

10. Does your business see value in lots of tests with small changes or fewer tests that drive bigger results/uplifts?

It’s important to measure the correct metrics for success when testing. It’s easy to churn out lots of small tests based on weak hypotheses, but that’s unlikely to deliver much of a return. If you put more work in at the beginning of the process to establish what you need to measure and why, your outputs will be of higher quality and deliver more valuable results.

The testing process

11. Do you have a thorough plan for each test that describes in detail how each experiment is to be run, with wireframes or designs if appropriate?

Having a test plan that details the hypotheses, the evidence that led to the test, the designs and the instructions for implementation ensures the test will be properly set up and that the results will be the best they can be. It also means it’s straightforward for anyone to review progress. Plus, the analysis and results can be added to this framework once the test has concluded, so it acts as a succinct summary document that can be referred back to in the future.

12. Do you follow a set quality assurance/user acceptance testing process for each test to ensure there are no problems?

Having a defined quality assurance (QA) and user acceptance testing (UAT) process means that you’re far less likely to experience issues once the test is up and running. You should also always test the main browsers and devices that visitors use to access your site (this information can be found in Google Analytics) to mitigate risk.

13. Do you know when to call a test ‘completed’?

A test should run for at least two full business cycles and reach 95% statistical significance (or a pre-agreed sample size) before it’s brought to a close.
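
If you want a back-of-the-envelope check on that significance threshold, the sketch below runs a standard two-proportion z-test using only Python’s standard library. The visitor and conversion numbers are made up, and most testing tools calculate this for you, so treat it as illustrative rather than a replacement for your tool’s own statistics.

```python
# Illustrative sketch: a two-sided two-proportion z-test for a simple A/B conversion test.
# Visitor and conversion counts below are made-up example figures.
from statistics import NormalDist

def ab_significance(conv_a: int, visitors_a: int, conv_b: int, visitors_b: int) -> float:
    """Return the two-sided p-value for the difference between two conversion rates."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

p_value = ab_significance(conv_a=420, visitors_a=10_000, conv_b=486, visitors_b=10_000)
print(f"p-value: {p_value:.4f}")  # a p-value below 0.05 corresponds to 95% significance
```

Remember that statistical significance is only half of the stopping rule here; the two-business-cycle minimum still applies even if significance is reached early.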

Analysis

14. Do you learn from each test, even if it hasn’t delivered an uplift?

Even if a test shows a negative result or is inconclusive you can draw some sort of understanding about your users that can inform the next steps. It’s helpful to spend some time trying to determine why it didn’t deliver an uplift; maybe the messaging could be improved or the placement was incorrect.

15. What do you do with the learnings from each test? Are they housed in a knowledge base, and do they inform further testing?

Once you have analysed and written up the results, you can use them to create further testing opportunities. You might create a follow-up test, for example one that improves the wording you originally used, or you might have seen an uplift from a persuasive message and feel it will work well on another area of the site. Keep everything you learn in a central knowledge base: new people joining the team can then review and interpret the work that has been done, and, should the team change in the future, the findings won’t be lost.

16. Do you take learnings from your testing and use them to inform other marketing channels (online and offline)?

Don’t forget that you can apply what you learn through online testing to other forms of your marketing to create a seamless experience on and offline.

17. How do you report the increase in revenue back into your business?

This is the best way to demonstrate that your tests are actually working! Ideally, you should present these as a monthly dashboard of tests and what the uplifts mean in terms of annualised incremental revenue.
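
As a simple worked example, the sketch below shows how a test uplift could be translated into annualised incremental revenue for that dashboard. All of the figures are placeholders, and the projection assumes the uplift holds steady for a full year, which is a simplification worth flagging when you present it.

```python
# Illustrative sketch: turning a test uplift into annualised incremental revenue.
# All figures are made-up placeholders; substitute your own traffic, conversion
# and order-value numbers. This simple projection assumes the uplift holds
# steady across the whole year.
monthly_visitors = 200_000
baseline_conversion_rate = 0.025      # 2.5% of visitors currently convert
relative_uplift = 0.08                # 8% relative uplift from the winning variation
average_order_value = 60.0            # in your reporting currency

extra_orders_per_month = monthly_visitors * baseline_conversion_rate * relative_uplift
annualised_incremental_revenue = extra_orders_per_month * average_order_value * 12

print(f"Extra orders per month: {extra_orders_per_month:.0f}")
print(f"Annualised incremental revenue: {annualised_incremental_revenue:,.0f}")
```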

18. What percentage of your tests have delivered an uplift?

The industry-standard rule of thumb is 3/3/3: a third of tests win, a third are inconclusive and a third don’t deliver an uplift. If you follow all of the above and ensure you are testing hypotheses that are well structured and highly evidenced, you may be able to achieve a 50% test success rate – in which case you are doing a fantastic job!

Still not sure where to start?

Take this conversion optimisation quiz to see how savvy your CO programme is.


By Expert commentator

This is a post we've invited from a digital marketing specialist who has agreed to share their expertise, opinions and case studies. Their details are given at the end of the article.
