TriggerNote Documentation

How to Set Up a Split Test

  1. Ensure that Google Analytics is in use on your webpage. You can either use the standard Google Analytics JavaScript code, or create a Trigger Set containing the "Google Analytics: Load" Action.
  2. Create two or more Trigger Sets to test against each other.
    • In each Trigger Set, include a "Google Analytics: Send Event" Action to count impressions.
      • If the Trigger Set displays its content when the page loads (e.g., in a floating bar), put it in an OnLoad Trigger.
      • If the Trigger Set displays its content in response to some Trigger (e.g., a click, scroll, exit intent, etc.), put it in the list of Actions under that Trigger. Set it to execute only once.
      • We recommend setting the "Category" for the event to "TriggerNote", and the "Action" to something like "trigger", "impression", etc. The "Label" should identify the Trigger Set.
    • Use another event to track conversions.
      • If the conversion action leaves the visitor on the same page, use another "Google Analytics: Send Event" Action when the desired action is taken.
      • To record clicks that take the visitor to another page, use the "Google Analytics Click Tracker" Recipe.
      • We recommend setting the "Category" for the event to "TriggerNote", and the "Action" to something like "click", "conversion", etc. The "Label" should be the same as for the impression counting event.
  3. Create a Selector Set to load each of the Trigger Sets you created.
    • Use the same Selector Group and Selector Priority for each alternative.
    • If you wish to load the split test manually into specific pages rather than using Selectors, do not specify any Selectors.
    • If you want to run the split test on every page that auto-loads Trigger Sets, use the "Always select" Selector.
    • Otherwise, use other Selectors as needed.
    • Either way, be sure to click the "Add Selector" button (even with the "Always select" Selector).
  4. Create a Split Test with the same Selector Group and Selector Priority as the Selector Sets you created.
  5. Enter a name for a cookie that will be used to ensure that each person sees the same alternative each time (if they see different alternatives, your test results will be less valid, because their actions may be influenced by things they saw on a prior page view).
  6. If, in step 1, you used the "Google Analytics: Load" Action to load Google Analytics, be sure the Trigger Set you created in that step is checked at the bottom of the form.
  7. Click "Save Split Test".
  8. On the webpages where you are running the split test: if you are loading the test using Selectors, be sure to include the code "TriggerNoteAutoSelect();". If you didn't specify any Selectors, load the split test manually using code like "TriggerNoteUseSplitTest(3);", replacing 3 with the number of your Split Test.
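The two loading calls in step 8 can be sketched as a single page snippet. This is an illustrative example only: the loadTriggerNote wrapper is a hypothetical helper, not part of TriggerNote's API, and it assumes the TriggerNote loader script (which defines TriggerNoteAutoSelect and TriggerNoteUseSplitTest) has already been included on the page.

```javascript
// Hypothetical wrapper around the two loading modes described in step 8.
// Assumes TriggerNoteAutoSelect and TriggerNoteUseSplitTest are defined
// by the TriggerNote loader script already included on the page.
function loadTriggerNote(splitTestNumber) {
  if (splitTestNumber == null) {
    // Selector-based loading: TriggerNote picks Trigger Sets via your Selectors.
    TriggerNoteAutoSelect();
  } else {
    // Manual loading: run a specific Split Test by its number.
    TriggerNoteUseSplitTest(splitTestNumber);
  }
}
```

For example, call loadTriggerNote() on pages that use Selectors, or loadTriggerNote(3) to load Split Test number 3 directly.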

How to Analyze Split Test Results

Why You Need Statistical Analysis of Your Split Test Results

If you search online for information about how long to run split tests, you'll find a variety of recommendations. Some people say to run a test until you have at least 100 conversions. Others say 300, 400, or 1000 conversions.

They're all wrong.

The two main factors that determine how long you should run a test are:

  1. Your business cycle.
  2. Statistical significance.

Your Business Cycle

Your business cycle is the length of time needed to ensure that your test results aren't skewed by, for example, only running your test during the week, and not on the weekend.

Seasonality and other factors may be involved too. Unless you're aware of other such factors, you'll generally get the best results if you run your tests for at least a week, and end them on the same day of the week that you started.

Statistical Significance

Where most of the recommendations you'll find fail is in picking a specific minimum number of conversions. The reason people do this is that it's easy to understand and easy to do -- easier than understanding and performing a proper statistical analysis.

The reason you can't simply pick a specific number of conversions is that the number of conversions required varies depending on how much better the winning alternative is than the loser. If there's only a 2% difference, you need a lot more data than if there's a 20% difference.
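To make that concrete, a standard back-of-envelope sample-size formula for comparing two conversion rates shows how quickly the required traffic grows as the difference shrinks. This is a generic statistical sketch, not TriggerNote's built-in analysis; delta here is an absolute difference in conversion rate.

```javascript
// Approximate visitors needed per alternative to detect an absolute lift
// `delta` over a baseline conversion rate `p`, at roughly 95% one-sided
// confidence and 80% power. Standard approximation; the constants are the
// usual z-scores for those levels.
function sampleSizePerArm(p, delta) {
  const zAlpha = 1.645; // one-sided 95% confidence
  const zBeta = 0.8416; // 80% power
  const variance = p * (1 - p);
  return Math.ceil(2 * Math.pow(zAlpha + zBeta, 2) * variance / (delta * delta));
}
```

With a 10% baseline rate, detecting a 0.2-percentage-point lift takes about 100 times as many visitors as detecting a 2-point lift, because the required sample size grows with the inverse square of the difference.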

Technically, it's impossible to know with 100% certainty that your test results are valid. What is possible is to have high confidence in your results.

What statistical analysis gives you is a confidence level: the likelihood that the version currently winning your test really is better than the other version.

Statisticians recommend running tests until you have at least 95% confidence -- and you'd do well to listen to them. I've personally seen tests where the confidence level got up into the high 80s, or maybe even the low 90s, only to reverse direction as more data came in. Don't give in to the temptation to quit too soon.
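For a test with two alternatives, the confidence level can be estimated with a one-sided two-proportion z-test. The sketch below is a generic illustration of that calculation, not TriggerNote's built-in analysis; it uses a standard polynomial approximation of the normal distribution.

```javascript
// Approximate the standard normal CDF (Abramowitz & Stegun 26.2.17).
function normalCdf(z) {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp(-z * z / 2);
  const p = d * t * (0.3193815 + t * (-0.3565638 +
            t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return z > 0 ? 1 - p : p;
}

// Confidence that alternative B's true conversion rate beats alternative A's,
// given each alternative's conversions and impressions.
function confidence(convA, viewsA, convB, viewsB) {
  const pA = convA / viewsA;
  const pB = convB / viewsB;
  const pooled = (convA + convB) / (viewsA + viewsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / viewsA + 1 / viewsB));
  const z = (pB - pA) / se;
  return normalCdf(z);
}
```

For example, 100 conversions from 1000 impressions against 130 conversions from 1000 impressions comes out above the 95% threshold, while identical results for both alternatives give a confidence of 50%, i.e. a coin flip.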

On the other hand, if you've run the test for a long time, and the confidence level isn't getting to 95%, that may mean that there's no difference between versions, and you should just pick one and end the test.