This article applies to Nelio A/B testing versions prior to 5.0.
If you are looking for documentation for our newest version, please bookmark neliosoftware.com/testing/help/
A winning alternative in Nelio A/B Testing is the alternative that is most likely to perform better than the original version of your element under test (a page, a widget, a menu, ...).
Nelio A/B Testing uses the G-test statistic for computing the results of an experiment. The G-test statistic measures how much the observed results deviate from what you would expect to see if all versions performed the same.
At what confidence do we end the experiment? There is no hard rule; however, a common benchmark is 95% confidence. By default, Nelio A/B Testing decides that there is a statistically significant winning alternative when the confidence level obtained from the G-test statistic reaches 95%. However, customers with a Professional Plan can modify this default through the Advanced settings tab directly within the Nelio A/B Testing plugin.
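To illustrate the idea (this is a sketch, not Nelio's actual implementation), the G-test for a simple two-alternative experiment can be computed from a 2x2 contingency table of conversions vs. non-conversions. The function name, the example visit and conversion counts, and the decision rule below are all illustrative assumptions; the comparison against the chi-square critical value for one degree of freedom (3.841) corresponds to the 95% confidence benchmark mentioned above.

```python
import math

def g_test(conv_a, total_a, conv_b, total_b):
    """Illustrative G-test for a 2x2 table: (original, alternative) x
    (conversions, non-conversions). Returns the G statistic."""
    observed = [
        [conv_a, total_a - conv_a],
        [conv_b, total_b - conv_b],
    ]
    grand = total_a + total_b
    col_totals = [conv_a + conv_b, grand - (conv_a + conv_b)]
    row_totals = [total_a, total_b]
    g = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under the hypothesis that both
            # versions convert at the same rate.
            expected = row_totals[i] * col_totals[j] / grand
            if observed[i][j] > 0:
                g += 2 * observed[i][j] * math.log(observed[i][j] / expected)
    return g

# Chi-square critical value, 1 degree of freedom, 95% confidence.
CHI2_95_DF1 = 3.841

# Hypothetical data: original converts 30/1000, alternative 55/1000.
g = g_test(30, 1000, 55, 1000)
significant = g > CHI2_95_DF1  # True here: the alternative wins
```

If `g` exceeds the critical value, the observed difference between the alternatives is unlikely to be due to chance at the 95% level, which is the condition under which the plugin would declare a winner by default.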