Statistical Significance Calculator

Free online statistical significance calculator for A/B tests, surveys, and conversion experiments. Enter visitors and conversions for control and variant to instantly get the p-value, z-score, confidence interval, relative lift, and a clear significant or not significant verdict at 90%, 95%, or 99% confidence. Built for marketers, UX researchers, and product teams who need fast answers from their AB test data.

Quick Answer

Statistical significance tells you how unlikely it is that the difference between two groups arose from random chance alone. A statistical significance calculator runs a two-proportion z-test on your conversion rates or preferences and returns a p-value. If p is below 0.05, the result is statistically significant at 95% confidence, meaning a gap at least this large would appear less than 5% of the time if the two versions were actually identical.

Enter Values

Total number of people who saw version A

Number of people in group A who converted (signed up, clicked, preferred option A, etc.)

Total number of people who saw version B

Number of people in group B who converted

95% is the standard in marketing and product A/B testing

Result

Enter values above and click Calculate to see your result.


Formula

Core Formula

z = \frac{\hat{p}_B - \hat{p}_A}{\sqrt{\hat{p}(1 - \hat{p})\left(\frac{1}{n_A} + \frac{1}{n_B}\right)}}

How it works: This is a two-proportion z-test with pooled standard error. It compares the conversion rates of two groups (control and variant) to decide whether the observed difference is likely to reflect a real effect or could easily have happened by chance. The pooled proportion assumes the null hypothesis that both groups share the same underlying conversion rate.
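As a sketch, the formula above translates directly into a few lines of Python using only the standard library (the function name is illustrative):

```python
from math import sqrt

def pooled_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic with pooled standard error."""
    p_a = conv_a / n_a                    # control conversion rate
    p_b = conv_b / n_b                    # variant conversion rate
    p = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under the null hypothesis
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 100/1000 conversions in control vs 120/1000 in the variant
print(round(pooled_z(100, 1000, 120, 1000), 2))  # → 1.43
```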

Review and Methodology

Updated Apr 20, 2026

This calculator runs locally in your browser. Inputs are converted into the units required by the formula, and the result is paired with supporting references so you can verify the method before using it for planning or estimates.

Worked Example

Control: 1000 visitors, 100 conversions (10%)
Variant: 1000 visitors, 120 conversions (12%)
Confidence level: 95%
Step 1: pA = 100 / 1000 = 0.100, pB = 120 / 1000 = 0.120
Step 2: Pooled p = (100 + 120) / (1000 + 1000) = 0.110
Step 3: Standard error = sqrt(0.110 * 0.890 * (1/1000 + 1/1000)) = 0.01399
Step 4: z = (0.120 - 0.100) / 0.01399 = 1.43
Step 5: Two-tailed p-value = approximately 0.153
Step 6: Since p = 0.153 > 0.05, the difference is not statistically significant at the 95% level
Result: A 20% relative lift sounds impressive, but with these sample sizes it is not yet statistically significant. Keep the test running or collect more data before deciding.
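The worked example can be reproduced end to end with a short Python sketch (function name illustrative; the normal CDF is computed from the standard library's error function):

```python
from math import sqrt, erf

def significance(conv_a, n_a, conv_b, n_b, alpha=0.05):
    p_a, p_b = conv_a / n_a, conv_b / n_b          # Step 1: sample rates
    p = (conv_a + conv_b) / (n_a + n_b)            # Step 2: pooled proportion
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # Step 3: standard error
    z = (p_b - p_a) / se                           # Step 4: z-score
    # Step 5: two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value, p_value < alpha             # Step 6: verdict

z, p_value, significant = significance(100, 1000, 120, 1000)
print(round(z, 2), round(p_value, 3), significant)  # → 1.43 0.153 False
```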

What Is Statistical Significance and Why It Matters for A/B Tests

Statistical significance tells you how unlikely it is that the difference you see between two groups is due to random chance. If Logo A gets 52% preference and Logo B gets 48% in a 100-person survey, that 4-point gap could easily flip if you ran the survey again. An online statistical significance calculator runs a two-proportion z-test on your numbers and returns a p-value, which is the probability of seeing a gap at least as large as yours if the two groups were actually identical.

  • The standard threshold is p less than 0.05, meaning there is less than a 5% chance the observed difference is random noise
  • Small samples under 100 typically need a 10 to 15 percentage point gap before the result is significant at 95% confidence
  • Large samples with thousands of visitors can detect lifts as small as 1 to 2 percentage points
  • This tool works for A/B tests, conversion rate experiments, survey preferences, click-through rate comparisons, and any binary outcome
  • Also known as an AB test significance calculator, two-proportion z-test calculator, or conversion rate significance calculator
  • Significance does not equal importance: a 0.1% lift can be significant with enough traffic but still not worth shipping

Always set your confidence level and target sample size before starting a test. Stopping early the moment a test turns significant inflates your false-positive rate. For planning sample sizes up front, pair this calculator with our confidence interval calculator and z-test calculator.

You can also explore related calculations with our P Value Calculator, Z-Test Calculator, Confidence Interval Calculator, or T-Test Calculator.

Frequently Asked Questions

What does statistical significance mean in plain English?

It means the difference between two groups is unlikely to be caused by random chance. If an A/B test reports significance at 95% confidence, there is less than a 5% probability the observed difference would appear if the two versions were actually identical. It does not prove the variant is better, only that the gap is larger than noise.

How is this different from a regular p-value calculator?

A general p-value calculator expects you to already have a test statistic and degrees of freedom. This statistical significance calculator takes raw counts (visitors and conversions) and does the two-proportion z-test for you, which is what A/B testers, marketers, and UX researchers actually need.

What sample size do I need for a significant result?

It depends on the effect size you want to detect. To catch a 2 percentage point lift on a 10% baseline at 95% confidence, you typically need around 4,000 visitors per group. Small differences need large samples. Large differences can be detected with fewer.
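The "around 4,000 per group" figure comes from the standard sample-size approximation for comparing two proportions. Here is a minimal sketch, assuming a two-sided test at 95% confidence and 80% power (the power level is an assumption; the answer above does not state one):

```python
from math import ceil

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate sample size per group for a two-sided two-proportion
    test at 95% confidence (z_alpha = 1.96) and 80% power (z_beta = 0.8416)."""
    var = p1 * (1 - p1) + p2 * (1 - p2)  # sum of the two binomial variances
    return ceil((z_alpha + z_beta) ** 2 * var / (p2 - p1) ** 2)

# detecting a lift from a 10% baseline to 12%
print(n_per_group(0.10, 0.12))  # → 3839, close to 4,000 per group
```

Halving the detectable lift roughly quadruples the required sample, which is why small differences need large samples.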

Can I use this as an AB test significance calculator for ecommerce?

Yes. Enter visitors as the total traffic to each variant and conversions as the number of purchases, sign-ups, add-to-carts, or any binary success event. The tool works the same way for conversion rate tests, click-through rate tests, and email open rate tests.

Why is my test not significant even though B is clearly higher?

Usually because your sample size is too small. A 55% vs 45% split on 40 responses is not significant, but the same split on 400 responses is. Keep the test running until you hit your pre-planned sample size, or accept the lower confidence and move on.
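The 40-vs-400 claim can be checked numerically. A hedged sketch, modeling the split as two groups of equal size run through the same pooled z-test (group sizes 20/20 and 200/200 are illustrative):

```python
from math import sqrt, erf

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-tailed p-value from the pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)
    z = (p_b - p_a) / sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(round(p_value(9, 20, 11, 20), 3))      # 45% vs 55% on 40 responses  → 0.527
print(round(p_value(90, 200, 110, 200), 3))  # same split on 400 responses → 0.046
```

The identical 10-point gap goes from clearly not significant to just under the 0.05 threshold purely by adding data.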

Should I use 90%, 95%, or 99% confidence?

95% is the standard for marketing and product A/B testing. Use 99% for high-stakes decisions where a false positive is costly (pricing changes, legal copy). Use 90% for low-risk exploratory tests where you value speed over certainty.

Is this a one-tailed or two-tailed test?

Two-tailed. It tests whether the two conversion rates are different in either direction. Two-tailed is the safe default unless you committed to a directional hypothesis before collecting data.
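For the same z-score, a one-tailed p-value is exactly half the two-tailed one, which is why committing to a direction makes significance easier to reach. A small sketch using the worked example's z of 1.43:

```python
from math import sqrt, erf

def norm_sf(z):
    """Upper-tail probability (survival function) of the standard normal."""
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

z = 1.43  # z-score from the worked example above
print(round(2 * norm_sf(z), 3))  # two-tailed p-value → 0.153
print(round(norm_sf(z), 3))      # one-tailed p-value → 0.076
```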

Does the calculator assume the samples are independent?

Yes. Each visitor should be in exactly one group (A or B), and groups should be randomly assigned. If the same user appears in both groups, or if assignment is not random, the p-value will be misleading.

How do I add this Statistical Significance Calculator to my site?

Use the "Embed" option above to tailor the dimensions, color scheme, and styling to match your site. Copy the generated iframe snippet and drop it into your HTML, WordPress editor, or any CMS. There is no cost and no account required. See calculory.com/services/embed-calculators for a step-by-step walkthrough.

Accurate and Reliable

All calculations run locally. Trusted statistical analysis with step-by-step breakdowns.
