Normal Distribution Table

Free Tool · Updated March 2026 · No Signup Required

Look up Z-scores and their probabilities in this interactive standard normal distribution table. Enter any Z-score from -3.9 to +3.9 to see cumulative, left-tail, right-tail, and two-tail probabilities, along with a visual bell curve showing the shaded area.

Estimated reading time: 15 minutes
This tool runs entirely in your browser. No data is sent to any server.

What Is the Normal Distribution?

The normal distribution, also called the Gaussian distribution or bell curve, is the most important probability distribution in statistics. It describes data that clusters around a central value (the mean) with a symmetric, bell-shaped spread determined by the standard deviation.

Many natural phenomena follow a normal distribution: human heights, measurement errors, blood pressure readings, exam scores, and manufacturing tolerances. The distribution is defined by two parameters: the mean (center) and the standard deviation (spread).

The standard normal distribution is a special case where the mean is 0 and the standard deviation is 1. Any normal distribution can be converted to the standard normal by calculating Z-scores, which is what the table on this page shows. This standardization makes it possible to use a single table for any normal distribution.

The total area under the normal curve equals 1 (or 100%). The curve is perfectly symmetric around the mean, so exactly 50% of the area lies to the left of the mean and 50% to the right. The probability of a value falling within any given range equals the area under the curve for that range.

Z-Score Probability Calculator

Look Up Probability

Bell Curve Visualization

The shaded area represents the selected probability. Enter a Z-score above to update.

Standard Normal Distribution Table (Positive Z)

This table shows cumulative probabilities P(Z < z) for positive Z-scores from 0.0 to 3.9. Each cell shows the probability that a standard normal random variable is less than the given Z-score.

Standard Normal Distribution Table (Negative Z)

This table shows cumulative probabilities P(Z < z) for negative Z-scores from -3.9 to 0.0. These are mirror values of the positive table since the normal distribution is symmetric.

How to Read the Z-Table

Reading a Z-table is a skill that statistics students learn early. The process is straightforward once you understand the layout.

The left column shows the Z-score to one decimal place (like 1.5 or -2.3). The top row shows the second decimal place (like 0.04 or 0.07). To find the probability for Z = 1.54, you find the row for 1.5 and the column for 0.04. The value at their intersection is the cumulative probability P(Z < 1.54).

Example: Find P(Z < 1.54)

  1. Go to the row labeled 1.5 in the positive Z-table.
  2. Move across to the column labeled 0.04.
  3. Read the value: 0.9382.
  4. This means there is a 93.82% chance that a standard normal value falls below 1.54.

Example: Find P(Z > 1.54)

  1. Look up P(Z < 1.54) = 0.9382 as above.
  2. Subtract from 1: P(Z > 1.54) = 1 - 0.9382 = 0.0618.
  3. There is a 6.18% chance that a standard normal value exceeds 1.54.

Example: Find P(-1.54 < Z < 1.54)

  1. Look up P(Z < 1.54) = 0.9382.
  2. Look up P(Z < -1.54) = 0.0618.
  3. Subtract: P(-1.54 < Z < 1.54) = 0.9382 - 0.0618 = 0.8764.
  4. About 87.64% of values fall between -1.54 and +1.54 standard deviations from the mean.
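These three lookups can be reproduced in code; a minimal sketch using Python's scipy.stats (assuming scipy is installed), which also avoids the table's four-decimal rounding:

```python
from scipy.stats import norm

z = 1.54
left = norm.cdf(z)             # P(Z < 1.54)
right = 1 - left               # P(Z > 1.54), same as norm.sf(z)
between = left - norm.cdf(-z)  # P(-1.54 < Z < 1.54)

print(f"P(Z < {z}) = {left:.4f}")              # 0.9382
print(f"P(Z > {z}) = {right:.4f}")             # 0.0618
print(f"P(-{z} < Z < {z}) = {between:.4f}")    # 0.8764
```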

The Z-Score Formula

A Z-score tells you how many standard deviations a data point is from the mean. The formula for calculating a Z-score is:

Z = (X - mu) / sigma, where X is the value, mu is the population mean, and sigma is the population standard deviation.

For example, if exam scores have a mean of 75 and a standard deviation of 10, a score of 90 has a Z-score of (90 - 75) / 10 = 1.5. This means the score of 90 is 1.5 standard deviations above the mean.

Z-scores can be positive (above the mean) or negative (below the mean). A Z-score of 0 means the value equals the mean exactly. Most values in a normal distribution fall between Z = -3 and Z = +3.

For sample data

When working with a sample rather than a population, the formula uses the sample mean and sample standard deviation. The interpretation remains the same: the Z-score represents the number of standard deviations from the mean.
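To make the formula concrete, here is a small Python sketch; the exam numbers repeat the example above, and the sample data is hypothetical, included only to show the sample version of the calculation:

```python
def z_score(x, mean, sd):
    """Number of standard deviations x lies from the mean."""
    return (x - mean) / sd

# Population example from the text: exam scores with mean 75 and SD 10
print(z_score(90, 75, 10))   # 1.5 -> the score 90 is 1.5 SDs above the mean

# Sample version: estimate the mean and SD from the data first
data = [68, 74, 71, 80, 77, 75, 69, 82]           # hypothetical sample
mean = sum(data) / len(data)
sd = (sum((x - mean) ** 2 for x in data) / (len(data) - 1)) ** 0.5
print(z_score(90, mean, sd))
```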

Understanding Probability Types

When working with the normal distribution, you will encounter four main types of probabilities. Each answers a different question about your data.

Left-tail probability P(Z < z)

This is what the standard Z-table directly provides. It is the probability that a random variable from the standard normal distribution is less than the given Z-score. Visually, it is the area under the bell curve to the left of the Z-score.

Right-tail probability P(Z > z)

This is the complement of the left-tail: P(Z > z) = 1 - P(Z < z). It represents the probability that a random variable exceeds the given Z-score. This is used in one-sided upper-tail hypothesis tests.

Between (central) probability P(-z < Z < z)

This is the probability that a random variable falls between -z and +z. It is calculated as P(Z < z) - P(Z < -z). This is what confidence intervals are based on.

Two-tail (outside) probability P(|Z| > z)

This is the probability that a random variable falls more than z standard deviations from the mean in either direction. It equals 2 * P(Z > |z|). This is used in two-sided hypothesis tests.
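The four probability types can be computed directly from the cumulative distribution function; a minimal Python sketch using scipy.stats, with z = 1.96 as an illustrative input:

```python
from scipy.stats import norm

def probabilities(z):
    """Return the four probability types for a Z-score."""
    left = norm.cdf(z)                               # P(Z < z)
    right = norm.sf(z)                               # P(Z > z) = 1 - cdf(z)
    between = norm.cdf(abs(z)) - norm.cdf(-abs(z))   # P(-|z| < Z < |z|)
    two_tail = 2 * norm.sf(abs(z))                   # P(|Z| > |z|)
    return left, right, between, two_tail

print(probabilities(1.96))   # ~0.975, 0.025, 0.95, 0.05
```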

The 68-95-99.7 Rule

The empirical rule (also called the 68-95-99.7 rule or three-sigma rule) provides quick approximations for the normal distribution without needing to consult a table: roughly 68% of values fall within 1 standard deviation of the mean, 95% within 2 standard deviations, and 99.7% within 3.

This rule is useful for quick mental calculations. For example, if the mean height of adult men is 70 inches with a standard deviation of 3 inches, you can immediately say that about 95% of men are between 64 and 76 inches tall (70 plus or minus 6).

Values beyond 3 standard deviations are extremely rare in a normal distribution, occurring less than 0.3% of the time. This is why some quality control methods use 3-sigma limits to identify outliers or defective items.
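The rule is easy to verify numerically; a quick check with scipy.stats (the exact figures are 68.27%, 95.45%, and 99.73%):

```python
from scipy.stats import norm

for k in (1, 2, 3):
    inside = norm.cdf(k) - norm.cdf(-k)   # P(-k < Z < k)
    print(f"Within {k} SD: {inside:.4f}")
# Within 1 SD: 0.6827
# Within 2 SD: 0.9545
# Within 3 SD: 0.9973
```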

Confidence Intervals and Z-Scores

Confidence intervals use Z-scores (or t-scores for small samples) to define a range that likely contains the true population parameter. The most common confidence levels and their corresponding Z-scores are 90% (Z = 1.645), 95% (Z = 1.96), and 99% (Z = 2.576).

The formula for a confidence interval for a population mean is: sample mean plus or minus Z * (sigma / sqrt(n)), where n is the sample size.

A higher confidence level means you can be more certain the true value falls within the interval, but the interval itself is wider. The 99% interval is wider than the 95% interval, which is wider than the 90% interval. Choosing the right confidence level depends on the consequences of being wrong and the precision needed.
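A minimal sketch of the interval formula in Python; the sample mean, sigma, and n below are hypothetical values chosen only to show the arithmetic:

```python
from math import sqrt
from scipy.stats import norm

sample_mean = 75.0    # hypothetical sample mean
sigma = 10.0          # known population standard deviation
n = 25                # sample size
confidence = 0.95

z = norm.ppf(1 - (1 - confidence) / 2)   # 1.96 for 95%
margin = z * sigma / sqrt(n)
print(f"{confidence:.0%} CI: ({sample_mean - margin:.2f}, {sample_mean + margin:.2f})")
# 95% CI: (71.08, 78.92)
```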

Using Z-Scores in Hypothesis Testing

In hypothesis testing, Z-scores are used to determine whether observed results are statistically significant. The process works like this:

  1. State the null hypothesis (H0) and the alternative hypothesis (H1).
  2. Calculate the Z-test statistic from your data.
  3. Determine the p-value by looking up the Z-score in the table.
  4. Compare the p-value to your significance level (often 0.05).
  5. If the p-value is less than the significance level, reject H0.

For a two-sided test at the 0.05 significance level, the critical Z-values are -1.96 and +1.96. If your test statistic falls outside this range, you reject the null hypothesis. For a one-sided test, the critical value is 1.645 (upper tail) or -1.645 (lower tail).
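The critical values and p-values in this procedure come straight from the standard normal distribution; a short Python sketch (the observed statistic z_obs = 2.3 is hypothetical):

```python
from scipy.stats import norm

alpha = 0.05
z_two_sided = norm.ppf(1 - alpha / 2)   # 1.96
z_one_sided = norm.ppf(1 - alpha)       # 1.645

# p-value for a hypothetical observed test statistic
z_obs = 2.3
p_two_sided = 2 * norm.sf(abs(z_obs))   # two-sided p-value
p_upper = norm.sf(z_obs)                # one-sided (upper-tail) p-value
print(z_two_sided, z_one_sided, p_two_sided, p_upper)
```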

Common Z-Scores Reference

Z-Score | Left Tail P(Z < z) | Right Tail P(Z > z) | Common Use
-2.576  | 0.0050             | 0.9950              | 99% CI lower bound
-1.960  | 0.0250             | 0.9750              | 95% CI lower bound
-1.645  | 0.0500             | 0.9500              | 90% CI lower bound
-1.282  | 0.1000             | 0.9000              | 80% CI lower bound
 0.000  | 0.5000             | 0.5000              | Mean
 1.282  | 0.9000             | 0.1000              | 80% CI upper bound
 1.645  | 0.9500             | 0.0500              | 90% CI upper bound
 1.960  | 0.9750             | 0.0250              | 95% CI upper bound
 2.326  | 0.9900             | 0.0100              | 1% significance
 2.576  | 0.9950             | 0.0050              | 99% CI upper bound
 3.291  | 0.9995             | 0.0005              | Six Sigma process

Real-World Applications

The normal distribution and Z-scores are used in many fields. Here are some practical examples that show why this table matters beyond the classroom.

Quality control in manufacturing

Six Sigma methodology uses the normal distribution to set quality standards. A process operating at Six Sigma has only 3.4 defects per million opportunities. Control charts use Z-scores to detect when a manufacturing process has shifted from its target, triggering corrective action before defective products reach customers.

Standardized testing

SAT, GRE, and IQ scores are designed to follow a normal distribution. Z-scores allow comparison across different test administrations. If you scored in the 95th percentile, your Z-score was approximately 1.645, meaning you scored higher than 95% of test takers.

Finance and risk management

Value at Risk (VaR) calculations use the normal distribution to estimate potential losses. A 95% VaR tells you the maximum loss expected on 95% of trading days. Z-scores help identify unusual price movements and quantify investment risk.

Medical research

Clinical trials use Z-tests to determine if a new treatment is significantly better than a placebo. Growth charts for children use Z-scores (called percentiles) to track whether a child's height or weight is within the normal range for their age.

The Central Limit Theorem

The Central Limit Theorem (CLT) is one of the most important results in statistics and is the primary reason the normal distribution appears so often. The CLT states that the sampling distribution of the sample mean approaches a normal distribution as the sample size increases, regardless of the shape of the underlying population distribution.

This means that if you take many samples of size n from any population (even a skewed or non-normal one) and calculate the mean of each sample, those sample means will be approximately normally distributed when n is large enough. In practice, a sample size of 30 or more is usually sufficient for the CLT to produce a good approximation.

The CLT explains why the normal distribution is so useful in hypothesis testing and confidence intervals. Even when the underlying data is not normally distributed, the distribution of the sample mean will be approximately normal for large enough samples. This allows us to use Z-tables and Z-tests for a wide range of applications.

The CLT also specifies that the mean of the sampling distribution equals the population mean, and the standard deviation of the sampling distribution (called the standard error) equals sigma divided by the square root of n. As sample size increases, the standard error decreases, making the sampling distribution more concentrated around the true mean.

Normal Approximation to Other Distributions

The normal distribution can serve as an approximation for several other distributions under certain conditions, making the Z-table even more useful.

Approximation to the binomial

When n (number of trials) is large and p (probability of success) is not too close to 0 or 1, the binomial distribution can be approximated by a normal distribution with mean = np and standard deviation = sqrt(np(1-p)). A common rule of thumb is that this approximation works well when both np and n(1-p) are at least 5. A continuity correction of 0.5 improves accuracy.
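A quick way to see how close the approximation is; a Python sketch with hypothetical values n = 100 and p = 0.3, comparing the exact binomial CDF to the normal approximation with the continuity correction:

```python
from math import sqrt
from scipy.stats import binom, norm

n, p = 100, 0.3            # hypothetical trial count and success probability
k = 25                     # we want P(X <= 25)

exact = binom.cdf(k, n, p)
mu, sd = n * p, sqrt(n * p * (1 - p))
approx = norm.cdf((k + 0.5 - mu) / sd)   # continuity correction of 0.5

print(f"exact {exact:.4f}, normal approx {approx:.4f}")
```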

Approximation to the Poisson

For a Poisson distribution with a large mean (lambda greater than about 20), the normal distribution with mean = lambda and standard deviation = sqrt(lambda) provides a good approximation. This is useful because the Poisson distribution does not have a simple closed-form CDF.
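The same comparison for the Poisson case; a Python sketch with a hypothetical lambda of 25:

```python
from math import sqrt
from scipy.stats import poisson, norm

lam = 25                   # hypothetical Poisson mean (lambda > 20)
k = 20                     # we want P(X <= 20)

exact = poisson.cdf(k, lam)
approx = norm.cdf((k + 0.5 - lam) / sqrt(lam))   # with continuity correction
print(f"exact {exact:.4f}, normal approx {approx:.4f}")
```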

Relationship to the t-distribution

The t-distribution (Student's t) is used instead of the normal distribution when the population standard deviation is unknown and the sample size is small. As the degrees of freedom increase (larger sample sizes), the t-distribution converges to the standard normal distribution. For degrees of freedom above 30, the t-distribution is nearly identical to the normal.
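This convergence is easy to see by comparing critical values; a short scipy sketch printing the two-sided 95% critical value for several degrees of freedom:

```python
from scipy.stats import norm, t

print(f"normal: {norm.ppf(0.975):.3f}")          # 1.960
for df in (5, 10, 30, 100):
    print(f"t, df={df:>3}: {t.ppf(0.975, df):.3f}")
# The t critical values shrink toward 1.960 as df grows
```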

Skewness, Kurtosis, and Normality

Real-world data rarely follows a normal distribution. Two key measures help quantify how data deviates from normality.

Skewness

Skewness measures the asymmetry of a distribution. A perfectly normal distribution has a skewness of 0. Positive skewness (right-skewed) means the right tail is longer, which is common in income data and housing prices. Negative skewness (left-skewed) means the left tail is longer, which sometimes occurs in exam scores when most students perform well.

Kurtosis

Kurtosis measures the "tailedness" of a distribution. The normal distribution has a kurtosis of 3 (or excess kurtosis of 0). Distributions with higher kurtosis (leptokurtic) have heavier tails and a sharper peak. This means extreme values are more likely than a normal distribution would predict. Financial returns often exhibit leptokurtic behavior, which is why normal distribution assumptions can underestimate financial risk.

Testing for normality

Several statistical tests can assess whether data follows a normal distribution. The Shapiro-Wilk test is one of the most powerful choices for small to moderate sample sizes. The Anderson-Darling test is also widely used. Graphical methods like Q-Q (quantile-quantile) plots provide a visual assessment: if the data, plotted against theoretical normal quantiles, falls approximately on a straight line, the data is approximately normal.
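A minimal sketch of a normality check in Python using scipy's Shapiro-Wilk test; the data here is simulated rather than real:

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
data = rng.normal(loc=50, scale=5, size=100)   # simulated, roughly normal data

stat, p_value = shapiro(data)
print(f"W = {stat:.4f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) suggests the data is not normal;
# a large p-value is consistent with normality (it does not prove it).
```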

Modern Z-Score Applications

Percentile ranks

Z-scores can be directly converted to percentile ranks using the Z-table. A Z-score of 1.28 corresponds to the 90th percentile, meaning the value is higher than 90% of the data. A Z-score of -0.84 corresponds to the 20th percentile. This is widely used in standardized testing, where scores are reported as percentiles. SAT scores, for example, come with a percentile rank that tells you what proportion of test takers scored below you.
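Converting between Z-scores and percentile ranks is a one-line call in each direction; a short scipy sketch reproducing the figures above:

```python
from scipy.stats import norm

# Z-score -> percentile rank
print(norm.cdf(1.28))    # ~0.90  -> 90th percentile
print(norm.cdf(-0.84))   # ~0.20  -> 20th percentile

# Percentile -> Z-score (inverse lookup)
print(norm.ppf(0.90))    # ~1.2816
print(norm.ppf(0.20))    # ~-0.8416
```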

Outlier detection

Values with Z-scores beyond a certain threshold are often flagged as outliers. Common thresholds are Z > 2 or Z > 3 (and Z < -2 or Z < -3). In a normal distribution, values with |Z| > 3 occur less than 0.3% of the time, so they are considered unusual. However, this method assumes the data is approximately normal. For non-normal data, other outlier detection methods (like IQR-based methods) may be more appropriate.
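A minimal sketch of Z-score outlier screening in Python; the values are hypothetical, and note that with such a small sample the flagged point only reaches |z| of about 2.25 because it inflates the standard deviation it is judged against:

```python
import numpy as np

values = np.array([10.1, 9.8, 10.3, 9.9, 10.0, 14.2, 10.2])  # hypothetical data
z = (values - values.mean()) / values.std(ddof=1)

threshold = 2   # use 3 for a more conservative screen
print(z.round(2))
print(values[np.abs(z) > threshold])   # flags 14.2 (z ~ 2.25)
```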

Process capability in manufacturing

Manufacturing quality uses Z-scores to calculate process capability indices. The Cpk index measures how well a process meets specification limits. A Cpk of 1.0 means the nearest specification limit is 3 standard deviations from the mean (3-sigma). A Cpk of 2.0 means the limit is 6 standard deviations away (6-sigma). Higher Cpk values indicate a more capable process with fewer defects.
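A small sketch of the Cpk calculation; the process mean, standard deviation, and specification limits below are hypothetical:

```python
def cpk(mean, sd, lsl, usl):
    """Process capability: distance to the nearer spec limit in units of 3 sigma."""
    return min(usl - mean, mean - lsl) / (3 * sd)

# Hypothetical process: target 10.00 mm, SD 0.05 mm, specs 9.85-10.15 mm
print(cpk(mean=10.0, sd=0.05, lsl=9.85, usl=10.15))   # 1.0 -> a "3-sigma" process
```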

Finance: Value at Risk (VaR)

In financial risk management, VaR calculations use Z-scores to estimate potential portfolio losses. A 95% VaR uses Z = 1.645, meaning there is a 5% chance that losses will exceed the calculated amount on any given day. A 99% VaR uses Z = 2.326. While the normality assumption has limitations for financial returns (which tend to have fat tails), it remains a common starting point for risk assessment.
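A sketch of the parametric (normal) VaR calculation under the simplifying assumption of a zero mean daily return; the portfolio value and volatility are hypothetical:

```python
from scipy.stats import norm

portfolio_value = 1_000_000      # hypothetical portfolio value ($)
daily_sigma = 0.02               # hypothetical daily return volatility (2%)

var_95 = norm.ppf(0.95) * daily_sigma * portfolio_value   # z = 1.645
var_99 = norm.ppf(0.99) * daily_sigma * portfolio_value   # z = 2.326
print(f"95% one-day VaR: ${var_95:,.0f}")   # ~ $32,900
print(f"99% one-day VaR: ${var_99:,.0f}")   # ~ $46,500
```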

Properties of the Standard Normal Distribution

The standard normal distribution (Z-distribution) has several mathematical properties that make it the foundation of statistical inference: it is perfectly symmetric about 0, its mean, median, and mode are all 0, its standard deviation and variance both equal 1, and the total area under its curve equals 1.


Browser Compatibility

This Z-table calculator and bell curve visualization work in all modern browsers, including Chrome, Firefox 55+, Safari 11+, Edge 79+, and Opera 47+. The bell curve visualization requires Canvas support. The tool also works on mobile browsers for iOS and Android. JavaScript must be enabled for the interactive calculator and table generation features to work correctly.

History of the Normal Distribution

The normal distribution was first described by Abraham de Moivre in 1733 as an approximation to the binomial distribution. De Moivre showed that as the number of coin flips increases, the distribution of the number of heads approaches a smooth bell-shaped curve. He published this result in his book "The Doctrine of Chances."

Pierre-Simon Laplace used the normal distribution in his work on astronomical observations and the theory of errors in the late 18th century. He showed that the distribution of measurement errors tends to follow a bell curve, which is why it was initially called the "error curve" or "law of errors."

Carl Friedrich Gauss independently derived the normal distribution around 1809 in his work on least-squares estimation for astronomical orbit calculations. His contributions were so significant that the distribution is often called the "Gaussian distribution" in his honor. Gauss showed that if measurement errors follow a normal distribution, then the method of least squares provides the best estimate of the true value.

The term "normal distribution" was popularized by Karl Pearson and Francis Galton in the late 19th century. Galton was fascinated by the bell curve's appearance in biological measurements (height, weight, intelligence) and saw it as evidence of a "normal" or natural pattern in human traits. While the term "normal" might imply that this distribution is more natural or common than others, that is not necessarily the case. The name has stuck for historical reasons.

The mathematical properties of the normal distribution were further developed by many statisticians in the 19th and 20th centuries. The Central Limit Theorem, proved rigorously by Lindeberg and Lévy in the 1920s, provided the theoretical foundation for why the normal distribution appears so frequently in practice.

Alternatives to the Z-Table

While the Z-table is a basic tool in statistics, several alternatives and extensions are available for situations where the standard normal distribution is not appropriate.

t-table (Student's t-distribution)

The t-distribution is used when the population standard deviation is unknown and must be estimated from the sample. This is the common situation in practice. The t-distribution has heavier tails than the normal distribution, reflecting the additional uncertainty from estimating the standard deviation. As the sample size increases, the t-distribution approaches the normal, and for large samples (n > 30), the two are nearly identical.

Chi-square table

The chi-square distribution is used in hypothesis tests about variance and in goodness-of-fit tests. It is always right-skewed and takes only positive values. The chi-square test statistic follows a chi-square distribution with degrees of freedom equal to (number of categories - 1) for goodness-of-fit tests or (rows - 1)(columns - 1) for tests of independence.

F-table

The F-distribution is the ratio of two chi-square distributions divided by their respective degrees of freedom. It is used in analysis of variance (ANOVA) to test whether the means of multiple groups are equal, and in regression analysis to test the overall significance of a model. Like the chi-square distribution, it takes only positive values.

Software calculators

Modern statistical software (R, Python's scipy, Excel, Google Sheets) can calculate exact probabilities for any distribution without needing printed tables. The functions scipy.stats.norm.cdf() in Python, NORM.S.DIST() in Excel, and pnorm() in R all compute standard normal cumulative probabilities. These tools are more precise than tables (which are limited to 4 decimal places) and can handle any Z-score, not just those between -3.9 and 3.9.

Worked Examples

Example 1: Exam scores

A statistics class has exam scores that are normally distributed with a mean of 72 and a standard deviation of 8. What percentage of students scored above 88?

Solution: First calculate the Z-score: Z = (88 - 72) / 8 = 2.0. Look up Z = 2.0 in the table: P(Z < 2.0) = 0.9772. Therefore P(Z > 2.0) = 1 - 0.9772 = 0.0228, or about 2.28% of students scored above 88.

Example 2: Manufacturing tolerances

A factory produces bolts with a target diameter of 10mm and a standard deviation of 0.05mm. Bolts are rejected if they deviate more than 0.1mm from the target. What percentage of bolts are rejected?

Solution: Calculate the Z-score for the tolerance limit: Z = 0.1 / 0.05 = 2.0. We need P(|Z| > 2), which is the two-tail probability. P(|Z| > 2) = 2 x P(Z > 2) = 2 x 0.0228 = 0.0456, or about 4.56% of bolts are rejected.

Example 3: Confidence interval for a mean

A sample of 100 light bulbs has a mean lifespan of 1200 hours with a known population standard deviation of 100 hours. Construct a 95% confidence interval.

Solution: The standard error is 100 / sqrt(100) = 10. For 95% confidence, Z = 1.96. The margin of error is 1.96 x 10 = 19.6. The confidence interval is 1200 +/- 19.6, or (1180.4, 1219.6) hours.

Example 4: Hypothesis test

A company claims their battery lasts 500 hours on average. A sample of 36 batteries has a mean of 490 hours. The population standard deviation is 30 hours. At the 5% significance level, is there evidence that the mean battery life is less than 500 hours?

Solution: H0: mu = 500, H1: mu < 500 (one-tailed test). Z = (490 - 500) / (30 / sqrt(36)) = -10 / 5 = -2.0. The critical value for a one-tailed test at alpha = 0.05 is Z = -1.645. Since -2.0 < -1.645, we reject H0. There is evidence at the 5% level that the mean battery life is less than 500 hours. The p-value is P(Z < -2.0) = 0.0228.
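All four worked examples can be checked in a few lines of Python with scipy.stats:

```python
from math import sqrt
from scipy.stats import norm

# Example 1: P(score > 88) with mean 72, SD 8
print(norm.sf((88 - 72) / 8))                  # 0.0228

# Example 2: rejection rate, |Z| > 0.1 / 0.05 = 2
print(2 * norm.sf(0.1 / 0.05))                 # 0.0455 (table rounding gives 0.0456)

# Example 3: 95% CI for the mean, n = 100, sigma = 100
margin = norm.ppf(0.975) * 100 / sqrt(100)
print(1200 - margin, 1200 + margin)            # (1180.4, 1219.6)

# Example 4: one-tailed Z-test, H1: mu < 500
z_stat = (490 - 500) / (30 / sqrt(36))
print(z_stat, norm.cdf(z_stat))                # -2.0, p = 0.0228
```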

Sampling Distributions

A sampling distribution is the probability distribution of a statistic (like the mean or proportion) calculated from repeated random samples. Understanding sampling distributions is critical for using Z-scores in inference.

Distribution of the sample mean

If you take many samples of size n from a population and calculate the mean of each sample, those sample means form a distribution. By the Central Limit Theorem, this distribution is approximately normal for large n. The mean of the sampling distribution equals the population mean (mu), and the standard deviation (standard error) equals sigma divided by the square root of n.

This has a practical consequence: larger samples produce more precise estimates. Quadrupling the sample size cuts the standard error in half. This is why researchers use large sample sizes to detect small effects.
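A tiny check of how the standard error shrinks with sample size (sigma = 100 is hypothetical):

```python
from math import sqrt

sigma = 100                       # hypothetical population SD
for n in (25, 100, 400):
    print(n, sigma / sqrt(n))     # 20.0, 10.0, 5.0 -- quadrupling n halves the SE
```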

Distribution of the sample proportion

When working with proportions (like the percentage of voters supporting a candidate), the sampling distribution of the sample proportion is also approximately normal for large samples. The standard error of a proportion is sqrt(p(1-p)/n), where p is the true population proportion.

A Z-test for proportions uses this standard error to calculate the Z-statistic: Z = (p-hat - p0) / sqrt(p0(1-p0)/n), where p-hat is the observed sample proportion and p0 is the hypothesized population proportion.
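A minimal sketch of the one-sample proportion Z-test in Python; the observed proportion, hypothesized value, and sample size are hypothetical:

```python
from math import sqrt
from scipy.stats import norm

p0 = 0.50          # hypothesized proportion (e.g. a fair coin)
p_hat = 0.56       # observed sample proportion (hypothetical)
n = 400            # sample size

z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
p_value = 2 * norm.sf(abs(z))      # two-sided test
print(f"z = {z:.2f}, p = {p_value:.4f}")   # z = 2.40, p ~ 0.0164
```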

Type I and Type II Errors

Statistical hypothesis testing involves a trade-off between two types of errors, both of which are directly related to the Z-scores and probabilities in the table above.

Type I error (false positive)

A Type I error occurs when you reject a true null hypothesis. The probability of a Type I error is denoted alpha and equals the significance level you choose for the test. If you use alpha = 0.05 and the critical Z-score is 1.96 (two-tailed), there is a 5% chance of incorrectly rejecting H0 when it is actually true. Choosing a smaller alpha (like 0.01) reduces the risk of false positives but makes it harder to detect true effects.

Type II error (false negative)

A Type II error occurs when you fail to reject a false null hypothesis. The probability of a Type II error is denoted beta. Power (1 - beta) is the probability of correctly rejecting a false null hypothesis. Power increases with larger sample sizes, larger effect sizes, and larger alpha levels. A well-designed study typically aims for power of at least 0.80, meaning there is an 80% chance of detecting a true effect.

The alpha-beta trade-off

For a fixed sample size and effect size, reducing alpha (making the test more conservative) increases beta (making it harder to detect true effects). The only way to simultaneously reduce both is to increase the sample size. This trade-off is fundamental to the design of experiments and clinical trials.

Effect Sizes and Z-Scores

While statistical significance (determined by Z-scores and p-values) tells you whether an effect exists, effect size tells you how large the effect is. A result can be statistically significant but practically meaningless if the effect size is tiny.

Cohen's d is a common effect size measure that is directly related to Z-scores. It measures the difference between two means in units of standard deviations: d = (mean1 - mean2) / pooled standard deviation. Cohen's guidelines classify d = 0.2 as small, d = 0.5 as medium, and d = 0.8 as large, though these benchmarks vary by field.

The relationship between effect size, sample size, and statistical significance is important. With a very large sample size, even a tiny effect (d = 0.01) can produce a significant Z-score. Conversely, with a small sample, even a large effect might not reach significance. This is why reporting effect sizes alongside p-values is considered best practice in modern statistics.

Power analysis uses Z-scores and effect sizes to determine the sample size needed for a study. Given an expected effect size, desired power level, and significance level, you can calculate the minimum sample size. For a two-sample Z-test with alpha = 0.05, power = 0.80, and d = 0.5, you need approximately 64 subjects per group.
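A sketch of that sample-size calculation using the common normal-approximation formula n per group = 2((z_alpha/2 + z_beta)/d)^2; it gives about 63 per group, in line with the figure quoted above:

```python
from math import ceil
from scipy.stats import norm

alpha, power, d = 0.05, 0.80, 0.5       # two-sided test, medium effect size

z_alpha = norm.ppf(1 - alpha / 2)       # 1.96
z_beta = norm.ppf(power)                # 0.84
n_per_group = 2 * ((z_alpha + z_beta) / d) ** 2
print(ceil(n_per_group))                # 63 per group
```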

Dealing with Non-Normal Data

Not all data follows a normal distribution. When data is non-normal, the Z-table may not directly apply. Here are common approaches for handling non-normal data in statistical analysis.

Data transformations

Applying a mathematical transformation can sometimes make non-normal data approximately normal. Common transformations include the logarithmic transformation (useful for right-skewed data), the square root transformation (useful for count data), and the Box-Cox transformation (a family of transformations that finds the best normalizing power). After transformation, standard Z-tests and confidence intervals can be applied to the transformed data.

Nonparametric tests

Nonparametric statistical tests do not assume normality. The Mann-Whitney U test replaces the two-sample Z-test, the Wilcoxon signed-rank test replaces the paired Z-test, and the Kruskal-Wallis test replaces one-way ANOVA. These tests work with ranks rather than raw values and are valid for any distribution shape. The trade-off is slightly lower statistical power compared to their parametric counterparts when the data actually is normal.

Bootstrapping

Bootstrap methods create a sampling distribution empirically by resampling from the observed data with replacement. Instead of assuming the sampling distribution is normal, bootstrapping estimates it directly from the data. A 95% bootstrap confidence interval uses the 2.5th and 97.5th percentiles of the bootstrap distribution. This approach works for any distribution shape and any statistic, making it extremely adaptable.
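A minimal bootstrap sketch in Python using numpy; the data is simulated from a skewed (exponential) distribution to mimic a non-normal sample:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.exponential(scale=10, size=200)   # hypothetical, clearly non-normal data

# Resample with replacement and record the mean of each resample
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(10_000)
])

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```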

Resistant statistical methods

Resistant methods are designed to work well even when assumptions are violated. Trimmed means (which discard a percentage of extreme values before averaging) are less affected by outliers than the regular mean. The median is an extreme trim that is highly resistant to non-normality. Robust standard errors (like Huber-White sandwich estimators) produce valid inference in regression even when error distributions are non-normal.

Standard Scores in Standardized Testing

Many standardized tests use transformed Z-scores to report results in more familiar formats. Understanding the transformation helps you interpret scores and convert between scales.

IQ scores use a mean of 100 and standard deviation of 15: IQ = 100 + 15 x Z. An IQ of 130 corresponds to Z = 2.0, placing a person at the 97.7th percentile. An IQ of 85 corresponds to Z = -1.0, at the 15.9th percentile. This transformation ensures most people fall between 70 and 130, making the numbers easier to interpret.

SAT scores (before the 2016 redesign) used a mean of 500 and standard deviation of 100 per section: SAT = 500 + 100 x Z. A score of 700 meant Z = 2.0, or the 97.7th percentile. The current SAT uses a different scoring system but still normalizes results to follow a roughly bell-shaped distribution.

T-scores (used in some psychological assessments) use a mean of 50 and standard deviation of 10: T = 50 + 10 x Z. A T-score of 70 corresponds to Z = 2.0, and a T-score of 30 corresponds to Z = -2.0. This is commonly used in personality testing and behavioral assessment instruments.
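These conversions are all the same linear rescaling of a Z-score; a short Python sketch:

```python
from scipy.stats import norm

def from_z(z, mean, sd):
    """Rescale a Z-score to a reporting scale (IQ, old SAT, T-score)."""
    return mean + sd * z

z = 2.0
print(from_z(z, 100, 15))        # IQ 130
print(from_z(z, 500, 100))       # old SAT section score 700
print(from_z(z, 50, 10))         # T-score 70
print(f"{norm.cdf(z):.3f}")      # 0.977 -> 97.7th percentile
```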

Frequently Asked Questions

What is a Z-score?

A Z-score (also called a standard score) measures how many standard deviations a data point is from the mean of a distribution. A Z-score of 0 means the value equals the mean. A Z-score of 1.5 means the value is 1.5 standard deviations above the mean.

How do you read a Z-table?

Find the row for the first decimal place (e.g., 1.5) and the column for the second decimal place (e.g., 0.04). The intersection gives you the cumulative probability P(Z < 1.54). For example, P(Z < 1.54) = 0.9382.

What is the standard normal distribution?

The standard normal distribution is a normal distribution with a mean of 0 and a standard deviation of 1. Any normal distribution can be converted to the standard normal by calculating Z-scores: Z = (X - mean) / standard deviation.

What is a cumulative probability?

Cumulative probability P(Z < z) is the probability that a standard normal random variable takes a value less than or equal to z. It represents the area under the bell curve to the left of that Z-score.

What is the 68-95-99.7 rule?

The 68-95-99.7 rule states that approximately 68% of data falls within 1 standard deviation of the mean, 95% within 2 standard deviations, and 99.7% within 3 standard deviations in a normal distribution.

How do you calculate right-tail probability?

Right-tail probability P(Z > z) = 1 minus the cumulative probability. Look up the Z-score in the table to get P(Z < z), then subtract it from 1.

What is two-tail probability used for?

Two-tail probability is used in two-sided hypothesis tests. It measures the probability of getting a Z-score at least as extreme as the observed value in either direction. It equals 2 times the one-tail probability for the absolute value of Z.

What Z-score corresponds to a 95% confidence level?

For a 95% confidence interval, the critical Z-score is 1.96. This means 95% of the area under the standard normal curve falls between Z = -1.96 and Z = +1.96.

Last updated: March 19, 2026

Last verified working: March 21, 2026 by Michael Lip

Update History

March 19, 2026 - Published initial tool with core logic
March 23, 2026 - Expanded FAQ section and added breadcrumb schema
March 25, 2026 - Cross-browser testing and edge case fixes
