We often summarize distributions with statistics: functions of data. The most basic way to do this is with `summary()`, whose component values can all be calculated individually.
Code
```r
# A random sample (real data)
X <- USArrests[,'Murder']
summary(X)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   0.800   4.075   7.250   7.788  11.250  17.400

# A random sample (computer simulation)
X <- runif(1000)
summary(X)
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## 0.0001145 0.2517867 0.4938861 0.4997452 0.7368083 0.9996801

# Another random sample (computer simulation)
X <- rnorm(1000)
summary(X)
##     Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
## -3.88364 -0.61378  0.04229  0.04504  0.69259  4.31859
```
Together, the sample mean and variance statistics summarize the central tendency and dispersion of a distribution. In some special cases, such as with the normal distribution, they completely describe the distribution. Other distributions are better described with other statistics, either as an alternative or in addition to the mean and variance. After discussing those other statistics, we will return to the two most basic statistics in theoretical detail.
3.1 Mean and Variance
The mean and variance are the two most basic statistics, summarizing the center of the data in your sample and how spread apart the values are. As before, we represent the data as a vector \(\hat{X}=(\hat{X}_{1}, \hat{X}_{2}, \dots, \hat{X}_{n})\), where there are \(n\) observations and \(\hat{X}_{i}\) is the value of the \(i\)th one.
Mean.
Perhaps the most common statistic is the empirical mean, also known as the sample mean, which is the [sum of all values] divided by [number of values]; \[\begin{eqnarray}
\hat{M} &=& \frac{\sum_{i=1}^{n}\hat{X}_{i}}{n},
\end{eqnarray}\] where \(\hat{X}_{i}\) denotes the value of the \(i\)th observation.
Note
For example, a dataset of \(\{1,4,10\}\) has a mean of \([1+4+10]/3=5\).
Code
```r
X <- c(1,4,10)
sum(X)/length(X)
## [1] 5
mean(X)
## [1] 5
```
Code
```r
# compute the mean of a random sample
X1 <- USArrests[,'Murder']
X1_mean <- mean(X1)
X1_mean
## [1] 7.788

# visualize on a histogram
hist(X1, border=NA, main=NA)
abline(v=X1_mean, col=2, lwd=2)
title(paste0('mean= ', round(X1_mean,2)), font.main=1)
```
Variance.
Perhaps the second most common statistic is the empirical variance: the average squared deviation from the mean \[\begin{eqnarray}
\hat{V} &=&\frac{\sum_{i=1}^{n} [\hat{X}_{i} - \hat{M}]^2}{n}.
\end{eqnarray}\] The empirical standard deviation is simply \(\hat{S} = \sqrt{\hat{V} }\).
Note
For example, a dataset of \(\{1,4,10\}\) has a mean of \([1+4+10]/3=5\). The variance is \([(1-5)^2+(4-5)^2+(10-5)^2]/3=14\) and the standard deviation is \(\sqrt{14}\).
Code
```r
X <- c(1,4,10)
X_mean <- mean(X)
X_var <- mean( (X - X_mean)^2 )
sqrt(X_var)
## [1] 3.741657
```
Note that an “unbiased version” of the empirical variance is used by R and many statisticians: \(\hat{V}' =\frac{\sum_{i=1}^{n} [\hat{X}_{i} - \hat{M}]^2}{n-1}\) and \(\hat{S}' = \sqrt{\hat{V}'}\). In this class, we use the version defined previously because we have not yet covered “biased/unbiased” statistics. Do not be concerned, as there is hardly any difference when \(n\) is large: e.g., \(\frac{1}{n}\approx \frac{1}{n-1}\) for \(n=100,000\).
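For a quick check, here is a small sketch comparing the two versions on the murder-rate data (using base R's `var`):

```r
# Compare the two variance formulas on the same sample
X1 <- USArrests[,'Murder']
n  <- length(X1)
mean( (X1 - mean(X1))^2 )  # version defined above: divides by n
var(X1)                    # R's built-in version: divides by n-1
var(X1)*(n-1)/n            # rescaling one recovers the other
```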
A general rule of applied statistics is that there are multiple ways to measure something. Mean and Variance are measurements of Center and Spread, but there are others that have different theoretical properties and may be better suited for your dataset.
Medians and Absolute Deviations.
We can use the empirical Median as a “robust alternative” to the mean that is especially useful for data with asymmetric distributions and extreme values. Recall that the \(q\)th quantile is the value below which a proportion \(q\) of the data fall and above which a proportion \(1-q\) fall. The median (\(q=.5\)) is the point where half of the data take lower values and the other half take higher values. This implies that the median is not sensitive to extreme values (whereas the mean is).
```r
X1_extreme <- c(X1, 1000) # add one extreme value

#par(mfrow=c(1,2)) # visualize side-by-side
#hist(X1)
#hist(X1_extreme)

# Which measures of central tendency are robust
# to a single extreme value?
mean( X1 )
## [1] 7.788
mean( X1_extreme )
## [1] 27.24314
quantile(X1, prob=0.5)
##  50% 
## 7.25
quantile(X1_extreme, prob=0.5)
## 50% 
##  7.3
```
We can also use the sample Interquartile Range or Median Absolute Deviation as an alternative to the variance. The difference between the first and third quartiles (quantiles \(q=.25\) and \(q=.75\)) measures the range of the middle \(50\%\) of the data, which is how the boxplot measures “spread”. The median absolute deviation is another statistic that also measures “spread”: \[\begin{eqnarray}
\tilde{M} &=& \text{Med}( \hat{X}_{i}) \\
\hat{\text{MAD}} &=& \text{Med}\left( | \hat{X}_{i} - \tilde{M} | \right).
\end{eqnarray}\]
Code
```r
IQR(X1)
## [1] 7.175
mad(X1)
## [1] 5.41149
```
Note
Compute the \(IQR\) statistic for the dataset \(\{-100,1,4,10,10\}\). For reference, the code below computes the \(IQR\) for the simpler dataset \(\{1,4,10\}\).
```r
X <- c(1,4,10)
# An intuitive alternative to sd(X), used in the boxplot
quants <- quantile(X, probs=c(.25,.75))
quants[2]-quants[1]
## 75% 
## 4.5
IQR(X)
## [1] 4.5
```
Compute the \(\text{MAD}\) statistic for the dataset \(\{1,4,10\}\). First compute the median, \(\text{Med}(1,4,10)=4\). Then compute \(\text{Med}( |1-4|,~ |4-4|,~ |10-4| )= \text{Med}( 3, 0, 6 ) = 3\).
Code
```r
# Another alternative to sd(X)
mad(X, constant=1)
## [1] 3

# Computationally equivalent
# x_med <- quantile(X, probs=.5)
# x_absdev <- abs(X - x_med)
# quantile(x_absdev, probs=.5)
```
Note
Compare the robustness of various “spread” metrics to an extreme value.
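One possible sketch, reusing the extreme-value example from above:

```r
# Compare "spread" statistics with and without one extreme value
X1 <- USArrests[,'Murder']
X1_extreme <- c(X1, 1000)
sd(X1)
sd(X1_extreme)   # the standard deviation changes a lot
IQR(X1)
IQR(X1_extreme)  # the interquartile range barely moves
mad(X1)
mad(X1_extreme)  # the median absolute deviation barely moves
```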
Note that there are other “absolute deviation” statistics:
Code
```r
# sometimes seen elsewhere
mean( abs(X1 - mean(X1)) )
mean( abs(X1 - median(X1)) )
```
Weighted Statistics.
The mean generalizes to a weighted mean: an average where different values contribute to the final result with varying levels of importance. For each outcome \(x\) we have a weight \(W_{x}\) and compute \[\begin{eqnarray}
\hat{M} &=& \frac{\sum_{x} x W_{x}}{\sum_{x} W_{x}} = \sum_{x} x w_{x},
\end{eqnarray}\] where \(w_{x}=\frac{W_{x}}{\sum_{x'} W_{x'}}\) is a normalized version of \(W_{x}\) that implies \(\sum_{x}w_{x}=1\).1
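For instance, here is a small sketch using base R's `weighted.mean` (the values and weights are made up for illustration):

```r
# hypothetical values and (unnormalized) weights
x <- c(1, 4, 10)
W <- c(2, 1, 1)
sum(x*W)/sum(W)      # weighted mean from the formula: 16/4 = 4
weighted.mean(x, W)  # base R equivalent (normalizes the weights internally)
```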
Sometimes, none of the above work well. With categorical data, for example, distributions are easier to describe with other statistics. The sample mode is the most common observation: the value with the highest observed frequency. We can also measure the spread of the frequencies, or how concentrated they are at the mode versus elsewhere.
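For example, a sketch of computing the sample mode with `table` (the categorical values are made up for illustration):

```r
# a small categorical sample (hypothetical values)
Z <- c('a', 'b', 'b', 'c', 'b', 'a')
Z_tab <- table(Z)               # observed frequencies
Z_tab
names(Z_tab)[which.max(Z_tab)]  # sample mode: the most frequent value
```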
Note
Here is the proportion of murder rates in \((2,4)\).
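One way to compute it (a sketch, assuming the open interval \((2,4)\)):

```r
X1 <- USArrests[,'Murder']
# proportion of murder rates strictly between 2 and 4
mean( X1 > 2 & X1 < 4 )
```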
Central tendency and dispersion are often insufficient to describe a distribution. To further describe shape, we can compute sample skew and kurtosis to measure asymmetry and extreme values.
There are many other statistics we could compute on an ad-hoc basis. However, shape is often best understood with graphical descriptions (histogram, ECDF, boxplot), which should be made before numerical descriptions such as the skewness and kurtosis statistics.
Skewness.
The skewness statistic captures how asymmetric the distribution is by measuring the average cubed deviation from the mean, normalized by the cubed standard deviation: \[\begin{eqnarray}
\frac{\sum_{i=1}^{n} [\hat{X}_{i} - \hat{M}]^3 / n}{ \hat{S}^3 }
\end{eqnarray}\]
Code
```r
X <- rweibull(1000, shape=1)
hist(X, border=NA, main=NA, freq=F, breaks=20)
```
We can automatically compare against the normal distribution, which has a skew of 0.
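A minimal sketch of such a skewness function (the name `skewness` is ours, not a built-in; it mirrors the `kurtosis` function defined below):

```r
skewness <- function(X){
    X_mean <- mean(X)
    m3 <- mean((X - X_mean)^3)  # average cubed deviation
    s3 <- sd(X)^3               # cubed standard deviation
    m3/s3
}
skewness( rweibull(1000, shape=1) )  # right-skewed: positive
skewness( rnorm(1000) )              # roughly symmetric: near 0
```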
Kurtosis.
This statistic captures how many “outliers” there are, similar to skewness but using quartic deviations instead of cubed ones: \[\begin{eqnarray}
\frac{\sum_{i=1}^{n} [\hat{X}_{i} - \hat{M}]^4 / n}{ \hat{S}^4 }.
\end{eqnarray}\] Some authors further subtract \(3\) to explicitly compare against the normal distribution (the normal distribution has a kurtosis of \(3\)).
Code
```r
X <- rweibull(1000, shape=1)
boxplot(X, main=NA)
```
Code
```r
kurtosis <- function(X){
    X_mean <- mean(X)
    m4 <- mean((X - X_mean)^4)
    s4 <- sd(X)^4
    kurt <- m4/s4
    # use `kurt - 3` instead to compare against normal
    # excess_kurt <- kurt - 3
    return(kurt)
}
kurtosis( rweibull(1000, shape=1) )
## [1] 6.171985
kurtosis( rweibull(1000, shape=10) )
## [1] 3.305289
```
Clusters/Gaps.
You can also describe distributions in terms of how clustered the values are, including the number of modes, bunching, and many other statistics. But remember that “a picture is worth a thousand words”.
Transformations.
Transformations can stabilize variance, reduce skewness, and make model errors closer to Gaussian.
Perhaps the most common examples are power transformations, \(y= x^\lambda\), which include \(\sqrt{x}\) and \(x^2\).
Other examples include the exponential transformation, \(y=\exp(x)\) for any \(x\in (-\infty, \infty)\), and the logarithmic transformation, \(y=\log(x)\) for any \(x>0\).
Tip
The exponential function is \(e^{x}=1+x+x^2/2+x^3/6+\dots=\sum_{k=0}^{\infty} x^k/k!\) and the \(\log\) function is its inverse.
The Box–Cox Transform nests many cases used by statisticians. For \(x>0\) and parameter \(\lambda\), \[\begin{eqnarray}
y=\begin{cases}
\dfrac{x^\lambda-1}{\lambda}, & \lambda\neq 0,\\
\log\left(x\right) & \lambda=0.
\end{cases}
\end{eqnarray}\] This function is continuous over \(\lambda\).
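The code below uses a helper called `bc_transform`; a minimal sketch of how it could be defined (the name and implementation are ours):

```r
# Box-Cox transform: log(x) when lambda=0, (x^lambda - 1)/lambda otherwise
bc_transform <- function(x, lambda){
    if(lambda == 0){ log(x) } else { (x^lambda - 1)/lambda }
}
```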
```r
par(mfrow=c(1,3))
for(lambda in c(-1,0,1)){
    Y <- bc_transform(X, lambda)
    hist(Y, main=bquote(paste(lambda,'=',.(lambda))), border=NA, freq=F)
}
```
Be careful about transforming your data, as the interpretation can be harder.
For example, the mean of transformed data is not equivalent to transforming the mean, which is one reason to summarize your data before transforming it.
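A quick numerical check (a sketch using the murder-rate data, which are all positive):

```r
X1 <- USArrests[,'Murder']
mean( log(X1) )  # mean of the transformed data
log( mean(X1) )  # transformation of the mean: a different number
```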
Jensen’s inequality.
We can actually say more about how the mean of transformed data differs from transforming the mean. To do that, we define two types of functions:
Concave functions curve inwards, like the inside of a cave.
Convex functions curve outward, the opposite of concave.
Let \(\hat{Y}_{i}=g( \hat{X}_{i})\) for each observation, and denote the sample means of \(\hat{X}\) and \(\hat{Y}\) as \(\hat{M}_{X}\) and \(\hat{M}_{Y}\).
If \(g\) is a concave function, then \(g( \hat{M}_{X} ) \geq \hat{M}_{Y}\). If \(g\) is a convex function, the inequality reverses: \(g( \hat{M}_{X} ) \leq \hat{M}_{Y}\).
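A small numerical illustration (a sketch; \(\log\) is concave and \(x^2\) is convex):

```r
X <- USArrests[,'Murder']
# concave g (log): g of the mean is at least the mean of g
log( mean(X) ) >= mean( log(X) )  # TRUE
# convex g (square): g of the mean is at most the mean of g
mean(X)^2 <= mean( X^2 )          # TRUE
```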
Note that if there are \(K\) unique outcomes and \(W_{x}=1\) then \(\sum_{x}W_{x}=K\) and \(w_{x}=1/K\). This means \(\hat{M} = \sum_{x} x w_{x} = \sum_{x} x /K\), which is just a simple mean.↩︎