3  Data


3.1 Types

Basic Types.

The two basic types of data are cardinal (aka numeric) data and factor data. Cardinal data can be further distinguished as discrete or continuous, and factor data as ordered or unordered.

  • Cardinal (Numeric): the difference between elements always means the same thing.
    • Discrete: E.g. \(\{ 1,2,3\}\) and notice that \(2-1=3-2\).
    • Continuous: E.g., \(\{1.4348, 2.4348, 2.9, 3.9 \}\) and notice that \(2.9-1.4348=3.9-2.4348\).
  • Factor: the difference between elements does not always mean the same thing.
    • Ordered: E.g., \(\{1^{st}, 2^{nd}, 3^{rd}\}\) place in a race and notice that \(1^{st}\) - \(2^{nd}\) place does not equal \(2^{nd}\) - \(3^{rd}\) place for a very competitive person who cares only about winning.
    • Unordered (categorical): E.g., \(\{Amanda, Bert, Charlie\}\) and notice that \(Amanda - Bert\) never makes sense.

Here are some examples:

Code
dat_card1 <- 1:3 # Cardinal data (Discrete)
dat_card1
## [1] 1 2 3

dat_card2 <- c(1.1, 2/3, 3) # Cardinal data (Continuous)
dat_card2
## [1] 1.1000000 0.6666667 3.0000000

dat_fact1 <- factor( c('A','B','C'), ordered=T) # Factor data (Ordinal)
dat_fact1
## [1] A B C
## Levels: A < B < C

dat_fact2 <- factor( c('Leipzig','Los Angeles','Logan'), ordered=F) # Factor data (Categorical)
dat_fact2
## [1] Leipzig     Los Angeles Logan      
## Levels: Leipzig Logan Los Angeles

dat_fact3 <- factor( c(T,F), ordered=F) # Factor data (Categorical)
dat_fact3
## [1] TRUE  FALSE
## Levels: FALSE TRUE

# Explicitly check the data types:
#class(dat_card1)
#class(dat_card2)

Note that for theoretical analysis, the types are sometimes grouped differently as

  • Discrete (discrete cardinal, ordered factor, and unordered factor data). You can count the potential values. E.g., the set \(\{A,B,C\}\) has three potential values.
  • Continuous (continuous cardinal data). There are uncountably infinite potential values. E.g., Try counting the numbers between \(0\) and \(1\) including decimal points, and notice that any two numbers have another potential number between them.

In any case, data are often computationally analyzed as data.frame objects, discussed below.

Strings.

Note that R allows for unstructured plain text, called character strings, which we can then format as factors:

Code
c('A','B','C')  # character strings
## [1] "A" "B" "C"
c('Leipzig','Los Angeles','Logan')  # character strings
## [1] "Leipzig"     "Los Angeles" "Logan"

Also note that strings are encountered in a variety of settings, and you often have to format them after reading them into R.1

Code
# Strings
paste( 'hi', 'mom')
## [1] "hi mom"
paste( c('hi', 'mom'), collapse='--')
## [1] "hi--mom"

kingText <- "The king infringes the law on playing curling."
gsub(pattern="ing", replacement="", kingText)
## [1] "The k infres the law on play curl."
# advanced usage
#gsub("[aeiouy]", "_", kingText)
#gsub("([[:alpha:]]{3})ing\\b", "\\1", kingText) 

3.2 Datasets

Datasets can be stored in a variety of formats on your computer, but they can be analyzed in R using three basic types of objects.

Lists.

Lists are probably the most basic type of data object:

Code
x <- 1:10
y <- 2*x
list(x, y)  # list of vectors
## [[1]]
##  [1]  1  2  3  4  5  6  7  8  9 10
## 
## [[2]]
##  [1]  2  4  6  8 10 12 14 16 18 20

x_mat1 <- matrix(2:7,2,3)
x_mat2 <- matrix(4:-1,2,3)
list(x_mat1, x_mat2)  # list of matrices
## [[1]]
##      [,1] [,2] [,3]
## [1,]    2    4    6
## [2,]    3    5    7
## 
## [[2]]
##      [,1] [,2] [,3]
## [1,]    4    2    0
## [2,]    3    1   -1

Lists are useful for storing unstructured data:

Code
list(list(x_mat1), list(x_mat2))  # list of lists
## [[1]]
## [[1]][[1]]
##      [,1] [,2] [,3]
## [1,]    2    4    6
## [2,]    3    5    7
## 
## 
## [[2]]
## [[2]][[1]]
##      [,1] [,2] [,3]
## [1,]    4    2    0
## [2,]    3    1   -1

list(x_mat1, list(x_mat1, x_mat2)) # list of different objects
## [[1]]
##      [,1] [,2] [,3]
## [1,]    2    4    6
## [2,]    3    5    7
## 
## [[2]]
## [[2]][[1]]
##      [,1] [,2] [,3]
## [1,]    2    4    6
## [2,]    3    5    7
## 
## [[2]][[2]]
##      [,1] [,2] [,3]
## [1,]    4    2    0
## [2,]    3    1   -1

# ...inception...
list(x_mat1,
    list(x_mat1, x_mat2), 
    list(x_mat1, list(x_mat2)
    )) 
## [[1]]
##      [,1] [,2] [,3]
## [1,]    2    4    6
## [2,]    3    5    7
## 
## [[2]]
## [[2]][[1]]
##      [,1] [,2] [,3]
## [1,]    2    4    6
## [2,]    3    5    7
## 
## [[2]][[2]]
##      [,1] [,2] [,3]
## [1,]    4    2    0
## [2,]    3    1   -1
## 
## 
## [[3]]
## [[3]][[1]]
##      [,1] [,2] [,3]
## [1,]    2    4    6
## [2,]    3    5    7
## 
## [[3]][[2]]
## [[3]][[2]][[1]]
##      [,1] [,2] [,3]
## [1,]    4    2    0
## [2,]    3    1   -1

Data.frames.

A data.frame looks like a matrix, but it is actually a list of equal-length columns. This allows you to combine different data types into a single object for analysis, which is why it might be your most common object.

Code
# data.frames: your most common data type
    # matrix of different data-types
    # well-ordered lists
data.frame(x, y)  # list of vectors
##     x  y
## 1   1  2
## 2   2  4
## 3   3  6
## 4   4  8
## 5   5 10
## 6   6 12
## 7   7 14
## 8   8 16
## 9   9 18
## 10 10 20

Create a data.frame storing two different types of data. Then print only the second column:

Code
d0 <- data.frame(x=dat_fact2, y=dat_card2)
d0
##             x         y
## 1     Leipzig 1.1000000
## 2 Los Angeles 0.6666667
## 3       Logan 3.0000000

d0[,'y']
## [1] 1.1000000 0.6666667 3.0000000
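
To confirm that each column keeps its own type, you can inspect the class of every column. A quick check:

Code
sapply(d0, class) # x is a factor, y is numeric
#str(d0) # a more detailed summary of the object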

Arrays.

Arrays are a generalization of matrices to multiple dimensions. They are a very efficient way to store well-formatted numeric data, and are often used in spatial econometrics and time series (often in the form of “data cubes”).

Code
# data square (matrix)
array(data = 1:24, dim = c(3,8))
##      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
## [1,]    1    4    7   10   13   16   19   22
## [2,]    2    5    8   11   14   17   20   23
## [3,]    3    6    9   12   15   18   21   24

# data cube
a <- array(data = 1:24, dim = c(3, 2, 4))
a
## , , 1
## 
##      [,1] [,2]
## [1,]    1    4
## [2,]    2    5
## [3,]    3    6
## 
## , , 2
## 
##      [,1] [,2]
## [1,]    7   10
## [2,]    8   11
## [3,]    9   12
## 
## , , 3
## 
##      [,1] [,2]
## [1,]   13   16
## [2,]   14   17
## [3,]   15   18
## 
## , , 4
## 
##      [,1] [,2]
## [1,]   19   22
## [2,]   20   23
## [3,]   21   24
Code
a[1, , , drop = FALSE]  # Row 1
#a[, 1, , drop = FALSE]  # Column 1
#a[, , 1, drop = FALSE]  # Layer 1

a[ 1, 1,  ]  # Row 1, column 1
#a[ 1,  , 1]  # Row 1, "layer" 1
#a[  , 1, 1]  # Column 1, "layer" 1
a[1 , 1, 1]  # Row 1, column 1, "layer" 1

The apply function extends to arrays:

Code
apply(a, 1, mean)    # Row means
## [1] 11.5 12.5 13.5
apply(a, 2, mean)    # Column means
## [1] 11 14
apply(a, 3, mean)    # "Layer" means
## [1]  3.5  9.5 15.5 21.5
apply(a, 1:2, mean)  # Row/Column combination 
##      [,1] [,2]
## [1,]   10   13
## [2,]   11   14
## [3,]   12   15

Outer products yield arrays:

Code
x <- c(1,2,3)
x_mat1 <- outer(x, x) # x %o% x
x_mat1
##      [,1] [,2] [,3]
## [1,]    1    2    3
## [2,]    2    4    6
## [3,]    3    6    9
is.array(x_mat1) # Matrices are arrays
## [1] TRUE

x_mat2 <- matrix(6:1,2,3)
outer(x_mat2, x)
## , , 1
## 
##      [,1] [,2] [,3]
## [1,]    6    4    2
## [2,]    5    3    1
## 
## , , 2
## 
##      [,1] [,2] [,3]
## [1,]   12    8    4
## [2,]   10    6    2
## 
## , , 3
## 
##      [,1] [,2] [,3]
## [1,]   18   12    6
## [2,]   15    9    3
# outer(x_mat2, matrix(x))
# outer(x_mat2, t(x))
# outer(x_mat1, x_mat2)

3.3 Densities and Distributions

Initial Data Inspection.

Regardless of the data types you have, you typically begin by examining the first few observations.

Consider, for example, historical data on crime in the US.

Code
head(USArrests) # Actual Data
##            Murder Assault UrbanPop Rape
## Alabama      13.2     236       58 21.2
## Alaska       10.0     263       48 44.5
## Arizona       8.1     294       80 31.0
## Arkansas      8.8     190       50 19.5
## California    9.0     276       91 40.6
## Colorado      7.9     204       78 38.7

# Check NA values
X <- c(3,3.1,NA,0.02) #Small dataset we will use in numerical examples
sum(is.na(X))
## [1] 1
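
Missing values can silently affect later calculations, so it helps to handle them explicitly. A minimal sketch:

Code
X <- c(3,3.1,NA,0.02) # the small dataset with a missing value
mean(X) # returns NA because one value is missing
mean(X, na.rm=TRUE) # drops the NA first: (3+3.1+0.02)/3 = 2.04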

To further examine a particular variable, we look at its distribution. In what follows, we will often work with data as a vector \(\hat{X}=(\hat{X}_{1}, \hat{X}_{2}, ..., \hat{X}_{n})\), where there are \(n\) observations and \(\hat{X}_{i}\) is the value of the \(i\)th one. We often analyze observations in comparison to some value \(x\).

Code
X <- c(3,3.1,0.02) # Data: "big X"
x <- 2 # particular value: "little x"
sum(X <= x)
## [1] 1
sum(X == x)
## [1] 0

Histogram Density Estimate.

The histogram measures the proportion of the data in different bins. It does so by dividing the range of the data into exclusive bins of equal width \(h\) and counting the number of observations within each bin. We often rescale the counts, dividing by the number of observations \(n\) multiplied by the bin width \(h\), so that the total area of the histogram sums to one, which allows us to interpret the numbers as a density.

Mathematically, for an exclusive bin \(\left(x-\frac{h}{2}, x+\frac{h}{2} \right]\) defined by its midpoint \(x\) and width \(h\), we compute \[\begin{eqnarray} \widehat{f}(x) &=& \frac{ \sum_{i=1}^{n} \mathbf{1}\left( \hat{X}_{i} \in \left(x-\frac{h}{2}, x+\frac{h}{2} \right] \right) }{n h}, \end{eqnarray}\] where \(\mathbf{1}()\) is an indicator function that equals \(1\) if the expression inside is TRUE and \(0\) otherwise. E.g., if \(\hat{X}_{i}=3.8\) and \(h=1\), then for \(x=1\) we have \(\mathbf{1}\left( \hat{X}_{i} \in \left(1-\frac{h}{2}, 1+\frac{h}{2} \right] \right)=\mathbf{1}\left( 3.8 \in \left(0.5, 1.5\right] \right)=0\), and for \(x=4\) we have \(\mathbf{1}\left( \hat{X}_{i} \in \left(4-\frac{h}{2}, 4+\frac{h}{2} \right] \right)=\mathbf{1}\left( 3.8 \in \left(3.5, 4.5\right] \right)=1\).

Note that the area of the rectangle is “base x height”, which is \(h \times \widehat{f}(x)= \sum_{i=1}^{n} \mathbf{1}\left( \hat{X}_{i} \in \left(x-\frac{h}{2}, x+\frac{h}{2} \right] \right) /n\). This means the rectangle area equals the proportion of data in the bin. We compute \(\widehat{f}(x)\) for each bin midpoint \(x\), and the area of all rectangles sums to one.2

For example, consider the dataset \(\{3,3.1,0.02\}\) and use bins \((0,1], (1,2], (2,3], (3,4]\). In this case, the midpoints are \(x=(0.5,1.5,2.5,3.5)\) and \(h=1\). The counts at these midpoints are \((1,0,1,1)\). Since \(\frac{1}{nh}=\frac{1}{3\times1}=\frac{1}{3}\), we can rescale the counts to compute the density as \(\widehat{f}(x)=(1,0,1,1) \times \frac{1}{3}=(1/3,0,1/3,1/3)\).

Code
# Intuitive Examples
X <- c(3,3.1,0.02)
Xhist <- hist(X, breaks=c(0,1,2,3,4), plot=F)
Xhist
## $breaks
## [1] 0 1 2 3 4
## 
## $counts
## [1] 1 0 1 1
## 
## $density
## [1] 0.3333333 0.0000000 0.3333333 0.3333333
## 
## $mids
## [1] 0.5 1.5 2.5 3.5
## 
## $xname
## [1] "X"
## 
## $equidist
## [1] TRUE
## 
## attr(,"class")
## [1] "histogram"

# base x height
base <- 1
height <- Xhist$density
sum(base*height)
## [1] 1

For another example, use the bins \((0,2]\) and \((2,4]\). The midpoints are \(1\) and \(3\), and the bin width is \(2\). Only one observation, \(0.02\), falls in the bin \((0,2]\). The other two observations, \(3\) and \(3.1\), fall into the bin \((2,4]\). The scaling factor is \(\frac{1}{nh}=\frac{1}{3\times 2}=\frac{1}{6}\). So the first bin has density \(\widehat{f}(1)=1\times\frac{1}{6}=1/6\) and the second bin has density \(\widehat{f}(3)=2\times\frac{1}{6}=2/6\). The area of the first bin’s rectangle is \(2 \times \widehat{f}(1)=2/6=1/3\) and the area of the second rectangle is \(2 \times \widehat{f}(3)=4/6=2/3\).
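
We can verify these numbers with the same hist function:

Code
Xhist <- hist(X, breaks=c(0,2,4), plot=F)
Xhist$counts   # 1 2
Xhist$density  # 1/6 and 2/6
sum( 2*Xhist$density )  # the rectangle areas still sum to 1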

Now intuitively work through an example with three bins instead of four, and compute the areas:

Code
Xhist <- hist(X, breaks=c(0,4/3,8/3,4), plot=F)
base <- 4/3
height <- Xhist$density
sum(base*height)
## [1] 1


# as a default, R uses bins (,] instead of [,)
# but you can change that with "right=F"
# hist(X, breaks=c(0,4/3,8/3,4), plot=F, right=F)
Code
# Practical Example
X <- USArrests[,'Murder']
hist(X,
    breaks=seq(0,20,by=1), #bin width=1
    freq=F,
    border=NA, 
    main='',
    xlab='Murder Arrests')
# Raw Observations
rug(USArrests[,'Murder'], col=grey(0,.5))

Code

# Since h=1, the density equals the proportion of states in each bin
# Redo this example with h=2

Note that if your data are factor data or discrete cardinal data, you can directly plot the proportions in a bar plot. For each unique outcome \(x\) we compute \[\begin{eqnarray} \widehat{p}_{x}=\sum_{i=1}^{n}\mathbf{1}\left(\hat{X}_{i}=x\right)/n, \end{eqnarray}\] where \(n\) is the number of observations and \(\sum_{i=1}^{n}\mathbf{1}\left(\hat{X}_{i}=x\right)\) counts the number of observations equal to \(x\). The height of each line equals the proportion of data with a specific value.

Code
# Discretized data
X <- USArrests[,'Murder']
Xr <- floor(X) #rounded down
#table(Xr)
proportions <- table(Xr)/length(Xr)
plot(proportions, col=grey(0,.5),
    xlab='Murder Arrests (Discretized)',
    ylab='Proportion of States with each value')

Empirical Cumulative Distribution Function.

The ECDF counts the proportion of observations whose values are less than or equal to \(x\); \[\begin{eqnarray} \widehat{F}(x) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}( \hat{X}_{i} \leq x). \end{eqnarray}\] Typically, we compute this for each unique value of \(x\) in the dataset. The ECDF jumps up by \(1/n\) at each of the \(n\) data points.

For example, let \(X=(3,3.1,0.02)\). We reorder the observations as \((0.02, 3, 3.1)\), so that there are discrete jumps of \(1/n=1/3\) at each value. Consider the points \(x \in \{0.5,2.5,3.5\}\). At \(x=0.5\), \(\widehat{F}(0.5)\) measures the proportion of the data \(\leq 0.5\). Since only one observation, \(0.02\), of the three is \(\leq 0.5\), we can compute \(\widehat{F}(0.5)=1/3\). Similarly, since only one observation, \(0.02\), of the three is \(\leq 2.5\), we can compute \(\widehat{F}(2.5)=1/3\). Since all observations are \(\leq 3.5\), we can compute \(\widehat{F}(3.5)=1\).

Code
X <- c(3,3.1,0.02)
Fhat <- ecdf(X)

# Visualized
plot(Fhat)

Code

# Evaluated at the data
x <- X
Fhat(X)
## [1] 0.6666667 1.0000000 0.3333333
#sum(X<=3)/length(X)
#sum(X<=3.1)/length(X)
#sum(X<=0.02)/length(X)

# Evaluated at other points
x <- c(0.5, 2.5, 3.5)
Fhat(x)
## [1] 0.3333333 0.3333333 1.0000000
#sum(X<=0.5)/length(X)
#sum(X<=2.5)/length(X)
#sum(X<=3.5)/length(X)
Code
F_murder <- ecdf(USArrests[,'Murder'])
# proportion of murders <= 10
F_murder(10)
## [1] 0.7
# proportion of murders <= x, for all x
plot(F_murder, main='', 
    xlab='Murder Arrests (x)',
    ylab='Proportion of States with Murder Arrests <= x',
    pch=16, col=grey(0,.5))
rug(USArrests[,'Murder'])

Quantiles.

You can summarize the distribution of data using quantiles: the \(p\)th quantile is a value \(x\) such that a proportion \(p\) of the data fall below \(x\) and a proportion \((1-p)\) fall above \(x\).

  • The min is the smallest value (or the most negative value if there are any), where \(0\%\) of the data has lower values.
  • The median is the middle value, where one half of the data has lower values and the other half has higher values.
  • The max is the largest value (or the most positive value if there are any), where \(100\%\) of the data has lower or equal values.

For example, if \(X=(0,0,0.02,3,5)\) then the median is \(0.02\), the lower quartile is \(0\), and the upper quartile is \(3\). (The number \(0\) is also special: the most frequent observation is called the mode.)
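
A quick check of this example (R's default quantile rule gives the same values here):

Code
X0 <- c(0, 0, 0.02, 3, 5)
quantile(X0, probs=c(0.25, 0.5, 0.75)) # 0, 0.02, 3
table(X0) # the value 0 appears most often, so it is the mode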

Code
X <-  c(3.1, 3, 0.02)
quantile(X, probs=c(0,.5,1))
##   0%  50% 100% 
## 0.02 3.00 3.10

Now work through an intuitive example with observations \(\{1,2,...,13\}\). Hint: split the ordered observations into four groups.
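
You can then check your answer against R's default rule:

Code
quantile(1:13) # 0%: 1, 25%: 4, 50%: 7, 75%: 10, 100%: 13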

We are often also interested in quartiles: where \(25\%\) of the data fall below \(x\) (lower quartile) and where \(75\%\) of the data fall below \(x\) (upper quartile). Sometimes we are also interested in deciles: where \(10\%\) of the data fall below \(x\) (lower decile) and where \(90\%\) of the data fall below \(x\) (upper decile). In general, we can use the empirical quantile function \[\begin{eqnarray} \widehat{Q}(p) = \widehat{F}^{-1}(p), \end{eqnarray}\] to compute a quantile for any probability \(p\in [0,1]\). The median corresponds to \(p=0.5\), upper quartile to \(p=0.75\), and upper decile to \(p=0.9\).

Code
# common quantiles
X <- USArrests[,'Murder']
quantile(X)
##     0%    25%    50%    75%   100% 
##  0.800  4.075  7.250 11.250 17.400

# All deciles are quantiles
quantile(X, probs=seq(0,1, by=.1))
##    0%   10%   20%   30%   40%   50%   60%   70%   80%   90%  100% 
##  0.80  2.56  3.38  4.75  6.00  7.25  8.62 10.12 12.12 13.32 17.40

# Visualized: Inverting the Empirical Distribution
FX_hat <- ecdf(X)
plot(FX_hat, lwd=2, xlim=c(0,20),
    pch=16, col=grey(0,.5), main='')
# Two Examples of Quantiles 
p <- c(.25, .9) # Lower Quartile, Upper Decile
cols <- c(2,4) 
QX_hat <- quantile(X, p, type=1)
QX_hat
##  25%  90% 
##  4.0 13.2
segments(QX_hat, p, -10, p, col=cols)
segments(QX_hat, p, QX_hat, 0, col=cols)
mtext( round(QX_hat,2), 1, at=QX_hat, col=cols)

There are some issues with quantiles in smaller datasets. E.g., to compute the median of \(\{3,3.1,0,1\}\), there is no single middle observation, so we need some way to choose between (or average) the two middle values, and there are many options. Similar issues arise for other quantiles, so quantiles are rarely used for such small datasets.

To calculate quantiles, the computer sorts the observations from smallest to largest as \(\hat{X}_{(1)}, \hat{X}_{(2)},..., \hat{X}_{(n)}\), and then computes quantiles as \(\hat{X}_{(p \times n)}\). Note that \(p \times n\) is not generally a whole number, so it must be rounded or interpolated, and there are different rules for doing so.

Code
X <- USArrests[,'Murder']
Xo <- sort(X)
Xo
##  [1]  0.8  2.1  2.1  2.2  2.2  2.6  2.6  2.7  3.2  3.3  3.4  3.8  4.0  4.3  4.4
## [16]  4.9  5.3  5.7  5.9  6.0  6.0  6.3  6.6  6.8  7.2  7.3  7.4  7.9  8.1  8.5
## [31]  8.8  9.0  9.0  9.7 10.0 10.4 11.1 11.3 11.4 12.1 12.2 12.7 13.0 13.2 13.2
## [46] 14.4 15.4 15.4 16.1 17.4

# median
Xo[length(Xo)*.5]
## [1] 7.2
quantile(X, probs=.5, type=4) #tie break rule #4
## 50% 
## 7.2

# min
Xo[1]
## [1] 0.8
min(Xo)
## [1] 0.8
quantile(Xo, probs=0)
##  0% 
## 0.8

Boxplots.

Boxplots also summarize the distribution. The boxplot shows the median (solid black line) and interquartile range (\(IQR=\) upper quartile \(-\) lower quartile; filled box).3 As a default, whiskers extend to the most extreme observations within \(1.5\times IQR\) of the box, and values beyond that are highlighted as outliers (so whiskers do not typically show the full data range). You can alternatively show all the raw data points instead of whiskers and outliers.
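
For reference, here is how the box itself can be computed from quantiles (the hinges drawn by boxplot may differ slightly, as noted in the footnote):

Code
X <- USArrests[,'Murder']
quantile(X, probs=c(0.25, 0.5, 0.75)) # lower quartile, median, upper quartile
IQR(X) # upper quartile - lower quartile: 11.25 - 4.075 = 7.175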

Code
boxplot(USArrests[,'Murder'],
    main='', ylab='Murder Arrests',
    whisklty=0, staplelty=0, outline=F)
# Raw Observations
stripchart(USArrests[,'Murder'],
    pch='-', col=grey(0,.5), cex=2,
    vert=T, add=T)

3.4 Further Reading

ECDF

Handling Strings


  1. We will not cover the statistical analysis of text in this course, but strings are amenable to statistical analysis.↩︎

  2. If \(L\) distinct bins exactly span the range, then \(h=[\text{max}(\hat{X}_{i}) - \text{min}(\hat{X}_{i})]/L\) and \(x\in \left\{ \text{min}(\hat{X}_{i}) + \left(\ell - \frac{1}{2}\right) h \right\}_{\ell=1}^{L}\).↩︎

  3. Technically, the upper and lower hinges use two different versions of the first and third quartile. See https://stackoverflow.com/questions/40634693/lower-and-upper-quartiles-in-boxplot-in-r.↩︎