In econometrics, fixed effects models are a popular way to control for unobserved heterogeneity in data sets with a panel structure. In non-linear models this is usually done by including a dummy variable for each level of a fixed effects category. This approach quickly becomes infeasible as the number of levels and/or fixed effects categories grows, either due to memory or time limitations.
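To illustrate the difference, here is a minimal sketch on simulated data (the objects d, fit.glm, and fit.feglm are hypothetical and not part of the replication below): the dummy-variable approach estimates one coefficient per level of the group identifier, whereas feglm() concentrates the fixed effects out of the likelihood.

library(alpaca)
set.seed(1)
d <- data.frame(
  x  = rnorm(1000),
  id = factor(sample(100, 1000, replace = TRUE))
)
d$y <- rpois(1000, exp(0.5 * d$x))
# Dummy-variable approach: 100 additional coefficients; becomes infeasible
# as the number of levels grows
fit.glm <- glm(y ~ x + id, data = d, family = poisson())
# alpaca: the fixed effects are concentrated out instead of estimated as dummies
fit.feglm <- feglm(y ~ x | id, data = d, family = poisson())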
Two examples of this limitation can be found in the literature on international trade: first, estimating the determinants of trade volumes (the intensive margin) and, second, estimating export decisions (the extensive margin). To demonstrate the functionality and capabilities of alpaca, we replicate parts of Larch et al. (2019) as an example for the intensive margin and extend it to the extensive margin. Our example is based on the panel data set of 2,973,168 bilateral trade flows used by Glick and Rose (2016). For our estimations we specify a three-way error component with 56,608 levels.
The data set is available either from Andrew Rose’s website or from ScienceDirect. We use the same variable names as Glick and Rose (2016) so that we can compare our summary statistics with the ones provided in their Stata log files.
# Required packages
library(alpaca)
library(data.table)
library(haven)
# Import the data set
<- read_dta("dataaxj1.dta")
cudata setDT(cudata)
# Subset relevant variables
<- c(
var.nms "exp1to2", "custrict11", "ldist", "comlang", "border", "regional",
"comcol", "curcol", "colony", "comctry", "cuwoemu", "emu", "cuc",
"cty1", "cty2", "year", "pairid"
)<- cudata[, var.nms, with = FALSE]
cudata
# Generate identifiers required for structural gravity
cudata[, pairid := factor(pairid)]
cudata[, exp.time := interaction(cty1, year)]
cudata[, imp.time := interaction(cty2, year)]
# Generate dummies for disaggregated currency unions
cudata[, cuau := as.numeric(cuc == "au")]
cudata[, cube := as.numeric(cuc == "be")]
cudata[, cuca := as.numeric(cuc == "ca")]
cudata[, cucf := as.numeric(cuc == "cf")]
cudata[, cucp := as.numeric(cuc == "cp")]
cudata[, cudk := as.numeric(cuc == "dk")]
cudata[, cuea := as.numeric(cuc == "ea")]
cudata[, cuec := as.numeric(cuc == "ec")]
cudata[, cuem := as.numeric(cuc == "em")]
cudata[, cufr := as.numeric(cuc == "fr")]
cudata[, cugb := as.numeric(cuc == "gb")]
cudata[, cuin := as.numeric(cuc == "in")]
cudata[, cuma := as.numeric(cuc == "ma")]
cudata[, cuml := as.numeric(cuc == "ml")]
cudata[, cunc := as.numeric(cuc == "nc")]
cudata[, cunz := as.numeric(cuc == "nz")]
cudata[, cupk := as.numeric(cuc == "pk")]
cudata[, cupt := as.numeric(cuc == "pt")]
cudata[, cusa := as.numeric(cuc == "sa")]
cudata[, cusp := as.numeric(cuc == "sp")]
cudata[, cuua := as.numeric(cuc == "ua")]
cudata[, cuus := as.numeric(cuc == "us")]
cudata[, cuwa := as.numeric(cuc == "wa")]
cudata[, cuwoo := custrict11]
cudata[cuc %in% c("em", "au", "cf", "ec", "fr", "gb", "in", "us"), cuwoo := 0]
# Set missing trade flows to zero
cudata[is.na(exp1to2), exp1to2 := 0]

# Re-scale trade flows
cudata[, exp1to2 := exp1to2 / 1000]

# Construct binary and lagged dependent variable for the extensive margin
cudata[, trade := as.numeric(exp1to2 > 0)]
cudata[, ltrade := shift(trade), by = pairid]
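As a quick, purely illustrative sanity check against the Stata log files mentioned above (the chosen statistics are ours and not taken from the original log files), one can summarize the prepared trade flow variable:

# Illustrative check only; compare against the Glick and Rose (2016) log files
cudata[, .(
  n.obs      = .N,
  zero.share = mean(exp1to2 == 0),
  mean.flow  = mean(exp1to2),
  max.flow   = max(exp1to2)
)]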
After preparing the data, we show how to replicate column 3 of table 2 in Larch et al. (2019). In addition to coefficients and robust standard errors, the authors also report standard errors clustered by exporter, importer, and time.
If we want feglm() to report standard errors that are clustered by variables which are not already part of the model itself, we have to provide them via the third part of the formula interface. In this example, we additionally add identifiers for exporters (cty1), importers (cty2), and time (year). First we report robust standard errors, indicated by the option "sandwich" in summary().
mod <- feglm(
  exp1to2 ~ emu + cuwoo + cuau + cucf + cuec + cufr + cugb + cuin + cuus +
    regional + curcol | exp.time + imp.time + pairid | cty1 + cty2 + year,
  data = cudata,
  family = poisson()
)
summary(mod, "sandwich")
## poisson - log link
##
## exp1to2 ~ emu + cuwoo + cuau + cucf + cuec + cufr + cugb + cuin +
## cuus + regional + curcol | exp.time + imp.time + pairid |
## cty1 + cty2 + year
##
## Estimates:
## Estimate Std. error z value Pr(> |z|)
## emu 0.048895 0.010277 4.758 1.96e-06 ***
## cuwoo 0.765988 0.053272 14.379 < 2e-16 ***
## cuau 0.384469 0.118832 3.235 0.00121 **
## cucf -0.125608 0.099674 -1.260 0.20760
## cuec -0.877318 0.083451 -10.513 < 2e-16 ***
## cufr 2.095726 0.062952 33.291 < 2e-16 ***
## cugb 1.059957 0.034680 30.564 < 2e-16 ***
## cuin 0.169745 0.147029 1.154 0.24830
## cuus 0.018323 0.021530 0.851 0.39473
## regional 0.159181 0.008714 18.267 < 2e-16 ***
## curcol 0.386882 0.046827 8.262 < 2e-16 ***
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
##
## residual deviance= 35830.78,
## null deviance= 2245707.30,
## n= 1610165, l= [11227, 11277, 34104]
##
## ( 1363003 observation(s) deleted due to perfect classification )
##
## Number of Fisher Scoring Iterations: 20
We observe that roughly 1.4 million observations do not contribute to the identification of the structural parameters and we end up with roughly 1.6 million observations and 57,000 fixed effects.
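The count of fixed effects can be read off the summary above: l = [11227, 11277, 34104] reports the number of levels of the exporter-time, importer-time, and pair fixed effects, which sum to the 56,608 levels mentioned in the introduction.

# Levels per fixed effects dimension, as reported in the summary above
sum(c(11227, 11277, 34104))
## [1] 56608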
Replicating the clustered standard errors is straightforward. We simply have to change the type to "clustered" and provide summary() with the requested cluster dimensions.
summary(mod, type = "clustered", cluster = ~ cty1 + cty2 + year)
## poisson - log link
##
## exp1to2 ~ emu + cuwoo + cuau + cucf + cuec + cufr + cugb + cuin +
## cuus + regional + curcol | exp.time + imp.time + pairid |
## cty1 + cty2 + year
##
## Estimates:
## Estimate Std. error z value Pr(> |z|)
## emu 0.04890 0.09455 0.517 0.60507
## cuwoo 0.76599 0.24933 3.072 0.00213 **
## cuau 0.38447 0.22355 1.720 0.08546 .
## cucf -0.12561 0.35221 -0.357 0.72137
## cuec -0.87732 0.29493 -2.975 0.00293 **
## cufr 2.09573 0.30625 6.843 7.75e-12 ***
## cugb 1.05996 0.23766 4.460 8.19e-06 ***
## cuin 0.16974 0.30090 0.564 0.57267
## cuus 0.01832 0.05092 0.360 0.71898
## regional 0.15918 0.07588 2.098 0.03593 *
## curcol 0.38688 0.15509 2.495 0.01261 *
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
##
## residual deviance= 35830.78,
## null deviance= 2245707.30,
## n= 1610165, l= [11227, 11277, 34104]
##
## ( 1363003 observation(s) deleted due to perfect classification )
##
## Number of Fisher Scoring Iterations: 20
Our package is also compatible with linearHypothesis() from the car package. In the next example we show how to test whether all currency union effects except EMU membership are jointly different from zero, using a Wald test based on a clustered covariance matrix.
library(car)
<- c("cuwoo", "cuau", "cucf", "cuec", "cufr", "cugb", "cuin", "cuus")
h0_cus linearHypothesis(
mod, h0_cus,vcov. = vcov(mod, "clustered", cluster = ~ cty1 + cty2 + year)
)
## Linear hypothesis test
##
## Hypothesis:
## cuwoo = 0
## cuau = 0
## cucf = 0
## cuec = 0
## cufr = 0
## cugb = 0
## cuin = 0
## cuus = 0
##
## Model 1: restricted model
## Model 2: exp1to2 ~ emu + cuwoo + cuau + cucf + cuec + cufr + cugb + cuin +
## cuus + regional + curcol | exp.time + imp.time + pairid |
## cty1 + cty2 + year
##
## Note: Coefficient covariance matrix supplied.
##
## Df Chisq Pr(>Chisq)
## 1
## 2 8 96.772 < 2.2e-16 ***
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Now we turn to the estimation of the extensive margin. First we estimate a static logit model.
mods <- feglm(
  trade ~ cuwoemu + emu + regional | exp.time + imp.time + pairid,
  data = cudata,
  family = binomial("logit")
)
summary(mods)
## binomial - logit link
##
## trade ~ cuwoemu + emu + regional | exp.time + imp.time + pairid
##
## Estimates:
## Estimate Std. error z value Pr(> |z|)
## cuwoemu 0.41037 0.05103 8.041 8.89e-16 ***
## emu 0.55108 0.23080 2.388 0.01696 *
## regional 0.10193 0.03271 3.116 0.00183 **
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
##
## residual deviance= 674579.11,
## null deviance= 1917254.65,
## n= 1384892, l= [11150, 11218, 29391]
##
## ( 1588276 observation(s) deleted due to perfect classification )
##
## Number of Fisher Scoring Iterations: 11
To mitigate the incidental parameters problem (see Neyman and Scott (1948)), we apply the bias correction suggested by Hinz, Stammann, and Wanner (2020). Note that the argument panel.structure = "network" is necessary to apply the appropriate bias correction.
modsbc <- biasCorr(mods, panel.structure = "network")
summary(modsbc)
## binomial - logit link
##
## trade ~ cuwoemu + emu + regional | exp.time + imp.time + pairid
##
## Estimates:
## Estimate Std. error z value Pr(> |z|)
## cuwoemu 0.37630 0.05107 7.369 1.72e-13 ***
## emu 0.47213 0.23056 2.048 0.04058 *
## regional 0.09942 0.03270 3.040 0.00236 **
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
##
## residual deviance= 674579.11,
## null deviance= 1917254.65,
## n= 1384892, l= [11150, 11218, 29391]
##
## ( 1588276 observation(s) deleted due to perfect classification )
##
## Number of Fisher Scoring Iterations: 11
Because the coefficients themselves are not very meaningful, researchers are usually interested in so-called partial effects (also known as marginal or ceteris paribus effects). A commonly used statistic is the average partial effect. alpaca offers a post-estimation routine to estimate average partial effects and their corresponding standard errors. In the following, the bias-corrected average partial effects are computed.
apesbc <- getAPEs(modsbc)
summary(apesbc)
## Estimates:
## Estimate Std. error z value Pr(> |z|)
## cuwoemu 0.015554 0.001957 7.946 1.92e-15 ***
## emu 0.019567 0.008212 2.383 0.017185 *
## regional 0.004079 0.001189 3.429 0.000605 ***
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Hinz, Stammann, and Wanner (2020) propose estimating the extensive margin using a dynamic specification, where the lagged dependent variable is added to the list of explanatory variables. Again, a bias correction is necessary. In contrast to the correction for static models with strictly exogenous regressors, we additionally need to provide a bandwidth parameter (L) that is required for the estimation of spectral densities (see Hahn and Kuersteiner (2011)). Fernández-Val and Weidner (2016) suggest performing a sensitivity analysis with different values for L, but no larger than four; a small sensitivity check along these lines is sketched after the estimation below. Note that in this case the order of the factors to be concentrated out, specified in the second part of the formula, is important (importer-/exporter-time identifiers first and the pair identifier last).
modd <- feglm(
  trade ~ ltrade + cuwoemu + emu + regional | exp.time + imp.time + pairid,
  data = cudata,
  family = binomial("logit")
)
moddbc <- biasCorr(modd, L = 4, panel.structure = "network")
summary(moddbc)
## binomial - logit link
##
## trade ~ ltrade + cuwoemu + emu + regional | exp.time + imp.time +
## pairid
##
## Estimates:
## Estimate Std. error z value Pr(> |z|)
## ltrade 2.166103 0.008003 270.663 < 2e-16 ***
## cuwoemu 0.268477 0.054482 4.928 8.31e-07 ***
## emu 0.312551 0.254091 1.230 0.2187
## regional 0.063077 0.035328 1.785 0.0742 .
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
##
## residual deviance= 602773.13,
## null deviance= 1899840.83,
## n= 1372471, l= [11054, 11116, 29299]
##
## ( 45048 observation(s) deleted due to missingness )
## ( 1555649 observation(s) deleted due to perfect classification )
##
## Number of Fisher Scoring Iterations: 11
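As mentioned above, Fernández-Val and Weidner (2016) recommend a sensitivity analysis over the bandwidth. A purely illustrative sketch (the objects L.grid and sens are hypothetical, and the loop is computationally demanding on the full data set) re-applies the bias correction for L = 1, ..., 4 and collects the corrected coefficients:

# Illustrative sensitivity check over the bandwidth parameter (not part of
# the original replication); each iteration re-runs the bias correction
L.grid <- 1:4
sens <- sapply(L.grid, function(l) {
  coef(biasCorr(modd, L = l, panel.structure = "network"))
})
colnames(sens) <- paste0("L = ", L.grid)
round(sens, 4)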
Again we compute bias-corrected average partial effects to get meaningful quantities.
apedbc <- getAPEs(moddbc)
summary(apedbc)
## Estimates:
## Estimate Std. error z value Pr(> |z|)
## ltrade 0.1213377 0.0004681 259.231 < 2e-16 ***
## cuwoemu 0.0100223 0.0016486 6.079 1.21e-09 ***
## emu 0.0116944 0.0078456 1.491 0.1361
## regional 0.0023338 0.0010544 2.213 0.0269 *
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1