Commit

Clean up repo (#30)
* Added Rglpk as a dependency

* Removed tidyverse package loading

* Added mvtnorm as dependency

* Squashed commit of the following:

commit 3c3c4c53588b390f7ca8d5c78d5dff5cae7dc5ec
Author: Mauricio Caceres <mauricio.caceres.bravo@gmail.com>
Date:   Wed Apr 5 14:53:19 2023 -0400

    More explicit calls

commit 9d01fe1
Author: Mauricio Caceres <mauricio.caceres.bravo@gmail.com>
Date:   Sat Apr 1 20:58:22 2023 -0400

    Re-compiled from RStudio; closes #24

commit b8cf9f2
Author: Mauricio Caceres <mauricio.caceres.bravo@gmail.com>
Date:   Sat Apr 1 20:42:14 2023 -0400

    Added explicit calls to built-ins
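
    The pattern behind these "explicit calls" commits, sketched minimally:
    package R/ files should not attach packages with library(); dependencies
    are declared under Imports in DESCRIPTION and every call is namespaced,
    base functions included. The helper below is hypothetical (not a function
    in HonestDiD); only the pattern is taken from the diffs further down.

    # Before: R/ files attached whole packages as a side effect, e.g.
    #   library(TruncatedNormal); library(purrr)
    # After: no library() calls in R/; everything goes through ::
    .exampleHelper <- function(x) {
      y <- stats::qnorm(x)                     # built-in via its namespace
      base::sum(purrr::map_dbl(y, base::abs))  # Imports package via ::
    }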

* Squashed commit of the following:

commit b9d8af8
Author: Mauricio Caceres <mauricio.caceres.bravo@gmail.com>
Date:   Thu May 11 01:07:47 2023 -0400

    Moved intro of staggered example into README

commit 02b541b
Author: Mauricio Caceres <mauricio.caceres.bravo@gmail.com>
Date:   Thu May 11 01:01:11 2023 -0400

    Fixed potential integer overflow
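
    The exact overflow site is not shown in this truncated diff, so the lines
    below are only a hedged illustration of the usual fix: R integers are
    32-bit, so large integer products overflow to NA, and coercing to double
    first avoids it.

    n <- 100000L
    k <- 50000L
    n * k                          # integer overflow: warning, result is NA
    as.numeric(n) * as.numeric(k)  # 5e+09, computed in double precision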

commit b4767b7
Author: Mauricio Caceres <mauricio.caceres.bravo@gmail.com>
Date:   Thu May 11 00:56:23 2023 -0400

    Cleaned and moved honest_did function to own file

* Several fixes/edits to pass the CRAN check (see the check sketch after this list):

- Added matrixStats to Imports
- Deleted ROI
- Renamed doc -> vignettes
- .compute_IDset_DeltaRMB_fixedS: biasDrection -> biasDirection
- .create_A_M: A_I -> A_M
- DeltaSD_upperBound_Mpre: stata -> stats
- Changed title to Title Case
- https://jonathandroth.github.io -> https://www.jonathandroth.com/
- Aligned man files with function definitions:
    - findOptimalFLCI
    - createSensitivityResults_relativeMagnitudes
    - computeConditionalCS_DeltaSDRMB
    - computeConditionalCS_DeltaSDM
    - computeConditionalCS_DeltaSDB
    - computeConditionalCS_DeltaSD
- Added deltaSD.png, README to .Rbuildignore
- Added VignetteBuilder and knitr to DESCRIPTION
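
A hedged sketch of how such fixes are typically verified locally; the exact
command used for this commit is not recorded here, and rcmdcheck/devtools are
assumed to be installed:

# From the package root, run the same checks CRAN runs.
rcmdcheck::rcmdcheck(args = "--as-cran", error_on = "warning")
# or, via devtools (cran = TRUE is the default, so CRAN settings are used):
devtools::check()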

* Fixed Ashesh's e-mail; removed library() statements

* Bumped version number

* Added rmarkdown to build vignettes

* Deleted keyword placeholders; moved doParallel to suggested packages
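
With doParallel only in Suggests, any parallel code path has to be guarded at
run time. A minimal sketch of that standard pattern; .runMaybeParallel is
hypothetical and not a function exported by HonestDiD:

.runMaybeParallel <- function(items, fn, parallel = FALSE) {
  # Only touch doParallel when the user actually has it installed.
  if (parallel && requireNamespace("doParallel", quietly = TRUE)) {
    `%dopar%` <- foreach::`%dopar%`           # bring the operator into scope
    cl <- parallel::makeCluster(2)
    doParallel::registerDoParallel(cl)
    on.exit(parallel::stopCluster(cl), add = TRUE)
    foreach::foreach(x = items) %dopar% fn(x)
  } else {
    base::lapply(items, fn)                   # serial fallback
  }
}

The serial branch keeps results identical when the suggested package is
absent, which is how CRAN's check machines may be configured.
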
mcaceresb authored May 30, 2023
1 parent 2b26a8b commit a1e7126
Showing 46 changed files with 91 additions and 204 deletions.
2 changes: 2 additions & 0 deletions .Rbuildignore
@@ -1,2 +1,4 @@
^.*\.Rproj$
^\.Rproj\.user$
README
deltaSD.png
1 change: 1 addition & 0 deletions .gitignore
@@ -1,4 +1,5 @@
.Rproj.user
*.Rcheck
.Rhistory
.RData
.Ruserdata
22 changes: 14 additions & 8 deletions DESCRIPTION
@@ -1,29 +1,35 @@
Package: HonestDiD
Type: Package
Title: Robust inference in difference-in-differences and event study designs
Version: 0.2.2
Title: Robust Inference in Difference-in-Differences and Event Study Designs
Version: 0.2.3
Depends:
R (>= 3.6.0)
Imports:
stats,
foreach (>= 1.4.7),
doParallel (>= 1.0.15),
matrixStats (>= 0.63.0),
CVXR (>= 0.99-6),
latex2exp (>= 0.4.0),
lpSolveAPI (>= 5.5.2.0-17),
Matrix (>= 1.2-17),
pracma (>= 2.2.5),
purrr (>= 0.3.4),
ROI (>= 0.3-2),
tibble (>= 1.3.4),
dplyr (>= 0.7.4),
ggplot2 (>= 2.2.1),
Rglpk (>= 0.6-4),
mvtnorm (>= 1.1-3),
TruncatedNormal (>= 1.0)
Author: Ashesh Rambachan
Maintainer: Ashesh Rambachan <asheshr@g.harvard.edu>
Description:
This package provides functions to conduct robust inference in difference-in-differences and event study designs by implementing the methods developed in Rambachan & Roth (2021). Inference is conducted under a weaker version of the parallel trends assumption. Uniformly valid confidence sets are constructed based upon conditional confidence sets, fixed-length confidence sets and hybridized confidence sets. See Ashesh Rambachan & Jonathan Roth, "An Honest Approach to Parallel Trends", 2021 for details on the methods.
Suggests:
doParallel (>= 1.0.15),
knitr,
rmarkdown
Author: Ashesh Rambachan <ashesh.a.rambachan@gmail.com>
Maintainer: Ashesh Rambachan <ashesh.a.rambachan@gmail.com>
Description:
Provides functions to conduct robust inference in difference-in-differences and event study designs by implementing the methods developed in Rambachan & Roth (2021). Inference is conducted under a weaker version of the parallel trends assumption. Uniformly valid confidence sets are constructed based upon conditional confidence sets, fixed-length confidence sets and hybridized confidence sets. See Ashesh Rambachan & Jonathan Roth, "An Honest Approach to Parallel Trends", 2021 for details on the methods.
License: GPL-3
Encoding: UTF-8
LazyData: true
VignetteBuilder:
knitr
1 change: 0 additions & 1 deletion R/HonestDiD-Temp.R
@@ -17,7 +17,6 @@
# PRELIMINARIES =======================================================
library(TruncatedNormal)
library(lpSolveAPI)
library(ROI)
library(Matrix)
library(pracma)
library(CVXR)
11 changes: 1 addition & 10 deletions R/arp-nuisance.R
@@ -8,15 +8,6 @@
# This script contains functions that are used to construct
# the ARP test with nuisance parameters.

# PRELIMINARIES =======================================================
library(TruncatedNormal)
library(lpSolveAPI)
library(ROI)
library(Matrix)
library(pracma)
library(CVXR)
library(foreach)

# ARP HELPER FUNCTIONS ------------------------------------------------
.norminvp_generalized <- function(p, l, u, mu = 0, sd = 1){
lnormalized <- (l-mu)/sd
@@ -90,7 +81,7 @@

if (base::is.na(checksol) || !checksol) {
# warning('User-supplied eta is not a solution. Not rejecting automatically')
base::return( list(vlo = eta, vup = Inf) )
base::return( base::list(vlo = eta, vup = Inf) )
}

### Compute vup ###
2 changes: 1 addition & 1 deletion R/delta_utility_functions.R
@@ -32,7 +32,7 @@
# If postPeriodMomentsOnly == T, exclude moments that only involve pre-periods
if(postPeriodMomentsOnly){
postPeriodIndices <- (numPrePeriods +1):base::NCOL(A_M)
prePeriodOnlyRows <- base::which( base::rowSums( A_I[ , postPeriodIndices] != 0 ) == 0 )
prePeriodOnlyRows <- base::which( base::rowSums( A_M[ , postPeriodIndices] != 0 ) == 0 )
A_M <- A_M[-prePeriodOnlyRows , ]
}
if (monotonicityDirection == "decreasing") {
10 changes: 0 additions & 10 deletions R/deltarm.R
@@ -8,16 +8,6 @@
# This script contains functions that are used to construct
# the confidence sets for Delta^{RM}(Mbar).

# PRELIMINARIES =======================================================
library(TruncatedNormal)
library(lpSolveAPI)
library(ROI)
library(Matrix)
library(pracma)
library(CVXR)
library(foreach)
library(purrr)

# Delta^{RM} functions -----------------------------------------------
.create_A_RM <- function(numPrePeriods, numPostPeriods,
Mbar = 1, s, max_positive = T,
12 changes: 1 addition & 11 deletions R/deltarmb.R
@@ -10,16 +10,6 @@
# with a sign restriction (i.e., Delta^{PB} or Delta^{NB}). See the discussion
# Section 2.3.3 of Rambachan and Roth (2021).

# PRELIMINARIES =======================================================
library(TruncatedNormal)
library(lpSolveAPI)
library(ROI)
library(Matrix)
library(pracma)
library(CVXR)
library(foreach)
library(purrr)

# Delta^{RMB} functions -----------------------------------------------
.create_A_RMB <- function(numPrePeriods, numPostPeriods,
Mbar = 1, s, max_positive = T,
@@ -107,7 +97,7 @@

# Create A_RM, d_RM for this choice of s, max_positive
A_RMB_s = .create_A_RMB(numPrePeriods = numPrePeriods, numPostPeriods = numPostPeriods,
Mbar = Mbar, s = s, max_positive = max_positive, biasDirection = biasDrection)
Mbar = Mbar, s = s, max_positive = max_positive, biasDirection = biasDirection)
d_RMB = .create_d_RMB(numPrePeriods = numPrePeriods, numPostPeriods = numPostPeriods)

# Create vector for direction of inequalities associated with RM
10 changes: 0 additions & 10 deletions R/deltarmm.R
@@ -9,16 +9,6 @@
# the confidence sets for Delta^{RM}(Mbar), which intersects Delta^{RM}(Mbar)
# with a shape restriction (i.e., Delta^{I} or Delta^{D}).

# PRELIMINARIES =======================================================
library(TruncatedNormal)
library(lpSolveAPI)
library(ROI)
library(Matrix)
library(pracma)
library(CVXR)
library(foreach)
library(purrr)

# Delta^{RMM} functions -----------------------------------------------
.create_A_RMM <- function(numPrePeriods, numPostPeriods,
Mbar = 1, s, max_positive = T,
18 changes: 5 additions & 13 deletions R/deltasd.R
@@ -8,15 +8,6 @@
# This script contains functions that are used to construct
# the confidence sets for Delta^{SD}(M).

# PRELIMINARIES =======================================================
library(TruncatedNormal)
library(lpSolveAPI)
library(ROI)
library(Matrix)
library(pracma)
library(CVXR)
library(foreach)

# DELTA^{SD}(M) FUNCTIONS ---------------------------------------------
# In this section, we implement helper functions to place testing with
# Delta^{SD}(M) into the form needed to use the ARP functions.
@@ -136,10 +127,11 @@
}

computeConditionalCS_DeltaSD <- function(betahat, sigma, numPrePeriods, numPostPeriods,
l_vec = .basisVector(index = 1, size = numPostPeriods), M = 0,
alpha = 0.05, hybrid_flag = "FLCI", hybrid_kappa = alpha/10,
returnLength = F, postPeriodMomentsOnly = T,
gridPoints =10^3, grid.midPoint = NA, grid.ub = NA, grid.lb = NA) {
l_vec = .basisVector(index = 1, size = numPostPeriods),
M = 0, alpha = 0.05, hybrid_flag = "FLCI",
hybrid_kappa = alpha/10, returnLength = F,
postPeriodMomentsOnly = T,
gridPoints =10^3, grid.ub = NA, grid.lb = NA) {
# This function computes the ARP CI that includes nuisance parameters
# for Delta^{SD}(M). This functions uses ARP_computeCI for all
# of its computations.
20 changes: 4 additions & 16 deletions R/deltasdb.R
@@ -8,15 +8,6 @@
# This script contains functions that are used to construct
# the confidence sets for Delta^{SDB}(M).

# PRELIMINARIES =======================================================
library(TruncatedNormal)
library(lpSolveAPI)
library(ROI)
library(Matrix)
library(pracma)
library(CVXR)
library(foreach)

# DELTA^{SDB}(M) FUNCTIONS --------------------------------------------
# In this section, we implement helper functions to place testing with
# Delta^{SDB}(M) into the form needed to use the ARP functions.
@@ -118,15 +109,12 @@
id.ub = id.ub))
}

computeConditionalCS_DeltaSDB <- function(betahat, sigma, numPrePeriods, numPostPeriods, M = 0,
l_vec = .basisVector(index = 1, size=numPostPeriods),
alpha = 0.05,
hybrid_flag = "FLCI", hybrid_kappa = alpha/10,
computeConditionalCS_DeltaSDB <- function(betahat, sigma, numPrePeriods, numPostPeriods,
M = 0, l_vec = .basisVector(index = 1, size=numPostPeriods),
alpha = 0.05, hybrid_flag = "FLCI", hybrid_kappa = alpha/10,
returnLength = F, biasDirection = "positive",
postPeriodMomentsOnly = T,
gridPoints = 10^3,
grid.lb = NA,
grid.ub = NA) {
gridPoints = 10^3, grid.lb = NA, grid.ub = NA) {
# This function computes the ARP CI that includes nuisance parameters
# for Delta^{SDPB}(M). This functions uses ARP_computeCI for all
# of its computations.
17 changes: 3 additions & 14 deletions R/deltasdm.R
@@ -8,15 +8,6 @@
# This script contains functions that are used to construct
# the confidence sets for Delta^{SDM}(M).

# PRELIMINARIES =======================================================
library(TruncatedNormal)
library(lpSolveAPI)
library(ROI)
library(Matrix)
library(pracma)
library(CVXR)
library(foreach)

# DELTA^{SDM}(M) FUNCTIONS --------------------------------------------
# In this section, we implement helper functions to place testing with
# Delta^{SDM}(M) into the form needed to use the ARP functions.
@@ -107,14 +98,12 @@
id.ub = id.ub))
}

computeConditionalCS_DeltaSDM <- function(betahat, sigma, numPrePeriods, numPostPeriods, M = 0,
l_vec = .basisVector(index = 1, size = numPostPeriods),
computeConditionalCS_DeltaSDM <- function(betahat, sigma, numPrePeriods, numPostPeriods,
M = 0, l_vec = .basisVector(index = 1, size = numPostPeriods),
alpha = 0.05, monotonicityDirection = "increasing",
hybrid_flag = "FLCI", hybrid_kappa = alpha/10,
returnLength = F, postPeriodMomentsOnly = T,
gridPoints=10^3,
grid.lb = NA,
grid.ub = NA) {
gridPoints=10^3, grid.lb = NA, grid.ub = NA) {
# This function computes the ARP CI that includes nuisance parameters
# for Delta^{SDI}(M). This functions uses ARP_computeCI for all
# of its computations.
10 changes: 0 additions & 10 deletions R/deltasdrm.R
@@ -8,16 +8,6 @@
# This script contains functions that are used to construct
# the confidence sets for Delta^{SDRM}(Mbar).

# PRELIMINARIES =======================================================
library(TruncatedNormal)
library(lpSolveAPI)
library(ROI)
library(Matrix)
library(pracma)
library(CVXR)
library(foreach)
library(purrr)

# DELTA^{SDRM}(Mbar) FUNCTIONS ---------------------------------------------
# In this section, we implement helper functions to place testing with
# Delta^{SDRM}(Mbar) into the form needed to use the ARP functions.
17 changes: 4 additions & 13 deletions R/deltasdrmb.R
@@ -10,16 +10,6 @@
# with a sign restriction (i.e., Delta^{PB} or Delta^{NB}). See the discussion
# Section 2.3.3 of Rambachan and Roth (2021).

# PRELIMINARIES =======================================================
library(TruncatedNormal)
library(lpSolveAPI)
library(ROI)
library(Matrix)
library(pracma)
library(CVXR)
library(foreach)
library(purrr)

# DELTA^{SDRMB}(Mbar) FUNCTIONS ---------------------------------------------
# In this section, we implement helper functions to place testing with
# Delta^{SDRMB}(Mbar) into the form needed to use the ARP functions.
@@ -260,9 +250,10 @@
}

computeConditionalCS_DeltaSDRMB <- function(betahat, sigma, numPrePeriods, numPostPeriods,
l_vec = .basisVector(index = 1, size = numPostPeriods), Mbar = 0,
alpha = 0.05, hybrid_flag = "LF", hybrid_kappa = alpha/10,
returnLength = F, postPeriodMomentsOnly = T, biasDirection = "positive",
l_vec = .basisVector(index = 1, size = numPostPeriods),
Mbar = 0, alpha = 0.05, hybrid_flag = "LF",
hybrid_kappa = alpha/10, returnLength = F,
postPeriodMomentsOnly = T, biasDirection = "positive",
gridPoints = 10^3, grid.ub = NA, grid.lb = NA) {
# This function computes the ARP CI that includes nuisance parameters
# for Delta^{SDRMB}(Mbar). This functions uses ARP_computeCI for all
10 changes: 0 additions & 10 deletions R/deltasdrmm.R
@@ -9,16 +9,6 @@
# the confidence sets for Delta^{SDRMM}(Mbar), which intersects Delta^{SDRM}(Mbar)
# with a shape restriction (i.e., Delta^{I} or Delta^{D}).

# PRELIMINARIES =======================================================
library(TruncatedNormal)
library(lpSolveAPI)
library(ROI)
library(Matrix)
library(pracma)
library(CVXR)
library(foreach)
library(purrr)

# DELTA^{SDRMM}(Mbar) FUNCTIONS ---------------------------------------------
# In this section, we implement helper functions to place testing with
# Delta^{SDRMM}(Mbar) into the form needed to use the ARP functions.
9 changes: 0 additions & 9 deletions R/flci.R
@@ -8,15 +8,6 @@
# This script contains functions that are used to construct
# the FLCI for a general choice of vector l and Delta = Delta^{SD}(M)

# PRELIMINARIES =======================================================
library(TruncatedNormal)
library(lpSolveAPI)
library(ROI)
library(Matrix)
library(pracma)
library(CVXR)
library(foreach)

# FLCI HELPER FUNCTIONS -----------------------------------------------
.createConstraints_AbsoluteValue <- function(sigma, numPrePeriods, UstackW){
# This function creates linear constraints that help to minimize worst-case bias
19 changes: 7 additions & 12 deletions R/sensitivityresults.R
@@ -8,15 +8,6 @@
# Implements functions to perform sensitivity analysis on event study coefficients


# PRELIMINARIES =======================================================
library(TruncatedNormal)
library(lpSolveAPI)
library(ROI)
library(Matrix)
library(pracma)
library(CVXR)
library(foreach)

# Construct Robust Results Function for Smoothness Restrictions -----------------------------------
createSensitivityResults <- function(betahat, sigma,
numPrePeriods, numPostPeriods,
@@ -31,7 +22,7 @@ createSensitivityResults <- function(betahat, sigma,
# If Mvec is null, construct default Mvec
if (base::is.null(Mvec)) {
if (numPrePeriods == 1) {
Mvec = base::seq(from = 0, to = base::c(base::sqrt(base::sigma[1, 1])), length.out = 10)
Mvec = base::seq(from = 0, to = base::c(base::sqrt(sigma[1, 1])), length.out = 10)
} else {
Mub = DeltaSD_upperBound_Mpre(betahat = betahat, sigma = sigma, numPrePeriods = numPrePeriods, alpha = 0.05)
Mvec = base::seq(from = 0, to = Mub, length.out = 10)
@@ -420,7 +411,9 @@ createSensitivityResults_relativeMagnitudes <- function(betahat, sigma,
monotonicityDirection = NULL,
biasDirection = NULL,
alpha = 0.05,
gridPoints = 10^3, grid.ub = NA, grid.lb = NA,
gridPoints = 10^3,
grid.ub = NA,
grid.lb = NA,
parallel = FALSE) {

# If Mbarvec is null, construct default Mbarvec to be 10 values on [0,2].
@@ -681,7 +674,9 @@ createSensitivityResults_relativeMagnitudes <- function(betahat, sigma,
base::return(Results)
}

createSensitivityPlot_relativeMagnitudes <- function(robustResults, originalResults, rescaleFactor = 1, maxMbar = Inf, add_xAxis = TRUE) {
createSensitivityPlot_relativeMagnitudes <- function(robustResults, originalResults,
rescaleFactor = 1, maxMbar = Inf,
add_xAxis = TRUE) {
# Set Mbar for OLS to be the min Mbar in robust results minus the gap between Mbars in robust
Mbargap <- base::min( base::diff( base::sort( robustResults$Mbar) ) )
Mbarmin <- base::min( robustResults$Mbar)