index.json
[{"authors":null,"categories":null,"content":"In 2021, I was awarded my Statistics Ph.D. from UCLA, receiving funding via the NSF Graduate Research Fellowship from 2019 onward. Broadly, my research focused on leveraging statistical methods to more credibly infer treatment effects and causal relationships.\nAs a former statistical consultant, I am experienced in applying statistical and causal models to assess real-world impact. I earned my BS in Statistics from UCLA in 2017. In 2018, I co-founded the Society of Women in Statistics at UCLA, recognized and funded by UCLA and the Statistics Department.\n","date":-62135596800,"expirydate":-62135596800,"kind":"section","lang":"en","lastmod":-62135596800,"objectID":"598b63dd58b43bce02403646f240cd3c","permalink":"https://skahmann3.github.io/author/admin/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/author/admin/","section":"author","summary":"In 2021, I was awarded my Statistics Ph.D. from UCLA, receiving funding via the NSF Graduate Research Fellowship from 2019 onward. Broadly, my research focused on leveraging statistical methods to more credibly infer treatment effects and causal relationships.\nAs a former statistical consultant, I am experienced in applying statistical and causal models to assess real-world impact. I earned my BS in Statistics from UCLA in 2017. In 2018, I co-founded the Society of Women in Statistics at UCLA, recognized and funded by UCLA and the Statistics Department.","tags":null,"title":"","type":"author"},{"authors":null,"categories":null,"content":"","date":-62135596800,"expirydate":-62135596800,"kind":"section","lang":"en","lastmod":-62135596800,"objectID":"d41d8cd98f00b204e9800998ecf8427e","permalink":"https://skahmann3.github.io/author/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/author/","section":"author","summary":"","tags":null,"title":"Authors","type":"author"},{"authors":["S Kahmann"],"categories":null,"content":"","date":1631257200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1631257200,"objectID":"0c8dc67b4571be181a714210253673ef","permalink":"https://skahmann3.github.io/publication/balance/","publishdate":"2021-09-10T00:00:00-07:00","relpermalink":"/publication/balance/","section":"publication","summary":"To justify the credibility of their causal designs, researchers are increasingly reporting the results of falsification analyses on the observable implications of their necessary causal assumptions. Traditional hypothesis testing procedures for these purposes are improperly formulated, therefore this work contributes to the growing body of research promoting equivalence-based tests for the falsification of causal assumptions (e.g. Hartman and Hidalgo, 2018; Hartman, 2020; Bilinski and Hatfield, 2018). To this end, I first present an empirical application with an emphasis on falsification testing (Chapter 2). In this chapter, augmented synthetic control methods are used to estimate the causal effect of a Los Angeles police reform policy, supplemented with falsification analyses to both strengthen the credibility of the design decisions and contextualize the significance of the results in context. The remainder of this work proposes equivalence-based tests. Chapter 3 develops an equivalence test for conditional independence hypotheses, paying particular attention to the interpretability and specification of the necessary equivalence range. 
Many causal designs require the conditional ignorability assumption, and thus balance testing is proposed as an application of this method. Chapter 4 takes a broader view of causal assumption evaluation. For a suite of common causal designs (selection on observables, mediation, difference-in-differences, regression discontinuity, synthetic control, and instrumental variables), I map the necessary causal assumptions to the testable observable implications, ultimately contrasting traditional falsification approaches to proposed equivalence-based alternatives. To conclude, Chapter 5 discusses the implications and limitations of this work with an emphasis on open questions in causal assumption evaluation.","tags":[],"title":"Falsification Testing for Causal Design Assumptions","type":"publication"},{"authors":[],"categories":null,"content":"","date":1613721600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1613721600,"objectID":"f7275002f935ad6ca87d44e044220a65","permalink":"https://skahmann3.github.io/talk/asa-csp-2021/","publishdate":"2021-02-19T00:00:00-08:00","relpermalink":"/talk/asa-csp-2021/","section":"talk","summary":"Amidst the push for data-driven decision making, policymakers increasingly rely on statisticians to evaluate program effectiveness before allocating additional resources to policy expansion. To estimate the effect of a policy, one must infer what would have happened to the treated unit had it not received treatment. This causal inference problem is further complicated by the hallmarks of many policy problems: observational data, few or one treated unit(s), site-selection bias, and an imperfect pool of naturally-occurring controls. We introduce synthetic control methods, an important advancement that aims to alleviate these problems by estimating a synthetic control, a combination of control units constructed to mirror the treated unit in terms of pre-treatment characteristics. However, with so few treated units, researchers must carefully justify model-based decisions and quantify uncertainty in communicating final results to clients. Using a recent application in community policing, we implement the augmented synthetic control method and demonstrate how falsification tests can supplement model output to contextualize the substantive significance of results.","tags":[],"title":"Evaluating Policy and Quantifying Uncertainty with Few (or One) Treated Unit(s): An Introduction to Synthetic Control Methods and Falsification Analyses","type":"talk"},{"authors":["S Kahmann","E Hartman","J Leap","P.J. Brantingham"],"categories":null,"content":"","date":1601276400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1601276400,"objectID":"b2cc9d1bfd55a9f1ccc072e1c159357a","permalink":"https://skahmann3.github.io/publication/csp/","publishdate":"2020-09-28T00:00:00-07:00","relpermalink":"/publication/csp/","section":"publication","summary":"In 2011, the Los Angeles Police Department (LAPD), in conjunction with other governmental and nonprofit groups, launched the Community Safety Partnership (CSP) in several communities long impacted by multi-generational gangs, violent crime and a heavy-handed approach to crime suppression. Following a relationship-based policing model, officers were assigned to work collaboratively with community members to reduce crime and build trust. However, evaluating the causal impact of this policy intervention is difficult given the unique nature of the units and time period where CSP was implemented. 
In this paper, we use a novel data set based on the LAPD's reported crime incidents and calls-for-service to evaluate the effectiveness of this program via augmented synthetic control models, a cutting-edge method for policy evaluation. We perform falsification analyses to evaluate the robustness of the results. In the public housing developments where it was first deployed, CSP reduced reported violent crime incidents, shots fired and violent crime calls, and Part I reported crime incidents. We do not find evidence of crime displacement from CSP regions to neighboring control regions. These results are promising for policy-makers interested in policing reform.","tags":[],"title":"Impact Evaluation of the LAPD Community Safety Partnership","type":"publication"},{"authors":[],"categories":null,"content":"","date":1594710000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1594710000,"objectID":"bce45c5121cf4317c7981fc925fe3bee","permalink":"https://skahmann3.github.io/talk/polmeth2020/","publishdate":"2020-07-14T00:00:00-07:00","relpermalink":"/talk/polmeth2020/","section":"talk","summary":"In 2011, the Los Angeles Police Department (LAPD), in conjunction with other governmental and nonprofit groups, launched the Community Safety Partnership (CSP). Through this community policing program, officers are assigned to work alongside community members in Los Angeles public housing developments to reduce crime by building relationships and trust within these communities. In this paper, we use LAPD reported crime incidents and calls for service data to evaluate the effectiveness of this program via augmented synthetic control models. Extensive falsification analyses are provided to evaluate the robustness of the results. CSP is shown to have reduced reported violent crime incidents as well as shots fired and violent crime calls starting roughly three years after CSP implementation. We do not find evidence of crime displacement from CSP regions to neighboring control regions.","tags":[],"title":"Impact Evaluation of the LAPD Community Safety Partnership","type":"talk"},{"authors":[],"categories":null,"content":"","date":1505372400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1505372400,"objectID":"301be98d5bd8debf483ed85a33cbb369","permalink":"https://skahmann3.github.io/talk/brdis/","publishdate":"2017-09-14T00:00:00-07:00","relpermalink":"/talk/brdis/","section":"talk","summary":"Multiple imputation in business establishment surveys like BRDIS, an annual business survey in which some companies are sampled every year or multiple years, may enhance the estimates of total R\u0026D in addition to helping researchers estimate models with subpopulations of small sample size. Considering a panel of BRDIS companies throughout the years 2008 to 2013 linked to LBD data, this paper uses the conclusions obtained with missing data visualization and other explorations to come up with a strategy to conduct multiple imputation appropriate to address the item nonresponse in R\u0026D expenditures. 
Because survey design characteristics are behind much of the item and unit nonresponse, multiple imputation of missing data in BRDIS changes the estimates of total R\u0026D significantly and alters the conclusions reached by models of the determinants of R\u0026D investment obtained with complete case analysis.","tags":[],"title":"R\u0026D, Attrition and Multiple Imputation in BRDIS","type":"talk"},{"authors":["J Sanchez","S Kahmann"],"categories":null,"content":"","date":1485932400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1485932400,"objectID":"1dc67ab3fda2c63a917d21eea2dac141","permalink":"https://skahmann3.github.io/publication/brdis/","publishdate":"2017-02-01T00:00:00-07:00","relpermalink":"/publication/brdis/","section":"publication","summary":"Multiple imputation in business establishment surveys like BRDIS, an annual business survey in which some companies are sampled every year or multiple years, may enhance the estimates of total R\u0026D in addition to helping researchers estimate models with subpopulations of small sample size. Considering a panel of BRDIS companies throughout the years 2008 to 2013 linked to LBD data, this paper uses the conclusions obtained with missing data visualization and other explorations to come up with a strategy to conduct multiple imputation appropriate to address the item nonresponse in R\u0026D expenditures. Because survey design characteristics are behind much of the item and unit nonresponse, multiple imputation of missing data in BRDIS changes the estimates of total R\u0026D significantly and alters the conclusions reached by models of the determinants of R\u0026D investment obtained with complete case analysis.","tags":["Multiple Imputation","R\u0026D","Attrition","Unit Nonresponse","Item Nonresponse","MICE"],"title":"R\u0026D, Attrition and Multiple Imputation in BRDIS","type":"publication"},{"authors":[],"categories":null,"content":"","date":1470812400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1470812400,"objectID":"f985049fe55f17ddf5cc17afcde41c39","permalink":"https://skahmann3.github.io/talk/umbc/","publishdate":"2016-08-10T00:00:00-07:00","relpermalink":"/talk/umbc/","section":"talk","summary":"Predictions of climate variables like precipitation and maximum/minimum temperatures play a crucial role in assessing the impact of decadal climate changes on regional water availability. This technical report describes a Graphical User Interface (GUI) called CMIViz developed as part of the 2016 REU program at UMBC. CMIViz is an R tool used for exploration and visualization of spatio-temporal climate data from the Missouri River Basin (MRB). The tool is developed using the R package ‘Shiny’, which facilitates access through a web browser. Since prediction of precipitation is more challenging than the prediction of maximum/minimum temperatures, CMIViz provides more visualization options for precipitation. Specifically, the tool provides an easy intercomparison of data from the Global Climate Models (GCM): MIROC5, HadCM3, and NCAR-CCSM4 in terms of bias relative to the observed data, root mean-squared error (RMSE), and other measures of interest for daily precipitation. The tool has options to explore the temporal trends and autocorrelation patterns given a location and spatial patterns using contour plots, surface plots, and semivariograms given a time point. 
CMIViz also provides visualization of canonical correlation analysis (CCA) to help find similarities between the models.","tags":[],"title":"Enhanced Data Exploration and Visualization Tool for Large Spatio-Temporal Climate Data","type":"talk"},{"authors":["E Crasto","S Kahmann","P Rodriguez","B Smith","SK Popuri","N Wijekoon","NK Neerchal"],"categories":null,"content":"","date":1470009600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1470009600,"objectID":"b0132eb0c472642e6ef81cf2538ec561","permalink":"https://skahmann3.github.io/publication/cmiviz/","publishdate":"2016-08-01T00:00:00Z","relpermalink":"/publication/cmiviz/","section":"publication","summary":"Predictions of climate variables like precipitation and maximum/minimum temperatures play a crucial role in assessing the impact of decadal climate changes on regional water availability. This technical report describes a Graphical User Interface (GUI) called CMIViz developed as part of the 2016 REU program at UMBC. CMIViz is an R tool used for exploration and visualization of spatio-temporal climate data from the Missouri River Basin (MRB). The tool is developed using the R package ‘Shiny’, which facilitates access through a web browser. Since prediction of precipitation is more challenging than the prediction of maximum/minimum temperatures, CMIViz provides more visualization options for precipitation. Specifically, the tool provides an easy intercomparison of data from the Global Climate Models (GCM): MIROC5, HadCM3, and NCAR-CCSM4 in terms of bias relative to the observed data, root mean-squared error (RMSE), and other measures of interest for daily precipitation. The tool has options to explore the temporal trends and autocorrelation patterns given a location and spatial patterns using contour plots, surface plots, and semivariograms given a time point. CMIViz also provides visualization of canonical correlation analysis (CCA) to help find similarities between the models.","tags":["Graphical user interface (GUI)","Global Climate Models (GCM)","Missouri River Basin (MRB)","spatio-temporal analysis","exploratory data analysis (EDA)","MIROC5","HadCM3","NCAR-CCSM4"],"title":"Enhanced Data Exploration and Visualization Tool for Large Spatio-Temporal Climate Data","type":"publication"}]