---
title: "Survival Analysis in R"
output:
  html_document:
    toc: TRUE
    toc_float: TRUE
---
<link rel="stylesheet" href="styles.css" type="text/css">
<link rel="stylesheet" href="academicicons/css/academicons.min.css"/>
This tutorial provides an introduction to survival analysis, and to conducting a survival analysis in R.
This tutorial was originally presented at the Memorial Sloan Kettering Cancer Center R-Presenters series on August 30, 2018.
It was then modified for a more extensive training at Memorial Sloan Kettering Cancer Center in March, 2019.
The full source code for this tutorial can be found at: [https://github.com/zabore/tutorials/blob/master/survival_analysis_in_r_tutorial.Rmd](https://github.com/zabore/tutorials/blob/master/survival_analysis_in_r_tutorial.Rmd)
```{r setup, include=FALSE}
# load packages
library(knitr)
library(tidyverse)
library(lubridate)
library(survival)
library(survminer)

# set output options
opts_chunk$set(fig.width = 5,
               fig.height = 4)
opts_chunk$set(warning = FALSE,
               message = FALSE)
```
# Part 1: Introduction to Survival Analysis
This presentation will cover some basics of survival analysis, and the following series of tutorial papers can be helpful for additional reading:
> Clark, T., Bradburn, M., Love, S., & Altman, D. (2003). Survival analysis part I: Basic concepts and first analyses. British Journal of Cancer, 89(2), 232-238.

> Bradburn, M., Clark, T., Love, S., & Altman, D. (2003). Survival analysis part II: Multivariate data analysis -- an introduction to concepts and methods. British Journal of Cancer, 89(3), 431-436.

> Bradburn, M., Clark, T., Love, S., & Altman, D. (2003). Survival analysis part III: Multivariate data analysis -- choosing a model and assessing its adequacy and fit. British Journal of Cancer, 89(4), 605-611.

> Clark, T., Bradburn, M., Love, S., & Altman, D. (2003). Survival analysis part IV: Further concepts and methods in survival analysis. British Journal of Cancer, 89(5), 781-786.
## Packages
Some packages we'll be using today include:
- `lubridate`
- `survival`
- `survminer`
```{r}
library(survival)
library(survminer)
library(lubridate)
```
## What is survival data?
Time-to-event data that consist of a distinct start time and end time.
Examples from cancer
- Time from surgery to death
- Time from start of treatment to progression
- Time from response to recurrence
## Examples from other fields
Time-to-event data are common in many fields including, but not limited to
- Time from HIV infection to development of AIDS
- Time to heart attack
- Time to onset of substance abuse
- Time to initiation of sexual activity
- Time to machine malfunction
## Aliases for survival analysis
Because survival analysis is common in many other fields, it also goes by other names
- Reliability analysis
- Duration analysis
- Event history analysis
- Time-to-event analysis
## The lung dataset
The `lung` dataset is available from the `survival` package in `R`. The data contain subjects with advanced lung cancer from the North Central Cancer Treatment Group. Some variables we will use to demonstrate methods today include
- time: Survival time in days
- status: censoring status 1=censored, 2=dead
- sex: Male=1 Female=2
## What is censoring?
```{r trial_anatomy, echo = FALSE}
include_graphics(here::here("img", "trial_anatomy.png"))
```
> Rich JT, Neely JG, Paniello RC, Voelker CCJ, Nussenbaum B, Wang EW. A practical guide to understanding Kaplan-Meier curves. Otolaryngology-Head and Neck Surgery. 2010;143(3):331-336. doi:10.1016/j.otohns.2010.05.007.
## Types of censoring
A subject may be censored due to:
- Loss to follow-up
- Withdrawal from study
- No event by end of fixed study period
Specifically these are examples of **right** censoring.
Left censoring and interval censoring are also possible, and methods exist to analyze this type of data, but this training will be limited to right censoring.
## Censored survival data
```{r swimmer, echo = FALSE}
# make fake data
set.seed(20180809)
fkdt <- tibble(Subject = as.factor(1:10),
               Years = sample(4:20, 10, replace = T),
               censor = sample(c("Censor", rep("Event", 2)), 10,
                               replace = T))

# plot with shapes to indicate censoring or event
ggplot(fkdt, aes(Subject, Years)) +
  geom_bar(stat = "identity", width = 0.5) +
  geom_point(data = fkdt,
             aes(Subject, Years, color = censor, shape = censor),
             size = 6) +
  coord_flip() +
  theme_minimal() +
  theme(legend.title = element_blank(),
        legend.position = "bottom")
```
In this example, how would we compute the proportion who are event-free at 10 years?
Subjects 6 and 7 were **event-free at 10 years**. Subjects 2, 9, and 10 had the **event before 10 years**. Subjects 1, 3, 4, 5, and 8 were **censored before 10 years**, so we don't know whether they had the event or not by 10 years - how do we incorporate these subjects into our estimate?
## Distribution of follow-up time
- Censored subjects still provide information so must be appropriately included in the analysis
- Distribution of follow-up times is skewed, and may differ between censored patients and those with events
- Follow-up times are always positive
```{r fuptimes, echo = FALSE}
ggplot(lung, aes(x = time, fill = factor(status))) +
  geom_histogram(bins = 25, alpha = 0.6, position = "identity") +
  scale_fill_manual(values = c("blue", "red"), labels = c("Censored", "Dead")) +
  ezfun::theme_ezbasic() +
  labs(x = "Days",
       y = "Count")
```
## Components of survival data
For subject $i$:
1. Event time $T_i$
2. Censoring time $C_i$
3. Event indicator $\delta_i$:
    - 1 if event observed (i.e. $T_i \leq C_i$)
    - 0 if censored (i.e. $T_i > C_i$)
4. Observed time $Y_i = \min(T_i, C_i)$
The observed times and an event indicator are provided in the `lung` data
- time: Survival time in days ($Y_i$)
- status: censoring status 1=censored, 2=dead ($\delta_i$)
```{r viewlung, echo = FALSE}
kable(head(lung))
```
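To make the notation concrete, here is a small sketch with made-up event and censoring times (hypothetical values, not taken from `lung`) showing how the observed time and event indicator are derived:

```{r}
# hypothetical latent event times (T_i) and censoring times (C_i) for 3 subjects
components_ex <- tibble(
  T_event  = c(3.2, 7.5, 4.1),
  C_censor = c(5.0, 6.0, 9.0)
)

components_ex %>%
  mutate(
    obs_time = pmin(T_event, C_censor),         # Y_i = min(T_i, C_i)
    event    = as.numeric(T_event <= C_censor)  # delta_i = 1 if the event was observed
  )
```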
## Dealing with dates in R
Data will often come with start and end dates rather than pre-calculated survival times. The first step is to make sure these are formatted as dates in R.
Let's create a small example dataset with variables `sx_date` for surgery date and `last_fup_date` for the last follow-up date.
```{r datedata}
date_ex <-
  tibble(
    sx_date = c("2007-06-22", "2004-02-13", "2010-10-27"),
    last_fup_date = c("2017-04-15", "2018-07-04", "2016-10-31")
  )

date_ex
```
We see these are both character variables, which will often be the case, but we need them to be formatted as dates.
## Formatting dates - base R
```{r format_date1}
date_ex %>%
  mutate(
    sx_date = as.Date(sx_date, format = "%Y-%m-%d"),
    last_fup_date = as.Date(last_fup_date, format = "%Y-%m-%d")
  )
```
- Note that in base `R` the format must include the separator as well as the symbol. e.g. if your date is in format m/d/Y then you would need `format = "%m/%d/%Y"`
- See a full list of date format symbols at [https://www.statmethods.net/input/dates.html](https://www.statmethods.net/input/dates.html)
## Formatting dates - lubridate package
We can also use the `lubridate` package to format dates. In this case, use the `ymd` function
```{r format_date2, message = FALSE}
date_ex %>%
  mutate(
    sx_date = ymd(sx_date),
    last_fup_date = ymd(last_fup_date)
  )
```
- Note that unlike the base `R` option, the separators do not need to be specified
- The help page for `?dmy` will show all format options.
## Calculating survival times - base R
Now that the dates are formatted, we need to calculate the difference between start and end time in some units, usually months or years. In base `R`, use `difftime` to calculate the number of days between our two dates and convert it to a numeric value using `as.numeric`. Then convert to years by dividing by `365.25`, the average number of days in a year.
```{r format_for_real, echo = FALSE}
# First need to actually format the dates in the date_ex dataset
date_ex <-
  date_ex %>%
  mutate(last_fup_date = ymd(last_fup_date),
         sx_date = ymd(sx_date))
```
```{r difftime_ex1}
date_ex %>%
  mutate(
    os_yrs =
      as.numeric(
        difftime(last_fup_date,
                 sx_date,
                 units = "days")) / 365.25
  )
```
## Calculating survival times - lubridate
Using the `lubridate` package, the operator `%--%` designates a time interval, which is then converted to the number of elapsed seconds using `as.duration` and finally converted to years by dividing by `dyears(1)`, which gives the number of seconds in a year.
```{r difftime_ex2, message = FALSE, warning = FALSE}
date_ex %>%
  mutate(
    os_yrs =
      as.duration(sx_date %--% last_fup_date) / dyears(1)
  )
```
- *Note*: we need to load the `lubridate` package with a call to `library` in order to access the special operators (similar to the situation with pipes)
## Event indicator
For the components of survival data I mentioned the event indicator:
Event indicator $\delta_i$:
- 1 if event observed (i.e. $T_i \leq C_i$)
- 0 if censored (i.e. $T_i > C_i$)
However, in `R` the `Surv` function will also accept TRUE/FALSE (TRUE = event) or 1/2 (2 = event).
In the `lung` data, we have:
- status: censoring status 1=censored, 2=dead
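For example, the 1/2 coding used in `lung` and an equivalent TRUE/FALSE recoding produce the same survival object (a quick check; the `Surv` function is covered in more detail below):

```{r}
# status coded 1 = censored, 2 = dead, as in the lung data
Surv(lung$time, lung$status)[1:5]

# the same events recoded as TRUE/FALSE (TRUE = event)
Surv(lung$time, lung$status == 2)[1:5]
```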
## Survival function
The probability that a subject will survive beyond any given specified time
$$S(t) = Pr(T>t) = 1 - F(t)$$
$S(t)$: survival function
$F(t) = Pr(T \leq t)$: cumulative distribution function
In theory the survival function is smooth; in practice we observe events on a discrete time scale.
## Survival probability
- **Survival probability** at a certain time, $S(t)$, is a conditional probability of surviving beyond that time, given that an individual has survived just prior to that time.
- Can be estimated as the number of patients who are alive without loss to follow-up at that time, divided by the number of patients who were alive just prior to that time
- The **Kaplan-Meier** estimate of survival probability is the product of these conditional probabilities up until that time
- At time 0, the survival probability is 1, i.e. $S(t_0) = 1$
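To make the product of conditional probabilities concrete, here is a minimal sketch (not part of the original tutorial code) that builds the Kaplan-Meier estimate by hand from the `lung` data: at each distinct observed time we count the number still at risk and the number of deaths, then take the cumulative product of the conditional survival probabilities.

```{r}
km_by_hand <-
  lung %>%
  arrange(time) %>%
  mutate(n_risk = n() - row_number() + 1) %>%   # still at risk just before this time
  group_by(time) %>%
  summarize(n_risk = max(n_risk),
            n_event = sum(status == 2)) %>%     # deaths at this time
  mutate(surv = cumprod(1 - n_event / n_risk))  # product of conditional probabilities

head(km_by_hand)
```

The `surv` column should agree with the estimates returned by `survfit`, which we use below.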
## Creating survival objects
The Kaplan-Meier method is the most common way to estimate survival times and probabilities. It is a non-parametric approach that results in a step function, where there is a step down each time an event occurs.
- The `Surv` function from the `survival` package creates a survival object for use as the response in a model formula. There will be one entry for each subject that is the survival time, which is followed by a `+` if the subject was censored. Let's look at the first 10 observations:
```{r survfunc}
Surv(lung$time, lung$status)[1:10]
```
## Estimating survival curves with the Kaplan-Meier method
- The `survfit` function creates survival curves based on a formula. Let's generate the overall survival curve for the entire cohort, assign it to object `f1`, and look at the `names` of that object:
```{r lung_survfit}
f1 <- survfit(Surv(time, status) ~ 1, data = lung)
names(f1)
```
Some key components of this `survfit` object that will be used to create survival curves include:
- `time`, which contains the start and endpoints of each time interval
- `surv`, which contains the survival probability corresponding to each `time`
## Kaplan-Meier plot - base R
Now we plot the `survfit` object in base `R` to get the Kaplan-Meier plot.
```{r}
plot(survfit(Surv(time, status) ~ 1, data = lung),
     xlab = "Days",
     ylab = "Overall survival probability")
```
- The default plot in base `R` shows the step function (solid line) with associated confidence intervals (dotted lines)
- Horizontal lines represent survival duration for the interval
- An interval is terminated by an event
- The height of vertical lines show the change in cumulative probability
- Censored observations, indicated by tick marks, reduce the cumulative survival between intervals. (*Note* the tick marks for censored patients are not shown by default, but could be added using the option `mark.time = TRUE`)
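For example, to add the censoring tick marks mentioned above (a small variation on the plot just shown):

```{r}
plot(survfit(Surv(time, status) ~ 1, data = lung),
     xlab = "Days",
     ylab = "Overall survival probability",
     mark.time = TRUE)
```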
## Kaplan-Meier plot - ggsurvplot
Alternatively, the `ggsurvplot` function from the `survminer` package is built on `ggplot2`, and can be used to create Kaplan-Meier plots. Check out the [cheatsheet](https://rpkgs.datanovia.com/survminer/survminer_cheatsheet.pdf) for the `survminer` package.
```{r, message = FALSE, warning = FALSE}
ggsurvplot(
  fit = survfit(Surv(time, status) ~ 1, data = lung),
  xlab = "Days",
  ylab = "Overall survival probability")
```
- The default plot using `ggsurvplot` shows the step function (solid line) with associated confidence bands (shaded area).
- The tick marks for censored patients are shown by default, somewhat obscuring the line itself in this example, and could be suppressed using the option `censor = FALSE`
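For example, to suppress those tick marks:

```{r, message = FALSE, warning = FALSE}
ggsurvplot(
  fit = survfit(Surv(time, status) ~ 1, data = lung),
  xlab = "Days",
  ylab = "Overall survival probability",
  censor = FALSE)
```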
## Estimating $x$-year survival
One quantity often of interest in a survival analysis is the probability of surviving beyond a certain number ($x$) of years.
For example, to estimate the probability of surviving to $1$ year, use `summary` with the `times` argument (*Note* the `time` variable in the `lung` data is actually in days, so we need to use `times = 365.25`)
```{r 5yrest}
summary(survfit(Surv(time, status) ~ 1, data = lung), times = 365.25)
```
We find that the $1$-year probability of survival in this study is `r round(summary(survfit(Surv(time, status) ~ 1, data = lung), times = 365.25)$surv * 100)`%.
The associated lower and upper bounds of the 95\% confidence interval are also displayed.
## $x$-year survival and the survival curve
The $1$-year survival probability is the point on the y-axis that corresponds to $1$ year on the x-axis for the survival curve.
```{r, message = FALSE, echo = FALSE, fig.height = 5}
plot_main <-
  ggsurvplot(
    data = lung,
    fit = f1,
    xlab = "Months",
    legend = "none",
    xscale = 30.4,
    break.x.by = 182.4,
    risk.table = TRUE,
    risk.table.y.text = FALSE)

plot1 <- plot_main
plot1$plot <- plot1$plot +
  geom_segment(x = 365.25, xend = 365.25, y = -0.05, yend = 0.4092416,
               size = 1.5) +
  geom_segment(x = 365.25, xend = -40, y = 0.4092416, yend = 0.4092416,
               size = 1.5,
               arrow = arrow(length = unit(0.2, "inches")))

plot1
```
## $x$-year survival is often estimated incorrectly
What happens if you use a "naive" estimate?
`r table(lung$status[lung$time <= 365.25])[2]` of the `r nrow(lung)` patients died by $1$ year so:
$$\Big(1 - \frac{121}{228}\Big) \times 100 = 47\%$$
- You get an **incorrect** estimate of the $1$-year probability of survival when you ignore the fact that `r table(lung$status[lung$time <= 365.25])[1]` patients were censored before $1$ year.
- Recall the **correct** estimate of the $1$-year probability of survival was `r round(summary(f1, times = 365.25)$surv * 100)`%.
## Impact on $x$-year survival of ignoring censoring
- Imagine two studies, each with 228 subjects. There are 165 deaths in each study. No censoring in one (orange line), 63 patients censored in the other (blue line)
- Ignoring censoring leads to an **overestimate** of the overall survival probability, because the censored subjects only contribute information for **part** of the follow-up time, and then fall out of the risk set, thus pulling down the cumulative probability of survival
```{r echo = FALSE, message = FALSE, warning = FALSE, fig.height = 6}
fakedata2 <- lung %>%
  mutate(time = ifelse(status == 2, time, 1022),
         group = "No censoring") %>%
  full_join(mutate(lung, group = "With censoring"))

fit3 <- survfit(Surv(time, status) ~ group, data = fakedata2)

ggsurvplot(
  data = fakedata2,
  fit = fit3,
  xlab = "Months",
  legend = "bottom",
  legend.title = "",
  legend.labs = c("No censoring", "With censoring"),
  xscale = 30.4,
  break.x.by = 182.4,
  risk.table = TRUE,
  risk.table.y.text = FALSE)
```
## Estimating median survival time
Another quantity often of interest in a survival analysis is the average survival time, which we quantify using the median. Survival times are not expected to be normally distributed so the mean is not an appropriate summary.
We can obtain this directly from our `survfit` object
```{r}
survfit(Surv(time, status) ~ 1, data = lung)
```
We see the median survival time is `r round(summary(f1)$table["median"], 1)` days. The lower and upper bounds of the 95\% confidence interval are also displayed.
## Median survival time and the survival curve
Median survival is the time corresponding to a survival probability of $0.5$:
```{r, message = FALSE, echo = FALSE, fig.height = 5}
plot2 <- plot_main
plot2$plot <- plot2$plot +
  geom_segment(x = -45, xend = 310, y = 0.5, yend = 0.5, size = 1.5) +
  geom_segment(x = 310, xend = 310, y = 0.5, yend = -0.03, size = 1.5,
               arrow = arrow(length = unit(0.2, "inches")))

plot2
```
## Median survival is often estimated incorrectly
What happens if you use a "naive" estimate?
Summarize the median survival time among the `r table(lung$status)[2]` patients who died
```{r}
lung %>%
  filter(status == 2) %>%
  summarize(median_surv = median(time))
```
- You get an **incorrect** estimate of median survival time of `r round(median(lung$time[lung$status == 2]), 1)` days when you ignore the fact that censored patients also contribute follow-up time.
- Recall the **correct** estimate of median survival time is `r round(summary(f1)$table["median"], 1)` days.
## Impact on median survival of ignoring censoring
- Ignoring censoring creates an artificially lowered survival curve because the follow-up time that censored patients contribute is excluded (purple line)
- The true survival curve for the `lung` data is shown in blue for comparison
```{r echo = FALSE, fig.height = 6, message = FALSE, warning = FALSE}
fakedata <- lung %>%
  filter(status == 2) %>%
  mutate(group = "Ignoring censoring") %>%
  full_join(mutate(lung, group = "With censoring"))

fit2 <- survfit(Surv(time, status) ~ group, data = fakedata)

ggsurvplot(
  data = fakedata,
  fit = fit2,
  xlab = "Months",
  legend = "bottom",
  legend.title = "",
  legend.labs = c("Ignoring censoring", "With censoring"),
  xscale = 30.4,
  break.x.by = 182.4,
  risk.table = TRUE,
  risk.table.y.text = FALSE)
```
## Comparing survival times between groups
- We can conduct between-group significance tests using a log-rank test
- The log-rank test equally weights observations over the entire follow-up time and is the most common way to compare survival times between groups
- There are versions that more heavily weight the early or late follow-up that could be more appropriate depending on the research question (see `?survdiff` for different test options)
We get the log-rank p-value using the `survdiff` function. For example, we can test whether there was a difference in survival time according to sex in the `lung` data
```{r}
survdiff(Surv(time, status) ~ sex, data = lung)
```
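As noted above, `survdiff` also supports alternative weightings through its `rho` argument. For example, `rho = 1` gives the Peto & Peto modification of the Gehan-Wilcoxon test, which puts more weight on earlier follow-up time:

```{r}
# weights earlier follow-up more heavily than the (default) log-rank test
survdiff(Surv(time, status) ~ sex, data = lung, rho = 1)
```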
## Extracting information from a survdiff object
It's actually a bit cumbersome to extract a p-value from the results of `survdiff`. Here's a line of code to do it
```{r}
sd <- survdiff(Surv(time, status) ~ sex, data = lung)
1 - pchisq(sd$chisq, length(sd$n) - 1)
```
Or there is the `sdp` function in the `ezfun` package, which you can install using `devtools::install_github("zabore/ezfun")`. It returns a formatted p-value
```{r}
ezfun::sdp(sd)
```
## The Cox regression model
We may want to quantify an effect size for a single variable, or include more than one variable into a regression model to account for the effects of multiple variables.
The Cox regression model is a semi-parametric model that can be used to fit univariable and multivariable regression models that have survival outcomes.
$$h(t|X_i) = h_0(t) \exp(\beta_1 X_{i1} + \cdots + \beta_p X_{ip})$$
$h(t)$: hazard, or the instantaneous rate at which events occur
$h_0(t)$: underlying baseline hazard
Some key assumptions of the model:
- non-informative censoring
- proportional hazards
*Note*: parametric regression models for survival outcomes are also available, but they won't be addressed in this training
We can fit regression models for survival data using the `coxph` function, which takes a `Surv` object on the left hand side and has standard syntax for regression formulas in `R` on the right hand side.
```{r}
coxph(Surv(time, status) ~ sex, data = lung)
```
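The proportional hazards assumption listed above can be checked with `cox.zph` from the `survival` package, which tests for a non-zero slope in the scaled Schoenfeld residuals over time (a brief sketch, not part of the original tutorial code):

```{r}
# a small p-value would suggest a violation of proportional hazards
cox.zph(coxph(Surv(time, status) ~ sex, data = lung))
```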
## Formatting Cox regression results
We can see a tidy version of the output using the `tidy` function from the `broom` package:
```{r}
broom::tidy(
  coxph(Surv(time, status) ~ sex, data = lung),
  exp = TRUE
) %>%
  kable()
```
Or use `tbl_regression` from the `gtsummary` package
```{r}
coxph(Surv(time, status) ~ sex, data = lung) %>%
  gtsummary::tbl_regression(exp = TRUE)
```
## Hazard ratios
- The quantity of interest from a Cox regression model is a **hazard ratio (HR)**. The HR represents the ratio of hazards between two groups at any particular point in time.
- The HR is interpreted as the instantaneous rate of occurrence of the event of interest in those who are still at risk for the event. It is **not** a risk, though it is commonly interpreted as such.
- If you have a regression parameter $\beta$ (from column `estimate` in our `coxph`) then HR = $\exp(\beta)$.
- A HR < 1 indicates reduced hazard of death whereas a HR > 1 indicates an increased hazard of death.
- So our HR = 0.59 implies that around 0.6 times as many females are dying as males, at any given time.
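To make the link between $\beta$ and the HR concrete, here is a minimal sketch that refits the model from above and exponentiates the coefficient (`fit_sex` is just a temporary name used for this sketch):

```{r}
fit_sex <- coxph(Surv(time, status) ~ sex, data = lung)

coef(fit_sex)       # the regression parameter beta
exp(coef(fit_sex))  # the hazard ratio, approximately 0.59
```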
```{r echo = FALSE, fig.height = 6, warning = FALSE, message = FALSE}
fit4 <- survfit(Surv(time, status) ~ sex, data = lung)

ggsurvplot(data = lung,
           fit = fit4,
           xlab = "Months",
           xscale = 30.4,
           break.x.by = 182.4,
           fun = "cumhaz",
           legend.title = "",
           legend.labs = c("Male", "Female"),
           legend = "bottom",
           risk.table = TRUE,
           risk.table.y.text = FALSE)
```
# Part 2: Landmark Analysis and Time Dependent Covariates
In Part 1 we covered using log-rank tests and Cox regression to examine associations between covariates of interest and survival outcomes.
But these analyses rely on the covariate being measured at **baseline**, that is, before follow-up time for the event begins.
What happens if you are interested in a covariate that is measured **after** follow-up time begins?
## Example: tumor response
Example: Overall survival is measured from treatment start, and interest is in the association between complete response to treatment and survival.
- Anderson et al (JCO, 1983) described why traditional methods such as log-rank tests or Cox regression are biased in favor of responders in this scenario and proposed the landmark approach.
- The **null hypothesis** in the landmark approach is that survival from the landmark does not depend on response status at the landmark.
> Anderson, J., Cain, K., & Gelber, R. (1983). Analysis of survival by tumor response. Journal of Clinical Oncology : Official Journal of the American Society of Clinical Oncology, 1(11), 710-9.
## Other examples
Some other possible covariates of interest in cancer research that may not be measured at baseline include:
- transplant failure
- graft versus host disease
- second resection
- adjuvant therapy
- compliance
- adverse events
## Example data - the BMT dataset from SemiCompRisks package
Data on 137 bone marrow transplant patients. Variables of interest include:
- `T1` time (in days) to death or last follow-up
- `delta1` death indicator; 1-Dead, 0-Alive
- `TA` time (in days) to acute graft-versus-host disease
- `deltaA` acute graft-versus-host disease indicator; 1-Developed acute graft-versus-host disease, 0-Never developed acute graft-versus-host disease
Let's load the data for use in examples throughout
```{r}
data(BMT, package = "SemiCompRisks")
```
## Landmark method
1. Select a fixed time after baseline as your landmark time. *Note*: this should be done based on clinical information, prior to data inspection
2. Subset population for those followed at least until landmark time. *Note*: always report the number excluded due to the event of interest or censoring prior to the landmark time.
3. Calculate follow-up from landmark time and apply traditional log-rank tests or Cox regression
In the `BMT` data interest is in the association between acute graft versus host disease (aGVHD) and survival. But aGVHD is assessed after the transplant, which is our baseline, or start of follow-up, time.
**Step 1** Select landmark time
Typically aGVHD occurs within the first 90 days following transplant, so we use a 90-day landmark.
**Step 2** Subset population for those followed at least until landmark time
```{r}
lm_dat <-
  BMT %>%
  filter(T1 >= 90)
```
This reduces our sample size from `r nrow(BMT)` to `r nrow(lm_dat)`.
- All `r nrow(BMT) - nrow(lm_dat)` excluded patients died before the 90 day landmark
**Step 3** Calculate follow-up time from landmark and apply traditional methods.
```{r}
lm_dat <-
  lm_dat %>%
  mutate(
    lm_T1 = T1 - 90
  )

lm_fit <- survfit(Surv(lm_T1, delta1) ~ deltaA, data = lm_dat)
```
```{r echo = FALSE, fig.height = 6, warning = FALSE, message = FALSE}
ggsurvplot(
  fit = lm_fit,
  data = lm_dat,
  xlab = "Days from 90-day landmark",
  risk.table = TRUE,
  risk.table.y.text = FALSE,
  pval = TRUE
)
```
## Cox regression landmark example using BMT data
In Cox regression you can use the `subset` option in `coxph` to exclude those patients who were not followed through the landmark time
```{r}
coxph(
  Surv(T1, delta1) ~ deltaA,
  subset = T1 >= 90,
  data = BMT
) %>%
  gtsummary::tbl_regression(exp = TRUE)
```
## Time-dependent covariate
An alternative to a landmark analysis is incorporation of a time-dependent covariate. This may be more appropriate when
- the value of a covariate is changing over time
- there is not an obvious landmark time
- use of a landmark would lead to many exclusions
## Time-dependent covariate data setup
Analysis of time-dependent covariates in `R` requires setup of a special dataset. See the detailed paper on this by the author of the `survival` package [Using Time Dependent Covariates and Time Dependent Coefficients in the Cox Model](https://cran.r-project.org/web/packages/survival/vignettes/timedep.pdf).
There was no ID variable in the `BMT` data, which is needed to create the special dataset, so create one called `my_id`.
```{r}
bmt <- rowid_to_column(BMT, "my_id")
```
Use the `tmerge` function with the `event` and `tdc` function options to create the special dataset.
- `tmerge` creates a long dataset with multiple time intervals for the different covariate values for each patient
- `event` creates the new event indicator to go with the newly created time intervals
- `tdc` creates the time-dependent covariate indicator to go with the newly created time intervals
```{r}
td_dat <-
  tmerge(
    data1 = bmt %>% select(my_id, T1, delta1),
    data2 = bmt %>% select(my_id, T1, delta1, TA, deltaA),
    id = my_id,
    death = event(T1, delta1),
    agvhd = tdc(TA)
  )
```
## Time-dependent covariate - single patient example
To see what this does, let's look at the data for the first 5 individual patients.
The variables of interest in the original data looked like
```{r echo = FALSE}
bmt %>%
  select(my_id, T1, delta1, TA, deltaA) %>%
  filter(my_id %in% seq(1, 5))
```
The new dataset for these same patients looks like
```{r echo = FALSE}
td_dat %>%
  filter(my_id %in% seq(1, 5))
```
## Time-dependent covariate - Cox regression
Now we can analyze this time-dependent covariate as usual using Cox regression with `coxph` and an alteration to our use of `Surv` to include arguments to both `time` and `time2`
```{r}
coxph(
  Surv(time = tstart, time2 = tstop, event = death) ~ agvhd,
  data = td_dat
) %>%
  gtsummary::tbl_regression(exp = TRUE)
```
## Summary
We find that acute graft versus host disease is not significantly associated with death using either landmark analysis or a time-dependent covariate.
Often one will want to use landmark analysis for visualization of a single covariate, and Cox regression with a time-dependent covariate for univariable and multivariable modeling.
# Part 3: Competing Risks
## Packages
The primary package for use in competing risks analyses is
- `cmprsk`
```{r}
library(cmprsk)
```
## What are competing risks?
When subjects have multiple possible events in a time-to-event setting
Examples:
- recurrence
- death from disease
- death from other causes
- treatment response
All or some of these (among others) may be possible events in any given study.
## So what's the problem?
Unobserved dependence among event times is the fundamental problem that leads to the need for special consideration.
For example, one can imagine that patients who recur are more likely to die, and therefore times to recurrence and times to death would not be independent events.
## Background on competing risks
Two approaches to analysis in the presence of multiple potential outcomes:
1. Cause-specific hazard of a given event: this represents the rate per unit of time of the event among those not having failed from other events
2. Cumulative incidence of a given event: this represents the rate per unit of time of the event as well as the influence of competing events
Each of these approaches may only illuminate one important aspect of the data while possibly obscuring others, and the chosen approach should depend on the question of interest.
## A bunch of additional notes and references
- When the events are independent (almost never true), the cause-specific hazards approach is unbiased
- When the events are dependent, a variety of results can be obtained depending on the setting
- Cumulative incidence estimated with Kaplan-Meier is always >= the cumulative incidence from competing risks methods, so it can only overestimate the cumulative incidence; the amount of overestimation depends on event rates and the dependence among events
- To establish that a covariate is indeed acting on the event of interest, cause-specific hazards may be preferred for treatment or prognostic marker effect testing
- To establish overall benefit, subdistribution hazards may be preferred for building prognostic nomograms or considering health economic effects, to get a better sense of the influence of treatment and other covariates on an absolute scale
> Dignam JJ, Zhang Q, Kocherginsky M. The use and interpretation of competing risks regression models. Clin Cancer Res. 2012;18(8):2301-8.
> Kim HT. Cumulative incidence in competing risks data and competing risks regression analysis. Clin Cancer Res. 2007 Jan 15;13(2 Pt 1):559-65.
> Satagopan JM, Ben-Porat L, Berwick M, Robson M, Kutler D, Auerbach AD. A note on competing risks in survival data analysis. Br J Cancer. 2004;91(7):1229-35.
> Austin, P., & Fine, J. (2017). Practical recommendations for reporting Fine‐Gray model analyses for competing risk data. Statistics in Medicine, 36(27), 4391-4400.
## Cumulative incidence for competing risks
- Non-parametric estimation of the cumulative incidence
- Estimates the cumulative incidence of the event of interest
- At any point in time the sum of the cumulative incidence of each event is equal to the total cumulative incidence of any event (not true in the cause-specific setting)
- Gray's test is a modified Chi-squared test used to compare 2 or more groups
## Example Melanoma data from the MASS package
We use the `Melanoma` data from the `MASS` package to illustrate these concepts. It contains variables:
- `time` survival time in days, possibly censored.
- `status` 1 died from melanoma, 2 alive, 3 dead from other causes.
- `sex` 1 = male, 0 = female.
- `age` age in years.
- `year` of operation.
- `thickness` tumor thickness in mm.
- `ulcer` 1 = presence, 0 = absence.
```{r}
data(Melanoma, package = "MASS")
```
## Cumulative incidence in Melanoma data
Estimate the cumulative incidence in the context of competing risks using the `cuminc` function.
*Note*: in the `Melanoma` data, censored patients are coded as $2$ for `status`, so we cannot use the `cencode` option default of $0$
```{r}
cuminc(Melanoma$time, Melanoma$status, cencode = 2)
```
## Plot the cumulative incidence - base R
Generate a base `R` plot with all the defaults.
```{r}
ci_fit <-
  cuminc(
    ftime = Melanoma$time,
    fstatus = Melanoma$status,
    cencode = 2
  )
```
```{r}
plot(ci_fit, xlab = "Days")
```
In the legend:
- The first number indicates the group, in this case there is only an overall estimate so it is $1$ for both
- The second number indicates the event type, in this case the solid line is $1$ for death from melanoma and the dashed line is $3$ for death from other causes
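As a check on the point made earlier, that the event-specific cumulative incidences sum to the cumulative incidence of any event, here is a small sketch using the `timepoints` function from `cmprsk` (the evaluation times are arbitrary):

```{r}
# cumulative incidence of each cause at selected times (rows = cause, columns = time)
cif_at_times <- timepoints(ci_fit, times = c(1000, 2000, 3000))$est
cif_at_times

# column sums give the cumulative incidence of any event at each time
colSums(cif_at_times)
```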
## Plot the cumulative incidence - ggcompetingrisks
We can also plot the cumulative incidence using the `ggcompetingrisks` function from the `survminer` package.
In this case we get a panel labeled according to the group, and a legend labeled event, indicating the type of event for each line.
*Notes*
- you can use the option `multiple_panels = FALSE` to have all groups plotted on a single panel
- unlike base `R` the y-axis does not go to 1 by default, so you must change it manually
```{r}
ggcompetingrisks(ci_fit, xlab = "Days")
```
## Compare cumulative incidence between groups
In `cuminc` Gray's test is used for between-group tests.
As an example, compare the `Melanoma` outcomes according to `ulcer`, the presence or absence of ulceration. The results of the tests can be found in `Tests`.
```{r}
ci_ulcer <-
  cuminc(
    ftime = Melanoma$time,
    fstatus = Melanoma$status,
    group = Melanoma$ulcer,
    cencode = 2
  )

ci_ulcer[["Tests"]]
```
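Gray's test compares the cumulative incidence between groups; to model a covariate's effect on the cumulative incidence, one option is a Fine-Gray subdistribution hazards model, mentioned in the notes above. A minimal sketch using `crr` from `cmprsk`, with ulceration as the single covariate and death from melanoma (`status = 1`) as the event of interest:

```{r}
crr(
  ftime = Melanoma$time,
  fstatus = Melanoma$status,
  cov1 = Melanoma$ulcer,   # single covariate; crr also accepts a matrix of covariates
  failcode = 1,            # event of interest: death from melanoma
  cencode = 2              # censored observations are coded 2 in these data
)
```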
## Plot cumulative incidence according to group - ggcompetingrisks
```{r}
ggcompetingrisks(
  fit = ci_ulcer,
  multiple_panels = FALSE,
  xlab = "Days",
  ylab = "Cumulative incidence of event",
  title = "Death by ulceration",
  ylim = c(0, 1)
)
```
## Plotting cumulative incidence by group - manually
*Note* I personally find the `ggcompetingrisks` function to be lacking in customization, especially compared to `ggsurvplot`. I typically do my own plotting, by first creating a tidy dataset of the `cuminc` fit results, and then plotting the results. See the source code for this presentation for details of the underlying code.
```{r}
ciplotdat <-