Survival Analysis
Using encounter histories of ruffed grouse collected for this study
(2020-21) and data collected previously during the winters of 2015-17
(Shipley et al. 2020), we constructed known-fate models in Program MARK
(White and Burnham 1999) to estimate weekly winter (Nov–Mar) survival.
and Burnham 1999). We used a recurrent time horizon based on a
biological calendar of winter starting on 1 November and ending on 28
March (Fieberg and DelGiudice 2009). The winter field season was treated as
a grouping variable (Winter Season 1 = 2015/16; Winter Season 2 =
2016/17; Winter Season 3 = 2017/18; Winter Season 4 = 2020/21; Winter
Season 5 = 2021/22). Only birds that survived >7 days
after trapping were included in the analysis, to reduce bias from
elevated mortality caused by trapping-related stress or injury (Pollock
et al. 1989). When a mortality signal was detected, we assumed that
death occurred at the midpoint between the last day the bird was known
to be alive and the first day it was known to be dead (Winterstein et
al. 2001). For individuals that survived multiple field seasons, we used
only the first season of data to avoid bias from censoring individuals
in subsequent seasons caused by high collar failure rates after
deployments longer than 1 year.
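The censoring rules above can be sketched as a small helper; the function
names, inputs, and example dates are illustrative, not the study's actual
data schema:

```python
from datetime import date

# Illustrative sketch of the censoring rules described above; the field
# names and dates are hypothetical, not the study's actual data pipeline.

def mortality_date(last_alive: date, first_dead: date) -> date:
    """Assign death to the midpoint of the interval bounding the mortality."""
    return last_alive + (first_dead - last_alive) / 2

def include_in_analysis(capture: date, last_alive: date) -> bool:
    """Exclude birds that died within 7 days of trapping (handling stress)."""
    return (last_alive - capture).days > 7

# Example: last heard alive 1 Jan, mortality signal first detected 5 Jan
print(mortality_date(date(2021, 1, 1), date(2021, 1, 5)))
```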
We used an extension of the staggered-entry Kaplan-Meier estimator to
estimate weekly survival because it does not assume constant survival
for the duration of the study and can incorporate individual covariates,
such as color morph, through a link function (Pollock 2002, Gutiérrez et
al. 2003). To test the effects of winter weather and habitat
on survival, we developed separate sets of a priori models of
weekly grouse survival, corresponding to predictions about the
independent effects of intrinsic traits (e.g., age and sex), snow and
temperature, and habitat on overwinter survival of grouse (Morin et al.
2020). We ranked and selected models in each set using Akaike’s
Information Criterion corrected for small sample size
(AICc; Akaike 1973, Burnham and Anderson 1998). We did
not include variables in the same model if they exhibited high
collinearity (r > 0.7; Dormann et al. 2013). We first ran a
preliminary set of univariate models to test for effects of intrinsic
individual traits on survival (n = 87 individuals). Among models with
individual covariates for sex, age, color morph, and mass at capture,
none contained a beta coefficient whose confidence interval excluded
zero, and the top-ranked covariate model (age effect on survival) had an
AICc value only 0.75 higher than that of the null model (Table S1). We
therefore excluded intrinsic covariates from subsequent model sets,
which were run with a sample size of 94 individuals, including 7 individuals
with unknown sex.
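The AICc-based ranking used above can be illustrated as follows; the
log-likelihood values and parameter counts are placeholders, not estimates
from this study:

```python
# Sketch of AICc-based model ranking (Burnham and Anderson 1998). The
# log-likelihoods below are placeholder numbers, not values from this study.

def aicc(log_lik: float, k: int, n: int) -> float:
    """AICc = -2 ln L + 2k + 2k(k + 1) / (n - k - 1)."""
    return -2.0 * log_lik + 2 * k + (2 * k * (k + 1)) / (n - k - 1)

# Rank a small model set by Delta-AICc (k = number of parameters, n = 87)
models = {"null": aicc(-120.5, 1, 87), "age": aicc(-119.9, 2, 87)}
best = min(models.values())
delta = {name: round(v - best, 2) for name, v in models.items()}
print(delta)
```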
In our model set exploring the effects of winter weather and phenotypic
mismatch on survival, we created weekly time-varying individual
covariates by averaging daily values of abiotic weather conditions.
Values were spatially averaged across individual winter use areas to
represent weekly conditions experienced at the individual level (Fig.
2). We considered models with weekly minimum temperature (°C, average
daily minimum temperature) and snow presence (proportion of days in a
week with snow depth > 0 cm). To test effects of
phenotypic mismatch with snow cover, we constructed a time-varying
variable which represented the proportion of days in a week that an
individual was phenotypically mismatched with snow cover depending on
color morph (i.e., red morph individuals were mismatched during days
with snow cover and gray morphs during days with no snow present in
their winter use area). We used a common intercept, allowed effect
slopes to vary by winter season, and considered additive effects of
covariates on weekly survival.
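The weekly mismatch covariate can be sketched as below; the daily
snow-presence series is an illustrative stand-in for the data derived from
individual winter use areas:

```python
# Sketch of the time-varying mismatch covariate: the proportion of days in
# a week that a bird's color morph mismatched snow presence in its winter
# use area. The daily snow series (True = snow on the ground) is made up.

def weekly_mismatch(morph: str, snow_days: list[bool]) -> float:
    """Red morphs are mismatched on snow days; gray morphs on snow-free days."""
    mismatched = [day if morph == "red" else not day for day in snow_days]
    return sum(mismatched) / len(snow_days)

week = [True, True, True, True, True, False, False]  # 5 of 7 days with snow
print(weekly_mismatch("red", week))   # red mismatched on the 5 snow days
print(weekly_mismatch("gray", week))  # gray mismatched on the 2 bare days
```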
We created another set of models to test for the effect of land cover on
individual survival using class-level landscape metrics calculated
within individual winter use areas. For the Dense Cover, Mature Forest,
Clear Cut, and Open cover classes, we calculated proportion of class
cover (PLAND), edge density (ED), and mean contiguity index (MCI;
Cushman et al. 2008). Landscape metric values were converted to
proportions or scaled by 0.01 to match the value ranges of other
covariates (e.g., the proportion of mismatch with snow cover). We created
models to test for additive effects of landscape metrics on weekly
survival.
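The rescaling step can be sketched as follows; the assumption that PLAND is
reported as a percentage and ED as a raw edge density (as in FRAGSTATS-style
output), along with the example values, is illustrative:

```python
# Sketch of rescaling class-level landscape metrics so they share a value
# range comparable to proportion covariates (0-1). Assumes PLAND arrives
# as a percentage and ED as a raw density; the numbers are made up.

def rescale_metrics(pland_pct: float, edge_density: float) -> dict[str, float]:
    return {
        "PLAND": pland_pct / 100.0,   # percent cover -> proportion
        "ED": edge_density * 0.01,    # scale edge density by 0.01
    }

print(rescale_metrics(42.5, 85.0))
```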
Using the covariates from highly supported models (ΔAICc < 2) in our
winter weather and habitat model sets, we built a unified model set to
test our hypothesis that snow cover and habitat characteristics would
affect red and gray morph winter survival differently (Morin et al.
2020). To derive weekly survival estimates by morph, we ran the
top-ranked model with morph-specific mean covariate values.
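Deriving morph-specific survival from a fitted logit-link known-fate model
can be sketched as below; the beta coefficients, covariate means, and
21-week season length are placeholders, not estimates from this study:

```python
import math

# Sketch of deriving weekly and season-long survival from a known-fate
# model with a logit link. Coefficients and covariate means are made up.

def weekly_survival(beta0: float, betas: list[float], x: list[float]) -> float:
    """Inverse-logit of the linear predictor gives weekly survival."""
    eta = beta0 + sum(b * xi for b, xi in zip(betas, x))
    return 1.0 / (1.0 + math.exp(-eta))

def overwinter_survival(weekly_rates: list[float]) -> float:
    """Season-long survival is the product of the weekly rates."""
    return math.prod(weekly_rates)

# e.g., a 21-week winter at a constant weekly rate, with a hypothetical
# negative effect of the mean mismatch covariate
s_week = weekly_survival(3.5, [-0.8], [0.4])
print(round(overwinter_survival([s_week] * 21), 3))
```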