DeepLabCut is an open-source toolbox widely used by researchers to train networks for pose estimation in animals. Here we present one such model for the analysis of singly housed mice exposed to an array of visual stimuli.
We provide a set of tools and scripts for analyzing top-down videos of singly housed mice. We used the open-source software DeepLabCut to train a model that performs markerless pose estimation. This model is included in a ready-to-use folder structure that can be loaded into DeepLabCut for both inference and further training. Additionally, we include various Python scripts for both pre- and post-processing of the data used with this model.
We used a total of 48 markers to train the model: 12 mouse body parts, 15 cage features, and 21 distinct visual stimuli. The marker set may be expanded in future versions to cover additional stimuli.
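The full marker list is defined in the project's "config.yaml". As a quick way to inspect it, the snippet below reads the file and counts the entries; it assumes PyYAML is installed and uses a placeholder path.

```python
# Inspect the marker set defined in the provided config.yaml.
# The path is a placeholder; marker names are whatever the file lists.
import yaml

with open("path/to/project/config.yaml") as f:
    config = yaml.safe_load(f)

bodyparts = config["bodyparts"]
print(len(bodyparts), "markers:", bodyparts)
```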
Our current method of video acquisition records 8 cages in a single video file. This recording can be automatically pre-processed and split into 8 per-cage videos for analysis with our model.
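As an illustration of this splitting step, the sketch below crops each frame into a grid of tiles with OpenCV and writes one video per cage. The 2x4 grid layout, the file names, and the `split_video` helper are assumptions made for the example; they are not the project's own preprocessing script.

```python
# Sketch: split an 8-cage recording into per-cage videos.
# Assumes the cages are tiled in a 2-row x 4-column grid.
import cv2

def split_video(path, rows=2, cols=4, prefix="cage"):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    tile_w, tile_h = width // cols, height // rows

    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writers = [
        cv2.VideoWriter(f"{prefix}_{i}.mp4", fourcc, fps, (tile_w, tile_h))
        for i in range(rows * cols)
    ]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Write each cage's tile of the frame to its own video.
        for i, writer in enumerate(writers):
            r, c = divmod(i, cols)
            writer.write(frame[r * tile_h:(r + 1) * tile_h,
                               c * tile_w:(c + 1) * tile_w])
    cap.release()
    for writer in writers:
        writer.release()

split_video("eight_cage_session.mp4")
```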
We have included a project folder with the files necessary to generate predictions on new data. This folder contains a "config.yaml" file that can be loaded into DeepLabCut.
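A minimal inference sketch using the DeepLabCut Python API is shown below; the paths are placeholders standing in for the provided project folder and the split per-cage videos.

```python
# Run the trained network over new videos via the DeepLabCut API.
import deeplabcut

config_path = "path/to/project/config.yaml"  # the provided config.yaml
videos = ["cage_0.mp4", "cage_1.mp4"]        # e.g., output of the splitting step

# Writes per-frame x, y, likelihood values for every marker;
# save_as_csv also exports a CSV per video for post-processing.
deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)
```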
The time series of X, Y coordinates generated by DeepLabCut can be further analyzed to quantify behaviors. Here we provide Jupyter notebook demos that load the output CSVs from DeepLabCut and measure a number of behaviors, with the option to filter out low-likelihood readings.
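To give a flavor of this post-processing, the sketch below loads a DeepLabCut output CSV, masks readings below a likelihood cutoff, and computes one simple metric. The file name, the 0.9 cutoff, the "nose" marker, and the distance metric are illustrative choices, not the exact contents of the notebook demos.

```python
# Sketch: load a DeepLabCut CSV and filter low-likelihood readings.
# DLC CSVs carry a three-row header: scorer, bodyparts, coords.
import numpy as np
import pandas as pd

df = pd.read_csv("cage_0_output.csv", header=[0, 1, 2], index_col=0)
df.columns = df.columns.droplevel(0)  # drop the scorer level

# Mask x/y readings whose likelihood falls below the cutoff.
cutoff = 0.9
for bodypart in df.columns.get_level_values(0).unique():
    low = df[(bodypart, "likelihood")] < cutoff
    df.loc[low, [(bodypart, "x"), (bodypart, "y")]] = np.nan

# Example metric: total distance traveled by one marker, in pixels.
xy = df["nose"][["x", "y"]].to_numpy()  # "nose" is a hypothetical marker name
distance = np.nansum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
print(f"distance traveled: {distance:.1f} px")
```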