- Please read the Statement of Need; your team will use it to write your work proposal.
- Use the team_repo template for your writeup and submission.
- Please make sure your team repo is private.
- You will need to share your repo with me (hathawayj) and the current semester team.
- Use this template repo to start your private repo within our organization. This repo will allow you to practice and share your personal work.
- Review the scripts in this repo and ensure you can handle the SafeGraph data. Within two weeks, we will have a coding challenge using SafeGraph data.
The scripts in this template repository can help you understand how to digest the SafeGraph format.
This script depends on the Tidyverse and has two parsing functions at the top. Examples throughout the script illustrate the issues with handling the SafeGraph data. The final `dat_all` object provides the full call that processes the data into a clean nested tibble.
This script depends on the `safegraph_functions.py` file for functions that can parse the nested dictionaries and lists within the POI data. The Python functions create new data objects from the list and dictionary variables within the dataset.
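The actual parsing functions live in `safegraph_functions.py`; the sketch below only illustrates the general idea, assuming the nested SafeGraph columns arrive as JSON strings. The column and function names here are hypothetical, not the ones in the repo.

```python
# Illustrative sketch (not the repo's actual functions): SafeGraph
# columns like related_same_day_brand arrive as JSON strings, so we
# parse them and break them out into long-format records.
import json

def unnest_dict_column(rows, key_column, column):
    """Turn a JSON-dict column into (id, key, value) records for a separate table."""
    records = []
    for row in rows:
        nested = json.loads(row[column]) if row[column] else {}
        for k, v in nested.items():
            records.append({"placekey": row[key_column], "key": k, "value": v})
    return records

# Toy rows shaped roughly like SafeGraph POI data:
pois = [
    {"placekey": "zzw-222@abc",
     "related_same_day_brand": '{"Starbucks": 12, "Walmart": 8}'},
]
print(unnest_dict_column(pois, "placekey", "related_same_day_brand"))
```

Each nested variable handled this way becomes its own tidy table that can be joined back to the POI table by `placekey`.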
You can read additional details in `APIs/APIs.md`.
SafeGraph has an API to request data. We will use the API to build our datasets for use in Spark, but we can work out the API calls locally first.
An elementary API example that lets you verify your system can make calls to a GraphQL API. It uses only the `requests` package in Python.
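If you want to see the shape of such a call before opening the script, here is a minimal sketch. The query is a generic GraphQL placeholder and the endpoint is deliberately left commented out; the repo script has the real call.

```python
# Minimal sketch of a GraphQL call shape. GraphQL endpoints expect a
# POST with a JSON body containing a "query" key (and optionally
# "variables"). The query below is a generic placeholder.

def build_graphql_payload(query, variables=None):
    """Package a GraphQL query string into the JSON body endpoints expect."""
    payload = {"query": query}
    if variables is not None:
        payload["variables"] = variables
    return payload

query = "query { __typename }"  # about the simplest query a GraphQL API accepts
payload = build_graphql_payload(query)
print(payload)

# Once you have a URL and key, the actual call is one line with requests:
# import requests
# response = requests.post("https://example.com/graphql", json=payload)
# print(response.json())
```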
This script is the more extended and diverse example of using a GraphQL API. Specifically, it provides three examples of requesting data from the SafeGraph API.
Note the use of the following lines of code to store and retrieve our API key correctly. This code is also exemplified in the `create_environ.py` script.
```python
import os
from dotenv import load_dotenv

load_dotenv()
sfkey = os.environ.get("SAFEGRAPH_KEY")
```
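For `load_dotenv()` to find anything, a `.env` file has to exist first. Here is a sketch of what a `create_environ.py`-style helper might do; the variable name matches the snippet above, but the helper itself is hypothetical, and it writes to a temp directory only to keep the example self-contained.

```python
# Hypothetical sketch of writing the .env file that load_dotenv() reads.
# In practice the file sits at the project root and is listed in
# .gitignore so the key never reaches GitHub.
import os
import tempfile

def write_env_file(path, key_name, key_value):
    """Write KEY=value lines in the format python-dotenv can read."""
    with open(path, "w") as f:
        f.write(f"{key_name}={key_value}\n")

env_path = os.path.join(tempfile.mkdtemp(), ".env")
write_env_file(env_path, "SAFEGRAPH_KEY", "your-key-here")
print(open(env_path).read().strip())  # → SAFEGRAPH_KEY=your-key-here
```

Keeping the key in `.env` (and out of version control) is the whole point: the scripts read it from the environment instead of hard-coding it.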
We use three Python packages to get data from SafeGraph: `gql`, `requests`, and `safegraphql`. We elected to signal the start of each type of API request with its package imports, which are spread throughout the script.
We will focus on the `gql` or `requests` examples for our work. We will stay in `gql` and highly recommend that you don't use `graphql`.
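Whichever client you pick, the key from the earlier snippet gets attached to the request the same way. The sketch below assembles those pieces; the endpoint URL and header name are assumptions for illustration, not SafeGraph's documented values, so check the repo script or SafeGraph's API docs for the real ones.

```python
# Sketch of wiring the stored key into a request. The URL and the
# "apikey" header name are illustrative assumptions.
import os

def build_request_kwargs(query, api_key):
    """Assemble the pieces requests.post() (or a gql transport) needs."""
    return {
        "url": "https://api.safegraph.com/v2/graphql",  # assumed endpoint
        "headers": {"apikey": api_key, "Content-Type": "application/json"},
        "json": {"query": query},
    }

kwargs = build_request_kwargs("query { __typename }",
                              os.environ.get("SAFEGRAPH_KEY", "demo"))
print(sorted(kwargs["headers"]))

# With requests: requests.post(**kwargs)
# With gql: pass the same url and headers to its RequestsHTTPTransport.
```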
This file creates `.parquet` files to upload to our cloud compute. In addition, it breaks all the nested data out into their own tables.
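The break-out idea can be sketched as follows: each nested column becomes its own long table keyed by `placekey`, and each table would then be written to its own `.parquet` file. The column names here are illustrative only, and the parquet step is shown but commented out because it needs `pandas` plus a parquet engine such as `pyarrow`.

```python
# Illustrative sketch: split one nested column into its own table.
import json

poi_rows = [
    {"placekey": "zzw-222@abc", "brand": "Starbucks",
     "popularity_by_day": '{"Monday": 10, "Tuesday": 14}'},
]

# The flat table keeps the scalar columns.
poi_table = [{k: v for k, v in row.items() if k != "popularity_by_day"}
             for row in poi_rows]

# The nested column becomes its own long table keyed by placekey.
popularity_table = [
    {"placekey": row["placekey"], "day": day, "visits": visits}
    for row in poi_rows
    for day, visits in json.loads(row["popularity_by_day"]).items()
]

print(len(poi_table), len(popularity_table))  # → 1 2

# Writing each table to its own parquet file (needs pandas + pyarrow):
# import pandas as pd
# pd.DataFrame(popularity_table).to_parquet("popularity_by_day.parquet")
```

Flat, column-typed tables like these are what parquet and Spark are built for, which is why the nested columns get exploded before upload.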
You can see a Colab notebook that guides you through parsing data from shop.safegraph.com. As students and faculty, we get free access to the POI, patterns, and polygons data. Please register here.
SafeGraph has a light technical quickstart guide to working with POI and Mobility data downloaded from the SafeGraph Shop, the self-serve source for hundreds of datasets about physical places. The goal of their guide is to get you working effectively with the SafeGraph data as fast as possible.
They filmed a series of YouTube videos which provide context for each step:
- SafeGraph Shop Python Quickstart Part 1: Data Preparation
- SafeGraph Shop Python Quickstart Part 2: Exploding Nested JSON Fields
- SafeGraph Shop Python Quickstart Part 3: Joining to Census Data
- SafeGraph Shop Python Quickstart Part 4: Scaling
They recommend joining the SafeGraph Community Slack Channel, a fantastic resource for live support. Finally, check out their documentation for an exhaustive guide to the data.
SafeGraph has done some work to assess how representative its sample of devices is of the entire population. Specifically, check out the Measure and Correct Sampling Bias section of the Data Science Resources. A recent external audit was done that might also be of value. The audit finds SafeGraph’s panel underrepresents older people and minorities. We hope that normalization techniques correct for some of that bias, but it is still an important consideration. ref