To measure inference performance with aibench, first load the data by following the instructions in the overall Reference data loading section.
Once the data is loaded, use the corresponding aibench_run_inference_redisai binary for the DL solution being tested:
# make sure you're on the root project folder
cd $GOPATH/src/github.com/RedisAI/aibench
./scripts/run_inference_redisai.sh
The following diagram illustrates the sequence of requests made for each inference.
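At the command level, each inference typically maps to three RedisAI calls: set the input tensor, run the model, and read the output tensor. The sketch below only builds the command argument lists (it does not talk to a server); the key names, model name, and tensor shape are illustrative assumptions, not aibench's actual values.

```go
package main

import "fmt"

// buildInferenceCommands sketches the per-inference request sequence against
// RedisAI. Key names, the model key, and the 1x30 tensor shape here are
// illustrative placeholders, not aibench's real wire traffic.
func buildInferenceCommands(txID string) [][]string {
	inKey := "tensor:in:" + txID
	outKey := "tensor:out:" + txID
	return [][]string{
		// AI.TENSORSET stores the input tensor for this transaction.
		{"AI.TENSORSET", inKey, "FLOAT", "1", "30", "BLOB", "<raw bytes>"},
		// AI.MODELRUN executes the loaded model on the input tensor.
		{"AI.MODELRUN", "mymodel", "INPUTS", inKey, "OUTPUTS", outKey},
		// AI.TENSORGET retrieves the inference result.
		{"AI.TENSORGET", outKey, "VALUES"},
	}
}

func main() {
	for _, cmd := range buildInferenceCommands("txn-0001") {
		fmt.Println(cmd)
	}
}
```

Using per-transaction key names keeps concurrent benchmark workers from overwriting each other's tensors.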
docker pull redisai/redisai
# Start RedisAI container
docker run -d --rm -p 6379:6379 redisai/redisai
Alternatively, follow the building and running from source instructions, making sure one of the high-availability methods is used (either OSS or Enterprise).