diff --git a/docs/gpt-in-a-box/kubernetes/v0.2/custom_model.md b/docs/gpt-in-a-box/kubernetes/v0.2/custom_model.md
index f1e5ff0e..57096966 100644
--- a/docs/gpt-in-a-box/kubernetes/v0.2/custom_model.md
+++ b/docs/gpt-in-a-box/kubernetes/v0.2/custom_model.md
@@ -1,7 +1,11 @@
# Custom Model Support
-We provide the capability to generate a MAR file with custom models and start an inference server using Kubeflow serving.
+In some cases you may want to use a custom model, e.g. one you have fine-tuned yourself. We provide the capability to generate a MAR file with custom models and start an inference server using Kubeflow serving.
## Generate Model Archive File for Custom Models
+
+!!! note
+    The model files should be placed in an NFS share accessible by the Nutanix package. This directory is passed to the --model_path argument. You will also need to provide the --output path where the generated model archive should be stored.
+
To generate the MAR file, run the following:
```
python3 $WORK_DIR/llm/generate.py --skip_download [--repo_version <repo_version> --handler <handler>] --model_name <model_name> --model_path <model_path> --output <output_path>
```
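+
+For example, a hypothetical invocation for a fine-tuned model stored on the NFS share (the model name and paths below are illustrative):
+
+```
+python3 $WORK_DIR/llm/generate.py --skip_download --model_name my_custom_model --model_path /mnt/nfs/models/my_custom_model/model_files --output /mnt/nfs/model_store
+```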
diff --git a/docs/gpt-in-a-box/vm/v0.3/custom_model.md b/docs/gpt-in-a-box/vm/v0.3/custom_model.md
index 0a6ed125..f6abf945 100644
--- a/docs/gpt-in-a-box/vm/v0.3/custom_model.md
+++ b/docs/gpt-in-a-box/vm/v0.3/custom_model.md
@@ -1,7 +1,11 @@
# Custom Model Support
-We provide the capability to generate a MAR file with custom model files and start an inference server using it with Torchserve.
+In some cases you may want to use a custom model, e.g. one you have fine-tuned yourself. We provide the capability to generate a MAR file with custom model files and start an inference server using it with TorchServe.
## Generate Model Archive File for Custom Models
+
+!!! note
+    The model files should be placed in a directory accessible by the Nutanix package, e.g. /home/ubuntu/models/<custom_model_name>/model_files. This directory is passed to the --model_path argument. You will also need to provide the --mar_output path where the generated model archive should be stored.
+
Run the following command to generate the Model Archive (MAR) file with the custom model files:
```
python3 $WORK_DIR/llm/generate.py --skip_download [--repo_version <repo_version> --handler <handler>] --model_name <model_name> --model_path <model_path> --mar_output <mar_output_path>
```
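+
+For example, a hypothetical invocation for a locally fine-tuned model (the model name and paths below are illustrative):
+
+```
+python3 $WORK_DIR/llm/generate.py --skip_download --model_name my_custom_model --model_path /home/ubuntu/models/my_custom_model/model_files --mar_output /home/ubuntu/models/my_custom_model/mar_output
+```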