From f31173321b516e50b377dcdac536267de95c9d65 Mon Sep 17 00:00:00 2001
From: Andrei Stoian <95410270+andrei-stoian-zama@users.noreply.github.com>
Date: Tue, 10 Oct 2023 08:50:46 +0200
Subject: [PATCH] chore: fix cifar runtime

---
 README.md                                                 | 2 +-
 use_case_examples/cifar/cifar_brevitas_training/README.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 2a7464f9d..398506488 100644
--- a/README.md
+++ b/README.md
@@ -158,7 +158,7 @@ Various tutorials are given for [built-in models](docs/built-in-models/ml_exampl
 - [Sentiment analysis with transformers](use_case_examples/sentiment_analysis_with_transformer): predict if an encrypted tweet / short message is positive, negative or neutral, using FHE. The [live interactive](https://huggingface.co/spaces/zama-fhe/encrypted_sentiment_analysis) demo is available on Hugging Face. This [blog post](https://huggingface.co/blog/sentiment-analysis-fhe) explains how this demo works!
 
-- [CIFAR10 FHE-friendly model with Brevitas](use_case_examples/cifar/cifar_brevitas_training): train a VGG9 FHE-compatible neural network using Brevitas, and a script to run the neural network in FHE. Execution in FHE takes ~20 minutes per image and shows an accuracy of 88.7%.
+- [CIFAR10 FHE-friendly model with Brevitas](use_case_examples/cifar/cifar_brevitas_training): train a VGG9 FHE-compatible neural network using Brevitas, and a script to run the neural network in FHE. Execution in FHE takes ~4 minutes per image and shows an accuracy of 88.7%.
 - [CIFAR10 / CIFAR100 FHE-friendly models with Transfer Learning approach](use_case_examples/cifar/cifar_brevitas_finetuning): series of three notebooks, that convert a pre-trained FP32 VGG11 neural network into a quantized model using Brevitas. The model is fine-tuned on the CIFAR data-sets, converted for FHE execution with Concrete ML and evaluated using FHE simulation.
 For CIFAR10 and CIFAR100, respectively, our simulations show an accuracy of 90.2% and 68.2%.
diff --git a/use_case_examples/cifar/cifar_brevitas_training/README.md b/use_case_examples/cifar/cifar_brevitas_training/README.md
index c83d1f26d..f92f2309d 100644
--- a/use_case_examples/cifar/cifar_brevitas_training/README.md
+++ b/use_case_examples/cifar/cifar_brevitas_training/README.md
@@ -98,7 +98,7 @@ Experiments were conducted on an m6i.metal machine offering 128 CPU cores and 51
 | VGG FHE (simulation\*) | 6 bits | 86.0 |
 | VGG FHE | 6 bits | 86.0\*\* |
 
-We ran the FHE inference over 10 examples and achieved 100% similar predictions between the simulation and FHE. The overall accuracy for the entire data-set is expected to match the simulation. The original model with a maximum of 13 bits of precision ran in around 9 hours on the specified hardware. Using the rounding approach, the final model ran in **31 minutes**, providing a speedup factor of 18x while preserving accuracy. This significant performance improvement demonstrates the benefits of the rounding operator in the FHE setting.
+We ran the FHE inference over 10 examples and achieved 100% similar predictions between the simulation and FHE. The overall accuracy for the entire data-set is expected to match the simulation. The original model (no rounding) with a maximum of 13 bits of precision runs in around 9 hours on the specified hardware. Using the rounding approach, the final model ran in **4 minutes**. This significant performance improvement demonstrates the benefits of the rounding operator in the FHE setting.
 
 \* Simulation is used to evaluate the accuracy in the clear for faster debugging.
 \*\* We ran the FHE inference over 10 examples and got 100% similar predictions between the simulation and FHE. The overall accuracy for the entire data-set is expected to match the simulation.