
Commit

chore: add FL to showcase & encrypt/decrypt example
andrei-stoian-zama authored Sep 29, 2023
1 parent 920677b commit 6feadd7
Showing 5 changed files with 47 additions and 5 deletions.
28 changes: 28 additions & 0 deletions README.md
@@ -107,6 +107,34 @@ print(f"Similarity: {int((y_pred_fhe == y_pred_clear).mean()*100)}%")
# Similarity: 100%
```

It is also possible to call encryption, model prediction, and decryption functions separately as follows.
Executing these steps separately is equivalent to calling `predict_proba` on the model instance.

<!--pytest-codeblocks:cont-->

```python
# Predict probability for a single example
y_proba_fhe = model.predict_proba(X_test[[0]], fhe="execute")

# Quantize an original float input
q_input = model.quantize_input(X_test[[0]])

# Encrypt the input
q_input_enc = model.fhe_circuit.encrypt(q_input)

# Execute the linear product in FHE
q_y_enc = model.fhe_circuit.run(q_input_enc)

# Decrypt the result (integer)
q_y = model.fhe_circuit.decrypt(q_y_enc)

# De-quantize and post-process the result
y0 = model.post_processing(model.dequantize_output(q_y))

print("Probability with `predict_proba`: ", y_proba_fhe)
print("Probability with encrypt/run/decrypt calls: ", y0)
```

This example is explained in more detail in the [linear model documentation](docs/built-in-models/linear.md). Concrete ML built-in models have APIs that are almost identical to their scikit-learn counterparts. It is also possible to convert PyTorch networks to FHE with the Concrete ML conversion APIs. Please refer to the [linear models](docs/built-in-models/linear.md), [tree-based models](docs/built-in-models/tree.md) and [neural networks](docs/built-in-models/neural-networks.md) documentation for more examples that show the scikit-learn-like API of the built-in models.
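
For the PyTorch conversion path mentioned above, here is a minimal sketch, assuming the `compile_torch_model` helper from `concrete.ml.torch.compile` and the `fhe="execute"` mode of the returned quantized module; the toy network, bit widths and calibration data are illustrative assumptions.

```python
# Hedged sketch: converting a small PyTorch network to FHE.
# `compile_torch_model` and the `fhe="execute"` forward mode are assumed to be
# available in the installed Concrete ML version; the network is illustrative.
import torch

from concrete.ml.torch.compile import compile_torch_model


class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(10, 8)
        self.fc2 = torch.nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))


torch_model = TinyNet()
calibration_data = torch.randn(100, 10)

# Post-training quantization and compilation to an FHE circuit
quantized_module = compile_torch_model(
    torch_model, calibration_data, n_bits=6, rounding_threshold_bits=6
)

# Encrypted inference on a single example (numpy input)
y_pred = quantized_module.forward(calibration_data[[0]].numpy(), fhe="execute")
print(y_pred)
```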
Binary file added docs/.gitbook/assets/demo_mnist.png
9 changes: 5 additions & 4 deletions docs/README.md
@@ -54,9 +54,10 @@ Executing these steps separately is equivalent to calling `predict_proba` on the
<!--pytest-codeblocks:cont-->

```python
# Predict probability for a single example
y_proba_fhe = model.predict_proba(X_test[[0]], fhe="execute")

# Quantize an input (float)
# Quantize an original float input
q_input = model.quantize_input(X_test[[0]])

# Encrypt the input
@@ -68,11 +69,11 @@ q_y_enc = model.fhe_circuit.run(q_input_enc)
# Decrypt the result (integer)
q_y = model.fhe_circuit.decrypt(q_y_enc)

# De-quantize the result
# De-quantize and post-process the result
y0 = model.post_processing(model.dequantize_output(q_y))

print("Probability with `predict_proba`: ", y0)
print("Probability with encrypt/run/decrypt calls: ", y_proba_fhe)
print("Probability with `predict_proba`: ", y_proba_fhe)
print("Probability with encrypt/run/decrypt calls: ", y0)
```

This example shows the typical flow of a Concrete ML model:
13 changes: 13 additions & 0 deletions docs/getting-started/showcase.md
@@ -40,6 +40,19 @@ Simpler tutorials that discuss only model usage and compilation are also availab
<td><a href="../../use_case_examples/titanic">use_case_examples/titanic</a></td>
<!--- end -->
</tr>
<tr>
<td><strong>Federated Learning and Private Inference</strong></td>
<td>
<p></p>
<p>Use federated learning to train a Logistic Regression while preserving training data confidentiality.
Import the model into Concrete ML and perform encrypted prediction.</p>
</td>
<td></td>
<!--- start -->
<td><a href="../.gitbook/assets/demo_mnist.png">mnist.png</a></td>
<td><a href="../../use_case_examples/federated_learning">use_case_examples/federated_learning</a></td>
<!--- end -->
</tr>
<tr>
<td><strong>Neural Network Fine-tuning</strong> </td>
<td>
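
The federated learning entry added above trains a scikit-learn Logistic Regression with federated learning and then imports it into Concrete ML for encrypted inference. Below is a minimal sketch of that import-and-predict step; the `from_sklearn_model` constructor, the `n_bits` value and the synthetic data are assumptions for illustration, not taken from the use case example itself.

```python
# Hedged sketch: importing a federated-trained scikit-learn model into
# Concrete ML. `from_sklearn_model` and the parameter values are assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression as SklearnLogisticRegression

from concrete.ml.sklearn import LogisticRegression as ConcreteLogisticRegression

# Stand-in for the model produced by the federated training rounds
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
sklearn_model = SklearnLogisticRegression().fit(X, y)

# Import the fitted model into Concrete ML, using X as calibration data
concrete_model = ConcreteLogisticRegression.from_sklearn_model(
    sklearn_model, X, n_bits=8
)

# Compile to an FHE circuit and run an encrypted prediction
concrete_model.compile(X)
y_pred_fhe = concrete_model.predict(X[[0]], fhe="execute")
print(y_pred_fhe)
```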
2 changes: 1 addition & 1 deletion script/doc_utils/fix_gitbook_table.py
@@ -70,7 +70,7 @@ def process_file(file_str: str):
with open(file_path_output, "w", encoding="utf-8") as fout:
print(processed_content, file=fout)

assert how_many == 8
assert how_many == 9

return True

