docs: fix broken links (#608)
yuxizama authored Apr 15, 2024
1 parent 3e65c1f commit 718b7c2
Showing 17 changed files with 76 additions and 90 deletions.
22 changes: 11 additions & 11 deletions docs/advanced_examples/DecisionTreeRegressor.ipynb
@@ -22,18 +22,18 @@
"\n",
"> Concrete-ML is an open-source, privacy-preserving, machine learning inference framework based on fully homomorphic encryption (FHE).\n",
"> It enables data scientists without any prior knowledge of cryptography to automatically turn machine learning models into their FHE equivalent,using familiar APIs from Scikit-learn and PyTorch.\n",
"> <cite>&mdash; [Zama documentation](https://docs.zama.ai/concrete-ml/)</cite>\n",
"> <cite>&mdash; [Zama documentation](../README.md)</cite>\n",
"\n",
"This tutorial does not require a deep understanding of the technology behind concrete-ML.\n",
"Nonetheless, newcomers might be interested in reading introductory sections of the official documentation such as:\n",
"\n",
" - [What is Concrete-ML](https://docs.zama.ai/concrete-ml/) ;\n",
" - [Key Concepts](https://docs.zama.ai/concrete-ml/getting-started/concepts) ;\n",
"- [What is Concrete-ML](../README.md)\n",
"- [Key Concepts](../getting-started/concepts.md)\n",
"\n",
"In the tutorial, we will be using the following terminology:\n",
"\n",
" - plaintext: data unprotected, visible to anyone having access to it.\n",
" - ciphertext: ciphered data, need to know the secret in order to decipher the data.\n",
"- plaintext: data unprotected, visible to anyone having access to it.\n",
"- ciphertext: ciphered data, need to know the secret in order to decipher the data.\n",
"\n",
"Conventional models work with plaintext, where ConcreteML can work directly with ciphertext.\n",
"Privacy is preserved as the model does not know the secret and thus cannot decipher the data. \n",
@@ -350,7 +350,7 @@
"This is a first specificity of homomorphic encryption: values must first be represented as (not too big) integers before encryption.\n",
"This encoding is called quantization and `n_bits` is related to the maximum size of the quantized values.\n",
"Put it simply, lower `n_bits` means that quantization is less precise, but FHE computations are faster.\n",
"For more details, see [quantization](https://docs.zama.ai/concrete-ml/advanced-topics/quantization) from the official documentation.\n",
"For more details, see [quantization](../explanations/quantization.md) from the official documentation.\n",
"\n",
"Our model might or not gain from extra precision and/or efficiency. There is a balance to strike for the model between the two and as usual, the right balance depends on context.\n",
"For now, we observe that our models performance increases with precision, that is with higher `n_bits`."
@@ -778,14 +778,14 @@
"### Going Further\n",
"Some additional tools can smooth up the development workflow:\n",
"\n",
" - Selecting relevant bit-size for [quantizing](https://docs.zama.ai/concrete-ml/advanced-topics/quantization) the model.\n",
" - Alleviating the [compilation](https://docs.zama.ai/concrete-ml/advanced-topics/compilation) time by making use of the [virtual library](https://docs.zama.ai/concrete-ml/advanced-topics/compilation#simulation-with-the-virtual-library)\n",
" - Selecting relevant bit-size for [quantizing](../explanations/quantization.md) the model.\n",
" - Alleviating the [compilation](../explanations/compilation.md) time by making use of [FHE simulation](../explanations/compilation.md#fhe-simulation)\n",
"\n",
"Once the model is carefully trained and quantized, it is ready to be deployed and used in production. Here are some useful links on the subject:\n",
" \n",
" - [Inference in the Cloud](https://docs.zama.ai/concrete-ml/getting-started/cloud) summarize the steps for cloud deployment\n",
" - [Production Deployment](https://docs.zama.ai/concrete-ml/advanced-topics/client_server) offers a high-level view of how to deploy a Concrete-ML model in a client/server setting.\n",
" - [Client Server in Concrete ML](https://github.com/zama-ai/concrete-ml/blob/release/0.6.x/docs/advanced_examples/ClientServer.ipynb) provides a more hands-on approach as another tutorial."
" - [Inference in the Cloud](../getting-started/cloud.md) summarize the steps for cloud deployment\n",
" - [Production Deployment](../guides/client_server.md) offers a high-level view of how to deploy a Concrete-ML model in a client/server setting.\n",
" - [Client Server in Concrete ML](./ClientServer.ipynb) provides a more hands-on approach as another tutorial."
]
}
],
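As referenced in the list above, here is a minimal sketch of the client/server workflow, assuming the `concrete.ml.deployment` helpers and a model that has already been trained and compiled (`model`, `x_clear`, and the paths are placeholders, not part of this notebook):

```python
# Sketch of the deployment round trip: the server only ever sees ciphertext.
from concrete.ml.deployment import FHEModelClient, FHEModelDev, FHEModelServer

# Developer side: export the compiled model artifacts.
FHEModelDev(path_dir="deployment", model=model).save()

# Client side: generate keys, then quantize and encrypt an input.
client = FHEModelClient(path_dir="deployment", key_dir="keys")
evaluation_keys = client.get_serialized_evaluation_keys()
encrypted_input = client.quantize_encrypt_serialize(x_clear)

# Server side: run inference on encrypted data only.
server = FHEModelServer(path_dir="deployment")
server.load()
encrypted_output = server.run(encrypted_input, evaluation_keys)

# Client side again: decrypt and de-quantize the result.
prediction = client.deserialize_decrypt_dequantize(encrypted_output)
```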
80 changes: 33 additions & 47 deletions docs/advanced_examples/GLMComparison.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/advanced_examples/KNearestNeighbors.ipynb
@@ -178,7 +178,7 @@
"\n",
"a. __Clear__: inference on non-encrypted quantized data, without any FHE execution \n",
"\n",
"b. __Simulation__: inference on non-encrypted quantized data, while simulating all FHE operations, failure probabilities and crypto-parameters. This mode of inference is recommended in the development phase. For further information, please consult [this link](https://docs.zama.ai/concrete-ml/advanced-topics/compilation#fhe-simulation)\n",
"b. __Simulation__: inference on non-encrypted quantized data, while simulating all FHE operations, failure probabilities and crypto-parameters. This mode of inference is recommended in the development phase. For further information, please consult [this documentation section](../explanations/compilation.md#fhe-simulation)\n",
"\n",
"c. __Execution in FHE__: inference on encrypted data, using actual FHE execution"
]
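A minimal sketch of these three modes, assuming a fitted model and data from earlier cells, and the `fhe` keyword of `predict` from the Concrete-ML API:

```python
# Sketch: the three inference modes on a fitted model.
# `model`, `X_calib` and `X_test` are assumed from earlier cells.
model.compile(X_calib)  # required before simulation or FHE execution

y_clear = model.predict(X_test, fhe="disable")       # a. clear, quantized
y_simulated = model.predict(X_test, fhe="simulate")  # b. FHE simulation
y_fhe = model.predict(X_test, fhe="execute")         # c. actual FHE execution
```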
4 changes: 2 additions & 2 deletions docs/advanced_examples/LinearRegression.ipynb
@@ -227,13 +227,13 @@
"Quantization is a technique that converts continuous data (floating point, e.g., in 32-bits) to discrete numbers\n",
"within a fixed range (e.g., integer in 8-bits). This means that some information is lost during the process. However, the larger is the integers' range, the smaller the error becomes, making it acceptable to some cases.\n",
"\n",
"To learn more about quantization, please refer to this [page](https://docs.preprod.zama.ai/concrete-ml/main/advanced-topics/quantization.html).\n",
"To learn more about quantization, please refer to this [documentation section](../explanations/quantization.md).\n",
"\n",
"Regarding FHE, the input data type must be represented exclusively as integers, making the use of quantization necessary. Therefore, a linear model trained on floats is quantized into an equivalent integer model using _Post-Training Quantization_. This operation can lead to a loss of accuracy compared to the standard floating point models working on clear data. \n",
"\n",
"In practice however, this loss is usually very limited with linear FHE models as they can consider very large integers, with up to 50 bits in some cases. This means these models can quantize their inputs and weights over large number of bits (e.g., 16) while still considering data-sets containing many features (e.g., 1000). We therefore often observe almost identical performance scores (e.g., R2 score) between float, quantized and FHE models. \n",
"\n",
"To learn more about the relation between the maximum bit-width reached within a model, the bits of quantization used and the data-set's number of features, please refer to this [page](https://docs.preprod.zama.ai/concrete-ml/main/advanced-topics/pruning.html?highlight=formula#pruning-in-practice)."
"To learn more about the relation between the maximum bit-width reached within a model, the bits of quantization used and the data-set's number of features, please refer to this [documentation section](../explanations/pruning.md#pruning-in-practice)."
]
},
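To make the point concrete, here is a sketch comparing a float model against its quantized Concrete-ML counterpart; the class names come from `sklearn` and `concrete.ml.sklearn`, and the data set is illustrative:

```python
# Sketch: measure the quantization loss of a linear model.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression as SkLinearRegression
from sklearn.metrics import r2_score
from concrete.ml.sklearn import LinearRegression as ConcreteLinearRegression

X, y = make_regression(n_samples=300, n_features=20, noise=1.0, random_state=0)

float_model = SkLinearRegression().fit(X, y)
quantized_model = ConcreteLinearRegression(n_bits=16).fit(X, y)

# With 16 bits of quantization, the two R2 scores are typically
# nearly identical for linear models, as discussed above.
print("float R2:    ", r2_score(y, float_model.predict(X)))
print("quantized R2:", r2_score(y, quantized_model.predict(X)))
```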
{
6 changes: 3 additions & 3 deletions docs/advanced_examples/LinearSVR.ipynb
@@ -260,7 +260,7 @@
"The typical development flow of a Concrete ML model is the following:\n",
"\n",
"* The model is trained on clear (plaintext) data, as only FHE inference is currently supported.\n",
"* The resulting trained model is quantized using a `n_bits` parameter set by the user (see documentation [here](https://docs.zama.ai/concrete-ml/developer-guide/api/concrete.ml.sklearn.svm#class-linearsvr)). This parameter can either be:\n",
"* The resulting trained model is quantized using a `n_bits` parameter set by the user (see documentation [here](../built-in-models/linear.md#quantization-parameters)). This parameter can either be:\n",
" 1. a dictionary composed of `op_inputs` and `op_weights` keys. These parameters are given as integers representing the number of bits over which the associated data should be quantized.\n",
" 2. an integer, representing the number of bits over which each input and weight should be quantized. Default is 8. We try several values to test the various precisions gained for quantization. \n",
"* The quantized model is compiled to an FHE-equivalent, following 3 steps:\n",
@@ -495,13 +495,13 @@
"\n",
"Quantization is a technique that converts continuous data (floating point, e.g., in 32-bits) to discrete numbers within a fixed range (in our case either 6, 8, or 12 bits). This means that some information is lost during the process. However, the larger the integers' range, the smaller the error becomes, making it acceptable in some cases.\n",
"\n",
"To learn more about quantization, please refer to this [page](https://docs.zama.ai/concrete-ml/advanced-topics/quantization).\n",
"To learn more about quantization, please refer to this [documentation section](../explanations/quantization.md).\n",
"\n",
"Regarding FHE, the input data type must be represented exclusively as integers, making the use of quantization necessary. A linear model trained on floats is quantized into an equivalent integer model using *Post-Training Quantization*. This operation can lead to a loss of accuracy compared to the standard floating point models working on clear data.\n",
"\n",
"In practice, this loss is usually very limited with linear FHE models as they can consider very large integers with up to 50 bits in some cases. This means these models can quantize their inputs and weights over a large number of bits while still considering data-sets containing many features (e.g. 1000). We often observe almost identical performance scores between float, quantized, and FHE models.\n",
"\n",
"To learn more about the relation between the maximum bit-width reached within a model, the bits of quantization used, and the data-set's number of features, please refer to this [page](https://docs.zama.ai/concrete-ml/advanced-topics/pruning)."
"To learn more about the relation between the maximum bit-width reached within a model, the bits of quantization used, and the data-set's number of features, please refer to this [documentation section](../explanations/pruning.md)."
]
},
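One practical way to check this relation is to inspect the maximum bit-width reached by the compiled circuit. A sketch, assuming a fitted `model`, calibration data `X`, and the `fhe_circuit` attribute exposed by Concrete-ML models (treat the inspection call as an assumption):

```python
# Sketch: inspect the maximum accumulator bit-width after compilation.
model.compile(X)
print("max bit-width:", model.fhe_circuit.graph.maximum_integer_bit_width())
```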
{
6 changes: 3 additions & 3 deletions docs/advanced_examples/PoissonRegression.ipynb
@@ -923,16 +923,16 @@
"n_bits_values = list(range(2, 20))\n",
"concrete_deviance_scores = []\n",
"for n_bits in n_bits_values:\n",
" concrete_regressor_pca_n_bit = Pipeline(\n",
" concrete_regressor = Pipeline(\n",
" [\n",
" (\"preprocessor\", linear_model_preprocessor),\n",
" (\"regressor\", ConcretePoissonRegressor(n_bits=n_bits)),\n",
" ]\n",
" )\n",
" concrete_regressor_pca_n_bit.fit(\n",
" concrete_regressor.fit(\n",
" df_train, df_train[\"Frequency\"], regressor__sample_weight=df_train[\"Exposure\"]\n",
" )\n",
" concrete_deviance_scores.append(score_estimator(concrete_regressor_pca_n_bit, df_test))"
" concrete_deviance_scores.append(score_estimator(concrete_regressor, df_test))"
]
},
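A plausible follow-up to this sweep is to plot deviance against precision in order to pick the smallest workable `n_bits`; the matplotlib code below is an addition for illustration, not part of the original cell:

```python
# Sketch: visualize the deviance/precision trade-off from the loop above.
import matplotlib.pyplot as plt

plt.plot(n_bits_values, concrete_deviance_scores, marker="o")
plt.xlabel("n_bits")
plt.ylabel("mean Poisson deviance (lower is better)")
plt.title("Deviance vs. quantization precision")
plt.show()
```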
{
2 changes: 1 addition & 1 deletion docs/advanced_examples/QuantizationAwareTraining.ipynb
@@ -259,7 +259,7 @@
"Brevitas offers a large palette of quantization parameters that can be combined. However, not all configurations are useful for FHE compatible neural networks. Here we give some tips on how to choose the best options for quantizing weights and activations for our use cases.\n",
"\n",
"For strict quantization, i. e. less that 3 bits, it's recommended, for both weights and activations to have a zero zero-point, in order to keep to keep the accumulator size low while also speeding up computation. Please refer to the \n",
"[quantization documentation](../advanced-topics/quantization.md) for more details about the usage of the zero-point. Moreover, we can force the zero-point to be zero during training, and, thus, learn weights that work well in this setting.\n",
"[quantization documentation](../explanations/quantization.md) for more details about the usage of the zero-point. Moreover, we can force the zero-point to be zero during training, and, thus, learn weights that work well in this setting.\n",
"\n",
"For less strict quantization, i. e. greater thant 3 bits, it's recommended to use `Int8ActPerTensorFloat` and `Int8WeightPerTensorFloat` from `brevitas.quant` to quantize the activations and weights respectively."
]
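A sketch of typical layer choices for the two regimes discussed above; the exact quantizer keyword arguments are assumptions, so refer to the Brevitas documentation for authoritative signatures:

```python
# Sketch: Brevitas layers for strict (<= 3 bits) and less strict (> 3 bits)
# quantization regimes.
import brevitas.nn as qnn
from brevitas.quant import Int8ActPerTensorFloat, Int8WeightPerTensorFloat

# Strict regime: low bit-width weights, zero zero-point by default,
# to keep accumulators small.
strict_linear = qnn.QuantLinear(10, 10, bias=True, weight_bit_width=3)

# Less strict regime: the recommended default quantizers at 4+ bits.
act = qnn.QuantIdentity(
    act_quant=Int8ActPerTensorFloat, bit_width=4, return_quant_tensor=True
)
linear = qnn.QuantLinear(
    10, 10, bias=True,
    weight_quant=Int8WeightPerTensorFloat, weight_bit_width=4,
)
```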
4 changes: 2 additions & 2 deletions docs/advanced_examples/RegressorComparison.ipynb
@@ -102,7 +102,7 @@
"\n",
"Quantization is a technique that discretizes continuous data, such as floating point numbers, into a fixed range of integers. This process may result in some loss of information, but a larger integer range can reduce error, making it acceptable in some cases.\n",
"\n",
"To learn more about quantization, you can refer to this [page](https://docs.zama.ai/concrete-ml/advanced-topics/quantization).\n",
"To learn more about quantization, you can refer to this [documentation section](../explanations/quantization.md).\n",
"\n",
"In the context of FHE, input data must be represented exclusively as integers, requiring the use of quantization. As a result:\n",
"* For linear models, quantization is performed after training by finding the best integer weight representations based on input and weight distribution. Users can manually set the n_bits parameter. Linear FHE models can handle large integers up to 50 bits, enabling the quantization of inputs and weights over many bits (e.g., 16) while handling data-sets with many features (e.g., 1000). Thus, they typically exhibit minimal loss, resulting in similar performance scores (e.g., R2 score) to float and quantized models.\n",
@@ -111,7 +111,7 @@
"\n",
"* Built-in neural networks use several linear layers and Quantization Aware Training. The maximum accumulator bit-width is controlled by the number of weights and activation bits, as well as a pruning factor. This factor is automatically determined based on the desired accumulator bit-width and a multiplier factor can be optionally specified.\n",
"\n",
"To learn more about the relationship between the maximum bit-width reached within a model, the bits of quantization used, and the number of features in the data-set, please refer to this [page](https://docs.preprod.zama.ai/concrete-ml/main/advanced-topics/pruning.html?highlight=formula#pruning-in-practice)."
"To learn more about the relationship between the maximum bit-width reached within a model, the bits of quantization used, and the number of features in the data-set, please refer to this [documentation section](../explanations/pruning.md#pruning-in-practice)."
]
},
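For the built-in neural networks mentioned above, here is a sketch of how the weight/activation bits and the target accumulator bit-width might be set; the skorch-style `module__*` parameter names are assumptions based on the Concrete-ML API:

```python
# Sketch: a built-in quantized neural network with explicit bit-widths.
from concrete.ml.sklearn import NeuralNetRegressor

model = NeuralNetRegressor(
    module__n_layers=3,
    module__n_w_bits=4,       # weight bits
    module__n_a_bits=4,       # activation bits
    module__n_accum_bits=12,  # target accumulator size; drives pruning
    max_epochs=100,
)
```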
{