From 77634a9332617e4a9a46cbed524f6010645659be Mon Sep 17 00:00:00 2001 From: Andrei Stoian <95410270+andrei-stoian-zama@users.noreply.github.com> Date: Wed, 27 Sep 2023 17:07:58 +0200 Subject: [PATCH] docs: minor doc fixes for release --- README.md | 20 +- docs/README.md | 27 + docs/built-in-models/linear.md | 30 +- docs/developer-guide/api/README.md | 18 +- .../api/concrete.ml.common.utils.md | 70 +-- ...oncrete.ml.deployment.fhe_client_server.md | 58 +- .../api/concrete.ml.pytest.torch_models.md | 36 ++ .../api/concrete.ml.pytest.utils.md | 158 +++++- ...ncrete.ml.quantization.quantized_module.md | 38 +- ...ete.ml.search_parameters.p_error_search.md | 12 +- .../api/concrete.ml.sklearn.base.md | 531 +++++++++++------- .../api/concrete.ml.sklearn.md | 115 ---- .../api/concrete.ml.torch.compile.md | 48 +- .../api/concrete.ml.torch.hybrid_model.md | 313 ++++++++++- 14 files changed, 1022 insertions(+), 452 deletions(-) diff --git a/README.md b/README.md index 8ac0d6005..1b93e64b4 100644 --- a/README.md +++ b/README.md @@ -117,28 +117,28 @@ Full, comprehensive documentation is available here: [https://docs.zama.ai/concr ## Online demos and tutorials. -Various tutorials are proposed for the [built-in models](docs/built-in-models/ml_examples.md) and for [deep learning](docs/deep-learning/examples.md). In addition, several complete use-cases are explored: +Various tutorials are given for [built-in models](docs/built-in-models/ml_examples.md) and for [deep learning](docs/deep-learning/examples.md) In addition, several complete use-cases are explored: - [Encrypted Large Language Model](use_case_examples/llm/): convert a user-defined part of a Large Language Model for encrypted text generation. Shows the trade-off between quantization and accuracy for text generation and shows how to run the model in FHE. -- [Credit Scoring](use_case_examples/credit_scoring/): predicts the chance of a given loan applicant defaulting on loan repayment while keeping the user's data private. Shows how Concrete ML models easily replace their scikit-learn equivalents +- [Credit Scoring](use_case_examples/credit_scoring/): predict the chance of a given loan applicant defaulting on loan repayment while keeping the user's data private. Shows how Concrete ML models easily replace their scikit-learn equivalents -- [Health diagnosis](use_case_examples/disease_prediction/): based on a patient's symptoms, history and other health factors, gives +- [Health diagnosis](use_case_examples/disease_prediction/): based on a patient's symptoms, history and other health factors, give a diagnosis using FHE to preserve the privacy of the patient. -- [Titanic](use_case_examples/titanic/KaggleTitanic.ipynb): a notebook, which gives a solution to the [Kaggle Titanic competition](https://www.kaggle.com/c/titanic/). Implemented with XGBoost from Concrete ML, this example comes as a companion of the [Kaggle notebook](https://www.kaggle.com/code/concretemlteam/titanic-with-privacy-preserving-machine-learning), and was the subject of a blogpost in [KDnuggets](https://www.kdnuggets.com/2022/08/machine-learning-encrypted-data.html). +- [Titanic](use_case_examples/titanic/KaggleTitanic.ipynb): solve the [Kaggle Titanic competition](https://www.kaggle.com/c/titanic/). 
Implemented with XGBoost from Concrete ML, this example comes as a companion to the [Kaggle notebook](https://www.kaggle.com/code/concretemlteam/titanic-with-privacy-preserving-machine-learning), and was the subject of a blog post in [KDnuggets](https://www.kdnuggets.com/2022/08/machine-learning-encrypted-data.html). -- [Sentiment analysis with transformers](use_case_examples/sentiment_analysis_with_transformer): a gradio demo which predicts if a tweet / short message is positive, negative or neutral, with FHE of course! The [live interactive](https://huggingface.co/spaces/zama-fhe/encrypted_sentiment_analysis) demo is available on Hugging Face. This [blog post](https://huggingface.co/blog/sentiment-analysis-fhe) explains how this demo works! +- [Sentiment analysis with transformers](use_case_examples/sentiment_analysis_with_transformer): predict if an encrypted tweet / short message is positive, negative or neutral, using FHE. The [live interactive](https://huggingface.co/spaces/zama-fhe/encrypted_sentiment_analysis) demo is available on Hugging Face. This [blog post](https://huggingface.co/blog/sentiment-analysis-fhe) explains how this demo works! -- [CIFAR10 FHE-friendly model with Brevitas](use_case_examples/cifar/cifar_brevitas_training): code for training from scratch a VGG-like FHE-compatible neural network using Brevitas, and a script to run the neural network in FHE. Execution in FHE takes ~20 minutes per image and shows an accuracy of 88.7%. +- [CIFAR10 FHE-friendly model with Brevitas](use_case_examples/cifar/cifar_brevitas_training): train a VGG9 FHE-compatible neural network using Brevitas and run it in FHE with the provided script. Execution in FHE takes ~20 minutes per image and shows an accuracy of 88.7%. -- [CIFAR10 / CIFAR100 FHE-friendly models with Transfer Learning approach](use_case_examples/cifar/cifar_brevitas_finetuning): series of three notebooks, that show how to convert a pre-trained FP32 VGG11 neural network into a quantized model using Brevitas. The model is fine-tuned on the CIFAR data-sets, converted for FHE execution with Concrete ML and evaluated using FHE simulation. For CIFAR10 and CIFAR100, respectively, our simulations show an accuracy of 90.2% and 68.2%. +- [CIFAR10 / CIFAR100 FHE-friendly models with Transfer Learning approach](use_case_examples/cifar/cifar_brevitas_finetuning): a series of three notebooks that convert a pre-trained FP32 VGG11 neural network into a quantized model using Brevitas. The model is fine-tuned on the CIFAR data-sets, converted for FHE execution with Concrete ML and evaluated using FHE simulation. For CIFAR10 and CIFAR100, respectively, our simulations show an accuracy of 90.2% and 68.2%. -- [FHE neural network splitting for client/server deployment](use_case_examples/cifar/cifar_brevitas_with_model_splitting): we explain how to split a computationally-intensive neural network model in two parts. First, we execute the first part on the client side in the clear, and the output of this step is encrypted. Next, to complete the computation, the second part of the model is evaluated with FHE. This tutorial also shows the impact of FHE speed/accuracy trade-off on CIFAR10, limiting PBS to 8-bit, and thus achieving 62% accuracy. +- [FHE neural network splitting for client/server deployment](use_case_examples/cifar/cifar_brevitas_with_model_splitting): split a computationally-intensive neural network model into two parts.
First, we execute the first part on the client side in the clear, and the output of this step is encrypted. Next, to complete the computation, the second part of the model is evaluated with FHE. This tutorial also shows the impact of FHE speed/accuracy trade-off on CIFAR10, limiting PBS to 8-bit, and thus achieving 62% accuracy. -- [Encrypted image filtering](use_case_examples/image_filtering): finally, the live demo for our [6-min](https://6min.zama.ai) is available, in the form of a gradio application. We take encrypted images, and apply some filters (for example black-and-white, ridge detection, or your own filter). +- [Encrypted image filtering](use_case_examples/image_filtering): filter encrypted images by applying filters such as black-and-white, ridge detection, or your own filter. -More generally, if you have built awesome projects using Concrete ML, feel free to let us know and we'll link to it! +If you have built awesome projects using Concrete ML, feel free to let us know and we'll link to them! ## Citing Concrete ML diff --git a/docs/README.md b/docs/README.md index f950a48d9..14cc50906 100644 --- a/docs/README.md +++ b/docs/README.md @@ -48,6 +48,33 @@ print(f"Similarity: {(y_pred_fhe == y_pred_clear).mean():.1%}") # Similarity: 100.0% ``` +It is also possible to call encryption, model prediction, and decryption functions separately as follows. +Executing these steps separately is equivalent to calling `predict_proba` on the model instance. + + + +```python +y_proba_fhe = model.predict_proba(X_test[[0]], fhe="execute") + +# Quantize an input (float) +q_input = model.quantize_input(X_test[[0]]) + +# Encrypt the input +q_input_enc = model.fhe_circuit.encrypt(q_input) + +# Execute the linear product in FHE +q_y_enc = model.fhe_circuit.run(q_input_enc) + +# Decrypt the result (integer) +q_y = model.fhe_circuit.decrypt(q_y_enc) + +# De-quantize the result +y0 = model.post_processing(model.dequantize_output(q_y)) + +print("Probability with `predict_proba`: ", y0) +print("Probability with encrypt/run/decrypt calls: ", y_proba_fhe) +``` + This example shows the typical flow of a Concrete ML model: - The model is trained on unencrypted (plaintext) data using scikit-learn. As FHE operates over integers, Concrete ML quantizes the model to use only integers during inference. diff --git a/docs/built-in-models/linear.md b/docs/built-in-models/linear.md index aa0dcf478..f68799d11 100644 --- a/docs/built-in-models/linear.md +++ b/docs/built-in-models/linear.md @@ -19,6 +19,8 @@ Using these models in FHE is extremely similar to what can be done with scikit-l Models are also compatible with some of scikit-learn's main workflows, such as `Pipeline()` and `GridSearch()`. +It is possible to convert an already trained scikit-learn linear model to a Concrete ML one by using the [`from_sklearn_model`](../developer-guide/api/concrete.ml.sklearn.base.md#classmethod-from_sklearn_model) method. See [below for an example](#loading-a-pre-trained-model). This functionality is only available for linear models. + ## Quantization parameters The `n_bits` parameter controls the bit-width of the inputs and weights of the linear models. When non-linear mapping is applied by the model, such as _exp_ or _sigmoid_, Concrete ML applies it on the client-side, on clear-text values that are the decrypted output of the linear part of the model. Thus, Linear Models do not use table lookups, and can, therefore, use high precision integers for weight and inputs. 
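As a minimal sketch of the `n_bits` parameter described above (the `X_train` and `y_train` variables are assumed to be available, as in the complete example later in this page), the bit-width is simply passed at model construction time:

```python
from concrete.ml.sklearn import LogisticRegression

# 8-bit inputs and weights: linear models avoid table lookups (PBS),
# so a relatively high precision remains practical
model = LogisticRegression(n_bits=8)
model.fit(X_train, y_train)

# Compile with a representative input set to build the FHE circuit
model.compile(X_train)
```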
@@ -27,7 +29,7 @@ The `n_bits` parameter can be set to `8` or more bits for models with up to `300 ## Example -Here is an example below of how to use a LogisticRegression model in FHE on a simple data-set for classification. A more complete example can be found in the [LogisticRegression notebook](ml_examples.md). +The following snippet gives an example about training a LogisticRegression model on a simple data-set followed by inference on encrypted data with FHE. A more complete example can be found in the [LogisticRegression notebook](ml_examples.md). ```python import numpy @@ -80,3 +82,29 @@ We can then plot the decision boundary of the classifier and compare those resul ![Sklearn model decision boundaries](../figures/logistic_regression_clear.png) ![FHE model decision boundarires](../figures/logistic_regression_fhe.png) The overall accuracy scores are identical (93%) between the scikit-learn model (executed in the clear) and the Concrete ML one (executed in FHE). In fact, quantization has little impact on the decision boundaries, as linear models are able to consider large precision numbers when quantizing inputs and weights in Concrete ML. Additionally, as the linear models do not use PBS, the FHE computations are always exact. This means that the FHE predictions are always identical to the quantized clear ones. + +## Loading a pre-trained model + +An alternative to the example above is to train a scikit-learn model in a separate step and then to convert it to Concrete ML. + + + +``` +from sklearn.linear_model import LogisticRegression as SKlearnLogisticRegression + +# Instantiate the model: +model = SKlearnLogisticRegression() + +# Fit the model: +model.fit(X_train, y_train) + +cml_model = LogisticRegression.from_sklearn_model(model, X_train, n_bits=8) + +# Compile the model: +cml_model.compile(X_train) + +# Perform the inference in FHE: +y_pred_fhe = cml_model.predict(X_test, fhe="execute") + + +``` diff --git a/docs/developer-guide/api/README.md b/docs/developer-guide/api/README.md index b978180f2..7f586a883 100644 --- a/docs/developer-guide/api/README.md +++ b/docs/developer-guide/api/README.md @@ -88,6 +88,7 @@ - [`torch_models.NetWithConstantsFoldedBeforeOps`](./concrete.ml.pytest.torch_models.md#class-netwithconstantsfoldedbeforeops): Torch QAT model that does not quantize the inputs. - [`torch_models.NetWithLoops`](./concrete.ml.pytest.torch_models.md#class-netwithloops): Torch model, where we reuse some elements in a loop. - [`torch_models.PaddingNet`](./concrete.ml.pytest.torch_models.md#class-paddingnet): Torch QAT model that applies various padding patterns. +- [`torch_models.PartialQATModel`](./concrete.ml.pytest.torch_models.md#class-partialqatmodel): A model with a QAT Module. - [`torch_models.QATTestModule`](./concrete.ml.pytest.torch_models.md#class-qattestmodule): Torch model that implements a simple non-uniform quantizer. - [`torch_models.QuantCustomModel`](./concrete.ml.pytest.torch_models.md#class-quantcustommodel): A small quantized network with Brevitas, trained on make_classification. - [`torch_models.ShapeOperationsNet`](./concrete.ml.pytest.torch_models.md#class-shapeoperationsnet): Torch QAT model that reshapes the input. @@ -203,6 +204,8 @@ - [`xgb.XGBRegressor`](./concrete.ml.sklearn.xgb.md#class-xgbregressor): Implements the XGBoost regressor. - [`hybrid_model.HybridFHEMode`](./concrete.ml.torch.hybrid_model.md#class-hybridfhemode): Simple enum for different modes of execution of HybridModel. 
- [`hybrid_model.HybridFHEModel`](./concrete.ml.torch.hybrid_model.md#class-hybridfhemodel): Convert a model to a hybrid model. +- [`hybrid_model.HybridFHEModelServer`](./concrete.ml.torch.hybrid_model.md#class-hybridfhemodelserver): Hybrid FHE Model Server. +- [`hybrid_model.LoggerStub`](./concrete.ml.torch.hybrid_model.md#class-loggerstub): Placeholder type for a typical logger like the one from loguru. - [`hybrid_model.RemoteModule`](./concrete.ml.torch.hybrid_model.md#class-remotemodule): A wrapper class for the modules to be done remotely with FHE. - [`numpy_module.NumpyModule`](./concrete.ml.torch.numpy_module.md#class-numpymodule): General interface to transform a torch.nn.Module to numpy module. @@ -240,7 +243,6 @@ - [`utils.is_regressor_or_partial_regressor`](./concrete.ml.common.utils.md#function-is_regressor_or_partial_regressor): Indicate if the model class represents a regressor. - [`utils.manage_parameters_for_pbs_errors`](./concrete.ml.common.utils.md#function-manage_parameters_for_pbs_errors): Return (p_error, global_p_error) that we want to give to Concrete. - [`utils.replace_invalid_arg_name_chars`](./concrete.ml.common.utils.md#function-replace_invalid_arg_name_chars): Sanitize arg_name, replacing invalid chars by \_. -- [`utils.set_multi_parameter_in_configuration`](./concrete.ml.common.utils.md#function-set_multi_parameter_in_configuration): Build a Configuration instance with multi-parameter strategy, unless one is already given. - [`utils.to_tuple`](./concrete.ml.common.utils.md#function-to_tuple): Make the input a tuple if it is not already the case. - [`deploy_to_aws.create_instance`](./concrete.ml.deployment.deploy_to_aws.md#function-create_instance): Create a EC2 instance. - [`deploy_to_aws.delete_security_group`](./concrete.ml.deployment.deploy_to_aws.md#function-delete_security_group): Terminate a AWS EC2 instance. @@ -252,6 +254,7 @@ - [`deploy_to_docker.delete_image`](./concrete.ml.deployment.deploy_to_docker.md#function-delete_image): Delete a Docker image. - [`deploy_to_docker.main`](./concrete.ml.deployment.deploy_to_docker.md#function-main): Deploy function. - [`deploy_to_docker.stop_container`](./concrete.ml.deployment.deploy_to_docker.md#function-stop_container): Kill all containers that use a given image. +- [`fhe_client_server.check_concrete_versions`](./concrete.ml.deployment.fhe_client_server.md#function-check_concrete_versions): Check that current versions match the ones used in development. - [`utils.filter_logs`](./concrete.ml.deployment.utils.md#function-filter_logs): Filter logs based on previous logs. - [`utils.is_connection_available`](./concrete.ml.deployment.utils.md#function-is_connection_available): Check if ssh connection is available. - [`utils.wait_for_connection_to_be_available`](./concrete.ml.deployment.utils.md#function-wait_for_connection_to_be_available): Wait for connection to be available. @@ -342,18 +345,17 @@ - [`ops_impl.onnx_func_raw_args`](./concrete.ml.onnx.ops_impl.md#function-onnx_func_raw_args): Decorate a numpy onnx function to flag the raw/non quantized inputs. - [`utils.check_serialization`](./concrete.ml.pytest.utils.md#function-check_serialization): Check that the given object can properly be serialized. - [`utils.data_calibration_processing`](./concrete.ml.pytest.utils.md#function-data_calibration_processing): Reduce size of the given data-set. 
-- [`utils.get_random_extract_of_sklearn_models_and_datasets`](./concrete.ml.pytest.utils.md#function-get_random_extract_of_sklearn_models_and_datasets): Return a random sublist of sklearn_models_and_datasets. +- [`utils.get_sklearn_all_models_and_datasets`](./concrete.ml.pytest.utils.md#function-get_sklearn_all_models_and_datasets): Get the pytest parameters to use for testing all models available in Concrete ML. +- [`utils.get_sklearn_linear_models_and_datasets`](./concrete.ml.pytest.utils.md#function-get_sklearn_linear_models_and_datasets): Get the pytest parameters to use for testing linear models. +- [`utils.get_sklearn_neighbors_models_and_datasets`](./concrete.ml.pytest.utils.md#function-get_sklearn_neighbors_models_and_datasets): Get the pytest parameters to use for testing neighbor models. +- [`utils.get_sklearn_neural_net_models_and_datasets`](./concrete.ml.pytest.utils.md#function-get_sklearn_neural_net_models_and_datasets): Get the pytest parameters to use for testing neural network models. +- [`utils.get_sklearn_tree_models_and_datasets`](./concrete.ml.pytest.utils.md#function-get_sklearn_tree_models_and_datasets): Get the pytest parameters to use for testing tree-based models. - [`utils.instantiate_model_generic`](./concrete.ml.pytest.utils.md#function-instantiate_model_generic): Instantiate any Concrete ML model type. - [`utils.load_torch_model`](./concrete.ml.pytest.utils.md#function-load_torch_model): Load an object saved with torch.save() from a file or dict. - [`utils.values_are_equal`](./concrete.ml.pytest.utils.md#function-values_are_equal): Indicate if two values are equal. - [`post_training.get_n_bits_dict`](./concrete.ml.quantization.post_training.md#function-get_n_bits_dict): Convert the n_bits parameter into a proper dictionary. - [`quantizers.fill_from_kwargs`](./concrete.ml.quantization.quantizers.md#function-fill_from_kwargs): Fill a parameter set structure from kwargs parameters. - [`p_error_search.compile_and_simulated_fhe_inference`](./concrete.ml.search_parameters.p_error_search.md#function-compile_and_simulated_fhe_inference): Get the quantized module of a given model in FHE, simulated or not. -- [`sklearn.get_sklearn_linear_models`](./concrete.ml.sklearn.md#function-get_sklearn_linear_models): Return the list of available linear models in Concrete ML. -- [`sklearn.get_sklearn_models`](./concrete.ml.sklearn.md#function-get_sklearn_models): Return the list of available models in Concrete ML. -- [`sklearn.get_sklearn_neighbors_models`](./concrete.ml.sklearn.md#function-get_sklearn_neighbors_models): Return the list of available neighbor models in Concrete ML. -- [`sklearn.get_sklearn_neural_net_models`](./concrete.ml.sklearn.md#function-get_sklearn_neural_net_models): Return the list of available neural net models in Concrete ML. -- [`sklearn.get_sklearn_tree_models`](./concrete.ml.sklearn.md#function-get_sklearn_tree_models): Return the list of available tree models in Concrete ML. - [`tree_to_numpy.add_transpose_after_last_node`](./concrete.ml.sklearn.tree_to_numpy.md#function-add_transpose_after_last_node): Add transpose after last node. - [`tree_to_numpy.get_onnx_model`](./concrete.ml.sklearn.tree_to_numpy.md#function-get_onnx_model): Create ONNX model with Hummingbird convert method. - [`tree_to_numpy.preprocess_tree_predictions`](./concrete.ml.sklearn.tree_to_numpy.md#function-preprocess_tree_predictions): Apply post-processing from the graph. 
@@ -366,5 +368,7 @@ - [`compile.compile_onnx_model`](./concrete.ml.torch.compile.md#function-compile_onnx_model): Compile a torch module into an FHE equivalent. - [`compile.compile_torch_model`](./concrete.ml.torch.compile.md#function-compile_torch_model): Compile a torch module into an FHE equivalent. - [`compile.convert_torch_tensor_or_numpy_array_to_numpy_array`](./concrete.ml.torch.compile.md#function-convert_torch_tensor_or_numpy_array_to_numpy_array): Convert a torch tensor or a numpy array to a numpy array. +- [`compile.has_any_qnn_layers`](./concrete.ml.torch.compile.md#function-has_any_qnn_layers): Check if a torch model has QNN layers. - [`hybrid_model.convert_conv1d_to_linear`](./concrete.ml.torch.hybrid_model.md#function-convert_conv1d_to_linear): Convert all Conv1D layers in a module or a Conv1D layer itself to nn.Linear. - [`hybrid_model.tuple_to_underscore_str`](./concrete.ml.torch.hybrid_model.md#function-tuple_to_underscore_str): Convert a tuple to a string representation. +- [`hybrid_model.underscore_str_to_tuple`](./concrete.ml.torch.hybrid_model.md#function-underscore_str_to_tuple): Convert a a string representation of a tuple to a tuple. diff --git a/docs/developer-guide/api/concrete.ml.common.utils.md b/docs/developer-guide/api/concrete.ml.common.utils.md index d8efaff26..8e23d06c2 100644 --- a/docs/developer-guide/api/concrete.ml.common.utils.md +++ b/docs/developer-guide/api/concrete.ml.common.utils.md @@ -17,7 +17,7 @@ Utils that can be re-used by other pieces of code in the module. ______________________________________________________________________ - + ## function `replace_invalid_arg_name_chars` @@ -39,7 +39,7 @@ This does not check that the starting character of arg_name is valid. ______________________________________________________________________ - + ## function `generate_proxy_function` @@ -65,7 +65,7 @@ This returns a runtime compiled function with the sanitized argument names passe ______________________________________________________________________ - + ## function `get_onnx_opset_version` @@ -85,7 +85,7 @@ Return the ONNX opset_version. ______________________________________________________________________ - + ## function `manage_parameters_for_pbs_errors` @@ -122,7 +122,7 @@ Note that global_p_error is currently set to 0 in the FHE simulation mode. ______________________________________________________________________ - + ## function `check_there_is_no_p_error_options_in_configuration` @@ -140,7 +140,7 @@ It would be dangerous, since we set them in direct arguments in our calls to Con ______________________________________________________________________ - + ## function `get_model_class` @@ -159,7 +159,7 @@ The model's class. ______________________________________________________________________ - + ## function `is_model_class_in_a_list` @@ -179,7 +179,7 @@ If the model's class is in the list or not. ______________________________________________________________________ - + ## function `get_model_name` @@ -198,7 +198,7 @@ the model's name. ______________________________________________________________________ - + ## function `is_classifier_or_partial_classifier` @@ -218,7 +218,7 @@ Indicate if the model class represents a classifier. ______________________________________________________________________ - + ## function `is_regressor_or_partial_regressor` @@ -238,7 +238,7 @@ Indicate if the model class represents a regressor. 
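A short, hedged sketch of how the model-class predicates above can be used; the assertions below are illustrative assumptions rather than part of the official documentation:

```python
from functools import partial

from concrete.ml.common.utils import (
    is_classifier_or_partial_classifier,
    is_regressor_or_partial_regressor,
)
from concrete.ml.sklearn import LinearRegression, LogisticRegression

# Both plain model classes and functools.partial wrappers are accepted
assert is_classifier_or_partial_classifier(LogisticRegression)
assert is_classifier_or_partial_classifier(partial(LogisticRegression, n_bits=8))
assert is_regressor_or_partial_regressor(LinearRegression)
```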
______________________________________________________________________ - + ## function `is_pandas_dataframe` @@ -260,7 +260,7 @@ This function is inspired from Scikit-Learn's test validation tools and avoids t ______________________________________________________________________ - + ## function `is_pandas_series` @@ -282,7 +282,7 @@ This function is inspired from Scikit-Learn's test validation tools and avoids t ______________________________________________________________________ - + ## function `is_pandas_type` @@ -302,7 +302,7 @@ Indicate if the input container is a Pandas DataFrame or Series. ______________________________________________________________________ - + ## function `check_dtype_and_cast` @@ -334,7 +334,7 @@ If values types don't match with any supported type or the expected dtype, raise ______________________________________________________________________ - + ## function `compute_bits_precision` @@ -354,7 +354,7 @@ Compute the number of bits required to represent x. ______________________________________________________________________ - + ## function `is_brevitas_model` @@ -374,7 +374,7 @@ Check if a model is a Brevitas type. ______________________________________________________________________ - + ## function `to_tuple` @@ -394,7 +394,7 @@ Make the input a tuple if it is not already the case. ______________________________________________________________________ - + ## function `all_values_are_integers` @@ -414,7 +414,7 @@ Indicate if all unpacked values are of a supported integer dtype. ______________________________________________________________________ - + ## function `all_values_are_floats` @@ -434,7 +434,7 @@ Indicate if all unpacked values are of a supported float dtype. ______________________________________________________________________ - + ## function `all_values_are_of_dtype` @@ -455,33 +455,7 @@ Indicate if all unpacked values are of the specified dtype(s). ______________________________________________________________________ - - -## function `set_multi_parameter_in_configuration` - -```python -set_multi_parameter_in_configuration( - configuration: Optional[Configuration], - **kwargs -) -``` - -Build a Configuration instance with multi-parameter strategy, unless one is already given. - -If the given Configuration instance is not None and the parameter strategy is set to MONO, a warning is raised in order to make sure the user did it on purpose. - -**Args:** - -- `configuration` (Optional\[Configuration\]): The configuration to consider. -- `**kwargs`: Additional parameters to use for instantiating a new Configuration instance, if configuration is None. - -**Returns:** - -- `configuration` (Configuration): A configuration with multi-parameter strategy. 
- -______________________________________________________________________ - - + ## function `force_mono_parameter_in_configuration` @@ -507,7 +481,7 @@ If the given Configuration instance is None, build a new instance with mono-para ______________________________________________________________________ - + ## class `FheMode` diff --git a/docs/developer-guide/api/concrete.ml.deployment.fhe_client_server.md b/docs/developer-guide/api/concrete.ml.deployment.fhe_client_server.md index 983e13748..65f2aa8d8 100644 --- a/docs/developer-guide/api/concrete.ml.deployment.fhe_client_server.md +++ b/docs/developer-guide/api/concrete.ml.deployment.fhe_client_server.md @@ -14,11 +14,33 @@ ______________________________________________________________________ +## function `check_concrete_versions` + +```python +check_concrete_versions(zip_path: Path) +``` + +Check that current versions match the ones used in development. + +This function loads the version JSON file found in client.zip or server.zip files and then checks that current package versions (Concrete Python, Concrete ML) as well as the Python current version all match the ones that are currently installed. + +**Args:** + +- `zip_path` (Path): The path to the client or server zip file that contains the version.json file to check. + +**Raises:** + +- `ValueError`: If at least one version mismatch is found. + +______________________________________________________________________ + + + ## class `FHEModelServer` Server API to load and run the FHE circuit. - + ### method `__init__` @@ -34,7 +56,7 @@ Initialize the FHE API. ______________________________________________________________________ - + ### method `load` @@ -44,13 +66,9 @@ load() Load the circuit. -**Raises:** - -- `ValueError`: if mismatch in versions between serialized file and runtime - ______________________________________________________________________ - + ### method `run` @@ -74,13 +92,13 @@ Run the model on the server over encrypted data. ______________________________________________________________________ - + ## class `FHEModelDev` Dev API to save the model and then load and run the FHE circuit. - + ### method `__init__` @@ -97,7 +115,7 @@ Initialize the FHE API. ______________________________________________________________________ - + ### method `save` @@ -117,13 +135,13 @@ Export all needed artifacts for the client and server. ______________________________________________________________________ - + ## class `FHEModelClient` Client API to encrypt and decrypt FHE data. - + ### method `__init__` @@ -140,7 +158,7 @@ Initialize the FHE API. ______________________________________________________________________ - + ### method `deserialize_decrypt` @@ -160,7 +178,7 @@ Deserialize and decrypt the values. ______________________________________________________________________ - + ### method `deserialize_decrypt_dequantize` @@ -182,7 +200,7 @@ Deserialize, decrypt and de-quantize the values. ______________________________________________________________________ - + ### method `generate_private_and_evaluation_keys` @@ -198,7 +216,7 @@ Generate the private and evaluation keys. ______________________________________________________________________ - + ### method `get_serialized_evaluation_keys` @@ -214,7 +232,7 @@ Get the serialized evaluation keys. ______________________________________________________________________ - + ### method `load` @@ -224,13 +242,9 @@ load() Load the quantizers along with the FHE specs. 
-**Raises:** - -- `ValueError`: if mismatch in versions between serialized file and runtime - ______________________________________________________________________ - + ### method `quantize_encrypt_serialize` diff --git a/docs/developer-guide/api/concrete.ml.pytest.torch_models.md b/docs/developer-guide/api/concrete.ml.pytest.torch_models.md index a2489e0c4..1c39891ad 100644 --- a/docs/developer-guide/api/concrete.ml.pytest.torch_models.md +++ b/docs/developer-guide/api/concrete.ml.pytest.torch_models.md @@ -1326,3 +1326,39 @@ Forward pass. **Returns:** - `torch.tensor`: Output of the network. + +______________________________________________________________________ + + + +## class `PartialQATModel` + +A model with a QAT Module. + + + +### method `__init__` + +```python +__init__(input_shape: int, output_shape: int, n_bits: int) +``` + +______________________________________________________________________ + + + +### method `forward` + +```python +forward(x) +``` + +Forward pass. + +**Args:** + +- `x` (torch.tensor): The input of the model. + +**Returns:** + +- `torch.tensor`: Output of the network. diff --git a/docs/developer-guide/api/concrete.ml.pytest.utils.md b/docs/developer-guide/api/concrete.ml.pytest.utils.md index eab94bada..e0222ef76 100644 --- a/docs/developer-guide/api/concrete.ml.pytest.utils.md +++ b/docs/developer-guide/api/concrete.ml.pytest.utils.md @@ -8,28 +8,162 @@ Common functions or lists for test files, which can't be put in fixtures. ## **Global Variables** -- **sklearn_models_and_datasets** +- **MODELS_AND_DATASETS** +- **UNIQUE_MODELS_AND_DATASETS** ______________________________________________________________________ - + -## function `get_random_extract_of_sklearn_models_and_datasets` +## function `get_sklearn_linear_models_and_datasets` ```python -get_random_extract_of_sklearn_models_and_datasets() +get_sklearn_linear_models_and_datasets( + regressor: bool = True, + classifier: bool = True, + unique_models: bool = False, + select: Optional[str, List[str]] = None, + ignore: Optional[str, List[str]] = None +) → List ``` -Return a random sublist of sklearn_models_and_datasets. +Get the pytest parameters to use for testing linear models. -The sublist contains exactly one model of each kind. +**Args:** + +- `regressor` (bool): If regressors should be selected. +- `classifier` (bool): If classifiers should be selected. +- `unique_models` (bool): If each models should be represented only once. +- `select` (Optional\[Union\[str, List\[str\]\]\]): If not None, only return models which names (or a part of it) match the given string or list of strings. Default to None. +- `ignore` (Optional\[Union\[str, List\[str\]\]\]): If not None, only return models which names (or a part of it) do not match the given string or list of strings. Default to None. + +**Returns:** + +- `List`: The pytest parameters to use for testing linear models. + +______________________________________________________________________ + + + +## function `get_sklearn_tree_models_and_datasets` + +```python +get_sklearn_tree_models_and_datasets( + regressor: bool = True, + classifier: bool = True, + unique_models: bool = False, + select: Optional[str, List[str]] = None, + ignore: Optional[str, List[str]] = None +) → List +``` + +Get the pytest parameters to use for testing tree-based models. + +**Args:** + +- `regressor` (bool): If regressors should be selected. +- `classifier` (bool): If classifiers should be selected. +- `unique_models` (bool): If each models should be represented only once. 
+- `select` (Optional\[Union\[str, List\[str\]\]\]): If not None, only return models which names (or a part of it) match the given string or list of strings. Default to None. +- `ignore` (Optional\[Union\[str, List\[str\]\]\]): If not None, only return models which names (or a part of it) do not match the given string or list of strings. Default to None. + +**Returns:** + +- `List`: The pytest parameters to use for testing tree-based models. + +______________________________________________________________________ + + + +## function `get_sklearn_neural_net_models_and_datasets` + +```python +get_sklearn_neural_net_models_and_datasets( + regressor: bool = True, + classifier: bool = True, + unique_models: bool = False, + select: Optional[str, List[str]] = None, + ignore: Optional[str, List[str]] = None +) → List +``` + +Get the pytest parameters to use for testing neural network models. + +**Args:** + +- `regressor` (bool): If regressors should be selected. +- `classifier` (bool): If classifiers should be selected. +- `unique_models` (bool): If each models should be represented only once. +- `select` (Optional\[Union\[str, List\[str\]\]\]): If not None, only return models which names (or a part of it) match the given string or list of strings. Default to None. +- `ignore` (Optional\[Union\[str, List\[str\]\]\]): If not None, only return models which names (or a part of it) do not match the given string or list of strings. Default to None. **Returns:** -the sublist + +- `List`: The pytest parameters to use for testing neural network models. + +______________________________________________________________________ + + + +## function `get_sklearn_neighbors_models_and_datasets` + +```python +get_sklearn_neighbors_models_and_datasets( + regressor: bool = True, + classifier: bool = True, + unique_models: bool = False, + select: Optional[str, List[str]] = None, + ignore: Optional[str, List[str]] = None +) → List +``` + +Get the pytest parameters to use for testing neighbor models. + +**Args:** + +- `regressor` (bool): If regressors should be selected. +- `classifier` (bool): If classifiers should be selected. +- `unique_models` (bool): If each models should be represented only once. +- `select` (Optional\[Union\[str, List\[str\]\]\]): If not None, only return models which names (or a part of it) match the given string or list of strings. Default to None. +- `ignore` (Optional\[Union\[str, List\[str\]\]\]): If not None, only return models which names (or a part of it) do not match the given string or list of strings. Default to None. + +**Returns:** + +- `List`: The pytest parameters to use for testing neighbor models. + +______________________________________________________________________ + + + +## function `get_sklearn_all_models_and_datasets` + +```python +get_sklearn_all_models_and_datasets( + regressor: bool = True, + classifier: bool = True, + unique_models: bool = False, + select: Optional[str, List[str]] = None, + ignore: Optional[str, List[str]] = None +) → List +``` + +Get the pytest parameters to use for testing all models available in Concrete ML. + +**Args:** + +- `regressor` (bool): If regressors should be selected. +- `classifier` (bool): If classifiers should be selected. +- `unique_models` (bool): If each models should be represented only once. +- `select` (Optional\[Union\[str, List\[str\]\]\]): If not None, only return models which names (or a part of it) match the given string or list of strings. Default to None. 
+- `ignore` (Optional\[Union\[str, List\[str\]\]\]): If not None, only return models which names (or a part of it) do not match the given string or list of strings. Default to None. + +**Returns:** + +- `List`: The pytest parameters to use for testing all models available in Concrete ML. ______________________________________________________________________ - + ## function `instantiate_model_generic` @@ -52,7 +186,7 @@ Instantiate any Concrete ML model type. ______________________________________________________________________ - + ## function `data_calibration_processing` @@ -78,7 +212,7 @@ Reduce size of the given data-set. ______________________________________________________________________ - + ## function `load_torch_model` @@ -106,7 +240,7 @@ Load an object saved with torch.save() from a file or dict. ______________________________________________________________________ - + ## function `values_are_equal` @@ -129,7 +263,7 @@ This method takes into account objects of type None, numpy.ndarray, numpy.floati ______________________________________________________________________ - + ## function `check_serialization` diff --git a/docs/developer-guide/api/concrete.ml.quantization.quantized_module.md b/docs/developer-guide/api/concrete.ml.quantization.quantized_module.md index 784256d96..a12d24c9a 100644 --- a/docs/developer-guide/api/concrete.ml.quantization.quantized_module.md +++ b/docs/developer-guide/api/concrete.ml.quantization.quantized_module.md @@ -14,13 +14,13 @@ QuantizedModule API. ______________________________________________________________________ - + ## class `QuantizedModule` Inference for a quantized model. - + ### method `__init__` @@ -67,7 +67,7 @@ Get the post-processing parameters. ______________________________________________________________________ - + ### method `bitwidth_and_range_report` @@ -83,7 +83,7 @@ Report the ranges and bit-widths for layers that mix encrypted integer values. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -99,7 +99,7 @@ Check if the quantized module is compiled. ______________________________________________________________________ - + ### method `compile` @@ -111,7 +111,8 @@ compile( show_mlir: bool = False, p_error: Optional[float] = None, global_p_error: Optional[float] = None, - verbose: bool = False + verbose: bool = False, + inputs_encryption_status: Optional[Sequence[str]] = None ) → Circuit ``` @@ -126,14 +127,19 @@ Compile the module's forward function. - `p_error` (Optional\[float\]): Probability of error of a single PBS. A p_error value cannot be given if a global_p_error value is already set. Default to None, which sets this error to a default value. - `global_p_error` (Optional\[float\]): Probability of error of the full circuit. A global_p_error value cannot be given if a p_error value is already set. This feature is not supported during simulation, meaning the probability is currently set to 0. Default to None, which sets this error to a default value. - `verbose` (bool): Indicate if compilation information should be printed during compilation. Default to False. +- `inputs_encryption_status` (Optional\[Sequence\[str\]\]): encryption status ('clear', 'encrypted') for each input. **Returns:** - `Circuit`: The compiled Circuit. 
+**Raises:** + +- `ValueError`: if inputs_encryption_status does not match with the parameters of the quantized module + ______________________________________________________________________ - + ### method `dequantize_output` @@ -153,7 +159,7 @@ Take the last layer q_out and use its de-quant function. ______________________________________________________________________ - + ### method `dump` @@ -169,7 +175,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -185,7 +191,7 @@ Dump itself to a dict. ______________________________________________________________________ - + ### method `dumps` @@ -201,7 +207,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `forward` @@ -229,7 +235,7 @@ This method executes the forward pass in the clear, with simulation or in FHE. I ______________________________________________________________________ - + ### method `load_dict` @@ -249,7 +255,7 @@ Load itself from a string. ______________________________________________________________________ - + ### method `post_processing` @@ -271,7 +277,7 @@ For quantized modules, there is no post-processing step but the method is kept t ______________________________________________________________________ - + ### method `quantize_input` @@ -291,7 +297,7 @@ Take the inputs in fp32 and quantize it using the learned quantization parameter ______________________________________________________________________ - + ### method `quantized_forward` @@ -315,7 +321,7 @@ Forward function for the FHE circuit. ______________________________________________________________________ - + ### method `set_inputs_quantization_parameters` diff --git a/docs/developer-guide/api/concrete.ml.search_parameters.p_error_search.md b/docs/developer-guide/api/concrete.ml.search_parameters.p_error_search.md index 4031e284d..7cc6cd405 100644 --- a/docs/developer-guide/api/concrete.ml.search_parameters.p_error_search.md +++ b/docs/developer-guide/api/concrete.ml.search_parameters.p_error_search.md @@ -43,7 +43,7 @@ If we don't reach the convergence, a user warning is raised. ______________________________________________________________________ - + ## function `compile_and_simulated_fhe_inference` @@ -91,13 +91,13 @@ Supported models are: ______________________________________________________________________ - + ## class `BinarySearch` Class for `p_error` hyper-parameter search for classification and regression tasks. - + ### method `__init__` @@ -147,7 +147,7 @@ __init__( ______________________________________________________________________ - + ### method `eval_match` @@ -174,7 +174,7 @@ Eval the matches. ______________________________________________________________________ - + ### method `reset_history` @@ -186,7 +186,7 @@ Clean history. ______________________________________________________________________ - + ### method `run` diff --git a/docs/developer-guide/api/concrete.ml.sklearn.base.md b/docs/developer-guide/api/concrete.ml.sklearn.base.md index fcf8a8c80..2dd6ad569 100644 --- a/docs/developer-guide/api/concrete.ml.sklearn.base.md +++ b/docs/developer-guide/api/concrete.ml.sklearn.base.md @@ -14,7 +14,7 @@ Base classes for all estimators. 
______________________________________________________________________ - + ## class `BaseEstimator` @@ -26,7 +26,7 @@ This class does not inherit from sklearn.base.BaseEstimator as it creates some c - `_is_a_public_cml_model` (bool): Private attribute indicating if the class is a public model (as opposed to base or mixin classes). - + ### method `__init__` @@ -84,7 +84,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -100,7 +100,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -116,7 +116,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -136,7 +136,7 @@ Compile the model. **Args:** -- `X` (Data): A representative set of input values used for building cryptographic parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or s sub-set of it. +- `X` (Data): A representative set of input values used for building cryptographic parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it. - `configuration` (Optional\[Configuration\]): Options to use for compilation. Default to None. - `artifacts` (Optional\[DebugArtifacts\]): Artifacts information about the compilation process to store for debugging. Default to None. - `show_mlir` (bool): Indicate if the MLIR graph should be printed during compilation. Default to False. @@ -150,7 +150,7 @@ Compile the model. ______________________________________________________________________ - + ### method `dequantize_output` @@ -172,7 +172,7 @@ This step ensures that the fit method has been called. ______________________________________________________________________ - + ### method `dump` @@ -188,7 +188,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -204,7 +204,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -220,7 +220,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -243,7 +243,7 @@ The fitted estimator. ______________________________________________________________________ - + ### method `fit_benchmark` @@ -270,7 +270,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -292,7 +292,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -312,7 +312,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `post_processing` @@ -336,7 +336,7 @@ For some simple models such a linear regression, there is no post-processing ste ______________________________________________________________________ - + ### method `predict` @@ -360,7 +360,7 @@ Predict values for X, in FHE or in the clear. ______________________________________________________________________ - + ### method `quantize_input` @@ -382,7 +382,7 @@ This step ensures that the fit method has been called. 
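A minimal usage sketch for the estimator API above, assuming a Concrete ML estimator instance `model` and data splits `X_train`, `y_train`, `X_test` are available; the `fhe` argument values correspond to `FheMode`:

```python
# Train and compile once
model.fit(X_train, y_train)
model.compile(X_train)

# fhe="disable" (default): quantized inference in the clear
y_clear = model.predict(X_test)

# fhe="simulate": simulate FHE execution, useful to quickly assess accuracy
y_simulated = model.predict(X_test, fhe="simulate")

# fhe="execute": actual FHE execution on encrypted data
y_encrypted = model.predict(X_test, fhe="execute")
```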
______________________________________________________________________ - + ## class `BaseClassifier` @@ -390,7 +390,7 @@ Base class for linear and tree-based classifiers in Concrete ML. This class inherits from BaseEstimator and modifies some of its methods in order to align them with classifier behaviors. This notably include applying a sigmoid/softmax post-processing to the predicted values as well as handling a mapping of classes in case they are not ordered. - + ### method `__init__` @@ -472,7 +472,7 @@ Using this attribute is deprecated. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -488,7 +488,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -504,7 +504,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -524,7 +524,7 @@ Compile the model. **Args:** -- `X` (Data): A representative set of input values used for building cryptographic parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or s sub-set of it. +- `X` (Data): A representative set of input values used for building cryptographic parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it. - `configuration` (Optional\[Configuration\]): Options to use for compilation. Default to None. - `artifacts` (Optional\[DebugArtifacts\]): Artifacts information about the compilation process to store for debugging. Default to None. - `show_mlir` (bool): Indicate if the MLIR graph should be printed during compilation. Default to False. @@ -538,7 +538,7 @@ Compile the model. ______________________________________________________________________ - + ### method `dequantize_output` @@ -560,7 +560,7 @@ This step ensures that the fit method has been called. ______________________________________________________________________ - + ### method `dump` @@ -576,7 +576,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -592,7 +592,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -608,7 +608,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -618,7 +618,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -645,7 +645,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -667,7 +667,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -687,7 +687,7 @@ Load itself from a dict. 
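A hedged sketch of the classifier-specific prediction methods, assuming `clf` is a fitted and compiled Concrete ML classifier and `X_test` is available:

```python
# Class probabilities, shape (n_samples, n_classes)
probabilities = clf.predict_proba(X_test, fhe="simulate")

# Predicted labels
labels = clf.predict(X_test, fhe="simulate")

# The labels are expected to correspond to the arg-max of the probabilities
# mapped through `classes_` (an assumption about the internal class mapping)
labels_from_proba = clf.classes_[probabilities.argmax(axis=1)]
```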
______________________________________________________________________ - + ### method `post_processing` @@ -697,7 +697,7 @@ post_processing(y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `predict` @@ -710,7 +710,7 @@ predict( ______________________________________________________________________ - + ### method `predict_proba` @@ -734,7 +734,7 @@ Predict class probabilities. ______________________________________________________________________ - + ### method `quantize_input` @@ -756,13 +756,13 @@ This step ensures that the fit method has been called. ______________________________________________________________________ - + ## class `QuantizedTorchEstimatorMixin` Mixin that provides quantization for a torch module and follows the Estimator API. - + ### method `__init__` @@ -838,7 +838,7 @@ Get the output quantizers. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -854,7 +854,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -870,7 +870,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -888,7 +888,7 @@ compile( ______________________________________________________________________ - + ### method `dequantize_output` @@ -898,7 +898,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -914,7 +914,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -930,7 +930,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -946,7 +946,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -971,7 +971,7 @@ The fitted estimator. ______________________________________________________________________ - + ### method `fit_benchmark` @@ -1002,7 +1002,7 @@ The Concrete ML and equivalent skorch fitted estimators. ______________________________________________________________________ - + ### method `get_params` @@ -1024,7 +1024,7 @@ This method is overloaded in order to make sure that auto-computed parameters ar ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -1034,7 +1034,7 @@ get_sklearn_params(deep: 'bool' = True) → Dict ______________________________________________________________________ - + ### classmethod `load_dict` @@ -1054,7 +1054,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `post_processing` @@ -1064,7 +1064,7 @@ post_processing(y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `predict` @@ -1088,7 +1088,7 @@ Predict values for X, in FHE or in the clear. ______________________________________________________________________ - + ### method `prune` @@ -1116,7 +1116,7 @@ A new pruned copy of the Neural Network model. 
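A hedged sketch of the `prune` method documented above, assuming `nn_model` is a fitted Concrete ML neural network estimator and that removing half of the hidden neurons is acceptable for the task:

```python
# Prune 50% of the hidden neurons; `prune` takes training data and fit
# parameters, so the pruned copy is fitted as part of the call
pruned_model = nn_model.prune(X_train, y_train, n_prune_neurons_percentage=0.5)

# The pruned copy is a regular estimator and can be compiled as usual
pruned_model.compile(X_train)
```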
______________________________________________________________________ - + ### method `quantize_input` @@ -1126,7 +1126,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `BaseTreeEstimatorMixin` @@ -1134,7 +1134,7 @@ Mixin class for tree-based estimators. This class inherits from sklearn.base.BaseEstimator in order to have access to scikit-learn's `get_params` and `set_params` methods. - + ### method `__init__` @@ -1194,7 +1194,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -1210,7 +1210,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -1226,7 +1226,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -1236,7 +1236,7 @@ compile(*args, **kwargs) → Circuit ______________________________________________________________________ - + ### method `dequantize_output` @@ -1246,7 +1246,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -1262,7 +1262,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -1278,7 +1278,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -1294,7 +1294,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -1304,7 +1304,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -1331,7 +1331,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -1353,7 +1353,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -1373,7 +1373,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `post_processing` @@ -1383,7 +1383,7 @@ post_processing(y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `predict` @@ -1396,7 +1396,7 @@ predict( ______________________________________________________________________ - + ### method `quantize_input` @@ -1406,7 +1406,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `BaseTreeRegressorMixin` @@ -1414,7 +1414,7 @@ Mixin class for tree-based regressors. This class is used to create a tree-based regressor class that inherits from sklearn.base.RegressorMixin, which essentially gives access to scikit-learn's `score` method for regressors. - + ### method `__init__` @@ -1474,7 +1474,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -1490,7 +1490,7 @@ Check if the model is compiled. 
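As a small sketch, the check methods above can serve as guard clauses before FHE inference (assuming `model` is a Concrete ML tree-based estimator and `X_test` is available):

```python
# Both checks raise AttributeError if the corresponding step was skipped
model.check_model_is_fitted()
model.check_model_is_compiled()

y_pred = model.predict(X_test, fhe="execute")
```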
______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -1506,7 +1506,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -1516,7 +1516,7 @@ compile(*args, **kwargs) → Circuit ______________________________________________________________________ - + ### method `dequantize_output` @@ -1526,7 +1526,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -1542,7 +1542,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -1558,7 +1558,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -1574,7 +1574,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -1584,7 +1584,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -1611,7 +1611,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -1633,7 +1633,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -1653,7 +1653,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `post_processing` @@ -1663,7 +1663,7 @@ post_processing(y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `predict` @@ -1676,7 +1676,7 @@ predict( ______________________________________________________________________ - + ### method `quantize_input` @@ -1686,7 +1686,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `BaseTreeClassifierMixin` @@ -1696,7 +1696,7 @@ This class is used to create a tree-based classifier class that inherits from sk Additionally, this class adjusts some of the tree-based base class's methods in order to make them compliant with classification workflows. - + ### method `__init__` @@ -1780,7 +1780,7 @@ Using this attribute is deprecated. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -1796,7 +1796,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -1812,7 +1812,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -1822,7 +1822,7 @@ compile(*args, **kwargs) → Circuit ______________________________________________________________________ - + ### method `dequantize_output` @@ -1832,7 +1832,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -1848,7 +1848,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -1864,7 +1864,7 @@ Dump the object as a dict. 
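For the tree-based classifier mixin, a minimal end-to-end sketch (fit, compile, then class probabilities computed with FHE simulation) could look as follows; the hyper-parameters shown are illustrative assumptions.

```python
from sklearn.datasets import make_classification

from concrete.ml.sklearn import XGBClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

model = XGBClassifier(n_bits=6, n_estimators=10, max_depth=3)
model.fit(X, y)

# Compile on representative data, then compute class probabilities with FHE simulation
model.compile(X)
probabilities = model.predict_proba(X[:5], fhe="simulate")
```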
______________________________________________________________________ - + ### method `dumps` @@ -1880,7 +1880,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -1890,7 +1890,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -1917,7 +1917,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -1939,7 +1939,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -1959,7 +1959,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `post_processing` @@ -1969,7 +1969,7 @@ post_processing(y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `predict` @@ -1982,7 +1982,7 @@ predict( ______________________________________________________________________ - + ### method `predict_proba` @@ -2006,7 +2006,7 @@ Predict class probabilities. ______________________________________________________________________ - + ### method `quantize_input` @@ -2016,7 +2016,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `SklearnLinearModelMixin` @@ -2024,7 +2024,7 @@ A Mixin class for sklearn linear models with FHE. This class inherits from sklearn.base.BaseEstimator in order to have access to scikit-learn's `get_params` and `set_params` methods. - + ### method `__init__` @@ -2086,7 +2086,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -2102,7 +2102,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -2118,17 +2118,41 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` ```python -compile(*args, **kwargs) → Circuit +compile( + X: 'Data', + configuration: 'Optional[Configuration]' = None, + artifacts: 'Optional[DebugArtifacts]' = None, + show_mlir: 'bool' = False, + p_error: 'Optional[float]' = None, + global_p_error: 'Optional[float]' = None, + verbose: 'bool' = False +) → Circuit ``` +Compile the model. + +**Args:** + +- `X` (Data): A representative set of input values used for building cryptographic parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it. +- `configuration` (Optional\[Configuration\]): Options to use for compilation. Default to None. +- `artifacts` (Optional\[DebugArtifacts\]): Artifacts information about the compilation process to store for debugging. Default to None. +- `show_mlir` (bool): Indicate if the MLIR graph should be printed during compilation. Default to False. +- `p_error` (Optional\[float\]): Probability of error of a single PBS. A p_error value cannot be given if a global_p_error value is already set. Default to None, which sets this error to a default value. +- `global_p_error` (Optional\[float\]): Probability of error of the full circuit. A global_p_error value cannot be given if a p_error value is already set. 
This feature is not supported during the FHE simulation mode, meaning the probability is currently set to 0. Default to None, which sets this error to a default value. +- `verbose` (bool): Indicate if compilation information should be printed during compilation. Default to False. + +**Returns:** + +- `Circuit`: The compiled Circuit. + ______________________________________________________________________ - + ### method `dequantize_output` @@ -2138,7 +2162,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -2154,7 +2178,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -2170,7 +2194,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -2186,7 +2210,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -2196,7 +2220,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -2223,7 +2247,34 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + + +### classmethod `from_sklearn_model` + +```python +from_sklearn_model( + sklearn_model: 'BaseEstimator', + X: 'Data', + n_bits: 'Union[int, Dict[str, int]]' = 8 +) +``` + +Build a FHE-compliant model using a fitted scikit-learn model. + +**Args:** + +- `sklearn_model` (sklearn.base.BaseEstimator): The fitted scikit-learn model to convert. +- `X` (Data): A representative set of input values used for computing quantization parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it. +- `n_bits` (int, Dict\[str, int\]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys with corresponding number of quantization bits so that: + \- op_inputs : number of bits to quantize the input values + \- op_weights: number of bits to quantize the learned parameters Default to 8. + +**Returns:** +The FHE-compliant fitted model. + +______________________________________________________________________ + + ### method `get_sklearn_params` @@ -2245,7 +2296,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -2265,7 +2316,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `post_processing` @@ -2289,7 +2340,7 @@ For some simple models such a linear regression, there is no post-processing ste ______________________________________________________________________ - + ### method `predict` @@ -2313,7 +2364,7 @@ Predict values for X, in FHE or in the clear. ______________________________________________________________________ - + ### method `quantize_input` @@ -2323,7 +2374,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `SklearnLinearRegressorMixin` @@ -2331,7 +2382,7 @@ A Mixin class for sklearn linear regressors with FHE. 
This class is used to create a linear regressor class that inherits from sklearn.base.RegressorMixin, which essentially gives access to scikit-learn's `score` method for regressors. - + ### method `__init__` @@ -2393,7 +2444,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -2409,7 +2460,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -2425,17 +2476,41 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` ```python -compile(*args, **kwargs) → Circuit +compile( + X: 'Data', + configuration: 'Optional[Configuration]' = None, + artifacts: 'Optional[DebugArtifacts]' = None, + show_mlir: 'bool' = False, + p_error: 'Optional[float]' = None, + global_p_error: 'Optional[float]' = None, + verbose: 'bool' = False +) → Circuit ``` +Compile the model. + +**Args:** + +- `X` (Data): A representative set of input values used for building cryptographic parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it. +- `configuration` (Optional\[Configuration\]): Options to use for compilation. Default to None. +- `artifacts` (Optional\[DebugArtifacts\]): Artifacts information about the compilation process to store for debugging. Default to None. +- `show_mlir` (bool): Indicate if the MLIR graph should be printed during compilation. Default to False. +- `p_error` (Optional\[float\]): Probability of error of a single PBS. A p_error value cannot be given if a global_p_error value is already set. Default to None, which sets this error to a default value. +- `global_p_error` (Optional\[float\]): Probability of error of the full circuit. A global_p_error value cannot be given if a p_error value is already set. This feature is not supported during the FHE simulation mode, meaning the probability is currently set to 0. Default to None, which sets this error to a default value. +- `verbose` (bool): Indicate if compilation information should be printed during compilation. Default to False. + +**Returns:** + +- `Circuit`: The compiled Circuit. + ______________________________________________________________________ - + ### method `dequantize_output` @@ -2445,7 +2520,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -2461,7 +2536,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -2477,7 +2552,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -2493,7 +2568,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -2503,7 +2578,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -2530,7 +2605,34 @@ The Concrete ML and float equivalent fitted estimators. 
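The expanded `compile` signature documented above can be exercised directly on a built-in linear model. The sketch below keeps the default error probabilities; note that `p_error` and `global_p_error` are mutually exclusive, so at most one of them should be set.

```python
import numpy

from concrete.ml.sklearn import LinearRegression

rng = numpy.random.RandomState(0)
X = rng.rand(100, 4)
y = X @ numpy.array([1.0, -2.0, 0.5, 3.0])

model = LinearRegression(n_bits=8)
model.fit(X, y)

# X acts as the representative input set used to build the cryptographic parameters
circuit = model.compile(X, show_mlir=False, verbose=False)

# Once compiled, predictions can be checked quickly with FHE simulation
y_pred = model.predict(X[:3], fhe="simulate")
```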
______________________________________________________________________ - + + +### classmethod `from_sklearn_model` + +```python +from_sklearn_model( + sklearn_model: 'BaseEstimator', + X: 'Data', + n_bits: 'Union[int, Dict[str, int]]' = 8 +) +``` + +Build a FHE-compliant model using a fitted scikit-learn model. + +**Args:** + +- `sklearn_model` (sklearn.base.BaseEstimator): The fitted scikit-learn model to convert. +- `X` (Data): A representative set of input values used for computing quantization parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it. +- `n_bits` (int, Dict\[str, int\]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys with corresponding number of quantization bits so that: + \- op_inputs : number of bits to quantize the input values + \- op_weights: number of bits to quantize the learned parameters Default to 8. + +**Returns:** +The FHE-compliant fitted model. + +______________________________________________________________________ + + ### method `get_sklearn_params` @@ -2552,7 +2654,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -2572,7 +2674,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `post_processing` @@ -2596,7 +2698,7 @@ For some simple models such a linear regression, there is no post-processing ste ______________________________________________________________________ - + ### method `predict` @@ -2620,7 +2722,7 @@ Predict values for X, in FHE or in the clear. ______________________________________________________________________ - + ### method `quantize_input` @@ -2630,7 +2732,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `SklearnLinearClassifierMixin` @@ -2640,7 +2742,7 @@ This class is used to create a linear classifier class that inherits from sklear Additionally, this class adjusts some of the tree-based base class's methods in order to make them compliant with classification workflows. - + ### method `__init__` @@ -2726,7 +2828,7 @@ Using this attribute is deprecated. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -2742,7 +2844,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -2758,17 +2860,41 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` ```python -compile(*args, **kwargs) → Circuit +compile( + X: 'Data', + configuration: 'Optional[Configuration]' = None, + artifacts: 'Optional[DebugArtifacts]' = None, + show_mlir: 'bool' = False, + p_error: 'Optional[float]' = None, + global_p_error: 'Optional[float]' = None, + verbose: 'bool' = False +) → Circuit ``` +Compile the model. + +**Args:** + +- `X` (Data): A representative set of input values used for building cryptographic parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it. +- `configuration` (Optional\[Configuration\]): Options to use for compilation. Default to None. 
+- `artifacts` (Optional\[DebugArtifacts\]): Artifacts information about the compilation process to store for debugging. Default to None. +- `show_mlir` (bool): Indicate if the MLIR graph should be printed during compilation. Default to False. +- `p_error` (Optional\[float\]): Probability of error of a single PBS. A p_error value cannot be given if a global_p_error value is already set. Default to None, which sets this error to a default value. +- `global_p_error` (Optional\[float\]): Probability of error of the full circuit. A global_p_error value cannot be given if a p_error value is already set. This feature is not supported during the FHE simulation mode, meaning the probability is currently set to 0. Default to None, which sets this error to a default value. +- `verbose` (bool): Indicate if compilation information should be printed during compilation. Default to False. + +**Returns:** + +- `Circuit`: The compiled Circuit. + ______________________________________________________________________ - + ### method `decision_function` @@ -2792,7 +2918,7 @@ Predict confidence scores. ______________________________________________________________________ - + ### method `dequantize_output` @@ -2802,7 +2928,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -2818,7 +2944,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -2834,7 +2960,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -2850,7 +2976,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -2860,7 +2986,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -2887,7 +3013,34 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + + +### classmethod `from_sklearn_model` + +```python +from_sklearn_model( + sklearn_model: 'BaseEstimator', + X: 'Data', + n_bits: 'Union[int, Dict[str, int]]' = 8 +) +``` + +Build a FHE-compliant model using a fitted scikit-learn model. + +**Args:** + +- `sklearn_model` (sklearn.base.BaseEstimator): The fitted scikit-learn model to convert. +- `X` (Data): A representative set of input values used for computing quantization parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it. +- `n_bits` (int, Dict\[str, int\]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys with corresponding number of quantization bits so that: + \- op_inputs : number of bits to quantize the input values + \- op_weights: number of bits to quantize the learned parameters Default to 8. + +**Returns:** +The FHE-compliant fitted model. + +______________________________________________________________________ + + ### method `get_sklearn_params` @@ -2909,7 +3062,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -2929,7 +3082,7 @@ Load itself from a dict. 
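The `from_sklearn_model` classmethod documented above converts an already fitted scikit-learn model without re-training it. A minimal sketch for the classifier case, assuming the built-in `LogisticRegression` wrapper:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression as SklearnLogisticRegression

from concrete.ml.sklearn import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Fit a regular scikit-learn model, then build the FHE-compliant equivalent from it
sklearn_model = SklearnLogisticRegression().fit(X, y)
concrete_model = LogisticRegression.from_sklearn_model(sklearn_model, X, n_bits=8)

concrete_model.compile(X)
y_pred = concrete_model.predict(X[:5], fhe="simulate")
```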
______________________________________________________________________ - + ### method `post_processing` @@ -2939,7 +3092,7 @@ post_processing(y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `predict` @@ -2952,7 +3105,7 @@ predict( ______________________________________________________________________ - + ### method `predict_proba` @@ -2965,7 +3118,7 @@ predict_proba( ______________________________________________________________________ - + ### method `quantize_input` @@ -2975,7 +3128,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `SklearnKNeighborsMixin` @@ -2983,7 +3136,7 @@ A Mixin class for sklearn KNeighbors models with FHE. This class inherits from sklearn.base.BaseEstimator in order to have access to scikit-learn's `get_params` and `set_params` methods. - + ### method `__init__` @@ -3043,7 +3196,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -3059,7 +3212,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -3075,7 +3228,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -3085,7 +3238,7 @@ compile(*args, **kwargs) → Circuit ______________________________________________________________________ - + ### method `dequantize_output` @@ -3095,7 +3248,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -3111,7 +3264,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -3127,7 +3280,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -3143,7 +3296,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -3153,7 +3306,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -3180,7 +3333,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -3202,7 +3355,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -3222,7 +3375,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `majority_vote` @@ -3242,7 +3395,7 @@ Determine the most common class among nearest neighborsfor each query. ______________________________________________________________________ - + ### method `post_processing` @@ -3264,7 +3417,7 @@ For KNN, the de-quantization step is not required. 
Because \_inference returns t ______________________________________________________________________ - + ### method `predict` @@ -3277,7 +3430,7 @@ predict( ______________________________________________________________________ - + ### method `quantize_input` @@ -3287,7 +3440,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `SklearnKNeighborsClassifierMixin` @@ -3295,7 +3448,7 @@ A Mixin class for sklearn KNeighbors classifiers with FHE. This class is used to create a KNeighbors classifier class that inherits from SklearnKNeighborsMixin and sklearn.base.ClassifierMixin. By inheriting from sklearn.base.ClassifierMixin, it allows this class to be recognized as a classifier." - + ### method `__init__` @@ -3355,7 +3508,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -3371,7 +3524,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -3387,7 +3540,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -3397,7 +3550,7 @@ compile(*args, **kwargs) → Circuit ______________________________________________________________________ - + ### method `dequantize_output` @@ -3407,7 +3560,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -3423,7 +3576,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -3439,7 +3592,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -3455,7 +3608,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -3465,7 +3618,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -3492,7 +3645,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -3514,7 +3667,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -3534,7 +3687,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `majority_vote` @@ -3554,7 +3707,7 @@ Determine the most common class among nearest neighborsfor each query. ______________________________________________________________________ - + ### method `post_processing` @@ -3576,7 +3729,7 @@ For KNN, the de-quantization step is not required. 
Because \_inference returns t ______________________________________________________________________ - + ### method `predict` @@ -3589,7 +3742,7 @@ predict( ______________________________________________________________________ - + ### method `quantize_input` diff --git a/docs/developer-guide/api/concrete.ml.sklearn.md b/docs/developer-guide/api/concrete.ml.sklearn.md index 226ebfd16..d5491b26c 100644 --- a/docs/developer-guide/api/concrete.ml.sklearn.md +++ b/docs/developer-guide/api/concrete.ml.sklearn.md @@ -19,118 +19,3 @@ Import sklearn models. - **svm** - **tree** - **xgb** - -______________________________________________________________________ - - - -## function `get_sklearn_models` - -```python -get_sklearn_models() -``` - -Return the list of available models in Concrete ML. - -**Returns:** -the lists of models in Concrete ML - -______________________________________________________________________ - - - -## function `get_sklearn_linear_models` - -```python -get_sklearn_linear_models( - classifier: bool = True, - regressor: bool = True, - str_in_class_name: List[str] = None -) -``` - -Return the list of available linear models in Concrete ML. - -**Args:** - -- `classifier` (bool): whether you want classifiers or not -- `regressor` (bool): whether you want regressors or not -- `str_in_class_name` (List\[str\]): if not None, only return models with the given string or list of strings as a substring in their class name - -**Returns:** -the lists of linear models in Concrete ML - -______________________________________________________________________ - - - -## function `get_sklearn_tree_models` - -```python -get_sklearn_tree_models( - classifier: bool = True, - regressor: bool = True, - str_in_class_name: List[str] = None -) -``` - -Return the list of available tree models in Concrete ML. - -**Args:** - -- `classifier` (bool): whether you want classifiers or not -- `regressor` (bool): whether you want regressors or not -- `str_in_class_name` (List\[str\]): if not None, only return models with the given string or list of strings as a substring in their class name - -**Returns:** -the lists of tree models in Concrete ML - -______________________________________________________________________ - - - -## function `get_sklearn_neural_net_models` - -```python -get_sklearn_neural_net_models( - classifier: bool = True, - regressor: bool = True, - str_in_class_name: List[str] = None -) -``` - -Return the list of available neural net models in Concrete ML. - -**Args:** - -- `classifier` (bool): whether you want classifiers or not -- `regressor` (bool): whether you want regressors or not -- `str_in_class_name` (List\[str\]): if not None, only return models with the given string or list of strings as a substring in their class name - -**Returns:** -the lists of neural net models in Concrete ML - -______________________________________________________________________ - - - -## function `get_sklearn_neighbors_models` - -```python -get_sklearn_neighbors_models( - classifier: bool = True, - regressor: bool = True, - str_in_class_name: List[str] = None -) -``` - -Return the list of available neighbor models in Concrete ML. 
- -**Args:** - -- `classifier` (bool): whether you want classifiers or not -- `regressor` (bool): whether you want regressors or not -- `str_in_class_name` (List\[str\]): if not None, only return models with the given string or list of strings as a substring in their class name - -**Returns:** -the lists of neighbor models in Concrete ML diff --git a/docs/developer-guide/api/concrete.ml.torch.compile.md b/docs/developer-guide/api/concrete.ml.torch.compile.md index 1a1750a2f..cc1761cd4 100644 --- a/docs/developer-guide/api/concrete.ml.torch.compile.md +++ b/docs/developer-guide/api/concrete.ml.torch.compile.md @@ -15,6 +15,28 @@ ______________________________________________________________________ +## function `has_any_qnn_layers` + +```python +has_any_qnn_layers(torch_model: Module) → bool +``` + +Check if a torch model has QNN layers. + +This is useful to check if a model is a QAT model. + +**Args:** + +- `torch_model` (torch.nn.Module): a torch model + +**Returns:** + +- `bool`: whether this torch model contains any QNN layer. + +______________________________________________________________________ + + + ## function `convert_torch_tensor_or_numpy_array_to_numpy_array` ```python @@ -35,7 +57,7 @@ Convert a torch tensor or a numpy array to a numpy array. ______________________________________________________________________ - + ## function `build_quantized_module` @@ -67,7 +89,7 @@ Take a model in torch or ONNX, turn it to numpy, quantize its inputs / weights / ______________________________________________________________________ - + ## function `compile_torch_model` @@ -79,11 +101,12 @@ compile_torch_model( configuration: Optional[Configuration] = None, artifacts: Optional[DebugArtifacts] = None, show_mlir: bool = False, - n_bits=8, + n_bits: Union[int, Dict[str, int]] = 8, rounding_threshold_bits: Optional[int] = None, p_error: Optional[float] = None, global_p_error: Optional[float] = None, - verbose: bool = False + verbose: bool = False, + inputs_encryption_status: Optional[Sequence[str]] = None ) → QuantizedModule ``` @@ -104,6 +127,7 @@ Take a model in torch, turn it to numpy, quantize its inputs / weights / outputs - `p_error` (Optional\[float\]): probability of error of a single PBS - `global_p_error` (Optional\[float\]): probability of error of the full circuit. In FHE simulation `global_p_error` is set to 0 - `verbose` (bool): whether to show compilation information +- `inputs_encryption_status` (Optional\[Sequence\[str\]\]): encryption status ('clear', 'encrypted') for each input. By default all arguments will be encrypted. **Returns:** @@ -111,7 +135,7 @@ Take a model in torch, turn it to numpy, quantize its inputs / weights / outputs ______________________________________________________________________ - + ## function `compile_onnx_model` @@ -123,11 +147,12 @@ compile_onnx_model( configuration: Optional[Configuration] = None, artifacts: Optional[DebugArtifacts] = None, show_mlir: bool = False, - n_bits: Union[int, Dict] = 8, + n_bits: Union[int, Dict[str, int]] = 8, rounding_threshold_bits: Optional[int] = None, p_error: Optional[float] = None, global_p_error: Optional[float] = None, - verbose: bool = False + verbose: bool = False, + inputs_encryption_status: Optional[Sequence[str]] = None ) → QuantizedModule ``` @@ -148,6 +173,7 @@ Take a model in torch, turn it to numpy, quantize its inputs / weights / outputs - `p_error` (Optional\[float\]): probability of error of a single PBS - `global_p_error` (Optional\[float\]): probability of error of the full circuit. 
In FHE simulation `global_p_error` is set to 0 - `verbose` (bool): whether to show compilation information +- `inputs_encryption_status` (Optional\[Sequence\[str\]\]): encryption status ('clear', 'encrypted') for each input. By default all arguments will be encrypted. **Returns:** @@ -155,7 +181,7 @@ Take a model in torch, turn it to numpy, quantize its inputs / weights / outputs ______________________________________________________________________ - + ## function `compile_brevitas_qat_model` @@ -163,7 +189,7 @@ ______________________________________________________________________ compile_brevitas_qat_model( torch_model: Module, torch_inputset: Union[Tensor, ndarray, Tuple[Union[Tensor, ndarray], ]], - n_bits: Optional[int, dict] = None, + n_bits: Optional[int, Dict[str, int]] = None, configuration: Optional[Configuration] = None, artifacts: Optional[DebugArtifacts] = None, show_mlir: bool = False, @@ -171,7 +197,8 @@ compile_brevitas_qat_model( p_error: Optional[float] = None, global_p_error: Optional[float] = None, output_onnx_file: Union[NoneType, Path, str] = None, - verbose: bool = False + verbose: bool = False, + inputs_encryption_status: Optional[Sequence[str]] = None ) → QuantizedModule ``` @@ -192,6 +219,7 @@ The torch_model parameter is a subclass of torch.nn.Module that uses quantized o - `global_p_error` (Optional\[float\]): probability of error of the full circuit. In FHE simulation `global_p_error` is set to 0 - `output_onnx_file` (str): temporary file to store ONNX model. If None a temporary file is generated - `verbose` (bool): whether to show compilation information +- `inputs_encryption_status` (Optional\[Sequence\[str\]\]): encryption status ('clear', 'encrypted') for each input. By default all arguments will be encrypted. **Returns:** diff --git a/docs/developer-guide/api/concrete.ml.torch.hybrid_model.md b/docs/developer-guide/api/concrete.ml.torch.hybrid_model.md index 323256541..0ff3e0901 100644 --- a/docs/developer-guide/api/concrete.ml.torch.hybrid_model.md +++ b/docs/developer-guide/api/concrete.ml.torch.hybrid_model.md @@ -12,7 +12,7 @@ Implement the conversion of a torch model to a hybrid fhe/torch inference. ______________________________________________________________________ - + ## function `tuple_to_underscore_str` @@ -32,7 +32,27 @@ Convert a tuple to a string representation. ______________________________________________________________________ - + + +## function `underscore_str_to_tuple` + +```python +underscore_str_to_tuple(tup: str) → Tuple +``` + +Convert a a string representation of a tuple to a tuple. + +**Args:** + +- `tup` (str): a string representing the tuple + +**Returns:** + +- `Tuple`: a tuple to change into string representation + +______________________________________________________________________ + + ## function `convert_conv1d_to_linear` @@ -52,7 +72,7 @@ Convert all Conv1D layers in a module or a Conv1D layer itself to nn.Linear. ______________________________________________________________________ - + ## class `HybridFHEMode` @@ -60,13 +80,13 @@ Simple enum for different modes of execution of HybridModel. ______________________________________________________________________ - + ## class `RemoteModule` A wrapper class for the modules to be done remotely with FHE. 
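The torch compilation entry points documented above, including the new `inputs_encryption_status` argument, can be sketched as follows. The model, the quantization settings and the single-element status list (one entry per model input) are illustrative assumptions.

```python
import torch

from concrete.ml.torch.compile import compile_torch_model

# A small float model used as a stand-in
model = torch.nn.Sequential(
    torch.nn.Linear(10, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)
inputset = torch.randn(100, 10)

quantized_module = compile_torch_model(
    model,
    inputset,
    n_bits=4,
    rounding_threshold_bits=6,
    inputs_encryption_status=["encrypted"],  # by default, all inputs are encrypted anyway
)

# Run the compiled module on clear data with FHE simulation
y = quantized_module.forward(inputset[:1].numpy(), fhe="simulate")
```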
- + ### method `__init__` @@ -82,12 +102,12 @@ __init__( ______________________________________________________________________ - + ### method `forward` ```python -forward(x: Tensor) → Tensor +forward(x: Tensor) → Union[Tensor, QuantTensor] ``` Forward pass of the remote module. @@ -113,7 +133,7 @@ To change the behavior of this forward function one must change the fhe_local_mo ______________________________________________________________________ - + ### method `init_fhe_client` @@ -137,7 +157,7 @@ Set the clients keys. ______________________________________________________________________ - + ### method `remote_call` @@ -157,7 +177,7 @@ Call the remote server to get the private module inference. ______________________________________________________________________ - + ## class `HybridFHEModel` @@ -173,7 +193,7 @@ This is done by converting targeted modules by RemoteModules. This will modify t - `model_name` (str): Model name identifier - `verbose` (int): If logs should be printed when interacting with FHE server - + ### method `__init__` @@ -189,14 +209,14 @@ __init__( ______________________________________________________________________ - + ### method `compile_model` ```python compile_model( x: Tensor, - n_bits: int = 8, + n_bits: Union[int, Dict[str, int]] = 8, rounding_threshold_bits: Optional[int] = None, p_error: Optional[float] = None, configuration: Optional[Configuration] = None @@ -215,7 +235,7 @@ Compiles the specific layers to FHE. ______________________________________________________________________ - + ### method `init_client` @@ -235,7 +255,7 @@ Initialize client for all remote modules. ______________________________________________________________________ - + ### method `publish_to_hub` @@ -247,7 +267,7 @@ Allow the user to push the model and FHE required files to HF Hub. ______________________________________________________________________ - + ### method `save_and_clear_private_info` @@ -261,3 +281,264 @@ Save the PyTorch model to the provided path and also saves the corresponding FHE - `path` (Path): The directory where the model and the FHE circuit will be saved. - `via_mlir` (bool): if fhe circuits should be serialized using via_mlir option useful for cross-platform (compile on one architecture and run on another) + +______________________________________________________________________ + + + +### method `set_fhe_mode` + +```python +set_fhe_mode(hybrid_fhe_mode: Union[str, HybridFHEMode]) +``` + +Set Hybrid FHE mode for all remote modules. + +**Args:** + +- `hybrid_fhe_mode` (Union\[str, HybridFHEMode\]): Hybrid FHE mode to set to all remote modules. + +______________________________________________________________________ + + + +## class `LoggerStub` + +Placeholder type for a typical logger like the one from loguru. + +______________________________________________________________________ + + + +### method `info` + +```python +info(msg: str) +``` + +Placholder function for logger.info. + +**Args:** + +- `msg` (str): the message to output + +______________________________________________________________________ + + + +## class `HybridFHEModelServer` + +Hybrid FHE Model Server. + +This is a class object to server FHE models serialized using HybridFHEModel. 
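`HybridFHEModelServer` serves modules that were produced on the client side with `HybridFHEModel`. A hedged sketch of that client-side step is shown below; the `module_names` keyword and the `"fc1"` submodule name are assumptions about the wrapped model.

```python
from pathlib import Path

import torch

from concrete.ml.torch.hybrid_model import HybridFHEModel

# Any torch model with a named submodule that should run remotely under FHE
model = torch.nn.Sequential()
model.add_module("fc1", torch.nn.Linear(16, 8))
model.add_module("act", torch.nn.ReLU())
model.add_module("fc2", torch.nn.Linear(8, 2))

x = torch.randn(10, 16)

# The selected submodules are replaced by RemoteModule instances
hybrid_model = HybridFHEModel(model, module_names=["fc1"])
hybrid_model.compile_model(x, n_bits=8)

# Save the model with the private information of the remote modules removed
hybrid_model.save_and_clear_private_info(Path("hybrid_deployment"), via_mlir=True)
```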
+ + + +### method `__init__` + +```python +__init__(key_path: Path, model_dir: Path, logger: Optional[LoggerStub]) +``` + +______________________________________________________________________ + + + +### method `add_key` + +```python +add_key(key: bytes, model_name: str, module_name: str, input_shape: str) +``` + +Add public key. + +**Arguments:** + +- `key` (bytes): public key +- `model_name` (str): model name +- `module_name` (str): name of the module in the model +- `input_shape` (str): input shape of said module + +**Returns:** +Dict\[str, str\] +\- uid: uid a personal uid + +______________________________________________________________________ + + + +### method `check_inputs` + +```python +check_inputs( + model_name: str, + module_name: Optional[str], + input_shape: Optional[str] +) +``` + +Check that the given configuration exist in the compiled models folder. + +**Args:** + +- `model_name` (str): name of the model +- `module_name` (Optional\[str\]): name of the module in the model +- `input_shape` (Optional\[str\]): input shape of the module + +**Raises:** + +- `ValueError`: if the given configuration does not exist. + +______________________________________________________________________ + + + +### method `compute` + +```python +compute( + model_input: bytes, + uid: str, + model_name: str, + module_name: str, + input_shape: str +) +``` + +Compute the circuit over encrypted input. + +**Arguments:** + +- `model_input` (bytes): input of the circuit +- `uid` (str): uid of the public key to use +- `model_name` (str): model name +- `module_name` (str): name of the module in the model +- `input_shape` (str): input shape of said module + +**Returns:** + +- `bytes`: the result of the circuit + +______________________________________________________________________ + + + +### method `dump_key` + +```python +dump_key(key_bytes: bytes, uid: Union[UUID, str]) → None +``` + +Dump a public key on the file system. + +**Args:** + +- `key_bytes` (bytes): public serialized key +- `uid` (Union\[str, uuid.UUID\]): uid of the public key to dump + +______________________________________________________________________ + + + +### method `get_circuit` + +```python +get_circuit(model_name, module_name, input_shape) +``` + +Get circuit based on model name, module name and input shape. + +**Args:** + +- `model_name` (str): name of the model +- `module_name` (str): name of the module in the model +- `input_shape` (str): input shape of the module + +**Returns:** + +- `FHEModelServer`: a fhe model server of the given module of the given model for the given shape + +______________________________________________________________________ + + + +### method `get_client` + +```python +get_client(model_name: str, module_name: str, input_shape: str) +``` + +Get client. + +**Args:** + +- `model_name` (str): name of the model +- `module_name` (str): name of the module in the model +- `input_shape` (str): input shape of the module + +**Returns:** + +- `Path`: the path to the correct client + +**Raises:** + +- `ValueError`: if client couldn't be found + +______________________________________________________________________ + + + +### method `list_modules` + +```python +list_modules(model_name: str) +``` + +List all modules in a model. 
+ +**Args:** + +- `model_name` (str): name of the model + +**Returns:** +Dict\[str, Dict\[str, Dict\]\] + +______________________________________________________________________ + + + +### method `list_shapes` + +```python +list_shapes(model_name: str, module_name: str) +``` + +List all modules in a model. + +**Args:** + +- `model_name` (str): name of the model +- `module_name` (str): name of the module in the model + +**Returns:** +Dict\[str, Dict\] + +______________________________________________________________________ + + + +### method `load_key` + +```python +load_key(uid: Union[str, UUID]) → bytes +``` + +Load a public key from the file system. + +**Args:** + +- `uid` (Union\[str, uuid.UUID\]): uid of the public key to load + +**Returns:** + +- `bytes`: the bytes of the public key
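Finally, a server-side sketch tying the methods above together. This only shows the shape of the flow: the byte strings are placeholders, the `input_shape` string format is an assumption, and in practice the evaluation keys and the encrypted input are produced by the FHE client set up with `init_fhe_client`.

```python
from pathlib import Path

from concrete.ml.torch.hybrid_model import HybridFHEModelServer

# model_dir is assumed to contain modules saved with HybridFHEModel.save_and_clear_private_info
server = HybridFHEModelServer(
    key_path=Path("server_keys"),
    model_dir=Path("hybrid_deployment"),
    logger=None,
)

# Placeholders: real values come from the client
evaluation_keys = b"..."
encrypted_input = b"..."

response = server.add_key(
    key=evaluation_keys, model_name="model", module_name="fc1", input_shape="(10, 16)"
)
uid = response["uid"]

encrypted_result = server.compute(
    model_input=encrypted_input,
    uid=uid,
    model_name="model",
    module_name="fc1",
    input_shape="(10, 16)",
)
```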