
Add chapter about CPU features dispatching into docs #2945

Merged 15 commits · Oct 29, 2024 · Changes from 10 commits
5 changes: 5 additions & 0 deletions CONTRIBUTING.md
@@ -85,6 +85,11 @@ For your convenience we also added [coding guidelines](http://oneapi-src.github.

## Custom Components

### CPU Features Dispatching

oneDAL provides multiarchitecture binaries that contain code for multiple variants of CPU instruction set architectures. When run on a certain hardware type, oneDAL chooses the code path most suitable for that particular hardware to achieve better performance.
Contributor:
nit: The term architecture is overloaded. Can we find more precise language here? Different ISA extensions (e.g. avx2, avx512) can be supported in the same binary, but it should be made clear that it's only variations on the same base ISA that are allowed. That is to cover adding documentation for Arm and RISC-V support in the future.

What do you think of the following phrasing?

oneDAL provides binaries that can contain code targeting different architectural extensions of a base instruction set architecture (ISA). For example, code paths can exist for SSE2, AVX2, AVX512, etc, on top of the x86-64 base architecture. Specialisations can exist for specific implementations (e.g. skylake-x, nehalem, etc). When run on a specific hardware implementation, oneDAL chooses the code path which is most suitable for that implementation.

I still don't think that is ideal, but I hope it illustrates the differentiation between ISA extension and ISA that I want to make clearer

Contributor Author:

This is a good observation. Currently the chapter does not make the distinction between an ISA in the broader sense (like x86, RISC-V, Arm, ...) and ISA extensions.
I will update the docs in accordance with your suggestion. It is hard for me to come up with a better wording for ISA and ISA extensions as well.

Contributors should leverage the [CPU Features Dispatching](http://oneapi-src.github.io/oneDAL/contribution/cpu_features.html) mechanism to implement algorithm code that can perform well on various hardware types.

### Threading Layer

In the source code of the algorithms, oneDAL does not use threading primitives directly. All the threading primitives used within oneDAL form the [threading layer](http://oneapi-src.github.io/oneDAL/contribution/threading.html). Contributors should leverage the primitives from this layer to implement parallel algorithms.
218 changes: 218 additions & 0 deletions docs/source/contribution/cpu_features.rst
@@ -0,0 +1,218 @@
.. ******************************************************************************
.. * Copyright contributors to the oneDAL project
.. *
.. * Licensed under the Apache License, Version 2.0 (the "License");
.. * you may not use this file except in compliance with the License.
.. * You may obtain a copy of the License at
.. *
.. * http://www.apache.org/licenses/LICENSE-2.0
.. *
.. * Unless required by applicable law or agreed to in writing, software
.. * distributed under the License is distributed on an "AS IS" BASIS,
.. * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
.. * See the License for the specific language governing permissions and
.. * limitations under the License.
.. *******************************************************************************/

.. highlight:: cpp

CPU Features Dispatching
^^^^^^^^^^^^^^^^^^^^^^^^

For each algorithm, |short_name| provides several code paths for x86-64-compatible instruction
set architectures.

The following architectures are currently supported:

- Intel\ |reg|\ Streaming SIMD Extensions 2 (Intel\ |reg|\ SSE2)
- Intel\ |reg|\ Streaming SIMD Extensions 4.2 (Intel\ |reg|\ SSE4.2)
- Intel\ |reg|\ Advanced Vector Extensions 2 (Intel\ |reg|\ AVX2)
- Intel\ |reg|\ Advanced Vector Extensions 512 (Intel\ |reg|\ AVX-512)

The particular code path is chosen at runtime based on the underlying hardware characteristics.
Contributor:

Suggested change:
- The particular code path is chosen at runtime based on the underlying hardware characteristics.
+ The particular code path is chosen at runtime based on underlying hardware properties.


This chapter describes how the code is organized to support this variety of instruction sets.
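
To illustrate the idea, the runtime choice can be pictured as a switch over the detected CPU type. The following is a conceptual sketch only, not the actual |short_name| dispatching code; the ``compute*`` function names are hypothetical, and ``daal::CpuType`` is the enumeration of supported instruction sets:

::

    /* Conceptual sketch: select the most suitable kernel at runtime.
       The compute* functions are hypothetical. */
    services::Status dispatchCompute(const daal::CpuType cpu)
    {
        switch (cpu)
        {
        case daal::avx512: return computeAVX512(); /* AVX-512 code path */
        case daal::avx2:   return computeAVX2();   /* AVX2 code path    */
        case daal::sse42:  return computeSSE42();  /* SSE4.2 code path  */
        default:           return computeSSE2();   /* SSE2 baseline     */
        }
    }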

Algorithm Implementation Options
********************************

In addition to the instruction set architectures, an algorithm in |short_name| may have various
implementation options. Below is a description of these options to help you better understand
the |short_name| code structure and conventions.

Computational Tasks
-------------------

An algorithm might have various tasks to compute. The most common options are:

- `Classification <https://oneapi-src.github.io/oneDAL/onedal/glossary.html#term-Classification>`_,
- `Regression <https://oneapi-src.github.io/oneDAL/onedal/glossary.html#term-Regression>`_.

Computational Stages
--------------------

An algorithm might have ``training`` and ``inference`` computation stages aimed
at training a model on the input dataset and computing the inference results, respectively.

Computational Methods
---------------------

An algorithm can support several methods for the same type of computations.
For example, the kNN algorithm supports
`brute_force <https://oneapi-src.github.io/oneDAL/onedal/algorithms/nearest-neighbors/knn.html#knn-t-math-brute-force>`_
and `kd_tree <https://oneapi-src.github.io/oneDAL/onedal/algorithms/nearest-neighbors/knn.html#knn-t-math-kd-tree>`_
methods for algorithm training and inference.

Computational Modes
-------------------

|short_name| can provide several computational modes for an algorithm.
See `Computational Modes <https://oneapi-src.github.io/oneDAL/onedal/programming-model/computational-modes.html>`_
chapter for details.

Folders and Files
*****************

Suppose that you are working on some algorithm ``Abc`` in |short_name|.

The part of the implementation of this algorithm that runs on CPU should be located in the
`cpp/daal/src/algorithms/abc` folder.

Suppose that it provides:

- ``classification`` and ``regression`` learning tasks;
- ``training`` and ``inference`` stages;
- ``method1`` and ``method2`` for the ``training`` stage and only ``method1`` for ``inference`` stage;
- only ``batch`` computational mode.

Then the `cpp/daal/src/algorithms/abc` folder should contain at least the following files:

::

cpp/daal/src/algorithms/abc/
|-- abc_classification_predict_method1_batch_fpt_cpu.cpp
|-- abc_classification_predict_method1_impl.i
|-- abc_classification_predict_kernel.h
|-- abc_classification_train_method1_batch_fpt_cpu.cpp
|-- abc_classification_train_method2_batch_fpt_cpu.cpp
|-- abc_classification_train_method1_impl.i
|-- abc_classification_train_method2_impl.i
|-- abc_classification_train_kernel.h
    |-- abc_regression_predict_method1_batch_fpt_cpu.cpp
|-- abc_regression_predict_method1_impl.i
|-- abc_regression_predict_kernel.h
|-- abc_regression_train_method1_batch_fpt_cpu.cpp
|-- abc_regression_train_method2_batch_fpt_cpu.cpp
|-- abc_regression_train_method1_impl.i
|-- abc_regression_train_method2_impl.i
|-- abc_regression_train_kernel.h

An alternative variant of the folder structure, which avoids storing too many files within a single folder, could be:

::

cpp/daal/src/algorithms/abc/
|-- classification/
| |-- abc_classification_predict_method1_batch_fpt_cpu.cpp
| |-- abc_classification_predict_method1_impl.i
| |-- abc_classification_predict_kernel.h
| |-- abc_classification_train_method1_batch_fpt_cpu.cpp
| |-- abc_classification_train_method2_batch_fpt_cpu.cpp
| |-- abc_classification_train_method1_impl.i
| |-- abc_classification_train_method2_impl.i
| |-- abc_classification_train_kernel.h
    |-- regression/
        |-- abc_regression_predict_method1_batch_fpt_cpu.cpp
        |-- abc_regression_predict_method1_impl.i
        |-- abc_regression_predict_kernel.h
        |-- abc_regression_train_method1_batch_fpt_cpu.cpp
        |-- abc_regression_train_method2_batch_fpt_cpu.cpp
        |-- abc_regression_train_method1_impl.i
        |-- abc_regression_train_method2_impl.i
        |-- abc_regression_train_kernel.h

The names of the files stay the same in this case, just the folder layout differs.

The following subsections describe the purpose and contents of each file, using the classification
training task as an example. For other types of tasks the structure of the code is similar.

\*_kernel.h
-----------

Those files contain the definitions of one or several template classes that define member functions that
do the actual computations. Here is a variant of the ``Abc`` training algorithm kernel definition in the file
`abc_classification_train_kernel.h`:

Contributor:

nit: Don't start the section with a pronoun. Put a full description of what you are describing. Maybe:

In the directory structure introduced in the last section, there are files with a `_kernel.h` suffix. These contain the definitions of ...

.. include:: ../includes/cpu_features/abc-classification-train-kernel.rst

Typical template parameters are:

- ``algorithmFPType`` Data type to use in intermediate computations for the algorithm,
``float`` or ``double``.
Contributor (on lines +166 to +167):

Suggested change:
- - ``algorithmFPType`` Data type to use in intermediate computations for the algorithm,
-   ``float`` or ``double``.
+ - ``algorithmFPType`` Data type to use in intermediate computations for the algorithm.
+   Must be one of ``float`` or ``double``.

- ``method`` Computational method of the algorithm: ``method1`` or ``method2`` in the case of ``Abc``.
- ``cpu`` Version of the CPU-specific implementation of the algorithm, ``daal::CpuType``.

Implementations for different methods are usually defined using partial class template specialization.

\*_impl.i
---------

Those files contain the implementations of the computational functions defined in `*_kernel.h` files.
Here is a variant of the ``method1`` implementation for the ``Abc`` training algorithm that does not contain any
instruction-set-specific code. The implementation is located in the file `abc_classification_train_method1_impl.i`:

Contributor:

nit: Don't start the section with a pronoun. See the similar comment at the start of the \*_kernel.h section.

.. include:: ../includes/cpu_features/abc-classification-train-method1-impl.rst

Although the implementation of the ``method1`` does not contain any instruction set specific code, it is
expected that the developers leverage SIMD related macros available in |short_name|.
For example, ``PRAGMA_IVDEP``, ``PRAGMA_VECTOR_ALWAYS``, ``PRAGMA_VECTOR_ALIGNED`` and others pragmas defined in
`service_defines.h <https://github.com/oneapi-src/oneDAL/blob/main/cpp/daal/src/services/service_defines.h>`_.
This will guide the compiler to generate more efficient code for the target architecture.

Contributor:

Suggested change:
- For example, ``PRAGMA_IVDEP``, ``PRAGMA_VECTOR_ALWAYS``, ``PRAGMA_VECTOR_ALIGNED`` and others pragmas defined in
+ For example, ``PRAGMA_IVDEP``, ``PRAGMA_VECTOR_ALWAYS``, ``PRAGMA_VECTOR_ALIGNED`` and other pragmas defined in
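
For example, a simple vectorizable loop in an ``_impl.i`` file could use these macros as follows (a minimal sketch; the surrounding kernel code and the declarations of ``a``, ``b``, ``c``, and ``n`` are omitted):

::

    #include "src/services/service_defines.h" /* PRAGMA_IVDEP, PRAGMA_VECTOR_ALWAYS */

    /* Element-wise sum; the pragmas hint to the compiler that the loop
       carries no dependencies and can always be vectorized. */
    PRAGMA_IVDEP
    PRAGMA_VECTOR_ALWAYS
    for (size_t i = 0; i < n; ++i)
    {
        c[i] = a[i] + b[i];
    }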

Consider that the implementation of the ``method2`` for the same algorithm will be different and will contain
AVX-512-specific code located in ``cpuSpecificCode`` function. Note that all the compiler-specific code should
be placed under compiler-specific defines. For example, the Intel\ |reg|\ oneAPI DPC++/C++ Compiler specific code
should be placed under ``DAAL_INTEL_CPP_COMPILER`` define. All the CPU-specific code should be placed under
CPU-specific defines. For example, the AVX-512 specific code should be placed under
``__CPUID__(DAAL_CPU) == __avx512__``.
Contributor:

Suggested change:
- be placed under compiler-specific defines. For example, the Intel\ |reg|\ oneAPI DPC++/C++ Compiler specific code
- should be placed under ``DAAL_INTEL_CPP_COMPILER`` define. All the CPU-specific code should be placed under
- CPU-specific defines. For example, the AVX-512 specific code should be placed under
- ``__CPUID__(DAAL_CPU) == __avx512__``.
+ be gated by values of compiler-specific defines. For example, the Intel\ |reg|\ oneAPI DPC++/C++ Compiler specific code
+ should be gated by the existence of the ``DAAL_INTEL_CPP_COMPILER`` define. All the CPU-specific code should be gated on the value of
+ CPU-specific defines. For example, the AVX-512 specific code should be gated on the value
+ ``__CPUID__(DAAL_CPU) == __avx512__``.


Then the implementation of the ``method2`` in the file `abc_classification_train_method2_impl.i` will look like:

.. include:: ../includes/cpu_features/abc-classification-train-method2-impl.rst
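
Schematically, the gating in such an implementation follows this pattern (a sketch; ``cpuSpecificCode`` is the function mentioned above, while ``genericCode`` is a hypothetical fallback):

::

    #if defined(DAAL_INTEL_CPP_COMPILER) && (__CPUID__(DAAL_CPU) == __avx512__)
        /* Compiled only by the Intel compiler and only for the AVX-512 code path */
        cpuSpecificCode(/* arguments */);
    #else
        /* Portable fallback used for all other compilers and code paths */
        genericCode(/* arguments */);
    #endif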

\*_fpt_cpu.cpp
--------------

Those files contain the instantiations of the template classes defined in `*_kernel.h` files.
The instantiation of the ``Abc`` training algorithm kernel for ``method1`` is located in the file
`abc_classification_train_method1_batch_fpt_cpu.cpp`:

Contributor:

nit: Don't start a section with a pronoun. See the similar comment at the start of the \*_kernel.h section.

.. include:: ../includes/cpu_features/abc-classification-train-method1-fpt-cpu.rst
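
Schematically, such a file includes the kernel header and the implementation and instantiates the class template with the ``DAAL_FPTYPE`` and ``DAAL_CPU`` macros (a sketch; the include above shows the actual variant):

::

    #include "src/algorithms/abc/abc_classification_train_kernel.h"
    #include "src/algorithms/abc/abc_classification_train_method1_impl.i"

    namespace daal::algorithms::abc::training::internal
    {
    /* DAAL_FPTYPE and DAAL_CPU are replaced with concrete data type
       and CPU type values at build time, as described below */
    template class AbcClassificationTrainingKernel<DAAL_FPTYPE, method1, DAAL_CPU>;
    } // namespace daal::algorithms::abc::training::internal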

`_fpt_cpu.cpp` files are not compiled directly into object files. First, multiple copies of those files
are made, replacing the ``fpt`` (which stands for 'floating point type') and ``cpu`` parts of the file name,
as well as the corresponding ``DAAL_FPTYPE`` and ``DAAL_CPU`` macros, with the actual data type and CPU type values.
Then the resulting files are compiled with appropriate CPU-specific optimization compiler options.
Contributor:

Suggested change:
- Then the resulting files are compiled with appropriate CPU-specific optimization compiler options.
+ Then the resulting files are compiled with appropriate CPU-specific compiler optimization options.


The values for ``fpt`` file name part replacement are:

- ``flt`` for ``float`` data type, and
- ``dbl`` for ``double`` data type.

The values for ``DAAL_FPTYPE`` macro replacement are ``float`` and ``double``, respectively.

The values for ``cpu`` file name part replacement are:

- ``nrh`` for Intel\ |reg|\ SSE2 architecture, which stands for Northwood,
- ``neh`` for Intel\ |reg|\ SSE4.2 architecture, which stands for Nehalem,
- ``hsw`` for Intel\ |reg|\ AVX2 architecture, which stands for Haswell,
- ``skx`` for Intel\ |reg|\ AVX-512 architecture, which stands for Skylake-X.

The values for ``DAAL_CPU`` macro replacement are:

- ``__sse2__`` for Intel\ |reg|\ SSE2 architecture,
- ``__sse42__`` for Intel\ |reg|\ SSE4.2 architecture,
- ``__avx2__`` for Intel\ |reg|\ AVX2 architecture,
- ``__avx512__`` for Intel\ |reg|\ AVX-512 architecture.
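
For example, applying these replacement rules to a single source file yields one compiled variant per data type and CPU type pair:

::

    abc_classification_train_method1_batch_fpt_cpu.cpp
    |-- abc_classification_train_method1_batch_flt_nrh.cpp  (float,  Intel SSE2)
    |-- abc_classification_train_method1_batch_flt_neh.cpp  (float,  Intel SSE4.2)
    |-- abc_classification_train_method1_batch_flt_hsw.cpp  (float,  Intel AVX2)
    |-- abc_classification_train_method1_batch_flt_skx.cpp  (float,  Intel AVX-512)
    |-- abc_classification_train_method1_batch_dbl_nrh.cpp  (double, Intel SSE2)
    |-- abc_classification_train_method1_batch_dbl_neh.cpp  (double, Intel SSE4.2)
    |-- abc_classification_train_method1_batch_dbl_hsw.cpp  (double, Intel AVX2)
    |-- abc_classification_train_method1_batch_dbl_skx.cpp  (double, Intel AVX-512)
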
26 changes: 13 additions & 13 deletions docs/source/contribution/threading.rst
@@ -19,20 +19,20 @@
Threading Layer
^^^^^^^^^^^^^^^

- oneDAL uses Intel\ |reg|\ oneAPI Threading Building Blocks (Intel\ |reg|\ oneTBB) to do parallel
+ |short_name| uses Intel\ |reg|\ oneAPI Threading Building Blocks (Intel\ |reg|\ oneTBB) to do parallel
computations on CPU.

- But oneTBB is not used in the code of oneDAL algorithms directly. The algorithms rather
+ But oneTBB is not used in the code of |short_name| algorithms directly. The algorithms rather
Contributor:

Don't start a paragraph with a conjunction ('but'). Combine the paragraphs:

... computations on CPU. oneTBB is not used in the code ...
use custom primitives that either wrap oneTBB functionality or are in-house developed.
- Those primitives form oneDAL's threading layer.
+ Those primitives form |short_name|'s threading layer.

This is done in order not to be dependent on possible oneTBB API changes and even
on the particular threading technology like oneTBB, C++11 standard threads, etc.

The API of the layer is defined in
`threading.h <https://github.com/oneapi-src/oneDAL/blob/main/cpp/daal/src/threading/threading.h>`_.
- Please be aware that the threading API is not a part of oneDAL product API.
- This is the product internal API that aimed to be used only by oneDAL developers, and can be changed at any time
+ Please be aware that the threading API is not a part of |short_name| product API.
+ This is the product internal API that aimed to be used only by |short_name| developers, and can be changed at any time
without any prior notification.

This chapter describes common parallel patterns and primitives of the threading layer.
@@ -46,7 +46,7 @@ Here is a variant of sequential implementation:

.. include:: ../includes/threading/sum-sequential.rst

- There are several options available in the threading layer of oneDAL to let the iterations of this code
+ There are several options available in the threading layer of |short_name| to let the iterations of this code
run in parallel.
One of the options is to use ``daal::threader_for`` as shown here:
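
(The corresponding include is collapsed in this diff view; a minimal sketch of the ``daal::threader_for`` pattern, assuming the threading-layer API from `threading.h` and a hypothetical ``doSomeWork`` function, is:)

::

    #include "src/threading/threading.h"

    /* Each invocation of the lambda processes one index i;
       iterations may run on different threads. */
    daal::threader_for(n, n, [&](size_t i) {
        partialResults[i] = doSomeWork(data, i);
    });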

@@ -59,10 +59,10 @@ Blocking
--------

To have more control over the parallel execution and to increase
- `cache locality <https://en.wikipedia.org/wiki/Locality_of_reference>`_ oneDAL usually splits
+ `cache locality <https://en.wikipedia.org/wiki/Locality_of_reference>`_ |short_name| usually splits
the data into blocks and then processes those blocks in parallel.

- This code shows how a typical parallel loop in oneDAL looks like:
+ This code shows how a typical parallel loop in |short_name| looks like:

.. include:: ../includes/threading/sum-parallel-by-blocks.rst

@@ -92,7 +92,7 @@ Checking the status right after the initialization code won't show the allocation
because oneTBB uses lazy evaluation and the lambda function passed to the constructor of the TLS
is evaluated on first use of the thread-local storage (TLS).

- There are several options available in the threading layer of oneDAL to compute the partial
+ There are several options available in the threading layer of |short_name| to compute the partial
dot product results at each thread.
One of the options is to use the already mentioned ``daal::threader_for`` and blocking approach
as shown here:
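
(The include is collapsed in this view; a minimal sketch of combining ``daal::tls`` with blocking, assuming the ``local()``/``reduce()`` API and the ``service_scalable_calloc``/``service_scalable_free`` helpers, is:)

::

    /* Per-thread storage for the partial dot product */
    daal::tls<algorithmFPType *> partialSums([=]() {
        return service_scalable_calloc<algorithmFPType, cpu>(1);
    });

    daal::threader_for(nBlocks, nBlocks, [&](size_t iBlock) {
        algorithmFPType * local = partialSums.local();
        /* accumulate this block's contribution into local[0] */
    });

    /* Combine the per-thread partial results and release the storage */
    partialSums.reduce([&](algorithmFPType * local) {
        dotProduct += local[0];
        service_scalable_free<algorithmFPType, cpu>(local);
    });
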
Expand Down Expand Up @@ -126,7 +126,7 @@ is more performant to use predefined mapping of the loop's iterations to threads
This is what static work scheduling does.

``daal::static_threader_for`` and ``daal::static_tls`` allow implementation of static
- work scheduling within oneDAL.
+ work scheduling within |short_name|.

Here is a variant of parallel dot product computation with static scheduling:
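
(Again the include is collapsed; schematically, assuming the ``daal::static_tls`` and ``daal::static_threader_for`` signatures, the static-scheduling variant is:)

::

    daal::static_tls<algorithmFPType *> partialSums([=]() {
        return service_scalable_calloc<algorithmFPType, cpu>(1);
    });

    /* The lambda additionally receives the fixed thread index tid */
    daal::static_threader_for(nBlocks, [&](size_t iBlock, size_t tid) {
        algorithmFPType * local = partialSums.local(tid);
        /* accumulate this block's contribution into local[0] */
    });

    partialSums.reduce([&](algorithmFPType * local) {
        dotProduct += local[0];
        service_scalable_free<algorithmFPType, cpu>(local);
    });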

@@ -135,7 +135,7 @@ Here is a variant of parallel dot product computation with static scheduling:
Nested Parallelism
******************

- oneDAL supports nested parallel loops.
+ |short_name| supports nested parallel loops.
It is important to know that:

"when a parallel construct calls another parallel construct, a thread can obtain a task
@@ -154,13 +154,13 @@ oneTBB provides ways to isolate execution of a parallel construct, for its tasks
to not interfere with other simultaneously running tasks.

Those options are preferred when the parallel loops are initially written as nested.
- But in oneDAL there are cases when one parallel algorithm, the outer one,
+ But in |short_name| there are cases when one parallel algorithm, the outer one,
calls another parallel algorithm, the inner one, within a parallel region.

The inner algorithm in this case can also be called solely, without additional nesting.
And we do not always want to make it isolated.

- For the cases like that, oneDAL provides ``daal::ls``. Its ``local()`` method always
+ For the cases like that, |short_name| provides ``daal::ls``. Its ``local()`` method always
returns the same value for the same thread, regardless of the nested execution:

.. include:: ../includes/threading/nested-parallel-ls.rst
53 changes: 53 additions & 0 deletions docs/source/includes/cpu_features/abc-classification-train-kernel.rst
@@ -0,0 +1,53 @@
.. ******************************************************************************
.. * Copyright contributors to the oneDAL project
.. *
.. * Licensed under the Apache License, Version 2.0 (the "License");
.. * you may not use this file except in compliance with the License.
.. * You may obtain a copy of the License at
.. *
.. * http://www.apache.org/licenses/LICENSE-2.0
.. *
.. * Unless required by applicable law or agreed to in writing, software
.. * distributed under the License is distributed on an "AS IS" BASIS,
.. * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
.. * See the License for the specific language governing permissions and
.. * limitations under the License.
.. *******************************************************************************/

::

#ifndef __ABC_CLASSIFICATION_TRAIN_KERNEL_H__
#define __ABC_CLASSIFICATION_TRAIN_KERNEL_H__

#include "src/algorithms/kernel.h"
#include "data_management/data/numeric_table.h" // NumericTable class
/* Other necessary includes go here */

using namespace daal::data_management; // NumericTable class

namespace daal::algorithms::abc::training::internal
{
/* Dummy base template class */
template <typename algorithmFPType, Method method, CpuType cpu>
class AbcClassificationTrainingKernel : public Kernel
{};

    /* Computational kernel for 'method1' of the Abc training algorithm */
template <typename algorithmFPType, CpuType cpu>
class AbcClassificationTrainingKernel<algorithmFPType, method1, cpu> : public Kernel
{
public:
services::Status compute(/* Input and output arguments for the 'method1' */);
};

    /* Computational kernel for 'method2' of the Abc training algorithm */
template <typename algorithmFPType, CpuType cpu>
class AbcClassificationTrainingKernel<algorithmFPType, method2, cpu> : public Kernel
{
public:
services::Status compute(/* Input and output arguments for the 'method2' */);
};

} // namespace daal::algorithms::abc::training::internal

#endif // __ABC_CLASSIFICATION_TRAIN_KERNEL_H__