Merlin: HugeCTR 23.08
What's New in Version 23.08
- Hierarchical Parameter Server:
  - Support static EC fp8 quantization.
    The static embedding cache now supports fp8 quantization. When the fp8_quant configuration is enabled, HPS performs fp8 quantization on the embedding vectors while reading the embedding table, and performs fp32 dequantization on the embedding vectors that correspond to the queried embedding keys in the static embedding cache, preserving the accuracy of the dense-part prediction. A configuration sketch follows this item.
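    Below is a minimal sketch of the corresponding HPS parameter-server configuration. Only the `fp8_quant` flag is named in these notes; the surrounding keys and values follow the usual `ps.json` layout and are illustrative assumptions:

    ```python
    import json

    # Illustrative HPS configuration: only "fp8_quant" comes from these notes;
    # the model name, file paths, and device list are placeholders.
    hps_config = {
        "supportlonglong": True,
        "models": [
            {
                "model": "demo_model",
                "sparse_files": ["/models/demo_model/embedding_table"],
                "deployed_device_list": [0],
                "embedding_cache_type": "static",  # fp8 quantization targets the static cache
                "fp8_quant": True,  # quantize embedding vectors to fp8 when loading the table
            }
        ],
    }

    with open("ps.json", "w") as f:
        json.dump(hps_config, f, indent=4)
    ```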
  - Large model deployment demo based on HPS TensorRT-plugin.
    This demo shows how to use the HPS TRT-plugin to build a complete TRT engine for deploying a 147 GB embedding table based on a 1 TB Criteo dataset. We also provide a static embedding implementation for fully offloading embedding tables to host page-locked memory for benchmarks on x86 and Grace Hopper Superchip. A sketch of the engine-building step follows this item.
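    As a rough sketch of the engine-building step (the plugin library name, file names, and the ONNX route are assumptions for illustration, not the demo's exact code), the HPS plugin is loaded first so TensorRT can resolve HPS lookup layers while building the engine:

    ```python
    import ctypes
    import tensorrt as trt

    # Load the HPS TRT-plugin so its ops are registered with TensorRT
    # (the library name is an assumption for illustration).
    ctypes.CDLL("libhps_plugin.so", mode=ctypes.RTLD_GLOBAL)

    logger = trt.Logger(trt.Logger.INFO)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, logger)

    with open("model_with_hps.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("failed to parse the ONNX model")

    config = builder.create_builder_config()
    engine = builder.build_serialized_network(network, config)
    with open("model_with_hps.engine", "wb") as f:
        f.write(engine)
    ```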
  - Issues Fixed:
    - Resolved a Kafka update ingestion error that prevented online parameter updates coming from Kafka message queues from being handed over to Redis database backends.
    - Fixed an issue where the HPS Triton backend re-initialized the embedding cache because an undefined null value was returned when fetching the embedding cache on the corresponding device.
- HugeCTR Training & SOK:
  - Dense Embedding Support in Embedding Collection.
    We added dense embedding support to the embedding collection. To use the dense embedding, specify `concat` as the combiner. For more information, refer to dense_embedding.py. A usage sketch follows this item.
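    A brief, hedged sketch of the lookup definition (table size, vector size, and tensor names are illustrative placeholders; dense_embedding.py remains the authoritative example):

    ```python
    import hugectr

    # Embedding-collection lookup using the dense (concat) combiner;
    # all sizes and names below are placeholders.
    table = hugectr.EmbeddingTableConfig(
        name="item_table", max_vocabulary_size=100000, ev_size=16
    )
    ebc_config = hugectr.EmbeddingCollectionConfig()
    ebc_config.embedding_lookup(
        table_config=table,
        bottom_name="item_ids",
        top_name="item_embedding",
        combiner="concat",  # dense embedding: per-slot vectors are concatenated
    )
    ```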
  - Refined the sequence mask layer and attention softmax layer to support cross-attention.
  - Introduced a more generalized reshape layer, which allows users to reshape a source tensor to a destination tensor without dimension restrictions. Refer to the Reshape Layer API for more detailed information. A usage sketch follows this item.
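    A hedged usage sketch (tensor names and the target shape are illustrative; `model` is assumed to be an already-constructed `hugectr.Model`):

    ```python
    # Generalized reshape: map the source tensor to an arbitrary destination shape.
    model.add(
        hugectr.DenseLayer(
            layer_type=hugectr.Layer_t.Reshape,
            bottom_names=["concat1"],
            top_names=["reshape1"],
            shape=[-1, 26, 16],  # destination shape; -1 infers that dimension
        )
    )
    ```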
  - Issues Fixed:
    - Fixed an error when using Localized Variable in Sparse Operation Kit.
    - Fixed a bug in Sparse Operation Kit backward computing.
    - Fixed some SOK performance bugs by replacing the calls to `DeviceSegmentedSort` with `DeviceSegmentedRadixSort`.
    - Fixed a bug in SOK's Python API that led to duplicate calls to the model's forward function and thus degraded performance.
  - Reduced the CPU launch overhead:
    - Removed dynamic vector allocation in DataDistributor.
    - Removed the use of the checkout value tensor from the DataReader. The data reader generated a nested std::vector on the fly and returned it to the embedding collection, which incurred significant host overhead. We made the vector a class member so that the overhead is eliminated.
  - Aligned with the latest Parquet update.
    We fixed a bug caused by the `parquet_reader_options::set_num_rows()` update in cudf 23.06 (see the related PR).
  - Fixed a core23 assertion in debug mode.
    We fixed an assertion bug that occurred when the new core library is enabled and HugeCTR is built in debug mode.
- General Updates:
  - Cleaned up logging code, added compile-time format-string validation, and fixed an issue where `HCTR_PRINT` did not interpret format strings properly.
  - Added experimental support for statically linking the CUDA runtime. Pass `-DUSE_CUDART_STATIC=ON` when configuring the build with CMake.
  - Modified the data preprocessing documentation to clarify the correct commands to use in different situations, and fixed errors in the argument descriptions.
- Known Issues:
  - HugeCTR can lead to a runtime error if client code calls RMM's `rmm::mr::set_current_device_resource()` or `rmm::mr::set_default_resource()`, because HugeCTR's Parquet data reader also calls `rmm::mr::set_current_device_resource()`, which becomes visible to other libraries in the same process. Refer to [this issue](#356). As a workaround, you can set the environment variable `HCTR_RMM_SETTABLE` to 0 to prevent HugeCTR from setting a custom RMM device resource, if you know `rmm::mr::set_current_device_resource()` is called outside HugeCTR. But be cautious: this could affect the performance of Parquet reading. A sketch of the workaround follows this item.
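    A minimal sketch of the workaround, assuming the variable is read when HugeCTR starts up:

    ```python
    import os

    # Prevent HugeCTR from installing a custom RMM device resource; note that
    # this may affect Parquet reading performance, as mentioned above.
    os.environ["HCTR_RMM_SETTABLE"] = "0"

    import hugectr  # import after setting the variable so it takes effect
    ```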
  - HugeCTR uses NCCL to share data between ranks, and NCCL can require shared system memory for IPC and pinned (page-locked) system memory resources. If you use NCCL inside a container, increase these resources by specifying the following arguments when you start the container: `--shm-size=1g --ulimit memlock=-1`. See also this NCCL known issue and [this GitHub issue](#243).
  - `KafkaProducers` startup succeeds even if the target Kafka broker is unresponsive. To avoid data loss in conjunction with streaming model updates from Kafka, you have to make sure that a sufficient number of Kafka brokers are running, operating properly, and reachable from the node where you run HugeCTR.
  - The number of data files in the file list should be greater than or equal to the number of data reader workers. Otherwise, different workers are mapped to the same file and data loading does not progress as expected. A configuration sketch follows this item.
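    For example, in this hedged sketch (paths and sizes are placeholders), file_list.txt must reference at least as many data files as `num_workers`:

    ```python
    import hugectr

    # num_workers must not exceed the number of data files in the file list.
    reader = hugectr.DataReaderParams(
        data_reader_type=hugectr.DataReaderType_t.Parquet,
        source=["./train/file_list.txt"],  # must list >= num_workers files
        eval_source="./val/file_list.txt",
        check_type=hugectr.Check_t.Non,
        num_workers=4,
        slot_size_array=[10000] * 26,
    )
    ```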
  - Joint loss training with a regularizer is not supported.
  - Dumping Adam optimizer states to AWS S3 is not supported.