Experimenting with Log8 Data Type in hls4ml #1144
-
I’m currently working on a new log data type (an 8-bit logarithmic representation) and am unsure whether hls4ml supports this. If it does, what would be the conceptual approach to implementing it? I’ve developed a prototype MAC unit using Vitis HLS with the Log8 data type, and I’m now aiming to integrate it into a full design. My goal is to compare the power, performance, and area (PPA) of this new data type against others using popular DNN models.

Additionally, I’m interested in understanding the key differences between hls4ml and FINN. While I know FINN is primarily focused on Xilinx FPGAs and hls4ml is more versatile across different FPGA platforms, I’d like to understand the distinctions in terms of their methodologies, design goals, and typical use cases. Thank you
Replies: 2 comments
-
Log precision is not natively supported in hls4ml, but it can be added. To do so, you would need to extend the current types with a new class, e.g. LogType. We do something similar for other data types, e.g. Exponential, Binary, Ternary, etc.; see here:

hls4ml/hls4ml/model/types.py
Line 225 in cc4fbf9

After that, in the model conversion step you need to make sure your weights are interpreted as the log data type. This is done automatically for QKeras and QONNX; i.e., if there is a binary/ternary/exponential weight, the frontend parser will pick it up. You can see how this is done for QKeras here:

hls4ml/hls4ml/model/quantizers.py
Line 40 in cc4fbf9

Thirdly, you need to implement a type converter, which tells hls4ml how to convert from the Python precision class (implemented in the first step and used in the second) to a C++ variable. In your case, this is most likely going to be a struct or some higher-level wrapper around the ac_fixed data type:

hls4ml/hls4ml/backends/fpga/fpga_types.py
Line 176 in cc4fbf9

Finally, you would have to implement the corresponding multiplication in hardware, which will be called by all the layers. An example for Exponential is shown here. This is typically a simple function multiplying two operands, depending on the attributes your variable from the previous step has (mantissa, exponent, etc.).

My suggestion as a starting point would be to use QKeras with binary / ternary / quantized_po2 (exponential) quantizers for the weights and inspect the output. Once the hls4ml project is generated, you can check how the weights look in the output folder. You can also insert breakpoints / print statements in the above functions to see in which order they are called and what inputs they expect. Rough sketches of the four steps follow below.
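Step 1 sketch: a minimal Python sketch of the new precision class, modeled on how ExponentPrecisionType extends IntegerPrecisionType in hls4ml/model/types.py. The name LogPrecisionType and the exponent-layout field are hypothetical, not existing hls4ml API:

```python
# Hypothetical sketch, not hls4ml API: a log precision class modeled on
# ExponentPrecisionType. In practice this would live in
# hls4ml/model/types.py itself rather than importing from it.
from hls4ml.model.types import IntegerPrecisionType


class LogPrecisionType(IntegerPrecisionType):
    """Marks a tensor as stored in an 8-bit logarithmic encoding:
    one sign bit plus a fixed-point base-2 exponent."""

    def __init__(self, width=8, signed=True, exp_frac_bits=3):
        super().__init__(width=width, signed=signed)
        # Fractional bits of the log2-domain exponent (assumed layout;
        # adjust to whatever your Log8 format actually uses).
        self.exp_frac_bits = exp_frac_bits
```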
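Step 2 sketch: a hedged quantizer, modeled on QKerasPO2Quantizer in hls4ml/model/quantizers.py and assuming the Quantizer base class stores bits and hls_type as the existing QKeras quantizers do. LogQuantizer is a made-up name and the rounding scheme is an assumption:

```python
# Hypothetical sketch, not hls4ml API: a quantizer that snaps weights to
# the log grid and tags them with the precision class from the step-1
# sketch (LogPrecisionType is that hypothetical class).
import numpy as np

from hls4ml.model.quantizers import Quantizer  # base class: (bits, hls_type)


class LogQuantizer(Quantizer):
    def __init__(self, bits=8, exp_frac_bits=3):
        super().__init__(bits, LogPrecisionType(width=bits, signed=True))
        self.exp_frac_bits = exp_frac_bits

    def __call__(self, data):
        sign = np.sign(data)
        # The floor keeps log2 finite; exact zeros still map to 0
        # because np.sign(0) == 0.
        mag = np.maximum(np.abs(data), 2.0 ** -32)
        step = 2.0 ** -self.exp_frac_bits
        exp = np.round(np.log2(mag) / step) * step  # snap to exponent grid
        return sign * 2.0 ** exp
```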
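Step 3 sketch: for the type converter I can only sketch the idea. The precision-definition classes in hls4ml/backends/fpga/fpga_types.py implement a definition_cpp() method whose returned string ends up in the generated C++ headers; everything below (class name, struct layout, how it gets mixed in) is an assumption:

```python
# Hypothetical sketch, not hls4ml API: an emitter for the C++ side of
# LogPrecisionType, following the definition_cpp() pattern of the
# precision-definition classes in fpga_types.py.


class LogPrecisionDefinition:
    """Assumed to be mixed into LogPrecisionType by the backend's type
    converter, so `self` carries the fields from the step-1 sketch."""

    def definition_cpp(self):
        exp_bits = self.width - 1  # one bit reserved for the sign
        int_bits = exp_bits - self.exp_frac_bits
        # A struct wrapping the sign and a fixed-point log2 exponent;
        # ac_fixed would be the equivalent on backends using ac_* types.
        return (
            f'struct log8_t {{ ap_uint<1> sign; '
            f'ap_fixed<{exp_bits},{int_bits}> exp; }}'
        )
```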
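Step 4 sketch: the hardware multiply itself would be a small C++ function in the HLS templates (like the Exponential example); below is only a behavioral Python model of what it computes, under the sign + log2-exponent encoding assumed above:

```python
# Behavioral model only: multiplication in the log domain is a sign XOR
# plus an exponent addition; the accumulate of the MAC happens on the
# decoded (linear) value.


def log8_mult(sign_a, exp_a, sign_b, exp_b):
    """Product of two operands encoded as (-1)**sign * 2**exp."""
    return sign_a ^ sign_b, exp_a + exp_b  # log2(a*b) = log2(a) + log2(b)


def log8_decode(sign, exp):
    """Back to a linear value, e.g. to feed a fixed-point accumulator."""
    return (-1.0) ** sign * 2.0 ** exp


# Example: 2.0 * -0.5 -> sign 1, exponent 0 -> -1.0
s, e = log8_mult(0, 1.0, 1, -1.0)
assert log8_decode(s, e) == -1.0
```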
-
Thank you so much! I truly appreciate your response.