hand detection.PyTorch
The following warnings occur when you export the model with the original code:
/<PATH>/hand-detection.PyTorch/models/faceboxes.py:137: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
detection_dimension = torch.tensor(detection_dimension, device=x.device)
/<PATH>/hand-detection.PyTorch/models/faceboxes.py:137: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
detection_dimension = torch.tensor(detection_dimension, device=x.device)
These warnings most likely occur because values derived from torch.Size are baked into the graph as constants when the model is traced by torch.onnx.export.
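For reference, an export call along the following lines is what triggers the warnings. This is only a sketch: the constructor arguments, input resolution, output file name, and opset version are assumptions, not values taken from the repository.
import torch
from models.faceboxes import FaceBoxes

# Hypothetical export script; adjust the constructor arguments, weights,
# input size, and opset version to your own setup.
net = FaceBoxes(phase='test', size=None, num_classes=2)
net.eval()

dummy_input = torch.randn(1, 3, 1024, 1024)  # assumed input resolution
torch.onnx.export(
    net,
    dummy_input,
    'hand_detection.onnx',   # assumed output file name
    input_names=['input'],
    opset_version=11,
)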
To avoid them, change models/faceboxes.py, lines 119-137, from
x = self.conv1(x)
x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1)
x = self.conv2(x)
x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1)
x = self.inception1(x)
x = self.inception2(x)
x = self.inception3(x)
detection_dimension.append(x.shape[2:])
sources.append(x)
x = self.conv3_1(x)
x = self.conv3_2(x)
detection_dimension.append(x.shape[2:])
sources.append(x)
x = self.conv4_1(x)
x = self.conv4_2(x)
detection_dimension.append(x.shape[2:])
sources.append(x)
detection_dimension = torch.tensor(detection_dimension, device=x.device)
to
x = self.conv1(x)
x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1)
x = self.conv2(x)
x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1)
x = self.inception1(x)
x = self.inception2(x)
x = self.inception3(x)
dd_1 = list(x.shape[2:])
sources.append(x)
x = self.conv3_1(x)
x = self.conv3_2(x)
dd_2 = list(x.shape[2:])
sources.append(x)
x = self.conv4_1(x)
x = self.conv4_2(x)
dd_3 = list(x.shape[2:])
sources.append(x)
detection_dimension = [dd_1, dd_2, dd_3]
Note, however, that the type of the output variable 'detection_dimension' changes from torch.Tensor to a Python list, so you have to add post-processing code after inference.
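A minimal sketch of such a post-processing step is shown below. It assumes the forward pass returns detection_dimension alongside the location and confidence outputs; the names loc, conf, net, and dummy_input are illustrative, not taken from the repository.
import torch

# Run inference with the modified model; loc, conf, and detection_dimension
# are assumed names for the outputs of the forward pass.
loc, conf, detection_dimension = net(dummy_input)

# detection_dimension is now a plain Python list of [height, width] pairs,
# e.g. [[32, 32], [16, 16], [8, 8]] for a 1024x1024 input. Rebuild the tensor
# that the original code returned so downstream code expecting a torch.Tensor
# keeps working.
detection_dimension = torch.tensor(detection_dimension, device=loc.device)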
(c) 2019 ax Inc. & AXELL CORPORATION