If I understand correctly, a run-of-the-mill object detector uses conv layers to recognize patterns. Then there is a flatten layer and a dense layer, which maps the detected patterns to coordinates (e.g., flat index 300 in a 256×256 image corresponds to row 1, column 44, zero-indexed).
But how does this architecture handle the flatten step?
I don't understand the architecture.
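To make the question concrete, here is a minimal NumPy sketch of the pipeline as I understand it (shapes and weights are placeholders, not the actual model): flattening is just a reshape with no learned parameters, and the dense layer afterwards is a single matrix multiply that maps the flat feature vector to coordinate outputs. It also shows the flat-index-to-(row, col) conversion from the example above.

```python
import numpy as np

# Hypothetical feature map coming out of the conv layers:
# shape (H, W, C). Values here are random placeholders.
H, W, C = 8, 8, 4
feature_map = np.random.rand(H, W, C)

# Flatten: a pure reshape, no parameters are learned in this step.
flat = feature_map.reshape(-1)          # shape (H * W * C,) = (256,)

# Dense layer: a learned weight matrix maps the flat vector to
# 2 outputs (e.g. an x and a y coordinate). Random placeholder weights.
weights = np.random.rand(flat.size, 2)
bias = np.zeros(2)
coords = flat @ weights + bias          # shape (2,)

# Converting a flat pixel index back to (row, col) in a
# 256x256 image, zero-indexed: 300 // 256 = 1, 300 % 256 = 44.
row, col = divmod(300, 256)             # (1, 44)
```

So "flatten" itself is trivial; all the spatial-position-to-coordinate learning happens in the dense weights that follow it.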