Using a Raspberry Pi and a Coral Edge TPU with object detection models from https://coral.ai/models/
- Raspberry Pi (a 3B+ was used here)
- Micro SD Card with Raspberry Pi OS installed
- some LEDs and resistors
- Edge TPU [with installed requirements](https://coral.ai/docs/accelerator/get-started/#requirements)
- any camera that works with your Raspberry Pi
- SSH and VNC might help
- a screen to view the results (screencasting over VNC is not satisfying)
- person in the video? -> light up yellow
- other object in the video? -> light up green
- nothing detected? -> no need to light an LED
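The signalling logic above can be sketched as a small helper (the function name and label strings are illustrative, not taken from the repository):

```python
def led_for_detections(labels):
    """Map the labels detected in a frame to the LED that should light up."""
    if "person" in labels:
        return "yellow"  # person in the video -> yellow
    if labels:
        return "green"   # some other object -> green
    return None          # nothing detected -> no LED
```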
See the getting-started guide
First, wire up your LEDs for detection. In this example, I connected the (+) leg of the yellow LED to GPIO pin 8 and the (+) leg of the green LED to GPIO pin 10. Then I connected both LEDs through a 220 Ohm resistor to a GPIO GND pin.
You can test whether your LEDs are wired up correctly by running `python3 led_gpio_test.py` (after cloning the repository).
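The actual test script is in the repository; a minimal sketch of what such a check could look like, assuming the `RPi.GPIO` library and the pin numbers from the wiring above (8 for yellow, 10 for green):

```python
import time

try:
    import RPi.GPIO as GPIO  # only available on the Raspberry Pi itself
except ImportError:
    GPIO = None

YELLOW_PIN = 8   # (+) leg of the yellow LED
GREEN_PIN = 10   # (+) leg of the green LED

def blink(gpio, pin, times=3, interval=0.5):
    """Blink one LED a few times using a GPIO-like object."""
    for _ in range(times):
        gpio.output(pin, True)
        time.sleep(interval)
        gpio.output(pin, False)
        time.sleep(interval)

if __name__ == "__main__" and GPIO is not None:
    GPIO.setmode(GPIO.BOARD)          # pin numbers refer to the physical header
    GPIO.setup(YELLOW_PIN, GPIO.OUT)
    GPIO.setup(GREEN_PIN, GPIO.OUT)
    try:
        blink(GPIO, YELLOW_PIN)
        blink(GPIO, GREEN_PIN)
    finally:
        GPIO.cleanup()                # release the pins even on Ctrl+C
```

If both LEDs blink, the wiring is fine; if not, check the polarity and the resistor path to GND.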
- cd into the cloned `object_detection` directory
- run `python3 personfinder.py`
- optional arguments:
- "-m" for the model path. The Mobilenet_SSD_V2 model (in this repository) is the default; tested with mobilenet_ssd_v1_coco_quant_postprocess_edgetpu.tflite too.
- "-l" for the path to the labels file. label.txt is the default.
- "-c" for the confidence threshold used by the object detection model. The default is 0.3, but at that value you will get some false-positive detections.
- "-o" for the labels of the objects of interest. Default is person only. If you want to detect, for example, cars and persons, set `-o {0, 2}`.
- "-d" to set the display output. Useful if you have no screen; the problem is that you can't see false detections. Default is True, which may not be that conventional. TODO for me maybe 😅
- "-v" to get more verbose logging
- "-pc" is some additional playground stuff. You can set it to 0, 1, 2, or 3 (default). If you set `-pc 2`, then all frames with more than 2 persons will be stored in "./images/". Be careful, this can easily fill up your storage. You will probably need to run `mkdir images` first. Some code feature for later, maybe... :hourglass:
- "-cf" is the camera flip, to turn the captured image around. The object detection works upside-down too, but showing it on screen looks better right-side up. 🤘
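Taken together, the flags above could be declared with `argparse` roughly like this (long option names, defaults, and the person label id 0 are assumptions mirroring the descriptions; `personfinder.py` may differ in detail):

```python
import argparse

def build_parser():
    """CLI sketch mirroring the documented personfinder.py options."""
    p = argparse.ArgumentParser(description="Person finder on Coral Edge TPU")
    p.add_argument("-m", "--model",
                   default="mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite",
                   help="path to the .tflite detection model")
    p.add_argument("-l", "--labels", default="label.txt",
                   help="path to the labels file")
    p.add_argument("-c", "--confidence", type=float, default=0.3,
                   help="confidence threshold for detections")
    p.add_argument("-o", "--objects", type=int, nargs="+", default=[0],
                   help="label ids of interest, e.g. -o 0 2 for persons and cars")
    p.add_argument("-d", "--display", type=lambda s: s.lower() != "false",
                   default=True, help="show the video output")
    p.add_argument("-v", "--verbose", action="store_true",
                   help="more verbose logging")
    p.add_argument("-pc", "--person-count", type=int, default=3,
                   choices=[0, 1, 2, 3],
                   help="save frames with more than this many persons to ./images/")
    p.add_argument("-cf", "--camera-flip", action="store_true",
                   help="flip the captured image")
    return p
```

For example, `personfinder.py -o 0 2 -c 0.5 -pc 2` would then look for persons and cars with a 0.5 threshold and save every frame containing more than two persons.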
- get some basic knowledge about object detection
- measure the performance of tflite object detection with Raspberry Pi and edge TPU
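For the performance measurement, a simple rolling frames-per-second counter is enough (a generic sketch, not code from this repository):

```python
import time
from collections import deque

class FPSCounter:
    """Estimate frames per second over a sliding window of frame timestamps."""

    def __init__(self, window=30):
        self.timestamps = deque(maxlen=window)

    def tick(self):
        """Record that a frame was just processed."""
        self.timestamps.append(time.monotonic())

    def fps(self):
        """Average FPS over the recorded window (0.0 until two frames seen)."""
        if len(self.timestamps) < 2:
            return 0.0
        elapsed = self.timestamps[-1] - self.timestamps[0]
        return (len(self.timestamps) - 1) / elapsed if elapsed > 0 else 0.0
```

Call `tick()` once per processed frame and log `fps()` periodically to compare CPU-only inference against inference on the Edge TPU.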
Video source: https://youtu.be/IBJsmCTYW18?t=199
Gif created from the saved frames :wrench: the framerate decreases while capturing frames...
🎥 Screencast video will come soon, maybe...