This repository has been archived by the owner on Mar 31, 2019. It is now read-only.

Vision Tracking

Manu Singhal edited this page Feb 6, 2016 · 2 revisions

Our vision tracking took many forms throughout the season, and this page logs those instances.

We first attempted to do vision tracking with the Pixy Cam, since it returns coordinate values for the blocks it sees. We tried to receive those values through an Arduino acting as a middle-man, but we were unable to establish reliable communication between the roboRIO and the Arduino over either I2C or RS-232. We spent three weeks on this before moving on.
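For context on what that link would have carried: the Pixy reports each detected object as a short frame of little-endian 16-bit words (sync, checksum, signature, x, y, width, height), per the published Pixy1 serial protocol. A minimal sketch of parsing one such frame, assuming that layout (the class and method names here are ours, not from any Pixy library):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PixyBlockParser {
    /** One detected object, as reported by the Pixy. */
    public static class Block {
        public int signature, x, y, width, height;
    }

    // Start-of-block marker for a normal block in the Pixy1 protocol.
    static final int SYNC_WORD = 0xaa55;

    /**
     * Parses a single block frame: sync, checksum, signature, x, y,
     * width, height, each a little-endian 16-bit word. Returns null
     * on a short frame, bad sync word, or checksum mismatch.
     */
    public static Block parse(byte[] frame) {
        if (frame.length < 14) return null;
        ByteBuffer buf = ByteBuffer.wrap(frame).order(ByteOrder.LITTLE_ENDIAN);
        int sync = buf.getShort() & 0xffff;
        if (sync != SYNC_WORD) return null;
        int checksum = buf.getShort() & 0xffff;
        Block b = new Block();
        b.signature = buf.getShort() & 0xffff;
        b.x = buf.getShort() & 0xffff;
        b.y = buf.getShort() & 0xffff;
        b.width = buf.getShort() & 0xffff;
        b.height = buf.getShort() & 0xffff;
        // The checksum is the 16-bit sum of the five payload words.
        int sum = (b.signature + b.x + b.y + b.width + b.height) & 0xffff;
        return sum == checksum ? b : null;
    }
}
```

Getting frames like this out of the Arduino and into the roboRIO intact is exactly the step we never got working reliably.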

We then set out to write our own vision tracking code with OpenCV, until one of our mentors showed us a program called RoboRealm. It did everything we wanted, except we were told it would need to run on a separate processor. We started looking for such a processor before realizing we could run the processing on the driver station computer instead.

Once we picked a processing system, we had to choose a camera that would be effective and was compatible with the dashboard we wanted to use (Smart Dashboard 2.0). We attempted to use the Microsoft LifeCam HD-3000 webcam, but it was not compatible with the new dashboard. Using it would have meant falling back to the previous dashboard, which is harder to configure and connect to the robot, so we decided against it since the Axis camera connects to the new dashboard with less setup.

We are now using the Axis camera in conjunction with Smart Dashboard 2.0 and RoboRealm, and we have gotten good results with this setup.
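As one example of how the tracking output gets used on the robot side: RoboRealm can publish the tracked blob's centroid column (e.g. its COG_X variable) over the network, and the robot code converts that pixel column into a steering error. A minimal sketch of that conversion, where the 320-pixel width and 47° horizontal field of view are illustrative assumptions rather than measured values for our camera:

```java
public class TargetAngle {
    /**
     * Converts a target's pixel column into a horizontal angle in degrees
     * from the image center. Positive means the target is to the right.
     *
     * @param cogX        centroid x of the tracked blob, in pixels
     * @param imageWidth  image width in pixels (e.g. 320)
     * @param hFovDegrees camera horizontal field of view in degrees (e.g. 47.0)
     */
    public static double pixelToAngle(double cogX, int imageWidth, double hFovDegrees) {
        double center = imageWidth / 2.0;
        // Simple linear approximation: spread the field of view evenly
        // across the image columns.
        double degreesPerPixel = hFovDegrees / imageWidth;
        return (cogX - center) * degreesPerPixel;
    }
}
```

The drive code can then feed this angle into a turn controller until the error is near zero.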
