We are starting a new series on robotic vision and computer vision, and we thought a blog post pointing out the basic differences between them would be helpful.
Adding sensors gives your robot sensing abilities: a sensor takes a physical quantity such as light or heat and converts that analog signal into a digital one that the GPIO pins of your single-board computer or microcontroller can read.
But sensing alone does not make your robot see or decide on an action.
Adding an ultrasonic sensor to avoid an obstacle such as a wall does not make your robot 'see' the wall - it makes it 'sense' the wall, just like a bat flying at night.
So if you wish your robot to see, you need to work your way logically through the subjects of signal and image processing, machine learning, computer vision and robot vision.
So what is computer vision? In simple terms, a camera attached to the robot that can capture images, manipulate them, run algorithms and output information is, loosely speaking, computer vision.
In SID2 we detect the RED ball by the color values his camera sees. That output is the information we feed into a PID control loop.
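To make the PID idea concrete, here is a minimal sketch of a PID loop that steers toward the ball's horizontal position in the frame. The gains, frame width and the error definition (pixel offset of the ball's centre from the frame's centre) are illustrative assumptions, not SID2's actual tuning.

```python
class PID:
    """Textbook PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        # Accumulate the integral term and estimate the derivative
        # from the change in error since the last call.
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Example: ball detected at x=200 in an assumed 320-pixel-wide frame,
# so the error is how far the ball sits from the frame centre (x=160).
pid = PID(kp=0.5, ki=0.0, kd=0.1)
steer = pid.update(error=200 - 160, dt=0.05)  # positive -> turn right
```

Each new camera frame would produce a fresh error, and the PID output would be mapped onto the motor speeds to keep the ball centred.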
Robotic vision, on the other hand, takes the same camera input but enables the robot to act on it. In the same SID2 example, he takes the action of chasing the RED ball if its distance is less than a preset value. So SID2 in essence uses both computer vision and robotic vision, this way -
Computer vision - check whether the ball's HSV color values fall within the preset HSV range.
Robotic vision - if there is an HSV range match, activate the code paths that drive the motor drivers.
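The two steps above can be sketched in plain Python. SID2's real code presumably uses a library such as OpenCV for the pixel work; the HSV bounds for "red" and the distance threshold below are made-up values for illustration, and `"drive_forward"` stands in for whatever the motor-driver code actually does.

```python
# Assumed HSV lower/upper bounds for a red ball (H, S, V) - illustrative only.
RED_LOWER = (0, 120, 70)
RED_UPPER = (10, 255, 255)


def matches_red(hsv_pixel):
    """Computer vision step: does this HSV pixel fall inside the preset range?"""
    return all(lo <= v <= hi
               for lo, v, hi in zip(RED_LOWER, hsv_pixel, RED_UPPER))


def act_on_ball(hsv_pixel, distance_cm, threshold_cm=50):
    """Robotic vision step: only if the color matches AND the ball is
    closer than the preset threshold do we trigger a motor action."""
    if matches_red(hsv_pixel) and distance_cm < threshold_cm:
        return "drive_forward"  # would hand off to the motor-driver code
    return "idle"
```

In a real loop, `hsv_pixel` would come from the detected ball region of each camera frame and `distance_cm` from a range sensor or from the ball's apparent size.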
To sum up - computer vision is a 'father' to robotic vision, and signal processing a 'grandfather'. The best route for a robotics club is to embrace an open-source computer vision framework, and OpenCV makes a perfect case for this.
In our coming series, we will blog extensively on OpenCV and work through some example code with SID2.