
AIM

To build a robot that detects ink on the board and erases it wherever necessary.

 

METHOD

To get feedback from the system, the webcam sends video frames to a Visual Studio C++ program (using the OpenCV library) running on our computer. If the program detects ink in a frame, it calculates the X and Y coordinates and the radius of the object. These coordinates are then sent to the Arduino Mega/UNO via serial communication between the Arduino and the Visual Studio C++ program. After receiving the coordinates, the servo motors move in the X and Y directions and follow the object.

REAL-TIME OBJECT TRACKING (FOR INK DETECTION ON THE BOARD) USING OPENCV: for this method we need to install OpenCV and have a basic knowledge of C++.

 

Color Filtering

 

Step 1 : Convert image from BGR color space to HSV color space

 

(Blue, Green, Red) -> (Hue, Saturation, Value)

 

The RGB color format can represent any standard color or brightness using a combination of Red, Green and Blue components. For efficiency, this is typically stored as a 24-bit number using 8 bits for each color component (0 to 255), so that, for example, white is made of 255 Red + 255 Green + 255 Blue. This is essentially the same technique that nearly all computer screens have used for decades, and so it is the standard color format used in computer software. Unfortunately, when it comes to computer vision, RGB values vary a lot depending on lighting conditions (strong or dim light, shadows, etc.). In comparison, HSV is much better at handling lighting differences, and it gives you an easy-to-use color value.

 

HSV means Hue-Saturation-Value, where Hue is the color. Since color is not an easy thing to separate or compare, Hue is often represented as a circular angle (between 0.0 and 1.0 when stored as a float). Being circular means that 1.0 is the same as 0.0: a Hue of 0.0 is red, 0.25 is green, 0.5 is blue, 0.75 is pink, and 1.0 wraps back around to red. Saturation measures how far the color is from grey, so a Saturation near 0 means the pixel looks dull or grey, whereas a Saturation of 0.8 might be a very strong color (e.g. red if Hue is 0). Value is the brightness of the pixel, so 0.1 is nearly black and 0.9 is nearly white. Unfortunately, there are different conventions for representing HSV colors, such as whether a full-brightness V of 1.0 should mean bright white or a bright color. Most software chooses full-brightness V to mean white, whereas OpenCV chooses full-brightness V to mean a bright color.
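To make the conversion in Step 1 concrete, here is a plain-C++ sketch of the standard per-pixel BGR-to-HSV formula, using OpenCV's 8-bit convention (H in 0..179, S and V in 0..255). This is only an illustration of the arithmetic; in the actual program, OpenCV's cvtColor performs this conversion on whole frames.

```cpp
#include <algorithm>
#include <array>

// Convert one 8-bit BGR pixel to OpenCV-style HSV
// (H in 0..179, S and V in 0..255).
std::array<int, 3> bgrToHsv(int b, int g, int r) {
    int v = std::max({b, g, r});               // Value = brightest channel
    int m = std::min({b, g, r});
    int s = (v == 0) ? 0 : 255 * (v - m) / v;  // Saturation = distance from grey
    int h = 0;
    if (v != m) {                              // Hue depends on which channel is max
        if (v == r)      h = 60 * (g - b) / (v - m);
        else if (v == g) h = 120 + 60 * (b - r) / (v - m);
        else             h = 240 + 60 * (r - g) / (v - m);
    }
    if (h < 0) h += 360;                       // keep the angle positive
    return {h / 2, s, v};                      // OpenCV halves H to fit in a byte
}
```

Note how a grey pixel (all channels equal) comes out with Saturation 0 regardless of brightness, which is exactly why filtering in HSV is more robust to lighting than filtering in RGB.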

 

 

Step 2: Filter the colors of interest between the MIN and MAX threshold.

 

void createTrackbars(){
    //create window for trackbars
    namedWindow(trackbarWindowName, 0);
    //create the trackbars and insert them into the window.
    //The parameters are: the trackbar label, the window name,
    //the address of the variable that changes when the trackbar is moved (e.g. &H_MIN),
    //the maximum value the trackbar can reach (e.g. H_MAX),
    //and the function that is called whenever the trackbar is moved (e.g. on_trackbar)
    createTrackbar( "H_MIN", trackbarWindowName, &H_MIN, H_MAX, on_trackbar );
    createTrackbar( "H_MAX", trackbarWindowName, &H_MAX, H_MAX, on_trackbar );
    createTrackbar( "S_MIN", trackbarWindowName, &S_MIN, S_MAX, on_trackbar );
    createTrackbar( "S_MAX", trackbarWindowName, &S_MAX, S_MAX, on_trackbar );
    createTrackbar( "V_MIN", trackbarWindowName, &V_MIN, V_MAX, on_trackbar );
    createTrackbar( "V_MAX", trackbarWindowName, &V_MAX, V_MAX, on_trackbar );
}

 

Once we run this code and set the MIN and MAX thresholds to detect our color of interest, the output shows a black screen with a white area. This white area is the object/color of interest that we need to distinguish. But it is not a clean picture yet: it contains some extra white specks that are not required.
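Per pixel, this MIN/MAX threshold is just a box test in HSV space; OpenCV's inRange() applies it to every pixel to produce the black-and-white image described above. A minimal sketch of that per-pixel test:

```cpp
// Does an HSV pixel fall inside the [MIN, MAX] box set by the trackbars?
// inRange() paints such pixels white (255) and everything else black (0).
bool inHsvRange(int h, int s, int v,
                int hMin, int sMin, int vMin,
                int hMax, int sMax, int vMax) {
    return h >= hMin && h <= hMax &&
           s >= sMin && s <= sMax &&
           v >= vMin && v <= vMax;
}
```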

 

 

Step 3: Morphological Operations

 

 OpenCV offers two morphological functions:

  • Dilate: this function dilates the white region, making the area covered by the object a little bigger so that it is easier to distinguish.

  • Erode: this function erodes the white region, shrinking away the extra white specks on the screen that are not required.

 

void morphOps(Mat &thresh){

    //create the structuring elements that will be used to "erode" and "dilate" the image.
    //the element chosen here for erosion is a 3px by 3px rectangle
    Mat erodeElement = getStructuringElement( MORPH_RECT, Size(3,3) );
    //dilate with a larger element to make sure the object is nicely visible
    Mat dilateElement = getStructuringElement( MORPH_RECT, Size(8,8) );

    erode(thresh, thresh, erodeElement);
    erode(thresh, thresh, erodeElement);

    dilate(thresh, thresh, dilateElement);
    dilate(thresh, thresh, dilateElement);
}
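To see what erode() actually does to the thresholded image, here is a plain-C++ sketch of a 3x3 rectangular erosion on a small 0/1 image (illustrative only; the real program uses OpenCV's erode with the structuring element above): a pixel survives only if every pixel under the 3x3 element is white, so isolated specks vanish and solid blobs shrink at the edges. Dilation is the mirror operation, keeping a pixel white if any pixel under the element is white.

```cpp
#include <vector>

// 3x3 rectangular erosion on a binary image stored as 0/1 values.
// Border pixels are left black for simplicity.
std::vector<std::vector<int>> erode3x3(const std::vector<std::vector<int>>& img) {
    int rows = img.size(), cols = img[0].size();
    std::vector<std::vector<int>> out(rows, std::vector<int>(cols, 0));
    for (int y = 1; y < rows - 1; ++y)
        for (int x = 1; x < cols - 1; ++x) {
            int keep = 1;
            for (int dy = -1; dy <= 1; ++dy)       // check every pixel under
                for (int dx = -1; dx <= 1; ++dx)   // the 3x3 element
                    if (img[y + dy][x + dx] == 0) keep = 0;
            out[y][x] = keep;                      // survives only if all were white
        }
    return out;
}
```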

 

Contouring

           

- OpenCV offers two functions for contouring an object:

- findContours: this function takes the binary image as input and outputs a vector of contours (the outlines of all objects found in the binary image).

- moments: this method then takes the vector of contours as input and outputs the x, y coordinates of the largest contour (measured by its enclosed area).

 

 

void trackFilteredObject(int &x, int &y, Mat threshold, Mat &cameraFeed){

    Mat temp;
    threshold.copyTo(temp);
    //these two vectors are needed for the output of findContours
    vector< vector<Point> > contours;
    vector<Vec4i> hierarchy;
    //find contours of the filtered image using the OpenCV findContours function
    findContours(temp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
    //use the moments method to find our filtered object
    double refArea = 0;
    bool objectFound = false;
    if (hierarchy.size() > 0) {
        int numObjects = hierarchy.size();
        //if the number of objects is greater than MAX_NUM_OBJECTS we have a noisy filter
        if (numObjects < MAX_NUM_OBJECTS) {
            for (int index = 0; index >= 0; index = hierarchy[index][0]) {

                Moments moment = moments((cv::Mat)contours[index]);
                double area = moment.m00;

                //if the area is less than 20px by 20px then it is probably just noise;
                //if the area is close to the size of the whole image we probably
                //have a bad filter.
                //we only want the object with the largest area, so we save a
                //reference area each iteration and compare it to the area in
                //the next iteration.
                if (area > MIN_OBJECT_AREA && area < MAX_OBJECT_AREA && area > refArea) {
                    x = moment.m10 / area;
                    y = moment.m01 / area;
                    objectFound = true;
                    refArea = area;
                }
            }
            //let the user know an object was found
            if (objectFound == true) {
                putText(cameraFeed, "Tracking Object", Point(0,50), 2, 1, Scalar(0,255,0), 2);
                //draw the object location on screen
                drawObject(x, y, cameraFeed);
            }
        } else putText(cameraFeed, "TOO MUCH NOISE! ADJUST FILTER", Point(0,50), 1, 2, Scalar(0,0,255), 2);
    }
}

 

 

 

SERIAL COMMUNICATION BETWEEN THE ARDUINO AND VISUAL STUDIO C++

 

Communication between an Arduino board and the OpenCV (Visual Studio C++) application is conducted through the serial port. The most common setup looks like this: the Arduino is connected to the computer via a USB cable. The Arduino application starts listening for data at a certain baud rate (read: speed), and the PC application opens a certain port (since a computer has several) at the same baud rate. Following is the code we use to receive the coordinate data sent from OpenCV on the Arduino, which then directs the servos to move in a certain direction.
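The report does not spell out the wire format, so as one hedged illustration, the PC side could frame each (x, y) pair as a comma-separated line before writing it to the serial port. The format below is an assumption for illustration, not the project's actual protocol:

```cpp
#include <cstdio>
#include <string>

// Pack an (x, y) coordinate pair into a newline-terminated,
// comma-separated record the PC could write to the serial port.
// ILLUSTRATIVE ASSUMPTION: the project's real framing may differ.
std::string packCoordinates(int x, int y) {
    char buf[32];
    std::snprintf(buf, sizeof(buf), "%d,%d\n", x, y);
    return std::string(buf);
}
```

A text framing like this is easy to debug with the Arduino Serial Monitor, at the cost of a few extra bytes per message compared with raw binary.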

 

#include <Servo.h>

int p_fltXYRadius[3];   // X, Y and radius received from OpenCV

Servo servo;
Servo servo1;
int servoPosition = 90;
int servoPosition1 = 90;

int incomingByte = 0;   // for incoming serial data

void setup()
{
  Serial.begin(9600); // opens serial port, sets data rate to 9600 bps

  servo.attach(9);    // attaches the servo on pin 9 to the servo object
  servo1.attach(10);  // attaches the servo1 on pin 10 to the servo1 object
  servo.write(servoPosition);   // set the servo at the mid position
  servo1.write(servoPosition1); // set the servo1 at the mid position
}

 



Once the data has been transferred to the Arduino, we can use the following part of the code inside loop() to move the servo in any desired direction.


  if (Serial.available() > 0) {
    // read the incoming byte:
    incomingByte = Serial.read();
  }
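How the incoming byte becomes a servo command is not shown above; one hedged sketch is to map the byte's 0..255 range onto the servo's 0..180 degree range, the same arithmetic as Arduino's map(value, 0, 255, 0, 180). This mapping is an illustrative assumption; the project may instead send angles or coordinates directly:

```cpp
// Map a serial byte (0..255) to a servo angle (0..180) with
// integer arithmetic, as Arduino's map() would.
// ILLUSTRATIVE ASSUMPTION about the byte's meaning.
int byteToAngle(int b) {
    return b * 180 / 255;
}
```

On the Arduino the result would simply be passed to servo.write().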

 


WIRELESS COMMUNICATION THROUGH XBEE

 

            The Arduino XBee shield allows your Arduino board to communicate wirelessly using ZigBee. For our project we are using two XBee S2 modules and two XBee adaptors (to interface the XBees with the breadboard).

 

Networking of XBee: One XBee is called the Coordinator. There can only be one Coordinator in the network; if it goes down, the entire network goes down. It is in charge of setting up the network and it can never go to sleep.

            The second XBee is the Router. We can have multiple routers in a network, but for this project we just need one. Routers can relay signals from one node to another and can never sleep.

            The third type of module in the network is the endpoint. In our system, the endpoints are the two Arduinos (one connected to the camera and the other to the servos on the body). Endpoints cannot relay signals and can therefore be put to sleep to save power.

 

 

            The XBee can be used in two modes:

  • the AT mode: in this mode the communication passes transparently through the XBee.

  • the API mode: in this mode we can interact with an XBee directly, to send it a command or to receive data from the XBee itself.
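In API mode, every frame carries a checksum computed per Digi's XBee API framing: sum all bytes between the length field and the checksum, keep the low 8 bits, and subtract from 0xFF. A small sketch of that calculation (the frame bytes in the test are a standard AT-command example, not data from this project):

```cpp
#include <vector>

// Checksum for an XBee API frame: add the frame-data bytes
// (everything between the length field and the checksum),
// keep the low 8 bits, and subtract from 0xFF.
int xbeeChecksum(const std::vector<int>& frameData) {
    int sum = 0;
    for (int b : frameData) sum += b;
    return 0xFF - (sum & 0xFF);
}
```

The receiver verifies a frame by adding the frame data and the checksum byte together; a valid frame always yields 0xFF in the low 8 bits.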

 

void loop(){

    if (Serial.available() > 21) {

        for (int i = 0; i < 22; i++) {

            Serial.print(Serial.read(), HEX);

            Serial.print(", ");
        }

        Serial.println();
    }
}
