Recognizing day-to-day objects using Object Recognition in OpenCV (C++)

Hello,

One of the most interesting projects I’ve worked on in the past was an image-processing project. The goal was to develop a system able to recognize Coca-Cola ‘cans’ (note that I’m stressing the word ‘cans’; you’ll see why in a minute). You can see a sample below, with the can recognized in the green rectangle with scale and rotation.

Some constraints on the project:

The background could be very noisy.
The can could have any scale or rotation or even orientation (within reasonable limits).
The image could have some degree of fuzziness (contours might not be entirely straight).
There could be Coca-Cola bottles in the image, and the algorithm should only detect the can!
The brightness of the image could vary a lot (so you can’t rely “too much” on color detection).
The can could be partly hidden on the sides or the middle and possibly partly hidden behind a bottle.
There could be no can at all in the image, in which case you had to find nothing and write a message saying so.

My initial approach:
1. I started off with color detection (converting BGR to HSV) and filtering: keep “red” hues, require saturation above a certain threshold to avoid orange-like colors, and filter out low values to avoid dark tones. The result is a binary black-and-white image, where every white pixel matches this threshold. Obviously there is still a lot of noise in the image, but this reduces the number of dimensions you have to work with.
2. Noise filtering using a median filter (replacing each pixel with the median value of its neighbors) to reduce noise.
3. Using the Canny edge detector to get the contours of all items after the two preceding steps (a minimal sketch of steps 1-3 follows this list).
4. Using the Generalized Hough Transform. Its key properties:
a. You can describe an object in space without knowing its analytical equation (which is the case here).
b. It is resistant to image deformations such as scaling and rotation, as it will basically test your image for every combination of scale and rotation factors.
c. It uses a base model (a template) that the algorithm will “learn”.
d. Each pixel remaining in the contour image votes for another pixel which is supposedly the center (of gravity) of your object, based on what it learned from the model.
5. Once you have that, a simple threshold-based heuristic gives you the location of the center pixel, from which you can derive the scale and rotation and then plot your little rectangle around it (the final scale and rotation factors are obviously relative to your original template). In theory at least…
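
To make steps 1-3 concrete, here is a minimal sketch of the preprocessing (my own illustration, not the original project code; the threshold values are assumptions that would need tuning on real images):

#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;

// Steps 1-3: threshold "Coca-Cola red" in HSV, median-filter the mask,
// then run Canny edge detection on the result.
Mat preprocessForRedCan( const Mat& bgr )
{
  Mat hsv;
  cvtColor( bgr, hsv, CV_BGR2HSV );

  // Red hue wraps around 0 in OpenCV's 0-179 hue range, so threshold both
  // ends and combine them. A high saturation floor rejects orange-like
  // colors; a value floor rejects dark tones. These bounds are illustrative.
  Mat lowRed, highRed, mask;
  inRange( hsv, Scalar( 0, 150, 60 ),   Scalar( 10, 255, 255 ),  lowRed );
  inRange( hsv, Scalar( 170, 150, 60 ), Scalar( 179, 255, 255 ), highRed );
  bitwise_or( lowRed, highRed, mask );

  // Step 2: median filter (each pixel replaced by the median of its
  // neighborhood) to clean up salt-and-pepper noise in the mask.
  Mat filtered;
  medianBlur( mask, filtered, 5 );

  // Step 3: Canny edge detection to extract the contours that will vote
  // in the Generalized Hough Transform.
  Mat edges;
  Canny( filtered, edges, 100, 200 );
  return edges;
}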

Results: Now, while this approach worked in the basic cases, it was severely lacking in some areas:
1. It is extremely slow! I can’t stress this enough. Almost a full day was needed to process the 30 test images, mostly because I searched over a very fine grid of scale and rotation factors, since some of the cans were very small.
2. It was completely lost when bottles were in the image, and for some reason almost always found the bottle instead of the can (perhaps because bottles are bigger, thus have more pixels, thus more votes).
3. Fuzzy images were also no good, since the votes ended up in pixels at random locations around the center, producing a very noisy heat map.
4. Invariance to translation and rotation was achieved, but not to orientation, meaning that a can that was not directly facing the camera lens wasn’t recognized.

I then decided to look for alternatives to my initial approach, something that could take care of all these issues and challenges.
The alternative I found was to extract features (keypoints) using the scale-invariant feature transform (SIFT) or Speeded-Up Robust Features (SURF).
Both have been available since OpenCV 2.3.1; from OpenCV 3.0 onward they live in the separate non-free (opencv_contrib) module.
Below you will find nicely explained sample code that uses Features2D + Homography to find a known object.
Both algorithms are invariant to scaling and rotation. Since they work with local features, they can also handle occlusion (as long as enough keypoints are visible).

Processing takes a few hundred milliseconds for SIFT; SURF is a bit faster, but neither is suitable for real-time applications.

#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/nonfree/nonfree.hpp"
 
using namespace cv;
 
void readme();
 
/** @function main */
int main( int argc, char** argv )
{
  if( argc != 3 )
  { readme(); return -1; }
 
  Mat img_object = imread( argv[1], CV_LOAD_IMAGE_GRAYSCALE );
  Mat img_scene = imread( argv[2], CV_LOAD_IMAGE_GRAYSCALE );
 
  if( !img_object.data || !img_scene.data )
  { std::cout<< " --(!) Error reading images " << std::endl; return -1; }
 
  //-- Step 1: Detect the keypoints using SURF Detector
  int minHessian = 400;
 
  SurfFeatureDetector detector( minHessian );
 
  std::vector<KeyPoint> keypoints_object, keypoints_scene;
 
  detector.detect( img_object, keypoints_object );
  detector.detect( img_scene, keypoints_scene );
 
  //-- Step 2: Calculate descriptors (feature vectors)
  SurfDescriptorExtractor extractor;
 
  Mat descriptors_object, descriptors_scene;
 
  extractor.compute( img_object, keypoints_object, descriptors_object );
  extractor.compute( img_scene, keypoints_scene, descriptors_scene );
 
  //-- Step 3: Matching descriptor vectors using FLANN matcher
  FlannBasedMatcher matcher;
  std::vector< DMatch > matches;
  matcher.match( descriptors_object, descriptors_scene, matches );
 
  double max_dist = 0; double min_dist = 100;
 
  //-- Quick calculation of max and min distances between keypoints
  for( int i = 0; i < descriptors_object.rows; i++ )
  { double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
  }
 
  printf("-- Max dist : %f \n", max_dist );
  printf("-- Min dist : %f \n", min_dist );
 
  //-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
  std::vector< DMatch > good_matches;
 
  for( int i = 0; i < descriptors_object.rows; i++ )
  { if( matches[i].distance < 3*min_dist )
     { good_matches.push_back( matches[i]); }
  }
 
  Mat img_matches;
  drawMatches( img_object, keypoints_object, img_scene, keypoints_scene,
               good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
               std::vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
 
  //-- Localize the object
  std::vector<Point2f> obj;
  std::vector<Point2f> scene;
 
  for( size_t i = 0; i < good_matches.size(); i++ )
  {
    //-- Get the keypoints from the good matches
    obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
    scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
  }
 
  Mat H = findHomography( obj, scene, CV_RANSAC );
 
  //-- Get the corners from the image_1 ( the object to be "detected" )
  std::vector<Point2f> obj_corners(4);
  obj_corners[0] = cvPoint(0,0); obj_corners[1] = cvPoint( img_object.cols, 0 );
  obj_corners[2] = cvPoint( img_object.cols, img_object.rows ); obj_corners[3] = cvPoint( 0, img_object.rows );
  std::vector<Point2f> scene_corners(4);
 
  perspectiveTransform( obj_corners, scene_corners, H);
 
  //-- Draw lines between the corners (the mapped object in the scene - image_2 )
  line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0), scene_corners[1] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
  line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0), scene_corners[2] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
  line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0), scene_corners[3] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
  line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0), scene_corners[0] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
 
  //-- Show detected matches
  imshow( "Good Matches & Object detection", img_matches );
 
  waitKey(0);
  return 0;
}
 
/** @function readme */
void readme()
{ std::cout << " Usage: ./SURF_descriptor <img1> <img2>" << std::endl; }
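
If you would rather use SIFT than SURF (slower, but often more robust), the detector and extractor can be swapped in place. A minimal sketch, assuming the same OpenCV 2.4.x non-free module as above:

#include "opencv2/nonfree/features2d.hpp"

// Drop-in replacements for Step 1 and Step 2 above: SIFT keypoint
// detector and descriptor extractor from the non-free module.
SiftFeatureDetector detector;
SiftDescriptorExtractor extractor;

The rest of the pipeline (FLANN matching, homography with RANSAC, drawing the projected corners) stays exactly the same. On a typical OpenCV 2.4.x install, the example builds with g++ and `pkg-config --cflags --libs opencv` (the pkg-config package name is an assumption and varies by system).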