
Tongue tracking

Simek Miroslav

This project is focused on tracking the tongue using only the information from a plain web camera. The majority of the approaches tried in this project failed, including edge detection, morphological reconstruction and point tracking, for various reasons such as the homogeneous appearance and variable position of the tongue.

The approach that yields usable results is the Farneback method of optical flow. This method detects the direction of movement in an image, and of the tongue specifically when it is applied to an image of the mouth alone. However, the mouth area found by the Haar cascade classifier is very shaky, so the key part is to stabilize it.

Functions used: calcOpticalFlowFarneback, CascadeClassifier.detectMultiScale

The process:

  1. Detection of the face and mouth using a Haar cascade classifier, where the mouth is searched for in the middle of the area between the nose and the bottom of the face.
    faceCascade.detectMultiScale(frame, faces, 1.1, 3, 0, Size(200, 200), Size(1000, 1000));
    mouthCascade.detectMultiScale(faceMouthAreaImage, possibleMouths, 1.1, 3, 0, Size(50, 20), Size(250, 150));
    noseCascade.detectMultiScale(faceNoseAreaImage, possibleNoses, 1.1, 3, 0, Size(20, 30), Size(150, 250));
    
  2. Stabilization of the mouth area on which the optical flow will be computed.
    const int movementDistanceThreshold = 40;
    const double movementSpeed = 0.25;
    
    int xDistance = abs(newMouth.x - mouth.x);
    int yDistance = abs(newMouth.y - mouth.y);
    
    if (xDistance + yDistance > movementDistanceThreshold)
    	moveMouthRect = true;
    
    if (moveMouthRect)
    {
    	mouth.x += (int)((double)(newMouth.x - mouth.x) * movementSpeed);
    	mouth.y += (int)((double)(newMouth.y - mouth.y) * movementSpeed);
    }
    
    if (xDistance + yDistance <= 1.0 / movementSpeed)
    	moveMouthRect = false;
    
  3. Optical flow (Farneback) of the current and previous stabilized frames from camera.
    cvtColor(img1, in1, COLOR_BGR2GRAY);
    cvtColor(img2, in2, COLOR_BGR2GRAY);
    calcOpticalFlowFarneback(in1, in2, opticalFlow, 0.5, 3, 15, 3, 5, 1.2, 0);
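The flow matrix produced above stores a displacement vector for every pixel; a minimal sketch of how the dominant movement direction could be read out of it (the variable names and thresholds are illustrative, not taken from the original code):

// Average the dense flow over the stabilized mouth area and interpret it
// as the direction in which the tongue moved in this frame.
Scalar meanFlow = mean(opticalFlow);          // meanFlow[0] = average dx, meanFlow[1] = average dy
double dx = meanFlow[0], dy = meanFlow[1];
double magnitude = sqrt(dx * dx + dy * dy);

string direction = "none";
if (magnitude > 1.0)                          // ignore tiny flow when the tongue is still
{
    if (fabs(dx) > fabs(dy))
        direction = dx > 0 ? "right" : "left";
    else
        direction = dy > 0 ? "down" : "up";
}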
    

Limitation:

  • Head movements must be minimal to none for the tracking to work correctly.
  • The actual position of the tongue is unknown. What is being tracked is the direction of the tongue’s movement at the moment the tongue moves.

Samples:

Simek_tongue


Tracking the movement of the lips

Peter Demcak

In this project, we aim to recognize gestures made by the users by moving their lips, for example: closed mouth, open mouth, wide-open mouth, puckered lips. The challenges in this task are the high homogeneity of the observed area and the rapidity of lip movements. Our first attempts at detecting these gestures are based on detecting lip movements through optical flow with the Farneback method implemented in OpenCV, or alternatively on calculating the motion gradient from a silhouette image. It appears that these methods might not be optimal for solving this problem.

OpenCV functions: cvtColor, Sobel, threshold, accumulateWeighted, calcMotionGradient, calcOpticalFlowPyrLK

Process

  1. Detect the position of the largest face in the image using OpenCV cascade classifier. Further steps will be applied using the lower half of the found face.
    faceRects = detect(frame, faceClass);
    
  2. Transform the image to the HLS color space and obtain the luminosity map of the image (a sketch of this step follows the list)
  3. Combine the results of horizontal and vertical Sobel methods to detect edges of the face features.
    Sobel(hlsChannels[1], sobelVertical, CV_32F, 0, 1, 9);
    Sobel(hlsChannels[1], sobelHorizontal, CV_32F, 1, 0, 9);
    cartToPolar(sobelHorizontal, sobelVertical, sobelMagnitude, sobelAngle, false);
    
  4. Accumulate the edge-detection images of consecutive frames on top of each other to obtain the silhouette image. To prevent amplified noise in areas without edges, apply a threshold to the Sobel map.
    threshold(sobelMagnitude, sobelMagnitude, norm(sobelMagnitude, NORM_INF)/6, 255, THRESH_TOZERO);
    accumulateWeighted(sobelMagnitude, motionHistoryImage, intensityLoss);
    
  5. Calculate the optical flow between the current and previous frame using the Farneback method implemented in OpenCV
    calcOpticalFlowFarneback(prevSobel, sobelMagnitudeCopy, flow, 0.5, 3, 15, 3, 5, 1.2, 0);
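Step 2 above has no snippet; a minimal sketch of the HLS conversion and luminosity-channel extraction, assuming the cropped lower half of the face is stored in a Mat called faceLowerHalf (an illustrative name):

// Convert the lower half of the face to HLS and keep only the luminosity plane.
Mat hls;
cvtColor(faceLowerHalf, hls, COLOR_BGR2HLS);

vector<Mat> hlsChannels;
split(hls, hlsChannels);                 // hlsChannels[1] is the L (luminosity) channel
Mat luminosity = hlsChannels[1];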
    

Sky detection using Slic superpixels

Juraj Kostolansky

This project tries to solve the problem of sky detection using the Slic superpixel segmentation algorithm.

Analysis

The first idea was to use the Slic superpixel algorithm to segment an input image and merge pairs of adjacent superpixels based on their similarity. We created a simple tool to manually evaluate the hypothesis that the sky can be separated from a photo with a single threshold. In this prototype, we compute the similarity between superpixels as the Euclidean distance between their mean colors in the RGB color space. For most images from our dataset we found a threshold which can be used for the sky segmentation process.

Next, we analyzed the colors of the images in our dataset. For each image we saved the superpixel colors of the sky and of the rest of the image in three color spaces (RGB, HSV and Lab) and plotted them. The resulting graphs are shown below (the first row of graphs represents sky colors, the second row represents the colors of the rest of the image). As we can see, the biggest difference is in the HSV and Lab color spaces. Based on this evaluation, we chose Lab as the base working color space for comparing superpixels.
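A minimal sketch of the similarity measure described above, assuming each superpixel has already been reduced to its mean color (stored as a Vec3f, here in the Lab space chosen above); the helper name is illustrative:

// Euclidean distance between the mean colors of two superpixels.
double superpixelDistance(const Vec3f& meanColor1, const Vec3f& meanColor2)
{
    Vec3f d = meanColor1 - meanColor2;
    return sqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2]);
}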

RGB

kostolansky_rgb

HSV

kostolansky_hsv

Lab
kostolansky_lab

Final algorithm

  1. Generating superpixels using Slic algorithm
  2. Replacing superpixels with their mean color values
  3. Setting threshold:
    T = [ d1 + (d2 - d1) * 0.3 ] * 1.15
    

    where:

    • d1 – average distance between superpixels in the top 10% of an image
    • d2 – average distance between superpixels in the image, without the ⅓ smallest distances

    The values 0.3 and 1.15 were chosen to give the best (universal) results for our dataset.

  4. Merging adjacent superpixels (the merge step is sketched after this list)
  5. Choosing sky – superpixel with the largest number of pixels in the first row
  6. Draw sky border (red)
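Putting the threshold and the merge decision together; d1, d2 and the mergeSuperpixels helper are placeholders for the quantities and operation described above, not code from the original project:

// Threshold derived from the distance statistics of the current image.
double T = (d1 + (d2 - d1) * 0.3) * 1.15;

// Merge every pair of adjacent superpixels whose mean-color distance is below T.
if (superpixelDistance(meanColor[i], meanColor[j]) < T)
    mergeSuperpixels(i, j);   // hypothetical helper that joins the two regions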

Sample

kostolansky_a

kostolansky_b

kostolansky_c

kostolansky_d


Cars detection

Adrian Kollar

This project started with car detection using a Haar cascade classifier. Then we focused on eliminating false positive results by using road detection. We tested the solution on a video recorded with a car dashboard camera.

Functions used: cvtColor, Canny, countNonZero, threshold, minMaxLoc, split, pow, sqrt, detectMultiScale

The Process

  1. Capture a road sample every n-th frame from a rectangle positioned statically in the frame (white rectangle in the examples). A road sample shouldn’t contain lane markings; we used Canny and countNonZero to skip samples that do.

    kollar_samples
    Road samples
  2. Calculate average road color from captured road samples

    kollar_avg_color
    Average road color
  3. Convert the image and the average road sample to the Lab color space.
  4. For each pixel of the input image, calculate the distance shown below (a code sketch of this step follows the list):

    kollar_equation

    where L, A, B are the values from the input image and l, a, b are the values from the average road sample.

  5. Binarize the result using the threshold function.
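The equation image above is not reproduced here, but given the listed functions (split, pow, sqrt) the per-pixel value is presumably the Euclidean distance in Lab space between the pixel and the average road color. A minimal sketch under that assumption (labImage, avgL/avgA/avgB, roadMask and roadThreshold are illustrative names):

// Per-pixel Euclidean distance between the Lab input image and the average
// road color (avgL, avgA, avgB). Small distances mark road-like pixels.
vector<Mat> ch;
split(labImage, ch);                       // ch[0] = L, ch[1] = A, ch[2] = B
ch[0].convertTo(ch[0], CV_32F);
ch[1].convertTo(ch[1], CV_32F);
ch[2].convertTo(ch[2], CV_32F);

Mat dL, dA, dB, distance;
pow(ch[0] - avgL, 2, dL);
pow(ch[1] - avgA, 2, dA);
pow(ch[2] - avgB, 2, dB);
sqrt(dL + dA + dB, distance);

// Step 5: pixels close enough to the average road color become the road mask.
Mat roadMask;
double roadThreshold = 30.0;               // illustrative value
threshold(distance, roadMask, roadThreshold, 255, THRESH_BINARY_INV);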

Example

Kollar_car_detection
Input image; the detected car is in the red rectangle
kollar_detection
Road detection

Bag of visual words in OpenCV

Jan Kundrac

Bag of visual words (BOW) representation is based on the bag of words from text processing. The method requires the following from a basic user:

  • an image dataset split into image groups, or
  • a precomputed image dataset and group histogram representation stored in an .xml or .yml file (see the XML/YAML Persistence chapter of the OpenCV documentation),
  • and at least one image to compare via BOW

The image dataset is stored in a folder (of any name) with subfolders named after the group names; each subfolder contains the images of that group. BOW generates and stores the descriptors and histograms in the specified output .xml or .yml file.

BOW works as follows (compare with Figures 1 and 2):

  • compute the visual word vocabulary with the k-means algorithm (where k equals the number of visual words in the vocabulary). The vocabulary is stored in the output file. This takes about 30 minutes on 8 CPU cores with k = 500 and an image count of 150. OpenMP is used to improve performance.
  • compute the group histograms (two methods are implemented for this purpose – median and average histogram; only the median is used because it gives better results). This part requires the vocabulary to be computed. A group histogram is a normalized histogram, i.e. the sum of all its columns equals 1.
  • compute the histogram of the input picture and compare it with all group histograms to determine which group the image belongs to. This comparison is implemented as histogram intersection.

As seen in Figure 2, the whole vocabulary and group histogram computation may be skipped if they have already been computed.

BOW
Figure 1: BOW tactic
kandrac_bow_flowchart
Figure 2: Flowchart for whole BOW implementation

To simplify usage, I have implemented the BOWProperties class as a singleton which holds basic information and settings such as the BOWDescriptorExtractor, the BOWTrainer, whether images are read as grayscale, and the method for obtaining descriptors (SIFT and SURF are currently implemented and ready to use). An example of the implementation:

BOWProperties* BOWProperties::setFeatureDetector(const string type, int featuresCount)
{
	Ptr<FeatureDetector> featureDetector;
	if (type.compare(SURF_TYPE) == 0)
	{
		if (featuresCount == UNDEFINED) featureDetector = new SurfFeatureDetector();
		else featureDetector = new SurfFeatureDetector(featuresCount);
	}
	...
}

This is how all the other properties are set. The only thing the user has to do is set the properties and run the classification.

In my implementation there is, in most cases, a single DataSet object holding references to the groups, and a number of Group objects holding references to the images in each group. The training implementation:

DataSet part:

void DataSet::trainBOW()
{
	BOWProperties* properties = BOWProperties::Instance();
	Mat vocabulary;
	// read the vocabulary from the file; if it does not exist, compute it
	if (!Utils::readMatrix(properties->getMatrixStorage(), vocabulary, "vocabulary"))
	{
		for (Group& group : groups)
			group.trainBOW();
		vocabulary = properties->getBowTrainer()->cluster();
		Utils::saveMatrix(properties->getMatrixStorage(), vocabulary, "vocabulary");
	}
	BOWProperties::Instance()
		->getBOWImageDescriptorExtractor()
		->setVocabulary(vocabulary);
}

Group part (notice OpenMP usage for parallelization):

unsigned Group::trainBOW()
{
	unsigned descriptor_count = 0;
	Ptr<BOWKMeansTrainer> trainer = BOWProperties::Instance()->getBowTrainer();
	
	#pragma omp parallel for shared(trainer, descriptor_count)
	for (int i = 0; i < (int)images.size(); i++){
		Mat descriptors = images[i].getDescriptors();
		#pragma omp critical
		{
			trainer->add(descriptors);
			descriptor_count += descriptors.rows;
		}
	}
	return descriptor_count;
}

This part of the code generates and stores the vocabulary. The getDescriptors() method returns the descriptors of the current image via the DescriptorExtractor class. The next part shows how the group histograms are computed:

void Group::trainGroupClassifier()
{
	if (!Utils::readMatrix(properties->getMatrixStorage(), groupClasifier, name))
	{
		groupHistograms = getHistograms(groupHistograms);
		medianHistogram = Utils::getMedianHistogram(groupHistograms, groupClasifier);
		Utils::saveMatrix(properties->getMatrixStorage(), medianHistogram, name);
	}
}

Here the getMedianHistogram() method generates the median histogram from the histograms representing the images of the current group.

Now the vocabulary and the histogram classifiers are computed and stored. The last part is comparing a new image with the classifiers.

Group DataSet::getImageClass(Image image)
{
	double bestFit = 0.0, currentFit;
	int bestFitPos = 0;
	for (unsigned i = 0; i < groups.size(); i++)
	{
		currentFit = Utils::getHistogramIntersection(groups[i].getGroupClasifier(), image.getHistogram());
		if (currentFit > bestFit){
			bestFit = currentFit;
			bestFitPos = i;
		}
	}
	return groups[bestFitPos];
}

The returned group is the one to which the image most probably belongs. Nearly every piece of code above is slightly simplified, but it shows the basic idea. For more detailed code, see the sources.
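The histogram intersection called in getImageClass() is not listed above; a minimal sketch of what Utils::getHistogramIntersection could look like for normalized histograms stored as single-row float matrices (my own illustration, not the original code):

double getHistogramIntersection(const Mat& groupHistogram, const Mat& imageHistogram)
{
	// Sum of element-wise minima; for normalized histograms the result lies
	// in [0, 1], and higher values mean more similar histograms.
	double intersection = 0.0;
	for (int i = 0; i < groupHistogram.cols; i++)
		intersection += std::min(groupHistogram.at<float>(0, i),
		                         imageHistogram.at<float>(0, i));
	return intersection;
}

OpenCV's compareHist with the CV_COMP_INTERSECT flag computes the same measure.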


(For the complete code see the GitHub repository: https://github.com/VizGhar/BOW/tree/develop)



Visual Finger Counter

Gabriela Brndiarova

The aim of this project was to implement a finger counter with OpenCV. Input from an ordinary webcam is used, and it is possible to get real-time results. First, we segment the hand using the CamShift algorithm. After that, we obtain the hand contour and its convexity defects. A very simple algorithm is then used to count the fingers from the known convexity defects.

Functions used: calcHist, calcBackProject, CamShift, threshold, morphologyEx, findContours, convexHull, convexityDefects

Input

brndiarova_input

The process

  1. Selecting a little square on the hand with the mouse (a sketch of the mouse callback follows the list).
  2. Calculating a histogram from the selected region.
    Mat frame, hsv, hue, mask, hist = Mat::zeros(200, 320, CV_8UC3);
    cvtColor(frame, hsv, COLOR_BGR2HSV);                 // frame is the current webcam image
    inRange(hsv, Scalar(0, smin, 10), Scalar(180, 256, 256), mask);
    hue.create(hsv.size(), hsv.depth());
    int ch[] = { 0, 0 };                                 // copy the hue channel only
    mixChannels(&hsv, 1, &hue, 1, ch, 1);
    Mat roi(hue, selection), maskroi(mask, selection);
    calcHist(&roi, 1, 0, maskroi, hist, 1, &hsize, &phranges);
    
  3. Getting back projection of image.
    calcBackProject(&hue, 1, 0, hist, backproj, &phranges);
    
  4. Applying CamShift to track the hand region.
    CamShift(backproj, trackWindow, TermCriteria( CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 10, 1 ));
    
  5. Manually enlarging the selection (it is important to have the whole fingers in the selection), but not too much because of the face – if the face gets into the selection, hand segmentation is no longer possible.
  6. The hand selection is cut out and rotated to the natural position for a human (fingers pointing up).
    int angle = trackBox.angle;
    Size rect_size = trackBox.size;
    if (angle >90){
    	angle -= 180;
    	angle *= -1;
    }
    M = getRotationMatrix2D(trackBox.center, angle, 1.0);
    warpAffine(backprojMask, rotatedMask, M, backprojMask.size(), INTER_CUBIC);
    getRectSubPix(rotatedMask, rect_size, trackBox.center, croppedMask);
    
  7. Threshold application.
    threshold(croppedMask, croppedMask, tresholdValue, 255 , THRESH_BINARY);
    
  8. Applying morphological closing.
    Mat structElem = getStructuringElement(MORPH_ELLIPSE, Size(elemSize,elemSize));
    morphologyEx(croppedMask, croppedMask, MORPH_CLOSE, structElem);
    
  9. Getting all contours and selecting the largest one – the contour of the hand. Small contours are just some kind of noise.
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(croppedMask, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
    
    double largestArea = 0.0;
    int largestContourIndex = 0;
    for (int i = 0; i < contours.size(); i++){
        double a = contourArea(contours[i], false);
        if (a > largestArea){
            largestArea = a;
            largestContourIndex = i;
        }
    }
    
  10. Getting convexity defects.
    vector<vector<int> > hulls (1);
    convexHull(contours[largestContourIndex], hulls[0], false, false);
    std::vector<Vec4i> defects;
    convexityDefects(contours[largestContourIndex], hulls[0], defects);
    
  11. Counting fingers using convexity defects. We do not count convexity defects that are too shallow, nor defects whose start and end points are too far apart or too close together. This is how the defects between fingers are filtered out. The number of fingers is always the number of defects plus one. It is not the best method, but for the purposes of this project it is good enough.
    int fingerCount = 1;
    for (int i = 0; i< defects.size(); i++){
    	int start_index = defects[i][0];
    	CvPoint start_point = contours[largestContourIndex][start_index];
    	int end_index = defects[i][1];
    	CvPoint end_point = contours[largestContourIndex][end_index];
    	double d1 = (end_point.x - start_point.x);
    	double d2 = (end_point.y - start_point.y);
    	double distance = sqrt((d1*d1)+(d2*d2));
    	int depth_index = defects[i][2];
    	int depth =  defects[i][3]/1000;
    
    	if (depth > 10 && distance > 2.0 && distance < 200.0){
    		fingerCount ++;
    	}
    }
    
  12. The previous steps run really fast, so it is not practical to show a new result after every single iteration, and a single result may be wrong because of a small mistake or noise. This is why we decided to show the average value of the last 15 cycles as the result.
    countValue[iCV%15] = fingerCount;   // finger count from the current frame
    iCV++;
    
    int count = 0;
    for (int i=0; i<15; i++){
    	count += countValue[i];
    }
    count = count/15;
    
    stringstream ss;
    ss << count;
    string str = ss.str();
    Point textOrg(10, 130);
    putText(input, str, textOrg, 1, 3, Scalar(0,255,255), 3);
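Step 1 (selecting the small square with the mouse) has no snippet above; a minimal sketch of the usual OpenCV mouse-callback approach, with illustrative names (selection is the rectangle later used for calcHist, "Finger counter" is an assumed window name):

// Globals updated by the mouse callback.
Rect selection;
Point origin;
bool selecting = false;

static void onMouse(int event, int x, int y, int, void*)
{
    if (selecting)
    {
        // update the selected square while the button is held down
        selection.x = min(x, origin.x);
        selection.y = min(y, origin.y);
        selection.width = abs(x - origin.x);
        selection.height = abs(y - origin.y);
    }
    if (event == EVENT_LBUTTONDOWN)
    {
        origin = Point(x, y);
        selection = Rect(x, y, 0, 0);
        selecting = true;
    }
    else if (event == EVENT_LBUTTONUP)
    {
        selecting = false;
    }
}

// registered once before the main loop
setMouseCallback("Finger counter", onMouse, 0);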
    

Sample

brndiarova_back_projection
Back projection
brndiarova_camshift
CamShift
brndiarova_threshold
Threshold and morphology closing
brndiarova_convexity
Convexity defects

Result
brndiarova_result


TranSign, Android Sign Translator

This project shows text extraction from an input image. It is used for translating road sign texts. First, the image is preprocessed using OpenCV functions, and then the text of the road sign is detected and extracted.

Input

The process

  1. Image preprocessing
    Imgproc.cvtColor(img, img, Imgproc.COLOR_BGR2GRAY);
    Imgproc.GaussianBlur(img, img, new Size(5,5), 0);
    Imgproc.Sobel(img, img, CvType.CV_8U, 1, 0, 3, 1, 0);
    Imgproc.threshold(img, img, 0, 255, Imgproc.THRESH_OTSU + Imgproc.THRESH_BINARY);
    
  2. Contour detection
    List<MatOfPoint> contours = new ArrayList<>();
    Imgproc.findContours(img, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE);
    
  3. Deleting contours on the edges, small contours, contours with a wrong aspect ratio and contours with a wrong histogram
  4. Preprocessing before extraction
  5. Extraction
    TessBaseAPI baseApi = new TessBaseAPI();
    baseApi.init(TESSBASE_PATH, DEFAULT_LANGUAGE);
    baseApi.setImage(bm);
    String recognizedText = baseApi.getUTF8Text();
    
  6. Translation

Sample

Preprocessing – converting to greyscale, Gaussian blurring, Sobel, binary threshold + Otsu’s, morphological closing
Contour detection and deleting wrong contours
Preprocessing before extraction
Extraction
Translation

Extracting the position of game board & recognition of game board pieces

This project focuses on the usage of computer vision within the field of board games. We propose a new approach for extracting the position of the game board, which consists of detecting the empty fields based on contour analysis and ellipse fitting, locating the key points using probabilistic Hough lines, and finding the homography from these key points.

Functions used: Canny, findContours, fitEllipse, HoughLinesP, findHomography, warpPerspective, chamerMatching

Input

The process

  1. Canny edge detector
    Mat canny;
    Canny(img, canny, 100, 170, 3);
    
  2. Contour analysis – extracting contours and filtering out those that don’t match our criteria
    vector<vector<Point>> contours;
    vector<Vec4i> hierarchy;
    findContours(canny, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_NONE);
    
  3. Ellipse fitting – further analysis of contours, final extraction of empty fields
    RotatedRect e = fitEllipse(contours[i]);
    
  4. Extraction of the game board model – 4 key points are needed for locating this model
  5. Locating the key points in the input image – using Hough lines & analysing their intersections (the intersection test is sketched after this list)
    Mat grayCpy;
    vector<Vec4i> lines;
    HoughLinesP(grayCpy, lines, 1, CV_PI/180, 26, 200, 300);
    
  6. Finding the homography and final projection of the game board model into the input image
    findHomography(Mat(modelKeyPoints), Mat(keyPoints));
    warpPerspective(modelImg, newImg, h, Size(imgWithEmptyFieldsDots.cols, imgWithEmptyFieldsDots.rows), CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS);
    chamerMatching(canny, piece, results, costs, 1.0, 30, 1.0, 3, 3, 5, 0.9, 1.1);
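Step 5 only shows the Hough transform itself; a minimal sketch of how the intersection of two detected segments could be computed when they are treated as infinite lines (purely illustrative, not the project's original code):

// Intersection of two lines given by endpoint pairs (x1, y1, x2, y2).
// Returns false when the lines are (nearly) parallel.
bool lineIntersection(const Vec4i& a, const Vec4i& b, Point2f& intersection)
{
    Point2f p(a[0], a[1]), r(a[2] - a[0], a[3] - a[1]);
    Point2f q(b[0], b[1]), s(b[2] - b[0], b[3] - b[1]);

    float denom = r.x * s.y - r.y * s.x;
    if (fabs(denom) < 1e-6)
        return false;                       // parallel, no usable intersection

    float t = ((q.x - p.x) * s.y - (q.y - p.y) * s.x) / denom;
    intersection = Point2f(p.x + t * r.x, p.y + t * r.y);
    return true;
}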
    

Sample

Canny detector
Finding contours
Ellipse fitting I
Ellipse fitting II
Finding four key points
Probabilistic Hough lines
Finding homography

Result

Projection of the game board model into the input image
Mat findImageContours(const Mat& img, vector<vector<Point> >& contours, vector<Vec4i>& hierarchy)
{
    // detect edges using canny:
    Mat canny;
    Canny(img, canny, 100, 170, 3);

    findContours(canny, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_NONE);

    // draw contours:   
    Mat imgWithContours = Mat::zeros(canny.size(), CV_8UC3);
    for (unsigned int i = 0; i < contours.size(); i++)
    {
        // process "holes" only:
        if (hierarchy[i][3] == -1) continue;
        // apply ratio + size + contourArea filters:
        if (!checkContour3(contours[i])) continue;

        // fit and draw ellipse:
        RotatedRect e = fitEllipse(contours[i]);
        if (e.size.height < 50)
        {
            line(imgWithContours, e.center, e.center, Scalar(255, 255, 255),3);
        }
    }
    return imgWithContours;
}

Detection of map contour lines

This project shows a possible way of finding contour lines on maps. The following properties of contour lines are considered here:

  • contour lines are closed or they end at the edges of the map,
  • in some sections, several neighboring contour lines are nearly parallel,
  • they are mostly only slightly curved (contour lines do not have sharp angles like roads or buildings).

The algorithm uses the OpenCV library.

Functions used: cv::medianBlur, cv::Sobel, cv::magnitude

The process

  1. Image preprocessing – using median blur
    cv::Mat bl;
    cv::medianBlur(input, bl, params_.medianBlurKSize);
    
  2. Detecting lines and their directions – using the Sobel filter (magnitudes are obtained using the magnitude function and directions are computed with atan2 from the horizontal and vertical gradients; see the sketch after this list)
    cv::Mat_<double> grad_x, grad_y;
    cv::Sobel(beforeSobel, grad_x, CV_64F, 1, 0, params_.sobelKSize);
    cv::Sobel(beforeSobel, grad_y, CV_64F, 0, 1, params_.sobelKSize);
    
  3. Finding some contour line seeds – points at lines with approximately equal directions.
    cv::Mat_<double> magnitude;
    cv::magnitude(grad_x, grad_y, magnitude);
    
  4. Tracing lines beginning at the seeds – we go from each seed in both directions to find the line, while checking that the curvature does not exceed a threshold (lines that curve more are probably not contour lines).
  5. Filtering of the traced lines – only the lines having both ends at the image boundaries or the closed lines are considered as the map contour lines.
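The gradient directions mentioned in step 2 can be read directly from the two Sobel images; a short sketch (cv::phase is an assumption on my part – the project may compute atan2 per pixel instead):

cv::Mat_<double> direction;
cv::phase(grad_x, grad_y, direction, true);   // true = angles in degrees
// per-pixel equivalent: direction(y, x) = atan2(grad_y(y, x), grad_x(y, x))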
Input image.
Finding some contour line seeds.
Result – contour lines detected.

The result image shows a map with some contour lines detected. The seeds and line points are marked as follows:

  • yellow – seed points
  • red – closed line points
  • green – points of the first part of a line ending at the image edge
  • blue – points of the second part of a line ending at the image edge

Problems and possible improvements

These algorithm properties cause problems and need to be considered in the algorithm improvements:

  • line intersections are not detected – one line from each pair of intersecting lines should always be removed,
  • the algorithm uses a global magnitude threshold (the threshold determines if a point belongs to a line), but the line intensities change in most images,
  • the algorithm has too many parameters which were not generalized to match more possible images,
  • some contour lines are not continuous (they are split by labels) and thus are not detected by the algorithm.


Object recognition (RANSAC verification)

This project shows object recognition using local feature-based methods. We use four detector/descriptor combinations for keypoint detection and description: SIFT/SIFT, SURF/SURF, FAST/FREAK and ORB/ORB. The keypoints are used to compute a homography. The object is located in the scene with the RANSAC algorithm, and RGB and hue-saturation histograms are used for RANSAC verification.

Functions used: FeatureDetector::detect, DescriptorExtractor::compute, knnMatch, findHomography, warpPerspective, calcHist, compareHist

Input

The process

  1. Keypoints detection
    FeatureDetector * detector;
    detector = new SiftFeatureDetector();
    detector->detect( image, key_points_image );
    
    DescriptorExtractor * extractor;
    extractor = new SiftDescriptorExtractor();
    extractor->compute( image, key_points_image, des_image );
    
  2. Keypoints description
  3. Keypoints matching (selection of the good matches is sketched after this list)
    DescriptorMatcher * matcher;
    matcher = new BruteForceMatcher<L2<float>>();
    matcher->knnMatch(des_object, des_image, matches, 2);
    
  4. Calculating homography
    findHomography( obj, scene, CV_RANSAC );
    
  5. Histograms matching
    calcHist( &hsv_img_object, 1, channels, Mat(), hist_img_object, 2, histSize, ranges, true, false );
    compareHist( b_hist_object, b_hist_quad, CV_COMP_BHATTACHARYYA );
    
  6. Outline recognized object
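Since knnMatch is called with k = 2 in step 3, the good matches used later for the homography are presumably selected with a ratio test; a sketch of that filtering (the 0.7 ratio is a common choice, not necessarily the value used here):

// Lowe's ratio test: keep a match only when it is clearly better than the
// second-best candidate for the same keypoint.
vector<DMatch> good_matches;
for (size_t i = 0; i < matches.size(); i++)
{
    if (matches[i].size() == 2 &&
        matches[i][0].distance < 0.7f * matches[i][1].distance)
    {
        good_matches.push_back(matches[i][0]);
    }
}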

Sample

Detecting keypoints
Finding matches
Object recognition and RANSAC verification (green outline)
Object recognition and RANSAC failure (red outline)
drawMatches( gray_object, key_points_object, image,
             key_points_image, good_matches, img_matches,
             Scalar::all(-1), Scalar::all(-1), vector<char>(),
             DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

if (good_matches.size() >= 4)
{
	for( int i = 0; i < good_matches.size(); i++ )
	{
		obj.push_back( key_points_object[ good_matches[i].queryIdx ].pt );
		scene.push_back( key_points_image[ good_matches[i].trainIdx ].pt );
	}

	H = findHomography( obj, scene, CV_RANSAC );

	perspectiveTransform( obj_corners, scene_corners, H);

	Mat quad = Mat::zeros(rgb_object.rows, rgb_object.cols, CV_8UC3);

	// warping the object back to the template rotation
	warpPerspective(frame, quad, H.inv(), quad.size());

	...

Euro money bill recognition

The project shows detection and recognition of euro banknotes from an input image (webcam). For each existing euro banknote a template is chosen that contains the number value of the bill as well as its structure. For matching the templates with input images, a FLANN-based matcher of local descriptors extracted by the SURF algorithm is used (a sketch of how the best-matching template could be picked follows the process list).

Functions used: medianBlur, FlannBasedMatcher, SurfFeatureDetector, SurfDescriptorExtractor, findHomography

Process

  1. Preprocessing – Conversion to grayscale + median filter
    cvtColor(input_image_color, input_image, CV_RGB2GRAY);
    medianBlur(input_image, input_image, 3);
    
  2. Compute local descriptors
    SurfFeatureDetector detector( minHessian );
    vector<KeyPoint> template_keypoints;
    detector.detect( money_template, template_keypoints );
    SurfDescriptorExtractor extractor;
    extractor.compute( money_template, template_keypoints, template_image );
    detector.detect( input_image, input_keypoints );
    extractor.compute( input_image, input_keypoints, destination_image );
    
  3. Matching local descriptors
    FlannBasedMatcher matcher;
    matcher.knnMatch(template_image, destination_image, matches, 2);
    
  4. Finding homography and drawing output
    Mat H = findHomography( template_object_points, input_object_points, CV_RANSAC );
    perspectiveTransform( template_corners, input_corners, H);
    drawLinesToOutput(input_corners, img_matches, money_template.cols);
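The post does not show how the best of the several banknote templates is selected; one straightforward possibility, purely as an illustration (templateDescriptors would hold the SURF descriptors of each denomination's template):

// Pick the denomination whose template collects the most good matches
// (ratio test on the knnMatch output) against the input image descriptors.
int bestTemplate = -1;
size_t bestGoodMatches = 0;

for (size_t t = 0; t < templateDescriptors.size(); t++)
{
    vector<vector<DMatch> > knn;
    matcher.knnMatch(templateDescriptors[t], destination_image, knn, 2);

    size_t goodMatches = 0;
    for (size_t i = 0; i < knn.size(); i++)
        if (knn[i].size() == 2 && knn[i][0].distance < 0.7f * knn[i][1].distance)
            goodMatches++;

    if (goodMatches > bestGoodMatches)
    {
        bestGoodMatches = goodMatches;
        bestTemplate = (int)t;
    }
}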
    

Sample

Matching local descriptors
Result – identified object

Detection of cities and buildings in the images

The project is focused on detecting images whose major components are cities and buildings. The detection of buildings and cities assumes the occurrence of edges, implied by windows and walls, as well as the presence of the sky. The algorithm builds a feature vector and classifies it with the SVM algorithm.

Functions used: HoughLinesP, countNonZero, Sobel, threshold, merge, cvtColor, split, CvSVM

The process

  1. Create edge image
    cv::Sobel(input, grad_x, CV_16S, 1, 0, 3, 1, 0, cv::BORDER_DEFAULT);
    cv::Sobel(input, grad_y, CV_16S, 0, 1, 3, 1, 0, cv::BORDER_DEFAULT);
    
  2. Find lines in the binary edge image
    cv::HoughLinesP(edgeImage, edgeLines, 1, CV_PI / 180.0, 1, 10, 0);
    
  3. Count the number of lines at specified tilts (a sketch of this step follows the list)
  4. Convert original image to HSV color space and remove saturation and value
    cv::cvtColor(src, hsv, CV_BGR2HSV);
    
  5. Process the image from top to bottom: if a pixel is not blue, then all pixels below it are not sky
  6. Classification with SVM
    CvSVMParams params;
    params.svm_type  = CvSVM::C_SVC;
    params.kernel_type = CvSVM::LINEAR;
    params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER, 5000, 1e-5);
    
    float OpencvSVM::predicate(std::vector<float> features)
    {
       std::vector<std::vector<float> > featuresMatrix;
       featuresMatrix.push_back(features);
       cv::Mat featuresMat = createMat(featuresMatrix);
       return SVM.predict(featuresMat);
    }
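Step 3 has no snippet above; a minimal sketch of counting the detected Hough segments by their tilt (the bucket boundaries are illustrative, not the project's actual values):

// Classify every detected segment as roughly horizontal, vertical or diagonal
// and count them; these counts can then go into the SVM feature vector.
int horizontal = 0, vertical = 0, diagonal = 0;
for (size_t i = 0; i < edgeLines.size(); i++)
{
    const cv::Vec4i& l = edgeLines[i];
    double angle = std::fabs(std::atan2((double)(l[3] - l[1]),
                                        (double)(l[2] - l[0]))) * 180.0 / CV_PI;
    if (angle > 90.0) angle = 180.0 - angle;   // fold into [0, 90] degrees

    if (angle < 15.0)      horizontal++;
    else if (angle > 75.0) vertical++;
    else                   diagonal++;
}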
    

Example

Original image
Edge image
Highlighted image
Hue factor
Detected sky