This project shows text extraction from an input image, intended for translating road sign text. First, the image is preprocessed using OpenCV functions, and then the text on the road sign is detected and extracted.
This project focuses on the use of computer vision in the field of board games. We propose a new approach for extracting the position of the game board, which consists of detecting empty fields based on contour analysis and ellipse fitting, locating key points using probabilistic Hough lines, and finding the homography from these key points.
Tracing lines beginning at the seeds – from each seed we trace in both directions to follow the line, checking that its curvature does not exceed a threshold (strongly curved lines are probably not contour lines).
Filtering the traced lines – only lines with both ends at the image boundary, or closed lines, are considered map contour lines.
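The curvature test used during tracing can be sketched in plain C++ (a simplified, OpenCV-free illustration; the function names and the angle threshold are our own assumptions, not taken from the project code):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Maximum turning angle (radians) between consecutive segments of a traced line.
double maxTurnAngle(const std::vector<Pt>& line) {
    double maxA = 0.0;
    for (size_t i = 2; i < line.size(); ++i) {
        double ax = line[i-1].x - line[i-2].x, ay = line[i-1].y - line[i-2].y;
        double bx = line[i].x - line[i-1].x,  by = line[i].y - line[i-1].y;
        double na = std::hypot(ax, ay), nb = std::hypot(bx, by);
        if (na == 0 || nb == 0) continue;
        // Clamp the cosine into [-1, 1] to guard against rounding errors.
        double c = std::max(-1.0, std::min(1.0, (ax*bx + ay*by) / (na*nb)));
        maxA = std::max(maxA, std::acos(c));
    }
    return maxA;
}

// A line is kept only while its sharpest bend stays below the threshold.
bool isSmoothEnough(const std::vector<Pt>& line, double maxAngleRad) {
    return maxTurnAngle(line) <= maxAngleRad;
}
```

A gently bending polyline passes the test, while a 90° corner is rejected, which is how a tracer could discard lines that are "too curved" to be contour lines.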
The resulting image shows a map with the detected contour lines. The seeds and line points are marked as follows:
yellow – seed points
red – closed line points
green – points of the first part of a line ending at the image edge
blue – points of the second part of a line ending at the image edge
Problems and possible improvements
The following properties of the algorithm cause problems and should be addressed in future improvements:
line intersections are not detected – one line of each pair of intersecting lines should always be removed,
the algorithm uses a global magnitude threshold (which determines whether a point belongs to a line), but line intensities vary within most images,
the algorithm has too many parameters, which have not been generalized to handle a wider range of images,
some contour lines are not continuous (they are split by labels) and are therefore not detected by the algorithm.
This project shows object recognition using local-feature-based methods. We use four detector/descriptor combinations: SIFT/SIFT, SURF/SURF, FAST/FREAK and ORB/ORB. The matched keypoints are used to compute a homography, and the object is located in the scene with the RANSAC algorithm. RGB and hue-saturation histograms are used to verify the RANSAC result.
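Once the homography is estimated, locating the object amounts to projecting its corner points into the scene. A minimal sketch of that projective mapping (plain C++, row-major 3×3 matrix; the function name is our own):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Apply a 3x3 homography H (row-major) to a 2D point. This is the mapping
// used to transfer the object's corners into the scene once H has been
// estimated from keypoint matches (e.g. with RANSAC).
std::array<double, 2> applyHomography(const std::array<double, 9>& H,
                                      double x, double y) {
    double xs = H[0]*x + H[1]*y + H[2];
    double ys = H[3]*x + H[4]*y + H[5];
    double w  = H[6]*x + H[7]*y + H[8];
    return {xs / w, ys / w};   // divide out the projective scale
}
```

For a pure translation homography the result is simply the shifted point, which makes the behaviour easy to check.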
We detect the gesture of an opened and closed hand with the Kinect sensor. The hand state is divided into two classes: opened (palm) and closed (fist). We assume the hand is rotated parallel to the sensor and is captured in profile.
Get the point in the middle of the hand and crop a window around it
Point pointHand(handFrameSize.width, handFrameSize.height);
Rect rectHand = Rect(pos - pointHand, pos + pointHand);
Mat depthExtractTemp = depthImageGray(rectHand); //extract hand image from depth image
Mat depthExtract(handFrameSize.height * 2, handFrameSize.width * 2, CV_8UC1);
Find the minimum depth value in the window
int tempDepthValue = getMinValue16(depthExtractTemp);
Convert the window from 16-bit to 8-bit, using the minimum depth as the mean value
The project shows detection and recognition of euro banknotes from an input image (webcam). For each existing euro banknote, a template is chosen that contains the bill's number value as well as its structure. A FLANN-based matcher of local descriptors extracted by the SURF algorithm is used to match the templates against input images.
The project focuses on detecting images whose major components are cities and buildings. Building and city detection assumes the occurrence of edges implied by windows and walls, as well as the presence of sky. The algorithm builds a feature vector and classifies it with the SVM algorithm.
Recognizing a car and finding its licence plate is a popular topic for school projects, and there are also many commercial systems. This project shows how to recognize cars and their plates from a video recording or a live stream. With a little modification it could be used to improve some parking systems. The idea of the algorithm is the absolute difference between frames, plus a lot of testing.
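The frame-differencing idea can be sketched in a few lines of plain C++ (OpenCV's `absdiff` does the same per-pixel work on `Mat`s; the function name and threshold here are our own):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Count pixels whose absolute difference between two consecutive grayscale
// frames exceeds a threshold -- a crude motion detector for passing cars.
int changedPixels(const std::vector<uint8_t>& prev,
                  const std::vector<uint8_t>& curr, int threshold) {
    int count = 0;
    for (size_t i = 0; i < prev.size(); ++i)
        if (std::abs(int(prev[i]) - int(curr[i])) > threshold) ++count;
    return count;
}
```

When the count of changed pixels spikes, something large (such as a car) has moved through the frame, and the plate search can be restricted to those frames.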
The goal of this project is to implement an algorithm that extracts similar points or whole regions from two different images of the same building, using the OpenCV library and in particular the MSER algorithm (Maximally Stable Extremal Regions). The images of the building are taken at different times and differ in hue, saturation, lighting and other conditions.
Based on the extracted regions, the algorithm finds the matching centers of the key regions and merges the images at these points to create a complete image of the building, presenting its historical changes.
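The simplest alignment that matched region centres allow is a translation estimate; a full solution would fit a homography to the centre correspondences, so the following plain C++ sketch is a deliberate simplification with names of our own choosing:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Estimate the translation that aligns image B to image A from pairs of
// matched region centres (centersA[i] corresponds to centersB[i]).
Pt meanOffset(const std::vector<Pt>& centersA, const std::vector<Pt>& centersB) {
    Pt off{0, 0};
    for (size_t i = 0; i < centersA.size(); ++i) {
        off.x += centersA[i].x - centersB[i].x;
        off.y += centersA[i].y - centersB[i].y;
    }
    off.x /= centersA.size();   // average over all matched pairs
    off.y /= centersA.size();
    return off;
}
```

Averaging over several matched centres also damps the effect of a single slightly misplaced region.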
In computer vision and object recognition, there are three main areas – object classification, detection and segmentation. The classification task deals only with assigning an image to a class (for example bicycle, dog, cactus, etc.), the detection task additionally deals with detecting the position of the object in an image, and the segmentation task deals with finding the detailed contours of the object. Bag of words is a method that belongs to the classification problem.
Compute the histogram that counts how many times each centroid occurs in each image: for each local feature vector, find the nearest centroid and increment its bin.
We trained our model on 240 images from 3 different classes – bonsai, Buddha and porcupine. We then computed the histogram counting how many times each centroid occurred in each image: for each local feature vector we compared its distance to every centroid and incremented the bin of the closest one. We used 1000 cluster centers.