
Medical image segmentation

Martin Tamajka

In this project, our goal was to apply image segmentation techniques to dense volumes of standard medical data.

Oversegmentation

Our method is based on oversegmentation into supervoxels (similar to superpixels, but in a 3D volume). Such oversegmentation dramatically decreases processing time and has many other advantages over working directly with voxels. Oversegmentation is done using the SLIC algorithm (http://www.kev-smith.com/papers/SLIC_Superpixels.pdf). The implementation we use was created by the authors and does not depend on any other library. This comes at the cost of having to transform images from the OpenCV format into plain C++ arrays. SLIC allows choosing a trade-off between supervoxel compactness (regularity of shape) and intensity homogeneity. In our work, we decided to prefer homogeneity over regularity, because different tissues in anatomical organs have their typical intensities.
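
Because the SLIC implementation works on plain arrays, each OpenCV slice must first be copied out. The helper below is a minimal sketch assuming 8-bit single-channel slices of equal size and z-major voxel ordering; the function name and layout are our assumptions, not part of the original code.

// Minimal sketch: copy 8-bit single-channel OpenCV slices into one contiguous
// C++ array for the standalone SLIC implementation (the name and the z-major
// voxel ordering are assumptions).
#include <opencv2/core/core.hpp>
#include <vector>

std::vector<unsigned char> volumeToArray(const std::vector<cv::Mat>& slices)
{
	const int w = slices[0].cols;
	const int h = slices[0].rows;
	std::vector<unsigned char> voxels(slices.size() * w * h);

	for (size_t z = 0; z < slices.size(); z++)
		for (int y = 0; y < h; y++)
			for (int x = 0; x < w; x++)
				voxels[z * w * h + y * w + x] = slices[z].at<unsigned char>(y, x);

	return voxels;
}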

tamajka_oversegmentation
Oversegmented MRI slice – it can be seen that supervoxels adhere to tissue boundaries.

Merging supervoxels

After oversegmenting the volume, we create an object representation of it. Our representation class (SLIC3D) has the following attributes:

vector<Supervoxel*>			m_supervoxels;		// all supervoxels in the volume
std::unordered_map<int, Supervoxel*>	m_supervoxelsMap;	// label -> supervoxel lookup
int	m_height;
int	m_width;
int	m_depth;

The most important one is the vector of Supervoxel pointers. Supervoxel is our basic class: it provides important information about the contained voxels, can generate features to be used in the classification process, and (very importantly) knows its neighbouring supervoxels. Currently, Supervoxel can generate 3 kinds of features:

// Intensity of the supervoxel centroid.
float Supervoxel::AverageIntensity()
{
	return m_centroid->intensity;
}

// Average intensity of the first fraction of voxels; assumes m_points is
// sorted by intensity.
float Supervoxel::AverageQuantileIntensity(float quantile)
{
	assert(quantile >= 0 && quantile <= 1);

	float intens = 0;

	// Guard against an empty range (division by zero) for small quantiles.
	int terminationIndex = std::max(1, (int)(quantile * m_points.size()));
	for (int i = 0; i < terminationIndex; i++)
		intens += m_points[i]->intensity;

	return intens / terminationIndex;
}

// Median intensity; also assumes m_points is sorted by intensity.
float Supervoxel::MedianIntensity()
{
	return m_points[m_points.size() / 2]->intensity;
}

The SLIC3D class (the one containing the supervoxels) has a method where “all the magic happens”: MergeSimilarSupervoxels. As its name states, the method merges supervoxels. It performs a given number of iterations; in each iteration it takes a random supervoxel, compares it with its neighbours, and whenever a neighbour has an average intensity similar to the examined supervoxel, that neighbour is merged into it. The code is shown right below.

bool SLIC3D::MergeSimilarSupervoxels()
{
	vector<Supervoxel*> supervoxelsToBeErased;
	for (int i = 0; i < 10000; i++)
	{
		cout << i << " " << m_supervoxels.size() << endl;	// progress output

		// Keep the supervoxels sorted by average intensity.
		std::sort(m_supervoxels.begin(), m_supervoxels.end(), helper_sortFunctionByAvgIntensity);

		// Pick a random supervoxel to examine (despite the variable name, it
		// is not necessarily the brightest one).
		Supervoxel* brightestSupervoxel = m_supervoxels[rand() % m_supervoxels.size()];
		std::unordered_map<int, Supervoxel*> nb = *(brightestSupervoxel->GetNeighbours());

		// Merge every neighbour whose average intensity is within 5 % of the
		// examined supervoxel's average intensity.
		vector<int> labelsToBeErased;
		for (auto it = nb.begin(); it != nb.end(); it++)
		{
			float a = brightestSupervoxel->AverageIntensity();
			float b = it->second->AverageIntensity();

			if (std::min(a, b) / std::max(a, b) > 0.95)
			{
				helper_mergeSupervoxels(brightestSupervoxel, it->second);
				labelsToBeErased.push_back(it->second->GetLabel());
				supervoxelsToBeErased.push_back(it->second);
			}
		}

		// Re-wire the neighbourhood graph: neighbours of a merged supervoxel
		// become neighbours of the examined one.
		for (size_t j = 0; j < labelsToBeErased.size(); j++)
		{
			Supervoxel* toBeRemoved = m_supervoxelsMap[labelsToBeErased[j]];

			if (NULL == toBeRemoved)
				continue;

			std::unordered_map<int, Supervoxel*> nbb = *(toBeRemoved->GetNeighbours());

			for (auto it = nbb.begin(); it != nbb.end(); it++)
			{
				try
				{
					it->second->AddNeighbour(brightestSupervoxel);
					it->second->RemoveNeighbour(toBeRemoved);
					brightestSupervoxel->AddNeighbour(it->second);
				}
				catch (const Exception& e)
				{
					cout << "exc: " << e.msg << endl;
				}
			}
		}

		// Remove the merged supervoxels from the lookup map.
		for (size_t j = 0; j < labelsToBeErased.size(); j++)
			m_supervoxelsMap.erase(labelsToBeErased[j]);

		brightestSupervoxel->RecalculateCentroid();

		// Rebuild the vector from the map so that both stay consistent.
		m_supervoxels.clear();
		for (auto it = m_supervoxelsMap.begin(); it != m_supervoxelsMap.end(); ++it)
		{
			m_supervoxels.push_back(it->second);
		}
	}

	// Deleting the merged supervoxels is deferred for now:
	// for (size_t i = 0; i < supervoxelsToBeErased.size(); i++)
	//	delete supervoxelsToBeErased[i];

	return true;
}

The result of merging strongly depends on the chosen similarity level. In the picture below left we can see the result of applying similarity level 0.95; in the image on the right, the similarity level was 0.65.

tamajka_results

We also tried to train an SVM to classify brain and non-brain structures using just these features, and obtained 4 successful classifications out of 5. We will continue with classification later.
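
Since the classification experiment is only mentioned briefly, here is a minimal sketch of how the three Supervoxel features could feed an SVM, assuming OpenCV 2.x's CvSVM; the label convention (+1 brain, -1 non-brain) and the quantile value are assumptions.

// Minimal sketch (assumptions: OpenCV 2.x CvSVM; labels is a CV_32FC1 column
// with +1 for brain and -1 for non-brain supervoxels).
#include <opencv2/ml/ml.hpp>

void trainBrainClassifier(const vector<Supervoxel*>& svs, const cv::Mat& labels, CvSVM& svm)
{
	// One row of three features per supervoxel.
	cv::Mat features((int)svs.size(), 3, CV_32FC1);
	for (int i = 0; i < (int)svs.size(); i++)
	{
		features.at<float>(i, 0) = svs[i]->AverageIntensity();
		features.at<float>(i, 1) = svs[i]->AverageQuantileIntensity(0.5f);	// quantile chosen arbitrarily
		features.at<float>(i, 2) = svs[i]->MedianIntensity();
	}

	CvSVMParams params;
	params.svm_type = CvSVM::C_SVC;
	params.kernel_type = CvSVM::RBF;

	svm.train(features, labels, cv::Mat(), cv::Mat(), params);
}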


Split and merge

Matus Pikuliak

In our work we have implemented the split-and-merge segmentation algorithm. We have designed each step of this algorithm, which processes the original image into a segmented image composed of homogeneous regions. We used the OpenCV library to build our solution.

Our method has 4 steps. We will demonstrate their effect on the image depicted in Figure 1.

pikuliak_original
Figure 1. Original photo.
  1. Pre-processing
    First we convert the image from the RGB color space to Lab. Then we blur the image to remove various blemishes that could otherwise halt the splitting phase.

    cv::cvtColor(image, image, CV_BGR2Lab);
    cv::GaussianBlur(image, image, Size(5,5), 0, 0);
    
  2. Splitting
    In the splitting phase we divide the image into same-sized quarters. For each quarter we compute the standard deviation of each dimension of the color space. If these values exceed adjustable thresholds, we recursively divide that quarter as well (a sketch of this homogeneity test is shown after this list). This goes on until the created quarters become too small: if one of their dimensions drops below the minimum length we have set (9 pixels), dividing is stopped.
    If the deviations do not exceed the thresholds, dividing is stopped and the quarter is completely filled with its mean color. An example of splitting can be seen in Figure 2.

    pikuliak_splitting
    Figure 2. After splitting.
    // Recursively split a region until it is homogeneous (pseudocode).
    void split(roi) {
    	if (roi.standard_deviation > threshold) {
    		quarters = roi.get_quarters
    		quarters.each {
    			split(quarter)
    		}
    	}
    	else {
    		roi.paint	// fill the region with its mean color
    	}
    }
    
  3. Merging
    The split image is subsequently processed with the merge operation, which merges neighboring regions with similar features. In our work we use only the dimensions of the Lab color space, and therefore we merge regions with similar color. The similarity is again evaluated using thresholds for the individual dimensions of the color space. The result of this operation can be seen in Figure 3: many rectangular regions were merged into bigger super-regions, and this effect is visible in every part of the image.

    pikuliak_merging
    Figure 3. After merging.
    // Runs floodFill for every ROI that has not been affected yet.
    void merge() {
    	imageorg = image.clone();
    	for (std::list<list_item>::iterator it = rois.begin(); it != rois.end(); it++) {
    		Vec3b c = image.at<Vec3b>(it->point);
    		Vec3b c2 = imageorg.at<Vec3b>(it->point);
    		// Flood-fill only from regions whose color has not been changed yet.
    		if (c[0] == c2[0] && c[1] == c2[1] && c[2] == c2[2]) {
    			floodFill(image, it->point, Scalar(c[0], c[1], c[2]), 0,
    				Scalar(l_threshold, ab_threshold, ab_threshold),
    				Scalar(l_threshold, ab_threshold, ab_threshold));
    		}
    	}
    }
    
  4. Post-processing
    As we can see, some artifacts remain, mainly on the edges of the bigger regions. These artifacts are a result of splitting, which cannot handle parts of the image with sudden changes from one dominant color to another. To remove these artifacts and improve the overall result, we use several morphological operations followed by yet another merging of color regions. The result of post-processing can be seen in Figure 4, which is at the same time the final result of our method for this image.

    pikuliak_final
    Figure 4. Final image.
    // Morphological closing (dilate then erode) followed by opening
    // (erode then dilate) removes small artifacts on region boundaries.
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(Morph, Morph));
    dilate(image, image, kernel, Point(-1,-1), 2);
    erode(image, image, kernel, Point(-1,-1), 2);
    erode(image, image, kernel, Point(-1,-1), 2);
    dilate(image, image, kernel, Point(-1,-1), 2);
    
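
The homogeneity test used during splitting can be expressed compactly with cv::meanStdDev. The sketch below is our reconstruction, not the authors' exact code; the threshold names mirror the merging snippet above.

// Reconstruction of the splitting-phase homogeneity test (not the original
// code): a Lab region is homogeneous when no channel's deviation exceeds
// its threshold.
#include <opencv2/core/core.hpp>

bool isHomogeneous(const cv::Mat& roi, double l_threshold, double ab_threshold)
{
	cv::Scalar mean, stddev;
	cv::meanStdDev(roi, mean, stddev);	// per-channel mean and standard deviation

	return stddev[0] <= l_threshold		// L (lightness) channel
	    && stddev[1] <= ab_threshold	// a channel
	    && stddev[2] <= ab_threshold;	// b channel
}

A quarter is then recursively split exactly when isHomogeneous returns false for its rectangle.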

Settings

There are several settings that can affect the result of our method:

  • color dimensions – we can change how sensitive our method is to changes in the individual dimensions of the color space. Higher sensitivity leads to smaller regions after the split phase and worse merging. Lower sensitivity can, on the other hand, lead to results that are too rough.
  • minimum size of region – this setting affects the granularity of the final image. Too small a minimum region leads to an over-split image, while too large a minimum leads to inaccurate results.
  • color space – the color space is another setting that can change the outcome of our method. In our experiments we found that the Lab color space outperforms RGB in generality: Lab provided satisfactory results on almost all of our testing images with similar settings, while RGB required more tweaking to reach optimal results.
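
To keep these tunables in one place, they can be grouped as below; this struct is purely illustrative and does not appear in the original code.

// Illustrative grouping of the settings discussed above (not original code).
struct SplitMergeSettings {
	double l_threshold;	// sensitivity in the L (lightness) dimension
	double ab_threshold;	// sensitivity in the a and b (color) dimensions
	int    minLength;	// minimum side length of a region, 9 px in our tests
	int    colorConversion;	// e.g. CV_BGR2Lab, or no conversion for RGB
};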

Conclusion

We have successfully implemented the split-and-merge segmentation algorithm and tested it on a variety of images with different features and characteristics. We conclude that our implemented method is fast and reliable.


Object segmentation

This example shows how to segment objects using OpenCV and Kinect for Xbox 360. The depth map retrieved from the Kinect sensor is aligned with the color image and used to create a segmentation mask.

Functions used: convertTo, floodFill, inRange, copyTo

Inputs

The color image
The depth map

The process

  1. Retrieve the color image and the depth map
  2. Compute coordinates of the depth map pixels so they fit the color image
  3. Align the depth map with the color image
    cv::Mat depth32F;
    depth16U.convertTo(depth32F, CV_32FC1);
    cv::inRange(depth32F, cv::Scalar(1.0f), cv::Scalar(1200.0f), mask);
    
  4. Find a seed point in the aligned depth map (one possible rule is sketched after this list)
  5. Perform the flood fill operation from the seed point
    cv::Mat mask(cv::Size(colorImageWidth + 2, colorImageHeight + 2), CV_8UC1, cv::Scalar(0));
    floodFill(depth32F, mask, seed, cv::Scalar(0.0f), NULL, cv::Scalar(20.0f), cv::Scalar(20.0f), cv::FLOODFILL_MASK_ONLY);
  6. Make a copy of the color image using the mask
    cv::Mat color(cv::Size(colorImageWidth, colorImageHeight), CV_8UC4, (void *) colorImageFrame->pFrameTexture, colorImageWidth * 4);
    color.copyTo(colorSegment, mask(cv::Rect(1, 1, colorImageWidth, colorImageHeight)));
    
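
The post does not spell out how the seed point is chosen. One plausible rule, sketched below, is to take the nearest valid pixel of the aligned depth map; the function name and the rule itself are assumptions.

// Hypothetical seed selection: the nearest valid pixel of the aligned depth
// map. The validity range matches the inRange call in step 3.
#include <opencv2/core/core.hpp>

cv::Point findSeed(const cv::Mat& depth32F)
{
	cv::Mat valid;
	cv::inRange(depth32F, cv::Scalar(1.0f), cv::Scalar(1200.0f), valid);	// ignore zero/far pixels

	double minVal;
	cv::Point minLoc;
	cv::minMaxLoc(depth32F, &minVal, 0, &minLoc, 0, valid);	// nearest valid depth
	return minLoc;
}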

Sample

The depth map aligned with the color image
Finding the seed point
The mask – the result of the flood fill operation

Result

The result of the segmentation process

Presenting historical changes of a building

The goal of this project is to implement an algorithm that extracts similar points or whole regions from two different images of the same building, using the OpenCV library and especially the MSER (Maximally Stable Extremal Regions) algorithm. The images of the building are taken at different times and differ in hue, saturation, lighting and other conditions.

Based on the extracted regions, the algorithm finds matching centers of key regions and merges the images by these points to create a composite image of the building that presents its historical changes.

Functions used: MSER, fitEllipse, adaptiveThreshold, Canny, findContours

Input

The process

  1. Preprocessing
  2. MSER regions detection
    MSER mser(int _delta, int _min_area, int _max_area, float _max_variation, float _min_diversity, int _max_evolution, double _area_threshold, double _min_margin, int _edge_blur_size);
    

    MSER algorithm with different parameters
  3. Fitting detected regions by ellipse
    const vector<Point>& r = contours[i];	// one detected region (hypothetical name)
    RotatedRect box = fitEllipse(r);
    
  4. Finding similar regions (one possible matching rule is sketched after this list)
  5. Merging images based on found regions
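
The post does not specify how similar regions are found. The sketch below shows one plausible rule, pairing the fitted ellipses of the two images by center distance and size ratio; both thresholds and the function name are assumptions.

// Hypothetical matching of fitted ellipses between the two images: pair each
// ellipse with the nearest one of similar size. Thresholds are assumptions.
#include <opencv2/core/core.hpp>
#include <cmath>
#include <utility>
#include <vector>

std::vector<std::pair<int, int> > matchRegions(const std::vector<cv::RotatedRect>& a,
                                               const std::vector<cv::RotatedRect>& b)
{
	std::vector<std::pair<int, int> > matches;
	for (size_t i = 0; i < a.size(); i++)
	{
		int best = -1;
		double bestDist = 30.0;	// maximum allowed center distance in pixels (assumed)
		for (size_t j = 0; j < b.size(); j++)
		{
			cv::Point2f d = a[i].center - b[j].center;
			double dist = std::sqrt(d.x * d.x + d.y * d.y);
			double sizeRatio = std::min(a[i].size.area(), b[j].size.area())
			                 / std::max(a[i].size.area(), b[j].size.area());
			if (dist < bestDist && sizeRatio > 0.7)	// similar position and size
			{
				bestDist = dist;
				best = (int)j;
			}
		}
		if (best >= 0)
			matches.push_back(std::make_pair((int)i, best));
	}
	return matches;
}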

Practical Application

Interactive presentation of historical buildings and visualisation of their changes over time.


Hair segmentation

Description

The project shows hair segmentation from photos. It is important to have an appropriate input image, in which the background and hair colors differ. The algorithm uses Mean Shift segmentation to divide the input image into regions. The best regions are then selected into the final image.

OpenCV functions used

cvHaarDetectObjects, cvSplit, cvCalcHist, cvGetMinMaxHistValue, cvInRangeS

The process

  1. appropriate image selection
  2. face detection
  3. face segmentation (optional)
  4. Mean Shift image segmentation
  5. first region selection (sketched, together with step 2, after this list)
  6. expansion of the first region
  7. final hair segmentation from image
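
Steps 2 and 5 can be illustrated with cv::CascadeClassifier, the C++ counterpart of the cvHaarDetectObjects call listed above. Treating the strip above the detected face as the initial hair region is our assumption, not the method's documented rule.

// Illustrative sketch of steps 2 and 5 (the strip-above-the-face rule is an
// assumption).
#include <opencv2/objdetect/objdetect.hpp>
#include <algorithm>
#include <vector>

cv::Rect initialHairRegion(const cv::Mat& gray, cv::CascadeClassifier& faceCascade)
{
	std::vector<cv::Rect> faces;
	faceCascade.detectMultiScale(gray, faces);	// Haar-based face detection

	if (faces.empty())
		return cv::Rect();	// no face found

	const cv::Rect& face = faces[0];
	int top = std::max(0, face.y - face.height / 2);	// strip above the face
	return cv::Rect(face.x, top, face.width, face.y - top);
}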

Sample

Input image
From left: Haar face detection, color reduction, mean shift segmentation.
Initial hair region selection.
Left: neighboring regions, right: adding neighboring regions.
Left: the created mask, right: the final hair segmentation image.