Segmentation of anatomical organs in medical data – Master’s Thesis: Bc. Martin Tamajka

Download: Master’s Thesis – Bc. Martin Tamajka: Segmentation of anatomical organs in medical data

 

Annotation:

May 2016
Medical image segmentation is an important part of medical practice. For radiologists in particular, it simplifies everyday tasks and lets them use their time more effectively, because in most cases radiologists can only spend a limited amount of time examining a patient’s data. Computer-aided diagnosis is also a powerful instrument for eliminating possible human error.
In this work, we propose a novel approach to human organ segmentation. We concentrate primarily on segmentation of the human brain from MR volumes. Our method is based on oversegmenting the 3D volume into supervoxels using the SLIC algorithm. Individual supervoxels are described by features based on the intensity distribution of the contained voxels and on their position within the brain. Supervoxels are then classified into individual tissues by trained neural networks. To give our method additional precision, we use information about the shape and inner structure of the organ. Overall, we propose a 6-step segmentation method based on classification.
We compared our results with those of state-of-the-art methods and can conclude that they are comparable.
Apart from the global focus of this thesis, our goal is to apply engineering skills and best practices to implement the proposed method and the necessary tools in such a way that they can be easily extended and maintained in the future.
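
To illustrate the oversegmentation and feature-extraction steps, here is a minimal Python sketch of SLIC-based supervoxel generation using scikit-image; the parameter values and feature set are illustrative, not the thesis’ exact choices.

```python
# A minimal sketch of the oversegmentation and feature-extraction steps,
# assuming a grayscale MR volume loaded as a 3D NumPy array; the
# parameters and feature set are illustrative, not the thesis' values.
import numpy as np
from skimage.segmentation import slic

def supervoxel_features(volume, n_segments=5000, compactness=0.1):
    # Oversegment the 3D volume into supervoxels (channel_axis=None
    # tells SLIC the input is a single-channel volume).
    labels = slic(volume, n_segments=n_segments,
                  compactness=compactness, channel_axis=None)
    features = []
    for label_id in np.unique(labels):
        mask = labels == label_id
        voxels = volume[mask]
        centroid = np.argwhere(mask).mean(axis=0)  # position within the volume
        # Intensity distribution + position, mirroring the feature types above
        features.append(np.concatenate([[voxels.mean(), voxels.std()], centroid]))
    return labels, np.asarray(features)
```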

Exploring Visual Saliency of Real Objects at Different Depths

Miroslav Laco, Patrik Polatsek, Wanda Benesova

Abstract. Depth cues are important aspects that influence the visual saliency of the objects around us. However, the depth aspect and its quantified impact on visual saliency have not yet been thoroughly examined in real environments. We designed and carried out an experimental study to examine the influence of depth cues on the visual saliency of objects in a scene. The study took place under laboratory conditions with 28 participants, with objects in various depth configurations in a real scene. Visual attention data were measured with wearable eye-tracking glasses. We evaluated the fixation data and their relation to the objects’ relative distances. Contrary to previous research, our results revealed a significantly higher frequency of gaze fixations on objects with higher relative depths. Moreover, we observed that objects which “pop out” among the others in the depth channel tend to significantly attract the observer’s attention.
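
As an illustration of the kind of evaluation described above, a hypothetical analysis relating fixation frequency to relative object depth might look as follows; the file name and column names are assumptions, not the actual depthSal schema.

```python
# Hypothetical analysis sketch relating fixation frequency to relative
# object depth; "fixations.csv" and its column names are assumptions,
# not the actual depthSal schema.
import pandas as pd

fixations = pd.read_csv("fixations.csv")  # columns: object_id, rel_depth
per_object = fixations.groupby("object_id").agg(
    n_fixations=("rel_depth", "size"),   # fixation count per object
    rel_depth=("rel_depth", "first"))    # the object's relative depth
# Rank correlation between fixation frequency and relative depth
print(per_object["n_fixations"].corr(per_object["rel_depth"], method="spearman"))
```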

depthSal dataset contains fixation data from eye-tracking experiments with pictorial depth and real stereoscopic depth in a natural environment.

download: depthSal.zip

Please cite this paper if you use the dataset:

Laco, M., Polatsek, P., & Benesova, W. (2019)

Exploring Visual Saliency of Real Objects at Different Depths

Color Saliency

Patrik Polatsek, Daniel Nechala

Color is a fundamental component of visual attention. Saliency is usually associated with color contrasts. Besides this bottom-up perspective, some recent works indicate that psychological aspects should be considered too. However, relatively little research has been done on the potential impact of color psychology on attention. To the best of our knowledge, no publicly available fixation dataset specialized in the color feature exists. We therefore conducted a novel eye-tracking experiment with color stimuli. We studied fixations of 15 participants to find out whether color differences can reliably model color saliency, or whether particular colors are preferentially fixated regardless of scene content, i.e., a color prior.

Our experiment confirms that saliency from color contrasts plays an important role in attention. An unexpected observation was that the LAB color space could not equally estimate the perceived color differences of all participants. There are therefore presumably other, memory-related factors that color vision employs. However, we did not find a significant preference for fixating danger-related colors regardless of distractors. While there was only a negligible dominance of red and yellow, the experiment surprisingly showed significantly fewer fixations on cyan. Future experiments should therefore use more colors, more participants and other color spaces for a deeper investigation of the individuality of color perception and its psychological functioning.
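
For context, a common way to model color-contrast saliency in CIELAB, in the spirit of the contrast-based view discussed above, is to score each pixel by its Lab distance from the mean image color (cf. Achanta et al.’s frequency-tuned model); this sketch is illustrative and not the exact model used in the experiment.

```python
# An illustrative colour-contrast saliency measure in CIELAB: each pixel
# is scored by its Lab distance from the mean image colour (in the spirit
# of Achanta et al.'s frequency-tuned model, not the experiment's model).
import cv2
import numpy as np

def lab_contrast_saliency(bgr_image):
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB).astype(np.float32)
    mean_color = lab.reshape(-1, 3).mean(axis=0)
    saliency = np.linalg.norm(lab - mean_color, axis=2)
    return cv2.normalize(saliency, None, 0.0, 1.0, cv2.NORM_MINMAX)
```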

colorSal dataset contains images and fixation data from this eye-tracking experiment.

download: colorSal.zip

Please cite this work if you use the dataset:

Polatsek, P. (2019)

Modelling of Human Visual Attention

Dissertation thesis

Effects of individual’s emotions on saliency and visual search

Patrik Polatsek, Miroslav Laco, Šimon Dekrét, Wanda Benesova, Martina Baránková, Bronislava Strnádelová, Jana Koróniová, Mária Gablíková

Abstract.

While psychological studies have confirmed a connection between emotional stimuli and visual attention, there is a lack of evidence of how much influence an individual’s mood has on the visual processing of emotionally neutral stimuli. In contrast to prior studies, we explored whether bottom-up low-level saliency could be affected by a positive mood. We therefore induced positive or neutral emotions in 10 subjects using autobiographical memories during free viewing, memorizing of image content and three visual search tasks. We explored differences in human gaze behavior between the two emotions and related the fixations to bottom-up saliency predicted by a traditional computational model. We observed that positive emotions produce a stronger saliency effect only during free exploration of valence-neutral stimuli. However, the opposite effect was observed during the task-based analysis. We also found that tasks could be solved less efficiently when experiencing a positive mood; we therefore suggest that a positive mood rather distracts users from the task.
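
A standard way to relate fixations to a model’s saliency map, as done in this study, is the normalized scanpath saliency (NSS); a minimal sketch, assuming fixation coordinates are given in the saliency map’s resolution:

```python
# A minimal sketch of normalized scanpath saliency (NSS): the mean of
# the z-scored saliency map at fixated pixels; fixations are assumed to
# be (row, col) coordinates in the map's resolution.
import numpy as np

def nss(saliency_map, fixation_points):
    z = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(np.mean([z[r, c] for r, c in fixation_points]))
```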

download: saliency-emotions

Please cite this paper if you use the dataset:

Polatsek, P., Laco, M., Dekrét, Š., Benesova, W., Baránková, M., Strnádelová, B., Koróniová, J., & Gablíková, M. (2019)

Effects of individual’s emotions on saliency and visual search

Computational Models of Shape Saliency

Patrik Polatsek, Marek Jakab, Wanda Benesova, Matej Kužma

Abstract. Computational models predicting stimulus-driven human visual attention usually incorporate simple visual features, such as intensity, color and orientation. However, the saliency of shapes and their contour segments influences attention too. Therefore, we built 30 of our own shape saliency models based on existing shape representation and matching techniques and compared them with 5 existing saliency methods. Since available fixation datasets were usually recorded on natural scenes where various factors of attention are present, we performed a novel eye-tracking experiment that focuses primarily on shape and contour saliency. Fixations from 47 participants who looked at silhouettes of abstract and real-world objects were used to evaluate the accuracy of the proposed saliency models and to investigate which shape properties attract the most attention. The results showed that visual attention integrates local contour saliency, the saliency of global shape features and shape dissimilarities. The fixation data also showed that intensity and orientation contrasts play an important role in shape perception. We found that humans tend to fixate first on irregular geometric shapes and on objects whose similarity to a circle differs from that of the other objects.
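
As an example of a global shape feature of the kind compared in the paper, the following sketch scores silhouettes by how much their circularity deviates from the scene mean; it is illustrative and not one of the 30 models verbatim.

```python
# An illustrative global shape feature: circularity (4*pi*area/perimeter^2)
# and its deviation from the scene mean as a simple shape saliency cue.
# This is a sketch, not one of the paper's 30 models verbatim.
import numpy as np
from skimage.measure import label, regionprops

def circularity_saliency(binary_silhouettes):
    regions = regionprops(label(binary_silhouettes))
    circularity = np.array([4 * np.pi * r.area / (r.perimeter ** 2 + 1e-9)
                            for r in regions])
    # Objects whose similarity to a circle differs from the others score higher
    return np.abs(circularity - circularity.mean())
```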

shapeSal dataset contains data from an extended version of this eye-tracking experiment, including images and fixation data (73 participants, 158 scenes).

download: shapeSal.zip [V2.0; update: 25.3.2019]

Please cite this paper if you use the dataset:

Polatsek, P., Jakab, M., Benesova, W., & Kužma, M. (2019)
Computational Models of Shape Saliency
11th International Conference on Machine Vision (ICMV 2018) (Vol. 11041)
International Society for Optics and Photonics

https://doi.org/10.1117/12.2522779

Exploring Visual Attention and Saliency Modeling for Task-Based Visual Analysis

Patrik Polatsek, Manuela Waldner, Ivan Viola, Peter Kapec, Wanda Benesova

Abstract. Memory, visual attention and perception play a critical role in the design of visualizations. The way users observe a visualization is affected by salient stimuli in a scene as well as by domain knowledge, interest and the task. While recent saliency models manage to predict the users’ visual attention in visualizations during exploratory analysis, there is little evidence of how much influence bottom-up saliency has on task-based visual analysis. Therefore, we performed an eye-tracking study with 47 users to determine the users’ path of attention when solving three low-level analytical tasks using 30 different charts from the MASSVIS database. We also compared our task-based eye-tracking data to the data from the original memorability experiment by Borkin et al. We found that solving a task leads to more consistent viewing patterns compared to exploratory visual analysis. However, the bottom-up saliency of a visualization has negligible influence on users’ fixations and task efficiency when performing a low-level analytical task. Also, the efficiency of visual search for an extreme target data point is barely influenced by the target’s bottom-up saliency. Therefore, we conclude that bottom-up saliency models tailored towards information visualization are not suitable for predicting visual attention during task-based visual analysis. We discuss potential reasons and suggest extensions to visual attention models to better account for task-based visual analysis.
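
One common way to quantify how well bottom-up saliency predicts fixations, in the spirit of this evaluation, is an AUC-style score comparing saliency values at fixated versus randomly sampled pixels; a simplified sketch (the paper’s exact metrics are not reproduced here):

```python
# A simplified AUC-style evaluation of a saliency map against fixations:
# saliency values at fixated pixels are treated as positives, values at
# randomly sampled pixels as negatives (a simplification of AUC-Judd).
import numpy as np
from sklearn.metrics import roc_auc_score

def saliency_auc(saliency_map, fixation_points, n_negatives=1000, seed=0):
    rng = np.random.default_rng(seed)
    positives = np.array([saliency_map[r, c] for r, c in fixation_points])
    negatives = rng.choice(saliency_map.ravel(), size=n_negatives, replace=False)
    scores = np.concatenate([positives, negatives])
    labels = np.concatenate([np.ones(len(positives)), np.zeros(len(negatives))])
    return roc_auc_score(labels, scores)
```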

TASKVIS dataset contains eye-tracking data from this task-based visual analysis experiment.

download: taskvis.zip

Please cite this paper if you use the dataset:

Polatsek, P., Waldner, M., Viola, I., Kapec, P., & Benesova, W. (2018)
Exploring Visual Attention and Saliency Modeling for Task-Based Visual Analysis
Computers & Graphics, 72, 26-38

https://doi.org/10.1016/j.cag.2018.01.010

Automatic brain segmentation method based on supervoxels

Martin Tamajka, Wanda Benesova

Abstract:

In this work, we present a fully automatic brain segmentation method based on supervoxels (ABSOS). We propose novel classification features based on the distance and the angles, in different planes, between a supervoxel and the brain center. These novel features are combined with other prominent features. The presented method is based on machine learning and also incorporates skull stripping (cranium removal) in the preprocessing step. A multilayer perceptron (MLP) neural network was trained for the classification process. In this paper we also present a thorough analysis that supports the choice of rather small supervoxels, preferring homogeneity over compactness, and the choice of the intensity threshold parameter used in the skull-stripping preprocessing. To decrease computational complexity and increase segmentation performance, we incorporate prior knowledge of typical background intensities acquired in an analysis of subjects.
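
A hedged sketch of the geometric features described above, i.e. the distance and per-plane angles between a supervoxel centroid and the brain center; the axis conventions here are assumptions, not necessarily the paper’s.

```python
# A hedged sketch of the geometric features described above: Euclidean
# distance and per-plane angles between a supervoxel centroid and the
# brain centre. The axis conventions are assumptions, not the paper's.
import numpy as np

def geometric_features(supervoxel_centroid, brain_center):
    d = np.asarray(supervoxel_centroid, float) - np.asarray(brain_center, float)
    distance = np.linalg.norm(d)
    angle_axial = np.arctan2(d[1], d[0])     # angle in the axial plane
    angle_sagittal = np.arctan2(d[2], d[1])  # angle in the sagittal plane
    angle_coronal = np.arctan2(d[2], d[0])   # angle in the coronal plane
    return np.array([distance, angle_axial, angle_sagittal, angle_coronal])
```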

Published in:

2016 International Conference on Systems, Signals and Image Processing (IWSSIP)

Date of Conference:

23-25 May 2016

 

Egocentric RGB-D dataset (eye-tracker + Kinect v2)

pdf: Visual attention in egocentric field-of-view using RGB-D data

[1] V. Olesova, W. Benesova, and P. Polatsek, “Visual Attention in Egocentric Field-of-view using RGB-D Data,” in Proc. SPIE 10341, Ninth International Conference on Machine Vision (ICMV 2016), 2016.

You are free to use this dataset for any purpose. If you use this dataset, please cite the paper above.

Download: fiit-dataset (RGB-D Gaze videos, 2 GB)

Segmentation of Brain Tumors from MRI using Adaptive Thresholding and Graph Cut Algorithm

The development of methods for automatic brain tumor segmentation remains one of the most challenging tasks in the processing of medical data. Exact segmentation could improve diagnostics, for example the evaluation of tumor volume over time. However, manual segmentation of magnetic resonance data is a time-consuming task. We present a method for automatic tumor segmentation in magnetic resonance images which consists of several steps. In the first step, the high-intensity cranium is removed. In the next step, parameters of the image are derived using a mixture of Gaussians. These parameters control a morphological reconstruction (proposed by Luc Vincent, 1993). The morphological reconstruction produces a binary mask which is used in the last step of the segmentation: graph cut segmentation. First results of this method are presented in this paper.
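
A minimal sketch of the morphological reconstruction step using scikit-image; the single threshold below stands in for the parameters derived from the mixture-of-Gaussians fit, so it is an illustration rather than the paper’s implementation.

```python
# A minimal sketch of the morphological reconstruction step with
# scikit-image; the single threshold stands in for the parameters
# derived from the mixture-of-Gaussians fit.
import numpy as np
from skimage.morphology import reconstruction

def candidate_mask(image, marker_threshold):
    # Seed equals the image at bright (marker) pixels and the image
    # minimum elsewhere; reconstruction by dilation then recovers the
    # bright structures connected to those markers.
    seed = np.where(image >= marker_threshold, image, image.min())
    reconstructed = reconstruction(seed, image, method='dilation')
    return reconstructed >= marker_threshold  # binary mask for the graph cut step
```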

Source code

 

Bottom-up saliency model generation using superpixels

Patrik Polatsek, Wanda Benesova
Slovak University of Technology (Slovakia)

Abstract. The prediction of human visual attention is more and more frequently applicable in computer graphics, image processing, human-computer interaction and computer vision. Human attention is influenced by various bottom-up stimuli such as colour, intensity and orientation, as well as by top-down stimuli related to our memory. Saliency models implement bottom-up factors of visual attention and represent the conspicuousness of a given environment using a saliency map. In general, visual attention processing consists of the identification of individual features and their subsequent combination to perceive whole objects. Standard hierarchical saliency methods do not respect the shape of objects and model saliency as the pixel-by-pixel difference between the centre and its surround.
The aim of our work is to improve saliency prediction using a superpixel-based approach whose regions should correspond to object borders. In this paper we propose a novel saliency method that combines hierarchical processing of visual features with superpixel-based segmentation. The proposed method is compared with existing saliency models and evaluated on a publicly available dataset.
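
A simplified sketch of the superpixel-based contrast idea: each SLIC region is scored by its mean Lab-colour distance to all other regions, so saliency follows region borders rather than individual pixels. This is not the paper’s full hierarchical method.

```python
# A simplified sketch of the superpixel-based contrast idea: each SLIC
# region is scored by its mean Lab-colour distance to all other regions,
# so saliency follows region (object) borders rather than pixels. This
# is not the paper's full hierarchical method.
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def superpixel_saliency(rgb_image, n_segments=300):
    lab = rgb2lab(rgb_image)
    labels = slic(rgb_image, n_segments=n_segments)
    ids = np.unique(labels)
    means = np.array([lab[labels == i].mean(axis=0) for i in ids])
    contrast = np.array([np.linalg.norm(means - m, axis=1).mean() for m in means])
    saliency = np.zeros(labels.shape, dtype=float)
    for i, c in zip(ids, contrast):
        saliency[labels == i] = c
    return saliency / saliency.max()
```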


Paper will be available in 2015: 
P. Polatsek and W. Benesova, “Bottom-up saliency model generation using superpixels,” in Proceedings of the Spring Conference on Computer Graphics 2015.

Accelerated gSLIC for Superpixel Generation used in Object Segmentation

Robert Birkus

Abstract. The goal of our work is to create a robust object segmentation method which is based on superpixels and will be able to run in real-time applications.

The SLIC algorithm proposed by Achanta et al. [1] is a superpixel segmentation algorithm based on k-means clustering which efficiently generates superpixels. It offers a good trade-off between time consumption and robustness. An important advancement towards real-time applications using superpixels was made by the authors of gSLIC, a modified SLIC implementation on the GPU (Graphics Processing Unit) [2].

In this paper, we present a significant acceleration of the gSLIC superpixel segmentation algorithm on the GPU. A different implementation strategy speeds up the calculation by a factor of two or more compared to the previously presented GPU implementation, and it can work in real time even for high-resolution images. We also present our method for merging similar superpixels, which uses an adaptive decision procedure, as sketched below. Accelerated gSLIC is the first part of the proposed object segmentation method.
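
A hedged sketch of merging similar neighbouring superpixels: scikit-image’s region adjacency graph utilities stand in for the accelerated GPU code, and a fixed colour threshold replaces the paper’s adaptive decision procedure.

```python
# A hedged sketch of merging similar neighbouring superpixels:
# scikit-image's region adjacency graph stands in for the accelerated
# GPU code, and a fixed colour threshold replaces the paper's adaptive
# decision procedure. (In scikit-image < 0.20 the module is
# skimage.future.graph.)
from skimage.segmentation import slic
from skimage import graph

def merge_similar_superpixels(rgb_image, n_segments=500, color_thresh=20):
    labels = slic(rgb_image, n_segments=n_segments)
    rag = graph.rag_mean_color(rgb_image, labels)      # edges = mean-colour difference
    return graph.cut_threshold(labels, rag, color_thresh)  # merge similar neighbours
```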

References

[1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC superpixels. Technical report, École Polytechnique Fédérale de Lausanne, Report No. EPFL-REPORT-149300, 2010.
[2] C. Y. Ren and I. Reid. gSLIC: a real-time implementation of SLIC superpixel segmentation. Technical report, University of Oxford, Department of Engineering, 2011.


The paper is available in the CESCG proceedings:
http://www.cescg.org/CESCG-2015/papers/Birkus-Accelerated_gSLIC_for_Superpixel_Generation_used_in_Object_Segmentation.pdf

Source code:

Solution (Visual Studio 2012, V11):
https://bitbucket.org/Birky/accelerated-gslic-for-superpixel-generation/src