MODELLING OF HUMAN VISUAL ATTENTION (Ing. Patrik Polatsek – Dissertation thesis)

Slovak University of Technology Bratislava, FACULTY OF INFORMATICS AND INFORMATION TECHNOLOGIES

MODELLING OF HUMAN VISUAL ATTENTION

Degree Course: Applied Informatics
Author: Ing. Patrik Polatsek
Supervisor: doc. Ing. Vanda Benešová, PhD.
May 2019
In recent decades, visual attention modelling has become a prominent research area. To simulate human attention, a computational model has to incorporate various attention mechanisms.
In this thesis we explored how low- and mid-level features such as color, motion, depth and shape influence visual attention in our own eye-tracking experiments. To measure these effects, we utilized various state-of-the-art as well as novel computational models that estimate the saliency of a specific feature.
To gain a deeper understanding of selective attention in everyday actions, we conducted several experiments in real environments recorded from the first-person perspective. Our results showed that egocentric attention is highly individual and differs from 2D image viewing conditions, partially due to binocular cues that enhance the viewer’s perception. We therefore suggest employing specialized saliency models for egocentric vision. Finally, we found that high-level factors such as an individual’s emotions and task-based analysis of visualizations also influence human gaze behavior.

download: Autoreferat (pdf)

download: DissertationThesis-Polatsek (pdf)

Exploring Visual Saliency of Real Objects at Different Depths

Miroslav Laco, Patrik Polatsek, Wanda Benesova

Abstract. Depth cues are important aspects that influence the visual saliency of the objects around us. However, the depth aspect and its quantified impact on visual saliency have not yet been thoroughly examined in real environments. We designed and carried out an experimental study to examine the influence of depth cues on the visual saliency of objects in a scene. The study took place with 28 participants under laboratory conditions, with the objects in various depth configurations in a real scene. Visual attention data were measured with wearable eye-tracking glasses. We evaluated the fixation data and their relation to the relative distances of the objects. Contrary to previous research, our results revealed a significantly higher frequency of gaze fixations on objects with higher relative depths. Moreover, we observed and evaluated that objects which “pop out” among others in the depth channel tend to significantly attract the observer’s attention.

The depthSal dataset contains fixation data from eye-tracking experiments with pictorial depth and real stereoscopic depth in a natural environment.

download: depthSal.zip

Please cite this paper if you use the dataset:

Laco, M., Polatsek, P., & Benesova, W. (2019)

Exploring Visual Saliency of Real Objects at Different Depths

Color Saliency

Patrik Polatsek, Daniel Nechala

Color is a fundamental component of visual attention. Saliency is usually associated with color contrasts. Besides this bottom-up perspective, some recent works indicate that psychological aspects should be considered too. However, relatively little research has been done on the potential impact of color psychology on attention. To the best of our knowledge, no publicly available fixation dataset specialized in the color feature exists. We therefore conducted a novel eye-tracking experiment with color stimuli. We studied the fixations of 15 participants to find out whether color differences can reliably model color saliency, or whether particular colors are preferentially fixated regardless of scene content, i.e. a color prior.

Our experiment confirms that saliency from color contrasts plays an important role in attention. An unexpected observation was that the LAB color space could not equally estimate the perceived color differences of all participants. Presumably, therefore, other, memory-related factors are also involved in color vision. However, we did not find a significant preference for fixating danger-related colors regardless of distractors. While there was only a negligible dominance of red and yellow, the experiment surprisingly showed significantly fewer fixations on cyan. Future experiments should therefore use more colors, more participants and other color spaces for a deeper investigation of the individuality of color perception and its psychological functioning.

The colorSal dataset contains images and fixation data from this eye-tracking experiment.

download: colorSal.zip

Please cite this work if you use the dataset:

Polatsek, P. (2019)

Modelling of Human Visual Attention

Dissertation thesis

Effects of individual’s emotions on saliency and visual search

Patrik Polatsek, Miroslav Laco, Šimon Dekrét, Wanda Benesova, Martina Baránková, Bronislava Strnádelová, Jana Koróniová, Mária Gablíková

Abstract.

While psychological studies have confirmed a connection between emotional stimuli and visual attention, there is a lack of evidence on how much influence an individual’s mood has on the visual processing of emotionally neutral stimuli. In contrast to prior studies, we explored whether bottom-up low-level saliency could be affected by positive mood. We therefore induced positive or neutral emotions in 10 subjects using autobiographical memories during free viewing, memorizing of image content, and three visual search tasks. We explored differences in human gaze behavior between the two emotions and related the fixations to bottom-up saliency predicted by a traditional computational model. We observed that positive emotions produce a stronger saliency effect only during free exploration of valence-neutral stimuli. However, the opposite effect was observed during task-based analysis. We also found that tasks could be solved less efficiently when experiencing a positive mood, and we therefore suggest that positive mood rather distracts users from a task.

download: saliency-emotions

Please cite this paper if you use the dataset:

Polatsek, P., Laco, M., Dekrét, Š., Benesova, W., Baránková, M., Strnádelová, B., Koróniová, J., & Gablíková, M. (2019)

Effects of individual’s emotions on saliency and visual search

Saliency map

Patrik Polatsek

Introduction

A saliency model predicts what attracts attention. The result of such a model is a saliency map: a topographic representation of saliency that highlights visually dominant locations.

The aim of the project is to implement Itti’s saliency model. It is a hierarchical, biologically inspired bottom-up model based on three features: intensity, color and orientation. The resulting saliency map is created by hierarchical decomposition of the features and their combination into a single map. Attended locations are then searched with a winner-take-all (WTA) neural network.
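The winner-take-all stage can be illustrated with a simplified, stdlib-only sketch (a hypothetical helper, not the thesis code): repeatedly pick the global maximum of the saliency map as the next fixation, then apply inhibition of return by suppressing a neighborhood around the winner so attention shifts elsewhere.

```cpp
#include <algorithm>
#include <vector>

// Simplified winner-take-all with inhibition of return (illustrative sketch).
struct Fixation { int x, y; };

std::vector<Fixation> winnerTakeAll(std::vector<std::vector<float>> saliency,
                                    int numFixations, int inhibitRadius) {
    std::vector<Fixation> fixations;
    int h = saliency.size(), w = saliency[0].size();
    for (int n = 0; n < numFixations; ++n) {
        // Find the most salient location (the "winner").
        Fixation best{0, 0};
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                if (saliency[y][x] > saliency[best.y][best.x]) best = {x, y};
        fixations.push_back(best);
        // Inhibition of return: suppress a square around the winner.
        for (int y = std::max(0, best.y - inhibitRadius);
             y <= std::min(h - 1, best.y + inhibitRadius); ++y)
            for (int x = std::max(0, best.x - inhibitRadius);
                 x <= std::min(w - 1, best.x + inhibitRadius); ++x)
                saliency[y][x] = 0.0f;
    }
    return fixations;
}
```

Itti’s original WTA is a dynamic neural network; this argmax-plus-suppression loop only mimics its observable behavior.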

The process

First, the features are extracted from an input image.

Intensity is obtained by converting the image to grayscale.

cvtColor( input, intensity, CV_BGR2GRAY );

For color extraction, the image is converted to a red-green-blue-yellow color space.

R = bgr[2] - ( bgr[1] + bgr[0] ) / 2;
G = bgr[1] - ( bgr[2] + bgr[0] ) / 2;
B = bgr[0] - ( bgr[2] + bgr[1] ) / 2;
Y = ( bgr[2] + bgr[1] ) / 2 - abs( bgr[2] - bgr[1] ) / 2 - bgr[0];
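The same broadband channels can be restated for a single BGR pixel in plain C++ (a hypothetical helper, not part of the OpenCV snippet above). Negative responses are clipped to zero, as in Itti’s model, since they carry no excitatory response.

```cpp
#include <algorithm>
#include <cmath>

// Broadband color channels of Itti's model for one BGR pixel.
// R, G, B and Y are tuned to red, green, blue and yellow respectively;
// negative responses are clipped to zero.
struct RGBY { float R, G, B, Y; };

RGBY broadbandChannels(float b, float g, float r) {
    RGBY out;
    out.R = r - (g + b) / 2;
    out.G = g - (r + b) / 2;
    out.B = b - (r + g) / 2;
    out.Y = (r + g) / 2 - std::fabs(r - g) / 2 - b;
    out.R = std::max(0.0f, out.R);
    out.G = std::max(0.0f, out.G);
    out.B = std::max(0.0f, out.B);
    out.Y = std::max(0.0f, out.Y);
    return out;
}
```

For a pure red pixel only the R channel responds; a pixel with equal red and green (and no blue) maximizes the yellow channel.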

Information about local orientation is extracted using Gabor filters at four angles.

Mat kernel = getGaborKernel( Size(11, 11), 2.5, degreeToRadian(theta), 2.5, 0.5 );
filter2D( input, im, -1, kernel );
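What getGaborKernel computes can be sketched in plain C++ (an illustrative reimplementation, not OpenCV’s): a Gaussian envelope modulated by a cosine wave, rotated by theta. The parameters mirror the call above — sigma (envelope width), theta (orientation), lambda (wavelength) and gamma (aspect ratio); the phase offset psi is set to 0 here for simplicity.

```cpp
#include <cmath>
#include <vector>

// Build a size-by-size Gabor kernel: Gaussian envelope times a cosine
// carrier, with coordinates rotated by theta.
std::vector<std::vector<double>> gaborKernel(int size, double sigma,
                                             double theta, double lambda,
                                             double gamma, double psi = 0.0) {
    const double pi = std::acos(-1.0);
    int half = size / 2;
    std::vector<std::vector<double>> k(size, std::vector<double>(size));
    for (int y = -half; y <= half; ++y)
        for (int x = -half; x <= half; ++x) {
            // Rotate coordinates into the filter's orientation.
            double xr =  x * std::cos(theta) + y * std::sin(theta);
            double yr = -x * std::sin(theta) + y * std::cos(theta);
            double envelope = std::exp(-(xr * xr + gamma * gamma * yr * yr)
                                       / (2 * sigma * sigma));
            k[y + half][x + half] = envelope
                * std::cos(2 * pi * xr / lambda + psi);
        }
    return k;
}
```

With psi = 0 the kernel peaks at its center and is symmetric along the carrier axis.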

The next phase is the creation of Gaussian pyramids.

buildPyramid( channel, pyramid, levels);

The center-surround organization of the receptive fields of ganglion neurons is implemented as a difference-of-Gaussians between finer and coarser scales of a pyramid, producing a feature map.

Mat pyr_c, pyr_s;
for (int i : centerScale)
{
	pyr_c = pyramid[i];
	for (int j : surroundScale)
	{
		Mat diff;
		resize(pyramid[i + j], pyr_s, pyr_c.size());
		absdiff(pyr_c, pyr_s, diff);
		differences.push_back(diff);
	}
}

The model then creates three conspicuity maps, for intensity, color and orientation, by combining the feature maps.

The final saliency map is the mean of the conspicuity maps.

Mat saliencyMap = maps[0] / maps.size() + maps[1] / maps.size() + maps[2] / maps.size();
Figure: Basic structure of the saliency model