Saliency is a measure of how strongly elements stand out from their surroundings. Highly salient objects are likely to attract an observer’s attention: these regions are usually viewed during the first fixations, within a few seconds or even milliseconds. Are you curious what people will look at in your image, website, or advertisement?
There are several ways to compute saliency, and all of them highlight, in one way or another, large color and intensity contrasts in the image. Depending on the method, the computation is either inspired by the human retina and visual processing pipeline, or is a plain piece of math (less physiologically meaningful, but not necessarily less accurate at predicting an observer’s gaze).
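The page does not spell out which algorithm it uses, but one popular "plain math" method is the spectral-residual approach of Hou and Zhang (2007): subtract a locally averaged log-amplitude spectrum from the image's log-amplitude spectrum, then transform the residual back to the spatial domain. A minimal NumPy sketch under that assumption (the function and parameter names are illustrative, not taken from this application):

```python
import numpy as np

def spectral_residual_saliency(img, filter_len=3):
    """Spectral-residual saliency for a 2-D grayscale float array.

    filter_len is the side length of the box filter used to smooth
    the log-amplitude spectrum (an assumed parameter name).
    """
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-9)   # log-amplitude spectrum
    phase = np.angle(f)                  # phase is kept unchanged

    # Local average of the log spectrum via a simple box filter.
    pad = filter_len // 2
    padded = np.pad(log_amp, pad, mode="edge")
    avg = np.zeros_like(log_amp)
    for dy in range(filter_len):
        for dx in range(filter_len):
            avg += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    avg /= filter_len * filter_len

    # The "residual" is what remains after removing the smooth trend;
    # transforming it back highlights the statistically unusual parts.
    residual = log_amp - avg
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()               # normalize to [0, 1]
```

In practice the result is usually blurred with a small Gaussian before display, so that salient blobs rather than individual pixels light up.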
While image-based saliency indicates where attention will be directed, it is an entirely bottom-up approach: no knowledge about the viewer enters the model. However, we humans have developed clever viewing strategies for certain tasks, such as reading web pages. To gain a deeper understanding of where we actually look – and which cognitive associations are involved – there is currently no way around an eye-tracking study.
All data processing in this application happens client-side, meaning that neither your image nor the generated saliency map is transferred to any computer but yours.
Some remarks: the saliency map computation above operates on grayscale images; even if you provide a color image, the implementation will convert it to grayscale first. In principle, one could (and should) combine multiple saliency maps computed on different color channels or color spaces to obtain better results.
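To illustrate the two remarks above, here is a small sketch of a standard grayscale conversion and of one simple way to fuse per-channel saliency maps. The luma weights are the common ITU-R BT.601 coefficients; the max-fusion rule is just one reasonable choice (mean or weighted sums are equally valid), and none of these names come from this application:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an (H, W, 3) RGB float array to grayscale.

    Uses ITU-R BT.601 luma weights, a common default for this conversion.
    """
    return rgb @ np.array([0.299, 0.587, 0.114])

def combine_saliency_maps(maps):
    """Fuse several same-shaped saliency maps by pointwise maximum.

    A pixel that is salient in any one channel or color space stays
    salient in the fused map; the result is renormalized to [0, 1].
    """
    fused = np.max(np.stack(maps), axis=0)
    return fused / fused.max()
```

A grayscale-only pipeline misses purely chromatic contrast (for example a red patch on an equally bright green background), which is exactly what combining channel-wise maps recovers.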
Map colors based on www.ColorBrewer.org, by Cynthia A. Brewer, Penn State.