MIT Saliency Benchmark Results

Which model of saliency best predicts where people look?

Many computational models of visual attention, built from a wide variety of approaches, have been proposed to predict where people look in images. Each model is usually introduced by demonstrating its performance on new images, which makes immediate comparisons between models difficult. To alleviate this problem, we propose a benchmark data set containing 300 natural images with eye tracking data from 39 observers on which to compare model performances. This is the largest data set with this many viewers per image. We measure how well many models predict the ground-truth fixations, using multiple metrics. We post the results here and provide a way to submit new models for evaluation.

Publications

This benchmark is released in conjunction with the paper "A Benchmark of Computational Models of Saliency to Predict Human Fixations" by Tilke Judd, Frédo Durand and Antonio Torralba, available as a January 2012 MIT tech report.

@TechReport{Judd_2012,
  author      = {Tilke Judd and Fr{\'e}do Durand and Antonio Torralba},
  title       = {A Benchmark of Computational Models of Saliency to Predict Human Fixations},
  institution = {MIT},
  year        = {2012}
}

Images

300 benchmark images (The fixations from the 39 viewers per image are held out so that no model can be trained on this data set.)

Model Performances

35 models, 5 baselines, 7 metrics, and counting...

Performance numbers from prior to September 25, 2014 (computed under the previous evaluation protocol).

Matlab code for the metrics we use.
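
For intuition, here is a minimal Matlab sketch of one of the simpler metrics, Normalized Scanpath Saliency (NSS): the saliency map is normalized to zero mean and unit standard deviation, and the score is the mean normalized saliency value at the fixated pixels. This is an illustrative sketch only; refer to the linked code for the exact implementations used by the benchmark.

  function score = NSS(salMap, fixationMap)
  % NSS: mean value of the normalized saliency map at fixation locations.
  %   salMap      - predicted saliency map, same size as fixationMap
  %   fixationMap - binary map with 1 at fixated pixels, 0 elsewhere
  salMap = double(salMap);
  salMap = (salMap - mean(salMap(:))) / std(salMap(:)); % zero mean, unit std
  score  = mean(salMap(fixationMap > 0));               % average at fixations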


Model Name Published Code AUC-Judd [?] SIM [?] EMD [?] AUC-Borji [?] sAUC [?] CC [?] NSS [?] Date tested [key] Sample [img]
Baseline: infinite humans [?] 0.91 1 0 0.87 0.80 1 3.18
Deep Gaze 1 Matthias Kümmerer, Lucas Theis, Matthias Bethge (paper coming soon) 0.84 0.39 4.97 0.83 0.66 0.48 1.22 first tested: 02/10/2014
last tested: 22/10/2014
maps from authors
Boolean Map based Saliency (BMS) Jianming Zhang, Stan Sclaroff. Saliency detection: a boolean map approach [ICCV 2013] matlab, executable 0.83 0.51 3.35 0.82 0.65 0.55 1.41 first tested: 14/05/2014
last tested: 23/09/2014
maps from authors
Mixture of Saliency Models Xuehua Han, Shunji Satoh. "Unifying computational models for visual attention" [AINI 2014, Sep. (accepted)] 0.82 0.44 4.22 0.81 0.62 0.52 1.34 first tested: 08/08/2014
last tested: 23/09/2014
maps from authors
Ensembles of Deep Networks (eDN) Eleonora Vig, Michael Dorr, David Cox. Large-Scale Optimization of Hierarchical Features for Saliency Prediction in Natural Images [CVPR 2014] python 0.82 0.41 4.56 0.81 0.62 0.45 1.14 first tested: 16/08/2014
last tested: 23/09/2014
maps from authors
Outlier Saliency (OS) Chuanbo Chen, He Tang, Zehua Lyu, Hu Liang, Jun Shang, Mudar Serem. Saliency Modeling via Outlier Detection [Journal of Electronic Imaging]. Accepted, 2014. 0.82 0.50 3.33 0.81 0.62 0.54 1.38 first tested: 16/09/2014
last tested: 23/09/2014
maps from authors
Judd Model Tilke Judd, Krista Ehinger, Fredo Durand, Antonio Torralba. Learning to predict where humans look [ICCV 2009] matlab 0.81 0.42 4.45 0.80 0.60 0.47 1.18 last tested: 23/09/2014
maps from code (DL:17/12/2013) with default params
CovSal Erkut Erdem, Aykut Erdem. Visual saliency estimation by nonlinearly integrating features using region covariances [JoV 2013] matlab 0.81 0.47 3.39 0.67 0.57 0.45 1.22 first tested: 05/02/2012
last tested: 23/09/2014
maps from authors
Fast and Efficient Saliency (FES) Hamed Rezazadegan Tavakoli, Esa Rahtu, Janne Heikkila. Fast and efficient saliency detection using sparse sampling and kernel density estimation [SCIA 2011] matlab 0.80 0.49 3.36 0.73 0.59 0.48 1.27 first tested: 10/04/2013
last tested: 23/09/2014
maps from authors
Graph-Based Visual Saliency (GBVS) Jonathan Harel, Christof Koch, Pietro Perona. Graph-Based Visual Saliency [NIPS 2006] matlab 0.81 0.48 3.51 0.80 0.63 0.48 1.24 last tested: 23/09/2014
maps from code (DL:20/08/2013) with default params
Spatially Weighted Dissimilarity Saliency (SWD) Lijuan Duan, Chunpeng Wu, Jun Miao, Laiyun Qing, Yu Fu. Visual Saliency Detection by Spatially Weighted Dissimilarity [CVPR 2011] matlab 0.81 0.46 3.89 0.80 0.59 0.49 1.27 first tested: 22/09/2014
last tested: 29/09/2014
maps from authors
Baseline: one human [?] 0.80 (min: 0.76, max: 0.83) 0.38 (min: 0.33, max: 0.46) 3.48 (min: 2.88, max: 4.18) 0.66 (min: 0.63, max: 0.71) 0.63 (min: 0.60, max: 0.67) 0.52 (min: 0.38, max: 0.65) 1.65 (min: 1.21, max: 2.10)
Sampled Template Collation Andreas Holzbach, Gordon Cheng. A Scalable and Efficient Method for Salient Region Detection using Sampled Template Collation [ICIP 2014] 0.79 0.39 4.79 0.78 0.54 0.40 0.97 first tested: 04/12/2013
last tested: 23/09/2014
maps from authors
Region Contrast (RC) Ming-Ming Cheng, Niloy J. Mitra, Xiaolei Huang, Philip H. S. Torr, Shi-Min Hu. Global Contrast based Salient Region detection [IEEE TPAMI 2014]
Ming-Ming Cheng, Niloy J. Mitra, Xiaolei Huang, Philip H. S. Torr, Shi-Min Hu. Salient Object Detection and Segmentation [CVPR 2011]
c++, executable 0.79 0.48 3.48 0.78 0.55 0.47 1.18 first tested: 08/03/2013
last tested: 23/09/2014
maps from authors
Multi-Resolution AIM (MR-AIM) Siddharth Advani, John Sustersic, Kevin Irick, Vijaykrishnan Narayanan. A multi-resolution saliency framework to drive foveation [ICASSP 2013] 0.77 0.43 4.04 0.76 0.55 0.39 0.96 first tested: 27/05/2013
last tested: 23/09/2014
maps from authors
CWS model Unpublished work 0.79 0.46 3.81 0.78 0.55 0.45 1.11 first tested: 14/05/2014
last tested: 23/09/2014
maps from authors
MKL-based model Yasin Kavak, Aykut Erdem, Erkut Erdem. Visual saliency estimation by integrating features using multiple kernel learning [ISACS 2013] 0.78 0.42 4.40 0.78 0.61 0.42 1.08 first tested: 17/03/2014
last tested: 23/09/2014
maps from authors
Baseline: Center [?] matlab 0.78 0.39 4.81 0.77 0.51 0.38 0.92
Saliency for Image Manipulation Ran Margolin, Lihi Zelnik-Manor, Ayellet Tal. Saliency for Image Manipulation [CGI 2012] matlab 0.77 0.46 4.17 0.76 0.64 0.43 1.14 first tested: 01/07/2012
last tested: 23/09/2014
maps from authors
RARE2012 Nicolas Riche, Matei Mancas, Matthieu Duvinage, Makiese Mibulumukini, Bernard Gosselin, Thierry Dutoit. RARE2012: A multi-scale rarity-based saliency detection with its comparative statistical analysis [Signal Processing: Image Communication, 2013] matlab 0.77 0.46 4.11 0.75 0.67 0.42 1.15 first tested: 31/08/2012
last tested: 23/09/2014
maps from authors
LMF Liang Jiayu Gary. Unpublished work. 0.77 0.45 4.22 0.76 0.64 0.41 1.07 first tested: 06/10/2013
last tested: 23/09/2014
maps from authors
AIM Neil Bruce, John Tsotsos. Attention based on information maximization [JoV 2007] matlab 0.77 0.40 4.73 0.75 0.66 0.31 0.79 last tested: 23/09/2014
maps from code (DL:15/01/2014) with params: resize=0.5, convolve=1, thebasis='31infomax975'
Random Center Surround Saliency Tadmeri Narayan Vikram, Marko Tscherepanow, Britta Wrede. A saliency map based on sampling an image into random rectangular regions of interest [Pattern Recognition 2012] matlab 0.75 0.44 3.81 0.74 0.55 0.38 0.95 first tested: 15/05/2012
last tested: 23/09/2014
maps from authors
Image Signature Xiaodi Hou, Jonathan Harel, Christof Koch. Image Signature: Highlighting Sparse Salient Regions [PAMI 2011] matlab 0.75 0.43 4.49 0.74 0.66 0.38 1.01 first tested: 19/06/2014
last tested: 23/09/2014
maps from authors
IttiKoch2 Implementation by Jonathan Harel (part of GBVS toolbox) matlab 0.75 0.44 4.26 0.74 0.63 0.37 0.97 last tested: 23/09/2014
maps from code (DL:20/08/2013) with default params
Visual Conspicuity (VICO) Matthieu Perreira Da Silva, Vincent Courboulay. Implementation and Evaluation of a Computational Model of Attention for Computer Vision [book chapter, 2012] binaries 0.75 0.44 4.38 0.71 0.60 0.37 0.97 first tested: 28/11/2012
last tested: 23/09/2014
maps from authors
Aboudib Magnification Saliency (Bottom-up v1) Ala Aboudib, Vincent Gripon, Gilles Coppin. Unpublished work. 0.74 0.44 4.24 0.72 0.58 0.39 0.99 first tested: 23/09/2014
last tested: 29/09/2014
maps from authors
Context-Aware saliency Stas Goferman, Lihi Zelnik-Manor, Ayellet Tal. Context-Aware Saliency Detection [CVPR 2010] [PAMI 2012] matlab 0.74 0.43 4.46 0.73 0.65 0.36 0.95 last tested: 23/09/2014
maps from code (DL:15/01/2014) with default params
Adaptive Whitening Saliency Model (AWS) Anton Garcia-Diaz, Victor Leboran, Xose R. Fdez-Vidal, Xose M. Pardo. On the relationship between optical variability, visual saliency, and eye fixations: A computational approach [JoV 2012] matlab 0.74 0.43 4.62 0.73 0.68 0.37 1.01 last tested: 23/09/2014
maps from code (DL:17/01/2014) with params: rescale=0.5
Weighted Maximum Phase Alignment Model (WMAP) Fernando Lopez-Garcia, Xose Ramon Fdez-Vidal, Xose Manuel Pardo, Raquel Dosil. Scene Recognition through Visual Attention and Image Features: A Comparison between SIFT and SURF Approaches matlab 0.74 0.42 4.49 0.67 0.63 0.34 0.97 last tested: 23/09/2014
maps from code (DL:17/01/2014) with params: rescale=0.5
NARFI saliency Jiazhong Chen, Hua Cao, Zengwei Ju, Leihua Qin, Shuguang Su. Non-attention region first initialisation of k-means clustering for saliency detection [Electronics Letters 2013] 0.73 0.38 4.75 0.61 0.55 0.33 0.83 first tested: 05/11/2013
last tested: 23/09/2014
maps from authors
Self-resemblance by LARK Hae Jong Seo, Peyman Milanfar. Static and Space-time Visual Saliency Detection by Self-Resemblance [JoV 2012] matlab 0.71 0.41 4.55 0.69 0.64 0.31 0.83 first tested: 20/06/2014
last tested: 23/09/2014
maps from authors
Murray model (Chromatic Induction Wavelet Model) Naila Murray, Maria Vanrell, Xavier Otazu, C. Alejandro Parraga. Saliency Estimation Using a Non-Parametric Low-Level Vision Model [CVPR 2011] matlab 0.70 0.38 5.18 0.69 0.65 0.27 0.73 last tested: 23/09/2014
maps from code (DL:29/05/2014) with default params
Quantum-Cuts (QCUT) Caglar Aytekin, Serkan Kiranyaz, Moncef Gabbouj 0.69 0.39 4.86 0.64 0.55 0.27 0.71 first tested: 19/12/2013
last tested: 23/09/2014
maps from authors
Torralba saliency Antonio Torralba, Aude Oliva, Monica S. Castelhano, John M. Henderson. Contextual Guidance of Attention in Natural scenes: The role of Global features on object search [Psychological Review 2006] matlab 0.68 0.39 4.99 0.68 0.62 0.25 0.69 last tested: 23/09/2014
maps from code (here) with default params
Baseline: Permutation Control [?] 0.68 0.33 4.73 0.59 0.50 0.20 0.49
SUN saliency Lingyun Zhang, Matthew H. Tong, Tim K. Marks, Honghao Shan, Garrison W. Cottrell. SUN: A Bayesian framework for saliency using natural statistics [JoV 2008] matlab 0.67 0.38 5.10 0.66 0.61 0.25 0.68 last tested: 23/09/2014
maps from code (DL:15/01/2014) with params: scale=0.5
IttiKoch Implemented in the Saliency Toolbox by Dirk Walther, Christof Koch. Modeling attention to salient proto-objects [Neural Networks 2006] matlab 0.60 0.20 5.17 0.54 0.53 0.14 0.43 last tested: 23/09/2014
maps from code (DL:15/01/2014) with params: sampleFactor='dyadic'
Achanta Radhakrishna Achanta, Sheila Hemami, Francisco Estrada, Sabine Susstrunk. Frequency-tuned Salient Region Detection [CVPR 2009] matlab, c++, executable 0.52 0.29 5.77 0.52 0.52 0.04 0.13 last tested: 23/09/2014
maps from code (DL:15/01/2014) with params: GausParam=[3,3]
Baseline: Chance [?] matlab 0.50 0.31 5.73 0.50 0.50 0.00 0.00

Submit a new model

Instructions:
1) Download our 300 images (IMAGES.zip)
2) Run your model to create a saliency map for each image. The saliency maps should be .jpg images with the same size and name as the original images (see the sketch after this list).
3) Submit your maps to saliency@mit.edu (as a zip or tar folder).
4) We run the scoring metrics to compare how well your saliency maps predict where 39 observers looked on the images. Because we do not make the fixations public (to prevent any model from being trained on this data), it is not possible to score a model on your own. For reference, you can see the Matlab code we use to score models.
5) We post your score and model details on this page.
6) Let us know if you have a publication, website, or publicly available code for your model that we can link to your score in the chart above.
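
As a rough illustration of steps 1-2, the Matlab sketch below produces one appropriately named and sized .jpg map per benchmark image. The folder names and the function mySaliencyModel are placeholders for your own paths and model:

  % Sketch: one .jpg saliency map per image, same file name and size as
  % the original. Folder names and mySaliencyModel are placeholders.
  imgFiles = dir(fullfile('IMAGES', '*.jpg'));
  for i = 1:length(imgFiles)
      img    = imread(fullfile('IMAGES', imgFiles(i).name));
      salMap = mySaliencyModel(img);                  % your model goes here
      salMap = mat2gray(imresize(salMap, [size(img,1) size(img,2)]));
      imwrite(salMap, fullfile('SALIENCYMAPS', imgFiles(i).name));
  end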

Optimize your model

Models that include some blur and center bias, and that are histogram-matched to the distribution of human fixations (i.e. not too dense or too sparse), tend to perform better under our evaluation protocol. To optimize these parameters, you can use the MIT ICCV dataset, which comes with ground truth human fixations. We provide some sample optimization code to select parameters for a subset of this dataset; a rough post-processing sketch follows below. The evaluation code is the same as that used for the MIT Saliency Benchmark. The ICCV data set is an appropriate choice for tuning because it closely matches the eyetracking set-up and data collection procedure used for the MIT Saliency Benchmark (same eyetracker, screen size, distance to viewer, image presentation times, image types, etc.).
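
To make the role of these parameters concrete, here is a minimal Matlab sketch (using the Image Processing Toolbox) that adds blur and a center prior to a raw saliency map. sigmaPx and wCenter are free parameters to tune on the ICCV data, not values we prescribe, and histogram matching is omitted for brevity:

  % Sketch: post-process a raw saliency map with blur and a center prior.
  % sigmaPx (blur width in pixels) and wCenter (center-prior weight in
  % [0,1]) are free parameters to optimize on training data.
  salMap = mat2gray(salMap);                           % rescale to [0,1]
  ker    = fspecial('gaussian', 6*ceil(sigmaPx)+1, sigmaPx);
  salMap = imfilter(salMap, ker, 'replicate');         % blur
  [nRows, nCols] = size(salMap);
  [x, y] = meshgrid(1:nCols, 1:nRows);
  center = exp(-((x - nCols/2).^2 / (2*(nCols/3)^2) + ...
                 (y - nRows/2).^2 / (2*(nRows/3)^2))); % simple center prior
  salMap = (1 - wCenter) * salMap + wCenter * mat2gray(center);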

Saliency datasets

[If you have another fixation data set that you would like to list here, email saliency@mit.edu]

Dataset Citation Images Observers Tasks Durations Extra Notes
MIT Saliency Benchmark (this page) Tilke Judd, Fredo Durand, Antonio Torralba. A Benchmark of Computational Models of Saliency to Predict Human Fixations [MIT tech report 2012] 300 natural indoor and outdoor scenes
size: max dim: 1024px, other dim: 457-1024px
1 dva* ~ 35px
39
ages: 18-50
free viewing 3 sec This is the only data set with held-out human eye movements, and is used as a benchmark test set.
eyetracker: ETL 400 ISCAN (240Hz)
MIT data set Tilke Judd, Krista Ehinger, Fredo Durand, Antonio Torralba. Learning to Predict where Humans Look [ICCV 2009] 1003 natural indoor and outdoor scenes
size: max dim: 1024px, other dim: 405-1024px
1 dva ~ 35px
15
ages: 18-35
free viewing 3 sec Includes: 779 landscape images and 228 portrait images. Can be used as training data for MIT benchmark.
eyetracker: ETL 400 ISCAN (240Hz)
Eye Fixations in Crowd (EyeCrowd) data set Ming Jiang, Juan Xu, Qi Zhao. Saliency in Crowd [ECCV 2014] 500 natural indoor and outdoor images with varying crowd densities
size: 1024x768px
1 dva ~ 26px
16
ages: 20-30
free viewing 5 sec The images have a diverse range of crowd densities (up to 268 faces per image). Annotations available: faces labelled with rectangles; two annotations of pose and partial occlusion on each face.
eyetracker: Eyelink 1000 (1000Hz)
Fixations in Webpage Images (FiWI) data set Chengyao Shen, Qi Zhao. Webpage Saliency [ECCV 2014] 149 webpage screenshots in 3 categories.
size: 1360x768px
1 dva ~ 26px
11
ages: 21-25
free viewing 5 sec Text: 50, Pictorial: 50, Mixed: 49
eyetracker: Eyelink 1000 (1000Hz)
VIU data set Kathryn Koehler, Fei Guo, Sheng Zhang, Miguel P. Eckstein. What Do Saliency Models Predict? [JoV 2014] 800 natural indoor and outdoor scenes
size: max dim: 405px
1 dva ~ 27px
100, 22, 20, 38 (one group per task)
ages: 18-23
tasks: explicit saliency judgement, free viewing, saliency search, cued object search
durations: until response, 2 sec, 2 sec, 2 sec
eyetracker: Eyelink 1000 (250Hz)
Object and Semantic Images and Eye-tracking (OSIE) data set Juan Xu, Ming Jiang, Shuo Wang, Mohan Kankanhalli, Qi Zhao. Predicting Human Gaze Beyond Pixels [JoV 2014] 700 natural indoor and outdoor scenes, aesthetic photographs from Flickr and Google
size: 800x600px
1 dva ~ 24px
15
ages: 18-30
free viewing 3 sec A large portion of images have multiple dominant objects in the same image. Annotations available: 5,551 segmented objects with fine contours; annotations of 12 semantic attributes on each of the 5,551 objects
eyetracker: Eyelink 1000 (2000Hz)
VIP data set Keng-Teck Ma, Terence Sim, Mohan Kankanhalli. A Unifying Framework for Computational Eye-Gaze Research [Workshop on Human Behavior Understanding 2013] 150 neutral and affective images, randomly chosen from NUSEF dataset
75
ages: undergrads, postgrads, working adults
free viewing, anomaly detection 5 sec Annotations available: demographic and personality traits of the viewers (can be used for training trait-specific saliency models)
eyetracker: SMI RED 250 (120Hz)
MIT Low-resolution data set Tilke Judd, Fredo Durand, Antonio Torralba. Fixations on Low-Resolution Images [JoV 2011] 168 natural and 25 pink noise images at 8 different resolutions
size: 860x1024px
1 dva ~ 35px
8 viewers per image, 64 in total
ages: 18-55
free viewing 3 sec eyetracker: ETL 400 ISCAN (240Hz)
KTH Kootstra data set Gert Kootstra, Bart de Boer, Lambert R. B. Schomaker. Predicting Eye Fixations on Complex Visual Stimuli using Local Symmetry [Cognitive Computation 2011] 99 photographs from 5 categories.
size: 1024x768px
31
ages: 17-32
free viewing 5 sec Images by category: 19 images with symmetrical natural objects, 12 images of animals in a natural setting, 12 images of street scenes, 16 images of buildings, 40 images of natural environments.
eyetracker: Eyelink I
NUSEF data set Subramanian Ramanathan, Harish Katti, Nicu Sebe, Mohan Kankanhalli, Tat-Seng Chua. An eye fixation database for saliency detection in images [ECCV 2010] 758 everyday scenes from Flickr, aesthetic content from Photo.net, Google images, emotion-evoking IAPS pictures
size: 1024x728px
25 on average
ages: 18-35
free viewing 5 sec eyetracker: ASL
TUD Image Quality Database 2 H. Alers, H. Liu, J. Redi and I. Heynderickx. Studying the risks of optimizing the image quality in saliency regions at the expense of background content [SPIE 2010] 160 images (40 at 4 different levels of compression)
size: 600x600px
40, 20
ages: students
free viewing, quality assessment 8 sec eyetracker: iView X RED (50Hz)
Ehinger data set Krista Ehinger, Barbara Hidalgo-Sotelo, Antonio Torralba, Aude Oliva. Modeling search for people in 900 scenes [Visual Cognition 2009] 912 outdoor scenes
size: 800x600px
1 dva ~ 34px
14
ages: 18-40
search (person detection) until response eyetracker: ISCAN RK-464 (240Hz)
A Database of Visual Eye Movements (DOVES) Ian van der Linde, Umesh Rajashekar, Alan C. Bovik, Lawrence K. Cormack. DOVES: A database of visual eye movements [Spatial Vision 2009] 101 natural calibrated images
size: 1024x768px
29
ages: mean=27
free viewing 5 sec eyetracker: Fourward Tech. Gen. V (200Hz)
TUD Image Quality Database 1 H. Liu and I. Heynderickx. Studying the Added Value of Visual Attention in Objective Image Quality Metrics Based on Eye Movement Data [ICIP 2009] 29 images from the LIVE image quality database (varying dimensions) 20
ages: students
free viewing 10 sec eyetracker: iView X RED (50Hz)
Visual Attention for Image Quality (VAIQ) Database Ulrich Engelke, Anthony Maeder, Hans-Jurgen Zepernick. Visual Attention Modeling for Subjective Image Quality Databases [MMSP 2009] 42 images from 3 image quality databases: IRCCyN/IVC, MICT, and LIVE (varying dimensions) 15
ages: 20-60 (mean=42)
free viewing 12 sec eyetracker: EyeTech TM3
Toronto data set Neil Bruce, John K. Tsotsos. Attention based on information maximization [JoV 2007] 120 color images of outdoor and indoor scenes
size: 681x511px
20
ages: undergrads, grads
free viewing 4 sec A large portion of images here do not contain particular regions of interest.
eyetracker: ERICA workstation including a Hitachi CCD camera with an IR emitting LED
Fixations in Faces (FiFA) database Moran Cerf, Jonathan Harel, Wolfgang Einhauser, Christof Koch. Predicting human gaze using low-level saliency combined with face detection [NIPS 2007] 200 color outdoor and indoor scenes
size: 1024x768px
1 dva ~ 34px
8 free viewing 2 sec Images include salient objects and many different types of faces. This data set was originally used to establish that human faces are very attractive to observers and to test models of saliency that included face detectors. Object annotations are available.
eyetracker: Eyelink 1000 (1000Hz)
Le Meur data set Olivier Le Meur, Patrick Le Callet, Dominique Barba, Dominique Thoreau. A coherent computational approach to model the bottom-up visual attention [PAMI 2006] 27 color images 40 free viewing 15 sec eyetracker: Cambridge Research

(*) dva = degree of visual angle
Matlab code for computing visual angle, written to help standardize this computation and make it easier.
Why is this relevant to saliency modeling? A continuous fixation map is calculated by convolving the locations of fixation with a Gaussian of a particular sigma. This sigma is most commonly set to approximately 1 degree of visual angle, an estimate of the size of the fovea (Le Meur and Baccino, 2013). This gives an upper bound on how well we can predict where humans look on the images in a particular dataset, and thus should inform how we evaluate saliency models.
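
A minimal Matlab sketch of both computations, pixels per degree from the viewing geometry and a continuous fixation map from discrete fixations, follows. The geometry numbers are placeholders for your own setup, and h, w, fixX, fixY are assumed given:

  % Sketch: pixels per degree of visual angle, then a continuous fixation
  % map. distCm, screenWCm, screenWPx are placeholders for your setup;
  % fixX, fixY are fixation coordinates on an h-by-w image.
  distCm    = 60;                        % viewer-to-screen distance, cm (placeholder)
  screenWCm = 40;                        % screen width, cm (placeholder)
  screenWPx = 1280;                      % horizontal resolution, px (placeholder)
  pxPerDeg  = screenWPx / (2 * atand(screenWCm / (2 * distCm)));
  sigmaPx   = 1 * pxPerDeg;              % sigma ~ 1 dva (size of the fovea)
  fixMap    = zeros(h, w);
  fixMap(sub2ind([h, w], round(fixY), round(fixX))) = 1;
  ker       = fspecial('gaussian', 6*ceil(sigmaPx)+1, sigmaPx);
  fixMapCont = mat2gray(imfilter(fixMap, ker));   % continuous fixation map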


Other saliency-related data sets

IVC Data sets The Images and Video Communications team (IVC) of IRCCyN lab provides several image and video databases including eye movement recordings. Some of the databases are based on a free viewing task, other on a quality evaluation task.

Regional Saliency Dataset (RSD) [Li, Tian, Huang, Gao 2009] (paper) A dataset for evaluating visual saliency in video.

MSRA Salient Object Database [Liu et al. 2007] A database of 20,000 images with hand-labeled rectangles of the principal salient object by 3 users.

Updates

22/10/2014
- added Deep Gaze 1
03/10/2014
- added Aboudib Magnification Saliency
29/09/2014
- added SWD saliency
- added baseline sample images
25/09/2014
- changed evaluation protocol: omit histogram matching to human ground-truth maps prior to computing SIM and EMD
- note: old performances still available here
- added code for optimizing center, blur, and histogram matching to a human fixation distribution
- sample images added to page
18/09/2014
- code added for eDN
17/09/2014
- added Outlier Saliency (OS) model
13/08/2014
- added code to calculate degrees of visual angle
12/08/2014
- added 4 additional metrics (AUC-Borji, sAUC, CC, NSS) to results table
- created git repository to house matlab code
- added new 2014 TPAMI paper for Region Contrast model
11/08/2014
- added baseline 'one human' (i.e. 1 vs 38), to differentiate from baseline 'infinite' humans (i.e. inf vs inf)
- added Holzbach's "Sampled Template Collation" model
08/08/2014
- new model sent in by Shunji Satoh
07/08/2014
- model performances table is now sortable by different metrics
- added a new baseline: permutation control, based on Koehler et al. [JoV 2014]
29/07/2014
- two new saliency datasets have been included from Qi Zhao (corresponding to two ECCV 2014 papers)
- Matthieu Perreira Da Silva's Visual Conspicuity (VICO) code included
05/2014 - 07/2014
- all models have been retested, and latest timestamps included with results
- AIM, GBVS, SUN, IttiKoch1, and IttiKoch2 performances have slightly improved since first tested (latest code used, download date included)
06/2014
- new website launched at http://saliency.mit.edu; added comparison table for other saliency datasets
01/2012
- website first launched at http://people.csail.mit.edu/tjudd/SaliencyBenchmark