The MIT Saliency Benchmark (est. 2012) has transitioned hands.
The new benchmark can be found at https://saliency.tuebingen.ai.

MIT Saliency Benchmark Results: CAT2000

The following are the results of models evaluated on their ability to predict ground-truth human fixations on our benchmark dataset, which contains 2000 images from 20 categories with eye-tracking data from 24 observers. We post the results here and provide a way for people to submit new models for evaluation.

Citations

If you use any of the results or data on this page, please cite the following:

@misc{mit-saliency-benchmark,
   author       = {Zoya Bylinskii and Tilke Judd and Ali Borji and Laurent Itti and Fr{\'e}do Durand and Aude Oliva and Antonio Torralba},
   title        = {MIT Saliency Benchmark},
   howpublished = {http://saliency.mit.edu/}
}
@article{CAT2000,
   title     = {CAT2000: A Large Scale Fixation Dataset for Boosting Saliency Research},
   author    = {Borji, Ali and Itti, Laurent},
   journal   = {CVPR 2015 workshop on "Future of Datasets"},
   year      = {2015},
   note      = {arXiv preprint arXiv:1505.03581}
}
@article{salMetrics_Bylinskii,
    title    = {What do different evaluation metrics tell us about saliency models?},
    author   = {Zoya Bylinskii and Tilke Judd and Aude Oliva and Antonio Torralba and Fr{\'e}do Durand},
    journal  = {arXiv preprint arXiv:1604.03605},
    year     = {2016}
}

Images

2000 test images (the fixations from 24 observers per image are withheld so that no model can be trained on this set).
2000 train images with fixations from 18 observers (the remaining 6 observers per image are held out).

Model Performances

Model Visualizations

31 models, 5 baselines, 8 metrics, and counting...

Matlab code for the metrics we use.
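The benchmark's reference metric implementations are in Matlab. As an illustration only (not the benchmark's code, and with function names of our own choosing), here is a minimal NumPy sketch of two of the distribution-based metrics, SIM (histogram intersection) and KL (divergence), following their standard definitions on saliency maps treated as probability distributions:

```python
import numpy as np

def _normalize(m):
    # Shift to non-negative values, then scale to sum to 1
    # so the map can be treated as a probability distribution.
    m = m.astype(float) - m.min()
    return m / m.sum()

def sim(saliency_map, fixation_density):
    """Similarity (SIM): histogram intersection of the two
    normalized maps. 1 means identical distributions."""
    return float(np.minimum(_normalize(saliency_map),
                            _normalize(fixation_density)).sum())

def kl(saliency_map, fixation_density, eps=1e-12):
    """KL divergence of the fixation density from the model
    prediction. 0 means identical distributions."""
    p = _normalize(fixation_density)
    q = _normalize(saliency_map)
    return float((p * np.log(eps + p / (q + eps))).sum())
```

Both functions assume dense 2D arrays; the benchmark's Matlab code additionally handles resizing maps to a common resolution before comparison.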



NOTE: The MIT Saliency Benchmark will soon switch to sorting model performances by NSS.
This decision was made at the ECCV 2016 saliency tutorial. See:
Z. Bylinskii, T. Judd, A. Oliva, A. Torralba, F. Durand. What do different evaluation metrics tell us about saliency models? arXiv preprint arXiv:1604.03605, 2016.
M. Kümmerer, T. Wallis, M. Bethge. Information-theoretic model comparison unifies saliency metrics. PNAS, 112(52), 16054-16059, 2015.
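For reference, NSS (the metric the benchmark plans to sort by) is the mean of the z-scored saliency values at fixated pixels, so a value of 0 corresponds to chance. A minimal NumPy sketch of the standard definition (our own illustration, not the benchmark's Matlab code):

```python
import numpy as np

def nss(saliency_map, fixation_map):
    """Normalized Scanpath Saliency: z-score the saliency map,
    then average it over the fixated locations.
    fixation_map is a binary array marking fixated pixels."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(s[fixation_map.astype(bool)].mean())
```

With this definition, a model that assigns uniformly high values everywhere scores 0, and the "infinite humans" baseline above reaches 2.85 because human fixations concentrate on a few high-saliency regions.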

Model Name Published Code AUC-Judd SIM EMD AUC-Borji sAUC CC NSS KL Date tested Sample
Baseline: infinite humans 0.90 1 0 0.84 0.62 1 2.85 0 Complete results
Judd Model Tilke Judd, Krista Ehinger, Fredo Durand, Antonio Torralba. Learning to predict where humans look [ICCV 2009] matlab 0.84 0.46 3.60 0.84 0.56 0.54 1.30 0.94 last tested: 26/01/2015
maps from code (DL:17/12/2013) with default params
Complete results
Graph-Based Visual Saliency (GBVS) Jonathan Harel, Christof Koch, Pietro Perona. Graph-Based Visual Saliency [NIPS 2006] matlab 0.80 0.51 2.99 0.79 0.58 0.50 1.23 0.80 last tested: 26/01/2015
maps from code (DL:20/08/2013) with default params
Complete results
Baseline: Center matlab 0.83 0.42 4.31 0.81 0.50 0.46 1.06 1.13 Complete results
Baseline: Chance matlab 0.50 0.32 5.30 0.50 0.50 0.00 0.00 2.00 Complete results
Baseline: Permutation Control 0.80 0.55 2.25 0.71 0.50 0.63 1.63 2.42 Complete results
Baseline: one human 0.76 (min 0.39, max 0.95) 0.43 (min 0.00, max 0.78) 2.51 (min 0.00, max 16.57) 0.67 (min 0.45, max 0.92) 0.56 (min 0.38, max 0.86) 0.56 (min -0.13, max 0.96) 1.54 (min -0.31, max 5.50) 7.77 (min 0.81, max 23.81) Complete results
IttiKoch2 Implementation by Jonathan Harel (part of GBVS toolbox) matlab 0.77 0.48 3.44 0.76 0.59 0.42 1.06 0.92 last tested: 26/01/2015
maps from code (DL:20/08/2013) with default params
Complete results
Context-Aware saliency Stas Goferman, Lihi Zelnik-Manor, Ayellet Tal. Context-Aware Saliency Detection [CVPR 2010] [PAMI 2012] matlab 0.77 0.50 3.09 0.76 0.60 0.42 1.07 1.04 last tested: 26/01/2015
maps from code (DL:15/01/2014) with default params
Complete results
Adaptive Whitening Saliency Model (AWS) Anton Garcia-Diaz, Victor Leboran, Xose R. Fdez-Vidal, Xose M. Pardo. On the relationship between optical variability, visual saliency, and eye fixations: A computational approach [JoV 2012] matlab 0.76 0.49 3.36 0.75 0.61 0.42 1.09 0.94 last tested: 26/01/2015
maps from code (DL:17/01/2014) with params: rescale=0.5
Complete results
Weighted Maximum Phase Alignment Model (WMAP) Fernando Lopez-Garcia, Xose Ramon Fdez-Vidal, Xose Manuel Pardo, Raquel Dosil. Scene Recognition through Visual Attention and Image Features: A Comparison between SIFT and SURF Approaches matlab 0.75 0.47 3.28 0.69 0.60 0.38 1.01 1.65 last tested: 26/01/2015
maps from code (DL:17/01/2014) with params: rescale=0.5
Complete results
Murray model (Chromatic Induction Wavelet Model) Naila Murray, Maria Vanrell, Xavier Otazu, C. Alejandro Parraga. Saliency Estimation Using a Non-Parametric Low-Level Vision Model [CVPR 2011] matlab 0.70 0.43 3.79 0.70 0.59 0.30 0.77 1.14 last tested: 26/01/2015
maps from code (DL:29/05/2014) with default params
Complete results
Torralba saliency Antonio Torralba, Aude Oliva, Monica S. Castelhano, John M. Henderson. Contextual Guidance of Attention in Natural scenes: The role of Global features on object search [Psychological Review 2006] matlab 0.72 0.45 3.44 0.71 0.58 0.33 0.85 1.60 last tested: 26/01/2015
maps from code (here) with default params
Complete results
SUN saliency Lingyun Zhang, Matthew H. Tong, Tim K. Marks, Honghao Shan, Garrison W. Cottrell. SUN: A Bayesian framework for saliency using natural statistics [JoV 2008] matlab 0.70 0.43 3.42 0.69 0.57 0.30 0.77 2.22 last tested: 26/01/2015
maps from code (DL:15/01/2014) with params: scale=0.5
Complete results
IttiKoch Implemented in the Saliency Toolbox by: Dirk Walther, Christof Koch. Modeling attention to salient proto-objects [Neural Networks 2006] matlab 0.56 0.34 4.66 0.53 0.52 0.09 0.25 6.71 last tested: 26/01/2015
maps from code (DL:15/01/2014) with params: sampleFactor='dyadic'
Complete results
Achanta Radhakrishna Achanta, Sheila Hemami, Francisco Estrada, Sabine Susstrunk. Frequency-tuned Salient Region Detection [CVPR 2009] matlab, c++, executable 0.57 0.33 4.46 0.55 0.52 0.11 0.29 2.31 last tested: 26/01/2015
maps from code (DL:15/01/2014) with params: GausParam=[3,3]
Complete results
Aboudib Magnification Saliency (Bottom-up v2) Ala Aboudib, Vincent Gripon, Gilles Coppin. Unpublished work. python 0.81 0.58 2.10 0.77 0.55 0.64 1.57 1.41 first tested: 22/04/2015
last tested: 22/04/2015
maps from authors
Complete results
DeepFix Srinivas S S Kruthiventi, Kumar Ayush, R. Venkatesh Babu. DeepFix: A Fully Convolutional Neural Network for predicting Human Eye Fixations [arXiv 2015] 0.87 0.74 1.15 0.81 0.58 0.87 2.28 0.37 last tested: 02/10/2015 maps from authors
Complete results
Ensembles of Deep Networks (eDN) Eleonora Vig, Michael Dorr, David Cox. Large-Scale Optimization of Hierarchical Features for Saliency Prediction in Natural Images [CVPR 2014] python 0.85 0.52 2.64 0.84 0.55 0.54 1.30 0.97 last tested: 01/10/2015 maps from authors
Complete results
RARE2012- Improved Pierre Marighetto, Nicolas Riche, Matei Mancas. LSUN SALICON Challenge (http://lsun.cs.princeton.edu/leaderboard/#saliencysalicon) Improved from: Matlab 0.82 0.54 2.72 0.81 0.59 0.57 1.44 0.76 tested: 05/10/2015 maps from authors
Complete results
Boolean Map based Saliency (BMS) Jianming Zhang, Stan Sclaroff. Saliency detection: a Boolean map approach [ICCV 2013, PAMI 2015] matlab, executable 0.85 0.61 1.95 0.84 0.59 0.67 1.67 0.83 tested: 05/10/2015 maps from authors
Complete results
Fast and Efficient Saliency (FES) Hamed Rezazadegan Tavakoli, Esa Rahtu, Janne Heikkila. Fast and efficient saliency detection using sparse sampling and kernel density estimation [SCIA 2011] matlab 0.82 0.57 2.24 0.76 0.54 0.64 1.61 2.10
last tested: 18/10/2015
maps from authors
Complete results
AIM Neil Bruce, John Tsotsos. Attention based on information maximization [JoV 2007] matlab 0.76 0.44 3.69 0.75 0.60 0.36 0.89 1.13 last tested: 23/09/2014
maps from code (DL:15/01/2014) with params: resize=0.5, convolve=1, thebasis='31infomax975'
Complete results
SDDPM Navid Rabbani, Behzad Nazari, Saeid Sadri, Reyhaneh Rikhtehgaran. Efficient Bayesian approach to saliency detection based on Dirichlet process mixture [IET IP 2017] 0.81 0.52 2.31 0.80 0.54 0.51 1.22 1.44 first tested: 20/01/2016
last tested: 20/01/2016
maps from authors
Complete results
iSEEL Hamed R.-Tavakoli et al. Exploiting inter-image similarity and ensemble of extreme learners for fixation prediction using deep features [arXiv 2016] Matlab 0.84 0.62 1.78 0.81 0.59 0.66 1.67 0.92 first tested: 11/10/2016
last tested: 11/10/2016
maps from authors
Complete results
Saliency Attentive Model (SAM-ResNet) Marcella Cornia, Lorenzo Baraldi, Giuseppe Serra, Rita Cucchiara. Predicting Human Eye Fixations via an LSTM-based Saliency Attentive Model [IEEE TIP 2018] python 0.88 0.77 1.04 0.80 0.58 0.89 2.38 0.56 first tested: 30/10/2016
last tested: 03/03/2017
maps from authors
Complete results
Saliency Attentive Model (SAM-VGG) Marcella Cornia, Lorenzo Baraldi, Giuseppe Serra, Rita Cucchiara. Predicting Human Eye Fixations via an LSTM-based Saliency Attentive Model [IEEE TIP 2018] python 0.88 0.76 1.07 0.79 0.58 0.89 2.38 0.54 first tested: 30/10/2016
last tested: 03/03/2017
maps from authors
Complete results
LDS Shu Fang, Jia Li, Yonghong Tian, Tiejun Huang, Xiaowu Chen. Learning Discriminative Subspaces on Random Contrasts for Image Saliency Analysis [TNNLS 2016] Matlab 0.83 0.58 2.09 0.79 0.56 0.62 1.54 0.79 first tested: 28/09/2016
last tested: 28/09/2016
maps from authors
Complete results
MixNet [anonymous] 0.86 0.66 1.63 0.82 0.58 0.76 1.92 0.62 first tested: 30/10/2016
last tested: 30/10/2016
maps from authors
Complete results
Eye Movement Laws (EYMOL) Dario Zanca, Marco Gori. Variational Laws of Visual Attention for Dynamic Scenes. [NIPS 2017] python 0.83 0.61 1.91 0.76 0.51 0.72 1.78 1.67 first tested: 23/02/2017
last tested: 03/03/2017
maps from authors
Complete results
SeeGAN William Edward Hahn, Elan Barenholtz, Mark Lenson 0.85 0.68 1.45 0.78 0.56 0.81 2.11 1.27 first tested: 22/02/2018
last tested: 22/02/2018
maps from authors
Complete results
Saliency Map Synthesized by Pseudo-inverse Matrix Nobuhide Matsuo, Shunji Satoh 0.84 0.64 1.72 0.80 0.54 0.76 1.93 0.85 first tested: 20/03/2018
last tested: 20/03/2018
maps from authors
Complete results
FENG-GUI (FG) Rafael Mizrahi demo 0.83 0.43 4.18 0.81 0.56 0.49 1.20 1.07 first tested: 19/04/2018
last tested: 24/06/2018
maps from authors
Complete results
EML-NET Sen Jia. EML-NET: An Expandable Multi-Layer NETwork for Saliency Prediction [arXiv 2018] 0.87 0.75 1.05 0.79 0.59 0.88 2.38 0.96 first tested: 20/03/2018
last tested: 20/03/2018
maps from authors
Complete results
CEDNS Chunhuan Lin, Fei Qi, Guangming Shi, Hao Li 0.88 0.73 1.27 0.74 0.58 0.85 2.39 0.34 first tested: 24/06/2018
last tested: 24/06/2018
maps from authors
Complete results
MSI-Net Alexander Kroner, Mario Senden, Kurt Driessens, Rainer Goebel. Contextual Encoder-Decoder Network for Visual Saliency Prediction [arXiv 2019] Python 0.88 0.75 1.07 0.82 0.59 0.87 2.30 0.36 first tested: 06/12/2018
last tested: 06/12/2018
maps from authors
Complete results

Model Performances broken down by stimuli category

The test images comprise 100 images from each of 20 stimulus categories. The results above are averaged across all images of all categories. To see model scores broken down per category, click on the 'Complete results' link provided for each model (in the table above).