Instructions:
1) Download our 300 images.
2) Run your model to create a saliency map for each image. The saliency maps should be .jpg images with the same size and filename as the original images (a minimal sketch of this output format follows this list).
3) Submit your maps to saliency@mit.edu (as a zip or tar archive whose folder is named after your model).
4) We run the scoring metrics to measure how well your saliency maps predict where 39 observers looked on the images. Because we do not make the fixations public (to prevent models from being trained on them), you cannot score your model yourself. For reference, you can see the Matlab code we use to score models.
5) We post your score and model details on this page.
6) Let us know if you have a publication, website, or publicly available code for your model, so we can link it to your score in the chart above.
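The sketch below illustrates the output format asked for in step 2: one .jpg saliency map per image, with the same filename and dimensions as the source image. It is not the benchmark's own code; compute_saliency is a placeholder for your model, and the folder names are hypothetical.

```python
# Minimal sketch (not official benchmark code): save one .jpg saliency map per
# image, matching the original image's filename and dimensions.
import os
import numpy as np
from PIL import Image

def compute_saliency(image: np.ndarray) -> np.ndarray:
    """Placeholder: return a 2-D float array of saliency values for `image`."""
    raise NotImplementedError("replace with your model")

in_dir, out_dir = "benchmark_images", "MyModelName"   # hypothetical folder names
os.makedirs(out_dir, exist_ok=True)

for name in os.listdir(in_dir):
    if not name.lower().endswith((".jpg", ".jpeg", ".png")):
        continue
    img = Image.open(os.path.join(in_dir, name)).convert("RGB")
    sal = compute_saliency(np.asarray(img))

    # Rescale to 0-255 and resize back to the original image dimensions.
    sal = (255 * (sal - sal.min()) / max(sal.max() - sal.min(), 1e-8)).astype(np.uint8)
    sal_img = Image.fromarray(sal).resize(img.size, Image.BILINEAR)

    # Keep the original base name so each map can be matched to its image.
    base, _ = os.path.splitext(name)
    sal_img.save(os.path.join(out_dir, base + ".jpg"))
```

The resulting folder (here "MyModelName") can then be zipped or tarred and submitted as in step 3.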
Optimize your model
Models that apply some blur and center bias, and whose output is histogram-matched to the distribution of human fixations (i.e. neither too dense nor too sparse), tend to perform better under our evaluation protocol. To optimize these parameters you can use the MIT ICCV dataset, which comes with ground-truth human fixations. We provide sample optimization code to select parameters on a subset of this dataset; the evaluation code is the same as that used for the MIT Saliency Benchmark. The ICCV dataset is an appropriate testing choice because it closely matches the eyetracking setup and data collection procedure used for the MIT Saliency Benchmark (same eyetracker, screen size, distance to viewer, image presentation times, image types, etc.).
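As one concrete illustration of these adjustments, the sketch below applies blur, a center-bias prior, and histogram matching to a saliency map in Python. The function name, parameter values, and blending scheme are assumptions made for illustration; they are not the benchmark's Matlab evaluation or optimization code.

```python
# Illustrative post-processing sketch (not the benchmark's code). Blur width,
# center-bias weight, and the target histogram are made-up values; in practice
# they should be tuned on the MIT ICCV dataset with the provided sample code.
import numpy as np
from scipy.ndimage import gaussian_filter

def postprocess(sal: np.ndarray, target_values: np.ndarray,
                blur_sigma: float = 8.0, center_weight: float = 0.3) -> np.ndarray:
    """Apply blur, a Gaussian center-bias prior, and histogram matching."""
    h, w = sal.shape

    # 1) Blur: smooth the raw saliency map.
    sal = gaussian_filter(sal.astype(np.float64), sigma=blur_sigma)

    # 2) Center bias: blend with an isotropic Gaussian centered on the image.
    ys, xs = np.mgrid[0:h, 0:w]
    center = np.exp(-((ys - h / 2) ** 2 + (xs - w / 2) ** 2)
                    / (2 * (min(h, w) / 4) ** 2))
    sal = (1 - center_weight) * sal + center_weight * center

    # 3) Histogram matching: impose the value distribution of `target_values`
    #    (e.g. values sampled from ground-truth fixation maps) while keeping
    #    the rank order of the saliency map.
    order = np.argsort(sal, axis=None)
    matched = np.empty(sal.size)
    matched[order] = np.sort(target_values.ravel())[
        np.linspace(0, target_values.size - 1, sal.size).astype(int)]
    return matched.reshape(h, w)
```

Sweeping blur_sigma and center_weight over a small grid on the ICCV images, and scoring each setting with the provided evaluation code, is one straightforward way to pick values before submitting.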