Instructions:
1) Download the 2000 test images, organized by stimulus category.
2) Run your model to create saliency maps for each image. The saliency maps should be .jpg images with the same size and name as the original images, and should preserve the original directory structure (i.e. 100 images in each of 20 directories); see the sketch after this list.
3) Submit your maps to saliency@mit.edu (as a zip or tar archive, with the name of your model as the folder name).
4) We run the scoring metrics to compare how well your saliency maps predict where 24 observers looked on the images. Because we do not make the fixations public (to prevent models from being trained on them), it is not possible to score the model on your own. For reference, you can see the Matlab code we use to score models.
5) We post your score and model details on this page.
6) Let us know if you have a publication, website, or publicly available code for your model, so that we can link to it from your score in the chart above.
7) If you would like to submit results for the 2000 train images as well, we can report how well you are able to predict the fixations of 6 held-out observers on these images.
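For step 2, the following is a minimal Python sketch of producing maps in the required layout. The predict_saliency() function, the directory names, and the use of PIL are placeholder assumptions; only the output format (same-size, same-name .jpg files in the same directory structure) comes from the instructions above.

    from pathlib import Path
    import numpy as np
    from PIL import Image

    def predict_saliency(img):
        """Placeholder: replace with your model's forward pass.
        Should return a 2D float array (any value range)."""
        raise NotImplementedError

    IN_DIR, OUT_DIR = Path("TestImages"), Path("MyModelMaps")  # placeholder paths

    for img_path in sorted(IN_DIR.rglob("*.jpg")):
        img = Image.open(img_path).convert("RGB")
        sal = predict_saliency(img)
        # Rescale to 8-bit and resize to the original image dimensions
        sal = (255 * (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)).astype(np.uint8)
        out = Image.fromarray(sal).resize(img.size)
        out_path = OUT_DIR / img_path.relative_to(IN_DIR)  # same name and subdirectory
        out_path.parent.mkdir(parents=True, exist_ok=True)
        out.save(out_path)  # written as .jpg under the original filename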
Optimize your model
Given that we provide fixation data from 18 observers on a set of 2000 train images, you can approximate how well you will do on the benchmark's test data and use the train data for parameter optimization.
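Because the train fixations are public, you can hold out some train images and score yourself on them. Below is a rough Python sketch that uses NSS (normalized scanpath saliency), one standard fixation-prediction metric, to pick a blur width; the benchmark reports several metrics, and the Matlab scoring code remains the reference implementation. The loader producing (saliency map, fixation points) pairs is assumed to be your own.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def nss(sal_map, fix_points):
        """Normalized scanpath saliency: mean of the standardized map at
        fixated pixels. fix_points is an (N, 2) array of (row, col) indices."""
        s = (sal_map - sal_map.mean()) / (sal_map.std() + 1e-8)
        return float(s[fix_points[:, 0], fix_points[:, 1]].mean())

    def sweep_blur(pairs, sigmas=(2, 4, 8, 16)):
        """pairs: list of (raw saliency map, fixation points) from a held-out
        split of the train set. Returns the best sigma and all scores."""
        scores = {s: float(np.mean([nss(gaussian_filter(m.astype(float), s), f)
                                    for m, f in pairs]))
                  for s in sigmas}
        return max(scores, key=scores.get), scores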
Models that have some blur and center bias, and those that are histogram-matched to the distribution of human fixations (i.e. not too dense or too sparse), tend to obtain better performance under our evaluation protocol. You can also take a look at the sample optimization code to see how to select parameters. The evaluation code is the same as that used for the MIT Saliency Benchmark.
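As a concrete illustration of that advice, here is a rough Python sketch of such post-processing (the sample optimization code we provide is in Matlab): Gaussian blur, a simple center-bias prior, and rank-based histogram matching to a target value distribution. The parameter values and the shape of the center prior are illustrative assumptions to be tuned on the train data, not recommended settings.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def norm01(x):
        """Rescale an array to [0, 1]."""
        x = x.astype(float)
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    def postprocess(sal, sigma=8.0, center_weight=0.3, target_values=None):
        """Blur the map, mix in a Gaussian center prior, and optionally
        histogram-match it to a target distribution (e.g. values sampled
        from the mean human fixation map of the train set)."""
        sal = norm01(gaussian_filter(np.asarray(sal, dtype=float), sigma))
        h, w = sal.shape
        yy, xx = np.mgrid[0:h, 0:w]
        # Isotropic Gaussian center prior, std of one third of each dimension
        prior = np.exp(-0.5 * (((yy - h / 2) / (h / 3)) ** 2
                               + ((xx - w / 2) / (w / 3)) ** 2))
        sal = (1 - center_weight) * sal + center_weight * prior
        if target_values is not None:
            # Rank-order matching: keep the spatial ordering of the map but
            # impose the target value distribution (not too dense or sparse)
            flat = np.empty(sal.size)
            flat[np.argsort(sal, axis=None)] = np.sort(np.resize(target_values, sal.size))
            sal = flat.reshape(sal.shape)
        return norm01(sal)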