The analysis pipeline for our paper 'Functional connectivity fingerprints of the frontal eye field and inferior frontal junction suggest spatial versus nonspatial processing in the prefrontal cortex'.
Updated Sep 30, 2023 - MATLAB
Image captioning with Visual Attention
Where do people look in images, on average? At rare, and thus surprising, things! Let's compute them automatically.
RARE2007 is a feature-engineered bottom-up saliency model that uses only color information (no orientation).
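The idea behind rarity-based models like RARE is that a pixel is salient when its features are rare in the image, i.e. its self-information -log p(feature) is high. A minimal sketch of that principle using quantized colors (an illustration only, not the published RARE2007/2012 algorithm; the function name and bin count are assumptions):

```python
import numpy as np

def rarity_saliency(image, bins=8):
    """Hypothetical sketch of rarity-based bottom-up saliency:
    a pixel is salient when its quantized colour is rare in the
    image (high self-information -log p(colour)). Not the actual
    RARE2007/2012 implementation."""
    # Quantize each RGB channel into `bins` levels, one index per pixel
    q = (image.astype(np.float64) / 256.0 * bins).astype(np.int64)
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    # Empirical probability of each quantized colour in this image
    counts = np.bincount(idx.ravel(), minlength=bins ** 3)
    p = counts / counts.sum()
    # Self-information: rare colours get high saliency
    sal = -np.log(p[idx] + 1e-12)
    # Normalise to [0, 1] for display
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```

The real RARE models add multi-scale feature extraction and fusion across channels, but the rarity (self-information) step above is the core mechanism the description refers to.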
Code for "Multiple decisions about one object involve parallel sensory acquisition but time-multiplexed evidence incorporation"
A public version of my TvL experiment for my 2021 RSI project. https://link.springer.com/article/10.3758/s13414-022-02503-5
A project aimed at analysing experimental human eye data to better understand the spatiotemporal dynamics of covert attention.
Official Code for 'Exploring Language Prior for Mode-Sensitive Visual Attention Modeling' (ACM MM 2020)
Official Implementation for NeurIPS 2023 Paper "What Do Deep Saliency Models Learn about Visual Attention"
Deep neural network image captioner using visual attention
We present SCENE-pathy, a dataset and a set of baselines to study the visual selective attention (VSA) of people towards the 3D scene in which they are located
RARE2012 is a feature-engineered bottom-up visual attention model
Image captioning of Flickr 8k dataset using Attention and Merge model
STNet: Selective Tuning of Convolutional Networks for Object Localization
Implementation of the 2016 paper "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention" on the Flickr30k dataset.
The scope of this research is to determine if there is any correlation between the level of experience of surgeons and their visual attention while performing surgeries.
ETTO (Eye-Tracking Through Objects) and EToCVD (Eye-Tracking of Colour Vision Deficiencies) datasets are shared with all who might be interested in working on Visual Attention/Visual Saliency.
Visual attention: what is salient in an image, with DeepRare2019
A model of mixed neural networks for step-by-step processing of dynamic visual scenes, activity recognition, and behavioral prediction