From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction

Image and Vision Computing

Víctor Campos Brendan Jou Xavier Giro-i-Nieto

A joint collaboration between:

Barcelona Supercomputing Center (BSC)
Universitat Politecnica de Catalunya (UPC)
UPC Image Processing Group
Columbia University
Digital Video and Multimedia Lab (DVMM)

Abstract

Visual multimedia have become an inseparable part of our digital social lives, and they often capture moments tied with deep affections. Automated visual sentiment analysis tools can provide a means of extracting the rich feelings and latent dispositions embedded in these media. In this work, we explore how Convolutional Neural Networks (CNNs), a now de facto computational machine learning tool particularly in the area of Computer Vision, can be specifically applied to the task of visual sentiment prediction. We accomplish this through fine-tuning experiments using a state-of-the-art CNN and via rigorous architecture analysis, we present several modifications that lead to accuracy improvements over prior art on a dataset of images from a popular social media platform. We additionally present visualizations of local patterns that the network learned to associate with image sentiment for insight into how visual positivity (or negativity) is perceived by the model.

Publication

Our article can be found on ScienceDirect. A preprint is publicly available on arXiv as well. You can also find it indexed on gitxiv.

Please cite with the following BibTeX code:

@article{campos2017from,
  title={From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction},
  author={Campos, Victor and Jou, Brendan and Giro-i-Nieto, Xavier},
  journal={Image and Vision Computing},
  year={2017}
}

You may also want to cite our publication in the more human-friendly APA style:

Campos, V., Jou, B., & Giro-i-Nieto, X. (2017, February). From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction. Image and Vision Computing.

Sentiment Maps

Sentiment maps

Data

The Twitter dataset used in our experiments was originally available at this URL from the University of Rochester. In September 2021 we noticed that it was no longer available, so we provide this mirror (57.4 MB).

Models

The weights for the best CNN model can be downloaded from here (217 MB). These same weights, modified to fit the fully convolutional architecture used to generate the sentiment maps, can be downloaded from here (217 MB).

The deep network was developed with Caffe, the deep learning framework from the Berkeley Vision and Learning Center (BVLC). You will need to follow these instructions to install Caffe before running the models.
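If you use Caffe's Python interface (pycaffe), running the downloaded weights on a single image looks roughly like the sketch below. The prototxt and file names are placeholders rather than the exact files shipped with the model, and the assumption that the final layer outputs a two-way (negative/positive) softmax should be checked against the deploy definition you use.

# Minimal pycaffe inference sketch. Assumes Caffe with the Python bindings
# is installed; 'deploy.prototxt' and 'twitter_finetuned.caffemodel' are
# placeholder names for the network definition and the downloaded weights.
import numpy as np
import caffe

caffe.set_mode_cpu()  # or caffe.set_mode_gpu()

net = caffe.Net('deploy.prototxt', 'twitter_finetuned.caffemodel', caffe.TEST)

# Standard preprocessing for an AlexNet/CaffeNet-style input:
# HxWxC RGB in [0, 1] -> CxHxW BGR in [0, 255], mean-subtracted.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_channel_swap('data', (2, 1, 0))
transformer.set_raw_scale('data', 255.0)
transformer.set_mean('data', np.array([104.0, 117.0, 123.0]))  # ImageNet BGR mean (assumption)

image = caffe.io.load_image('example.jpg')
net.blobs['data'].data[...] = transformer.preprocess('data', image)
output = net.forward()

# Assumption: the network has a single output blob holding two softmax
# probabilities, index 0 = negative and index 1 = positive sentiment.
probs = output[next(iter(output))].flatten()
print('P(negative) = %.3f, P(positive) = %.3f' % (probs[0], probs[1]))

The fully convolutional weights can be loaded in the same way; their output blob is a spatial map of scores rather than a single pair of probabilities, which is what produces the sentiment maps shown above.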

How to re-train the models?

We do not provide training code because we used Caffe's command-line tool to train the models. Please see the framework's website for details on how to download pre-trained models and fine-tune them on your data. Besides the trained models that can be used for inference, this repository provides text files with (image_id, label) tuples for all the cross-validation splits in the paper. These can be used to re-train the models, but you will need to download the dataset from the project site first.
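As a starting point, the sketch below shows one way to turn one of the provided split files into the "image_path label" list that Caffe's ImageData layer (or the convert_imageset tool) consumes. The split-file format (one whitespace-separated image_id and label per line), the file names, and the dataset directory layout are assumptions; adjust them to the files you actually have.

# Hedged sketch: convert a cross-validation split file into a Caffe image
# list. Paths and the assumed "image_id label" line format are placeholders.
import os

DATASET_DIR = 'twitter_dataset/images'   # where the downloaded images live
SPLIT_FILE = 'splits/train_fold_0.txt'   # one of the repo's split files (assumed name)
OUTPUT_FILE = 'train_list_fold_0.txt'    # list consumed by an ImageData layer

with open(SPLIT_FILE) as f_in, open(OUTPUT_FILE, 'w') as f_out:
    for line in f_in:
        if not line.strip():
            continue
        image_id, label = line.split()
        image_path = os.path.join(DATASET_DIR, image_id + '.jpg')
        f_out.write('%s %s\n' % (image_path, label))

From there, fine-tuning follows the standard Caffe workflow: write a solver prototxt pointing at your data layer and run the caffe train tool with the --weights flag set to the pre-trained model you want to start from.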

Acknowledgments

We would especially like to thank Albert Gil and Josep Pujal from the technical support team at the Image Processing Group at UPC, and Carlos Tripiana from the technical support team at the Barcelona Supercomputing Center.

This work has been supported by grant SEV2015-0493 of the Severo Ochoa Program awarded by the Spanish Government, by project TIN2015-65316 of the Spanish Ministry of Science and Innovation, and by contract 2014-SGR-1051 of the Generalitat de Catalunya.
We gratefully acknowledge the support of NVIDIA Corporation through the BSC/UPC NVIDIA GPU Center of Excellence.
The Image Processing Group at the UPC is an SGR14 Consolidated Research Group recognized and sponsored by the Catalan Government (Generalitat de Catalunya) through its AGAUR office.
This work has been developed in the framework of the project BigGraph TEC2013-43935-R, funded by the Spanish Ministerio de Economía y Competitividad and the European Regional Development Fund (ERDF).

Contact

If you have any general question about our work or code that may be of interest to other researchers, please use the public issues section of this GitHub repository.
