This tool can be used to find the most influential words in a document. We define the most influential words as those that most strongly push a trained classifier toward a particular classification.
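For intuition, here is a minimal sketch of one way such influence can be scored: occlusion, i.e., masking each word in turn and measuring the drop in the predicted probability of the target class. This is an illustrative approach given any trained Keras classifier, not necessarily the exact method this tool uses; the `pad_index` default is an assumed padding index:

```python
import numpy as np

def occlusion_scores(model, sample, target_class, pad_index=0):
    """Score each word by how much the target-class probability drops
    when that word is masked out. Illustrative only; the tool's own
    scoring may differ."""
    sample = np.asarray(sample)
    base = model.predict(sample[np.newaxis, :], verbose=0)[0][target_class]
    scores = []
    for i in range(len(sample)):
        masked = sample.copy()
        masked[i] = pad_index  # mask word i with the (assumed) padding index
        prob = model.predict(masked[np.newaxis, :], verbose=0)[0][target_class]
        scores.append(base - prob)  # larger drop => more influential word
    return scores
```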
We use a Convolutional Neural Network (CNN) model, as suggested by the Keras examples and others, that classifies IMDB and Wikileaks documents with the following accuracies (50/50 train/test split):
Dataset | Classes and Sizes | Training Accuracy | Testing Accuracy
---|---|---|---
IMDB | 25K positive, 25K negative reviews | 84% | 83%
Wikileaks (2-way) | 25K unclassified, 25K confidential documents | 95% | 95%
Wikileaks (3-way, imbalanced) | 25K unclassified, 25K confidential, 12K secret documents | 81% | 80%
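For reference, a minimal sketch of this kind of Keras CNN text classifier, modeled on the Keras IMDB CNN example; all hyperparameters below are illustrative assumptions, not the exact values behind the pre-trained models:

```python
from keras.models import Sequential
from keras.layers import Embedding, Dropout, Conv1D, GlobalMaxPooling1D, Dense

# Assumed hyperparameters for illustration only.
vocab_size, embed_dim, seq_len, num_classes = 20000, 128, 400, 2

model = Sequential([
    Embedding(vocab_size, embed_dim, input_length=seq_len),
    Dropout(0.2),
    Conv1D(250, 3, activation='relu'),  # 250 filters, kernel size 3
    GlobalMaxPooling1D(),
    Dense(250, activation='relu'),
    Dropout(0.2),
    Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```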
To use this tool, you must have Keras installed with a TensorFlow backend, along with the dependencies listed below.
- To install TensorFlow, follow these instructions: https://www.tensorflow.org/install/
- To install Keras, follow these instructions: https://keras.io/#installation
- To install NLTK, follow these instructions: https://www.nltk.org/install.html
NLTK requires the stopwords corpus and the WordNet data used by WordNetLemmatizer. To download them, run these commands in a Python interpreter:
```python
>>> import nltk
>>> nltk.download('stopwords')
>>> nltk.download('wordnet')
```
- To install NumPy, follow these instructions: https://www.scipy.org/install.html
- To install tqdm, follow these instructions: https://github.com/tqdm/tqdm#installation
- To install scikit-learn, follow these instructions: http://scikit-learn.org/stable/install.html
The IMDB dataset test script is currently being fixed and will be added here; in the meantime, wikileaks.py can be modified to work with Keras's built-in IMDB dataset, as sketched below.
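For example, Keras's built-in IMDB dataset already provides reviews in the word-index format these scripts expect; a minimal sketch (the `num_words` vocabulary cutoff is an assumed value):

```python
from keras.datasets import imdb
import numpy as np

# Load reviews as sequences of word indices; num_words=20000 is an
# assumed vocabulary cutoff, not a value mandated by this project.
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=20000)
X = np.concatenate((x_train, x_test))  # samples: arrays of word indices
y = np.concatenate((y_train, y_test))  # labels: 0 = negative, 1 = positive
```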
For the Wikileaks dataset test, follow these instructions:
- To use the pre-trained model for 2-way classification, run `python wikileaks.py`.
- To use the pre-trained model for 3-way classification, run `python wikileaks.py --num-classes 3`.
- To run the test from the raw cables.csv file, follow these steps:
  - Download the `cables.csv` file from the Internet Archive and place it inside the `dataset/wikileaks/` folder.
  - For unclassified and confidential documents, run the bash script `dataset/wikileaks/2-way/prepare_dataset.sh` (i.e., `./prepare_dataset.sh` from within that folder).
  - For unclassified, confidential, and secret documents (with an imbalanced secret class: 25K unclassified, 25K confidential, 12K secret), run the bash script `dataset/wikileaks/3-way/prepare_dataset.sh`.
  - Once the dataset has been prepared, you may run `wikileaks.py` as described in the first two bullets above.
For a new project, refer to one of the existing scripts and modify it accordingly. Things to keep in mind for new datasets:
- The scripts expect the data to be organized in two NumPy arrays: samples `X` and labels `y`. Each sample is an array of word indices, e.g., a review might be [1, 3, 400, 83, ..., 5], and each label is a class index, e.g., [3]. See the sketch after this list.
- To handle classes other than unclassified (class 0), confidential (class 1), and secret (class 2), refer to `utils/influential_vocab.py` to set the target class according to your dataset.
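As a concrete toy illustration of the expected format (the indices and labels below are made up):

```python
import numpy as np

# Two made-up documents as sequences of word indices (lengths may
# vary before padding), and their class indices.
X = np.array([[1, 3, 400, 83, 5],
              [7, 42, 9, 16]], dtype=object)
y = np.array([0, 2])  # e.g. 0 = unclassified, 2 = secret
```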
If you have any requests or problems, please submit an issue above with your dataset details and needs.
Keras provides functionality for assigning class weights to compensate for under-represented classes in the dataset. We provide this functionality in the code, if desired. Be aware that the weight assigned to each class must be tuned for your data.
To use this functionality, the model has to be created and trained from scratch, since the pre-trained model was not trained with class weights:
`python wikileaks.py --num-classes 3 --pre-trained False --use-class-weights True`
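Internally this corresponds to Keras's `class_weight` argument to `model.fit`; a minimal sketch, assuming a compiled Keras model (e.g., the CNN sketch above) and training arrays `X_train`/`y_train`, with placeholder weights that you would need to tune:

```python
# Placeholder weights: upweight the under-represented 'secret' class (2).
# These values are illustrative assumptions, not tuned recommendations.
class_weight = {0: 1.0, 1: 1.0, 2: 2.0}
model.fit(X_train, y_train,
          batch_size=32, epochs=5,
          class_weight=class_weight)
```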
If you would like more functionality to be added or find any bugs, please submit an issue on this page. Thank you!