# Dialogue

Dialogue is notoriously hard to evaluate. Past approaches have used human evaluation.

## Dialogue act classification

Dialogue act classification is the task of classifying an utterance with respect to the function it serves in a dialogue, i.e. the act the speaker is performing. Dialogue acts are a type of speech act (for Speech Act Theory, see Austin (1975) and Searle (1969)).

### Switchboard corpus

The Switchboard-1 corpus is a telephone speech corpus consisting of about 2,400 two-sided telephone conversations among 543 speakers, with about 70 provided conversation topics. The dataset includes the audio files and the transcription files, as well as information about the speakers and the calls.

The Switchboard Dialogue Act Corpus (SwDA) [download] extends the Switchboard-1 corpus with tags from the SWBD-DAMSL tagset, which is an augmentation to the Discourse Annotation and Markup System of Labeling (DAMSL) tagset. The 220 tags were reduced to 42 tags by clustering in order to improve the language model on the Switchboard corpus. A subset of the Switchboard-1 corpus consisting of 1155 conversations was used. The resulting tags include dialogue acts like statement-non-opinion, acknowledge, statement-opinion, agree/accept, etc.
Annotated example:
Speaker: A, Dialogue Act: Yes-No-Question, Utterance: So do you go to college right now?

| Model | Accuracy | Paper / Source | Code |
| ----- | -------- | -------------- | ---- |
| CRF-ASN (Chen et al., 2018) | 81.3 | Dialogue Act Recognition via CRF-Attentive Structured Network | |
| Bi-LSTM-CRF (Kumar et al., 2017) | 79.2 | Dialogue Act Sequence Labeling using Hierarchical encoder with CRF | Link |
| RNN with 3 utterances in context (Bothe et al., 2018) | 77.34 | A Context-based Approach for Dialogue Act Recognition using Simple Recurrent Neural Networks | |

### ICSI Meeting Recorder Dialog Act (MRDA) corpus

The MRDA corpus [download] consists of about 75 hours of speech from 75 naturally occurring meetings among 53 speakers. The tagset used for labeling is a modified version of the SWBD-DAMSL tagset. It is annotated with three types of information: marking of the dialogue act segment boundaries, marking of the dialogue acts, and marking of correspondences between dialogue acts.
Annotated example:
Time: 2804-2810, Speaker: c6, Dialogue Act: s^bd, Transcript: i mean these are just discriminative.
Multiple dialogue acts are separated by "^".
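The compound label in the example above combines a general tag (`s`) with a specific tag (`bd`). A minimal sketch of splitting such a label into its constituent acts (assuming `^` is the only separator, as described above):

```python
def parse_mrda_tag(tag: str) -> list[str]:
    """Split a compound MRDA dialogue act label on '^' into its constituent tags."""
    return tag.split("^")

# The example above: general tag "s" combined with specific tag "bd"
acts = parse_mrda_tag("s^bd")  # -> ["s", "bd"]
```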

| Model | Accuracy | Paper / Source | Code |
| ----- | -------- | -------------- | ---- |
| CRF-ASN (Chen et al., 2018) | 91.7 | Dialogue Act Recognition via CRF-Attentive Structured Network | |
| Bi-LSTM-CRF (Kumar et al., 2017) | 90.9 | Dialogue Act Sequence Labeling using Hierarchical encoder with CRF | Link |

## Dialogue state tracking

Dialogue state tracking consists of determining, at each turn of a dialogue, the full representation of what the user wants at that point in the dialogue. This representation contains a goal constraint, a set of requested slots, and the user's dialogue act.
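The three components of a dialogue state can be sketched as a simple record; the field and slot names below are illustrative, not taken from any particular dataset:

```python
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    """Full representation of what the user wants at a given turn."""
    goal_constraints: dict = field(default_factory=dict)  # e.g. {"food": "italian"}
    requested_slots: set = field(default_factory=set)     # e.g. {"phone", "address"}
    user_dialogue_act: str = ""                           # e.g. "inform"

# Hypothetical state after the user says "I want a cheap Italian restaurant":
state = DialogueState(
    goal_constraints={"food": "italian", "pricerange": "cheap"},
    user_dialogue_act="inform",
)
```

A tracker updates such a state turn by turn, accumulating constraints and requests as the user reveals them.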

### Second dialogue state tracking challenge

For goal-oriented dialogue, the dataset of the second Dialogue Systems Technology Challenges (DSTC2) is a common evaluation dataset. The DSTC2 focuses on the restaurant search domain. Models are evaluated based on accuracy on both individual and joint slot tracking.
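Individual slot accuracy scores each slot independently per turn, while joint accuracy requires every slot in the turn to be correct at once. A minimal sketch of both metrics (turns represented as slot-to-value dicts; the data is made up for illustration):

```python
def slot_accuracies(predictions, gold):
    """Per-slot and joint accuracy over a list of turns.
    Each turn is a dict mapping slot name -> predicted/gold value."""
    slots = sorted({s for turn in gold for s in turn})
    per_slot = {
        s: sum(p.get(s) == g.get(s) for p, g in zip(predictions, gold)) / len(gold)
        for s in slots
    }
    # Joint accuracy: the whole turn must match exactly.
    joint = sum(p == g for p, g in zip(predictions, gold)) / len(gold)
    return per_slot, joint

gold = [{"area": "north", "food": "thai"}, {"area": "south", "food": "indian"}]
pred = [{"area": "north", "food": "thai"}, {"area": "south", "food": "chinese"}]
per_slot, joint = slot_accuracies(pred, gold)
# per_slot["area"] -> 1.0, per_slot["food"] -> 0.5, joint -> 0.5
```

Joint accuracy is always at most the lowest individual slot accuracy, which is why the Joint column in the table below is well under the per-slot numbers.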

| Model | Request | Area | Food | Price | Joint | Paper / Source |
| ----- | ------- | ---- | ---- | ----- | ----- | -------------- |
| Zhong et al. (2018) | 97.5 | - | - | - | 74.5 | Global-locally Self-attentive Dialogue State Tracker |
| Liu et al. (2018) | - | 90 | 84 | 92 | 72 | Dialogue Learning with Human Teaching and Feedback in End-to-End Trainable Task-Oriented Dialogue Systems |
| Neural belief tracker (Mrkšić et al., 2017) | 96.5 | 90 | 84 | 94 | 73.4 | Neural Belief Tracker: Data-Driven Dialogue State Tracking |
| RNN (Henderson et al., 2014) | 95.7 | 92 | 86 | 86 | 69 | Robust dialog state tracking using delexicalised recurrent neural networks and unsupervised gate |


### Wizard-of-Oz

The WoZ 2.0 dataset is a newer dialogue state tracking dataset whose evaluation is detached from the noisy output of speech recognition systems. Like DSTC2, it covers the restaurant search domain and uses the same evaluation.

| Model | Request | Joint | Paper / Source |
| ----- | ------- | ----- | -------------- |
| Zhong et al. (2018) | 97.1 | 88.1 | Global-locally Self-attentive Dialogue State Tracker |
| Neural belief tracker (Mrkšić et al., 2017) | 96.5 | 84.4 | Neural Belief Tracker: Data-Driven Dialogue State Tracking |
| RNN (Henderson et al., 2014) | 87.1 | 70.8 | Robust dialog state tracking using delexicalised recurrent neural networks and unsupervised gate |

## Retrieval-based Chatbot

The main task of retrieval-based chatbots is response selection, which aims to find the correct response from a pre-defined index.

### Ubuntu Corpus

The Ubuntu Corpus contains almost 1 million multi-turn dialogues from the Ubuntu Chat Logs. The task is to select the correct response from 10 candidates (the others are negatively sampled) given the previous conversation history. You can find more details here. The evaluation metric is recall at position K among N candidates (Recall_N@K).
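A minimal sketch of Recall_N@K for a single context: the model scores N candidates, and the metric checks whether the ground-truth response lands in the top K. Candidates are represented here as (score, is_correct) pairs; the scores below are made up for illustration:

```python
def recall_n_at_k(scored_candidates, k):
    """Recall_N@K for one context: 1 if the ground-truth response is
    ranked in the top K of the N scored candidates, else 0."""
    ranked = sorted(scored_candidates, key=lambda c: c[0], reverse=True)
    return int(any(is_correct for _, is_correct in ranked[:k]))

# 10 candidates; the correct one scored highest, so it counts for R_10@1.
candidates = [(0.9, True)] + [(0.1 * i, False) for i in range(9)]
recall_n_at_k(candidates, k=1)  # -> 1
```

Averaging this over all test contexts gives the R_2@1 and R_10@1 columns reported below (N = 2 and N = 10 candidate pools, K = 1).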

| Model | R_2@1 | R_10@1 | Paper / Source |
| ----- | ----- | ------ | -------------- |
| DAM (Zhou et al. 2018) | 93.8 | 76.7 | Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network |
| SMN (Wu et al. 2017) | 92.3 | 72.3 | Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots |
| Multi-View (Zhou et al. 2017) | 90.8 | 66.2 | Multi-view Response Selection for Human-Computer Conversation |
| Bi-LSTM (Kadlec et al. 2015) | 89.5 | 63.0 | Improved Deep Learning Baselines for Ubuntu Corpus Dialogs |