diff --git a/README.md b/README.md
index fafe918..edb6afe 100644
--- a/README.md
+++ b/README.md
@@ -37,7 +37,7 @@ Below is the overview of IAI MovieBot 2.0 architecture. Blue components are inhe
    - Telegram
    - Flask REST
    - Flask socket.io
-  * Natural Language Understanding (NLU) [[doc](https://iai-moviebot.readthedocs.io/en/latest/architecture.html#natural-language-understanding)]
+  * Natural Language Understanding (NLU) [[doc](https://iai-moviebot.readthedocs.io/en/latest/nlu.html)]
    - Rule-based
    - JointBERT
  * Dialogue management
@@ -53,7 +53,7 @@ Below is the overview of IAI MovieBot 2.0 architecture. Blue components are inhe
 
 Training utilities:
 
-  * NLU training (JointBERT)
+  * NLU training (JointBERT) [[doc](https://iai-moviebot.readthedocs.io/en/latest/nlu.html#training-the-jointbert-model)]
   * Reinforcement learning training (DQN and A2C) using a user simulator [[doc](https://iai-moviebot.readthedocs.io/en/latest/reinforcement_learning.html)]
 
 ## Demos
diff --git a/docs/source/architecture.rst b/docs/source/architecture.rst
index e05f96a..78d408a 100644
--- a/docs/source/architecture.rst
+++ b/docs/source/architecture.rst
@@ -3,50 +3,15 @@ System Architecture
 
 This page provides a high-level overview of the architecture of our system. At this level of abstraction, our system constitutes a domain-independent framework for facilitating conversational item recommendation. Thus, even though we will be using movie-related examples for illustration, it is straightforward to adapt the system to other domains.
 
-The system architecture is shown in the figure below, illustrating the core process for each dialogue turn.
+The overview of the system architecture is shown in the figure below.
 
-.. image:: _static/Blueprint_MovieBot.png
+.. image:: _static/Blueprint_MovieBot_v2.png
 
+The main components of the system are:
 
-Natural Language Understanding
-------------------------------
-
-The :py:class:`NLU ` component converts the natural language :py:class:`UserUtterance ` into a :py:class:`DialogueAct `. This process, comprising of *intent detection* and *slot filling*, is performed based on the current dialogue state. The component offers two distinct solutions as modules: Rule-Based and Neural using JointBERT.
-
-Rule-Based NLU
-^^^^^^^^^^^^^^
-
-The rule-based NLU module utilizes a combination of keyword extraction and heuristics to determine the user's intent and extract slot-value pairs from their utterances. This approach relies on predefined rules and patterns to interpret user input.
-
-Neural NLU with JointBERT
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The Neural NLU module employs JointBERT, a neural model trained for predicting both the intent of the user's utterance and the corresponding slot-value pairs.
-
-Training the JointBERT Model
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-To train the JointBERT model, the provided training script (`moviebot/nlu/annotation/joint_bert/joint_bert_train.py`) can be utilized. This script downloads a pre-trained BERT model (`bert-base-uncased`) and fine-tunes it on a dataset annotated with intents and slot-value pairs. Below is an overview of the training process:
-
-1. **Data Preparation**: Ensure the dataset is properly formatted with annotations for intents and slot-value pairs. The data path should be specified using the `--data_path` argument in the training script.
-
-Example dataset format:
-```yaml
-REVEAL:
-  - text: "[I absolutely adore](modifier) movies focusing on [martial arts](keywords)."
- - text: "Films about [space exploration](keywords) [fascinate me](modifier)." - - text: "[I can't stand](modifier) movies that emphasize on [corporate politics](keywords)." - - text: "[Space adventures](keywords) [always intrigue me](modifier)." -``` - -2. **Model Initialization**: The model is initialized with the number of intent labels and slot labels based on the dataset. Additionally, hyperparameters such as learning rate, weight decay, and maximum epochs may be configured. - -3. **Training**: The training script supports logging with [Wandb](https://wandb.ai/site) for easy monitoring of training progress. - -4. **Model Saving**: After training, the trained model weights are saved to the specified output path (`--model_output_path`). Additionally, metadata including intent and slot names is saved in a JSON file for reference. - -6. **Usage**: Once trained, the JointBERT model can be integrated into the conversational system for natural language understanding tasks, by specifying the model path. - +- :doc:`Natural Language Understanding ` +- Dialogue Manager +- Natural Language Generation Dialogue Manager ---------------- diff --git a/docs/source/contact.rst b/docs/source/contact.rst deleted file mode 100644 index 70eab35..0000000 --- a/docs/source/contact.rst +++ /dev/null @@ -1,2 +0,0 @@ -Contact -======= \ No newline at end of file diff --git a/docs/source/dialogue.rst b/docs/source/dialogue.rst index db50789..eeb5cd3 100644 --- a/docs/source/dialogue.rst +++ b/docs/source/dialogue.rst @@ -16,31 +16,7 @@ User intents :py:class:`UserIntents ` -+------------------------+------------+ -| Intent | Description | -+========================+============+ -| Reveal | The user wants to reveal a preference. | -+------------------------+------------+ -| Inquire | Once the agent has recommended an item, the user can ask further details about it. | -+------------------------+------------+ -| Remove preference | The user wants to remove any previously stated preference. | -+------------------------+------------+ -| Reject | The user either has already seen/consumed the recommended item or does not like it. | -+------------------------+------------+ -| Accept | The user accepts (likes) the recommendation. This will determine the success of the system as being able to find a recommendation the user liked. | -+------------------------+------------+ -| Continue recommendation | If the user likes a recommendation, they can either restart, quit or continue the process to get a similar recommendation. | -+------------------------+------------+ -| Restart | The user wants to restart the recommendation process. | -+------------------------+------------+ -| Acknowledge | Acknowledge the agent's question where required. | -+------------------------+------------+ -| Deny | Negate the agent's question where required. | -+------------------------+------------+ -| Hi | When the user initiates the conversation, they start with a formal hi/hello or reveal preferences. | -+------------------------+------------+ -| Bye | End the conversation by sending a bye message or an exit command. | -+------------------------+------------+ +A detailed description is provided :doc:`here `. 
 
 Agent intents
 """""""""""""
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 542dc16..f2e8b8b 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -13,7 +13,6 @@ The distinctive features of IAI MovieBot include a task-specific dialogue flow,
    architecture
    dialogue
    reinforcement_learning
-   contact
 
 
 Indices and tables
diff --git a/docs/source/nlu.rst b/docs/source/nlu.rst
new file mode 100644
index 0000000..268689d
--- /dev/null
+++ b/docs/source/nlu.rst
@@ -0,0 +1,99 @@
+Natural Language Understanding
+==============================
+
+The natural language understanding (NLU) component converts incoming user utterances into dialogue acts.
+A dialogue act is a structured representation comprising an intent and parameters. A parameter is a triplet consisting of a slot, an operator, and a value. Note that the user intents and slots, which are domain-specific, are predefined and can easily be modified to fit a new domain.
+The NLU process is divided into two steps: intent classification and slot filling.
+
+Two types of NLU components are available:
+
+- :py:class:`RuleBasedNLU `
+- :py:class:`NeuralNLU `
+
+Rule-Based NLU
+--------------
+
+The rule-based NLU module utilizes a combination of keyword extraction and heuristics to determine the user's intent and extract slot-value pairs from their utterances. This approach relies on predefined rules and patterns to interpret user input.
+
+Neural NLU with JointBERT
+-------------------------
+
+The neural NLU module employs a JointBERT model with a CRF layer [1]_ to extract both the intent of the user's utterance and the corresponding slot-value pairs.
+
+Training the JointBERT Model
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To train the JointBERT model, the provided training script (`moviebot/nlu/annotation/joint_bert/joint_bert_train.py`) can be utilized. This script downloads a pre-trained BERT model (`bert-base-uncased`) and fine-tunes it on a dataset annotated with intents and slot-value pairs. Below is an overview of the training process:
+
+1. **Data Preparation**: Ensure the dataset is properly formatted with annotations for intents and slot-value pairs. The data path is specified using the `--data_path` argument of the training script.
+
+   Example dataset format:
+
+   .. code-block:: yaml
+
+      REVEAL:
+        - text: "[I absolutely adore](modifier) movies focusing on [martial arts](keywords)."
+        - text: "Films about [space exploration](keywords) [fascinate me](modifier)."
+        - text: "[I can't stand](modifier) movies that emphasize on [corporate politics](keywords)."
+        - text: "[Space adventures](keywords) [always intrigue me](modifier)."
+
+2. **Model Initialization**: The model is initialized with the number of intent labels and slot labels based on the dataset. Additionally, hyperparameters such as learning rate, weight decay, and maximum epochs may be configured.
+
+3. **Training**: The training script supports logging with `Wandb <https://wandb.ai/site>`_ for easy monitoring of training progress.
+
+4. **Model Saving**: After training, the trained model weights are saved to the specified output path (`--model_output_path`). Additionally, metadata including intent and slot names is saved in a JSON file for reference.
+
+5. **Usage**: Once trained, the JointBERT model can be integrated into the conversational system for natural language understanding tasks by specifying the model path.
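+
+To make the relation between the annotated data and the NLU output concrete, the following is a minimal, illustrative sketch of the dialogue act that could be produced for the first dataset example above. The class and field names, as well as the operator value, are assumptions made for this sketch and do not correspond to the actual classes in the codebase.
+
+.. code-block:: python
+
+   from dataclasses import dataclass, field
+   from typing import List
+
+   @dataclass
+   class Parameter:
+       """A (slot, operator, value) triplet. Names are illustrative only."""
+       slot: str
+       operator: str
+       value: str
+
+   @dataclass
+   class DialogueAct:
+       """An intent with its parameters. Names are illustrative only."""
+       intent: str
+       parameters: List[Parameter] = field(default_factory=list)
+
+   # Possible output for: "I absolutely adore movies focusing on martial arts."
+   act = DialogueAct(
+       intent="REVEAL",
+       parameters=[
+           Parameter(slot="modifier", operator="EQ", value="I absolutely adore"),
+           Parameter(slot="keywords", operator="EQ", value="martial arts"),
+       ],
+   )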
+
+User Intents
+------------
+
+:py:class:`UserIntents `
+
++--------------------------+----------------------------------------------+
+| Intent                   | Description                                  |
++==========================+==============================================+
+| Reveal                   | The user wants to reveal a preference.       |
++--------------------------+----------------------------------------------+
+| Inquire                  | Once the agent has recommended an item,      |
+|                          | the user can ask further details about it.   |
++--------------------------+----------------------------------------------+
+| Remove preference        | The user wants to remove any previously      |
+|                          | stated preference.                           |
++--------------------------+----------------------------------------------+
+| Reject                   | The user either has already seen/consumed    |
+|                          | the recommended item or does not like it.    |
++--------------------------+----------------------------------------------+
+| Accept                   | The user accepts (likes) the recommendation. |
+|                          | This will determine the success of the system|
+|                          | as being able to find a satisfying           |
+|                          | recommendation.                              |
++--------------------------+----------------------------------------------+
+| Continue recommendation  | If the user likes a recommendation, they can |
+|                          | either restart, quit, or continue the process|
+|                          | to get a similar recommendation.             |
++--------------------------+----------------------------------------------+
+| Restart                  | The user wants to restart the recommendation |
+|                          | process.                                     |
++--------------------------+----------------------------------------------+
+| Acknowledge              | Acknowledge the agent's question where       |
+|                          | required.                                    |
++--------------------------+----------------------------------------------+
+| Deny                     | Negate the agent's question where required.  |
++--------------------------+----------------------------------------------+
+| Hi                       | When the user initiates the conversation,    |
+|                          | they start with a formal hi/hello or reveal  |
+|                          | preferences.                                 |
++--------------------------+----------------------------------------------+
+| Bye                      | End the conversation by sending a bye message|
+|                          | or an exit command.                          |
++--------------------------+----------------------------------------------+
+
+Slots
+-----
+
+The slots are defined in the enumeration :py:class:`Slots `. Note that some of these slots, such as `imdb_link` and `cover_image`, cannot be filled from a user utterance.
+
+**References**
+
+.. [1] Chen, Q., Zhuo, Z., & Wang, W. (2019). BERT for Joint Intent Classification and Slot Filling. arXiv preprint arXiv:1902.10909.
\ No newline at end of file