Showing 5 changed files with 1,578 additions and 0 deletions.
@@ -0,0 +1,103 @@
<?xml version='1.0' encoding='UTF-8'?>
<collection id="2024.aiwolfdial">
  <volume id="1" ingest-date="2024-09-10" type="proceedings">
    <meta>
      <booktitle>Proceedings of the 2nd International AIWolfDial Workshop</booktitle>
      <editor><first>Yoshinobu</first><last>Kano</last></editor>
      <publisher>Association for Computational Linguistics</publisher>
      <address>Tokyo, Japan</address>
      <month>September</month>
      <year>2024</year>
      <url hash="e633d020">2024.aiwolfdial-1</url>
      <venue>aiwolfdial</venue>
    </meta>
    <frontmatter>
      <url hash="3fb44190">2024.aiwolfdial-1.0</url>
      <bibkey>aiwolfdial-2024-1</bibkey>
    </frontmatter>
    <paper id="1">
      <title><fixed-case>AIW</fixed-case>olf<fixed-case>D</fixed-case>ial 2024: Summary of Natural Language Division of 6th International <fixed-case>AIW</fixed-case>olf Contest</title>
      <author><first>Yoshinobu</first><last>Kano</last></author>
      <author><first>Yuto</first><last>Sahashi</last></author>
      <author><first>Neo</first><last>Watanabe</last></author>
      <author><first>Kaito</first><last>Kagaminuma</last></author>
      <author><first>Claus</first><last>Aranha</last></author>
      <author><first>Daisuke</first><last>Katagami</last></author>
      <author><first>Kei</first><last>Harada</last></author>
      <author><first>Michimasa</first><last>Inaba</last></author>
      <author><first>Takeshi</first><last>Ito</last></author>
      <author><first>Hirotaka</first><last>Osawa</last></author>
      <author><first>Takashi</first><last>Otsuki</last></author>
      <author><first>Fujio</first><last>Toriumi</last></author>
      <pages>1–12</pages>
      <abstract>We held our 6th annual international AIWolf contest for automatically playing the Werewolf (“Mafia”) game, in which players try to find liars through conversation. The contest aims to promote the development of agents capable of more natural, higher-level conversation, involving longer contexts, personal relationships, semantics, pragmatics, and logic, and to reveal the capabilities and limits of generative AIs. In the Natural Language Division of the contest, eight Japanese-speaking agent teams and five English-speaking agents played games against each other. Using the game logs, we performed human subjective evaluations, computed win rates, and conducted detailed log analysis. We found that overall system performance has improved considerably over the previous year, owing to recent advances in LLMs. Several new ideas improve how the LLMs are used, such as summarization, characterization, and logic handled outside the LLMs. However, the agents are far from perfect; the generated talks are sometimes inconsistent with the game actions. Our future work includes revealing whether LLMs can maintain the duality of a “liar”, that is, holding a “true” and a “false” account of the agent’s circumstances at the same time, as well as how those circumstances appear to the other agents.</abstract>
      <url hash="0ca4803c">2024.aiwolfdial-1.1</url>
      <bibkey>kano-etal-2024-aiwolfdial</bibkey>
    </paper>
    <paper id="2">
      <title>Text Generation Indistinguishable from Target Person by Prompting Few Examples Using <fixed-case>LLM</fixed-case></title>
      <author><first>Yuka</first><last>Tsubota</last></author>
      <author><first>Yoshinobu</first><last>Kano</last></author>
      <pages>13–20</pages>
      <abstract>To achieve smooth and natural communication between a dialogue system and a human, the dialogue system needs to behave in a more human-like way. Recreating the personality of an actual person can be an effective way to do this. This study proposes a method to recreate a personality with a large language model (generative AI) without training, using only prompt techniques to keep the creation cost as low as possible. Collecting a large amount of dialogue data from a specific person is not easy, and training on such data requires a significant amount of time. Therefore, we aim to recreate the personality of a specific individual without using dialogue data. The personality referred to in this paper denotes the image of a person that can be determined solely from the input and output of text dialogues. Our experiments revealed that prompts combining profile information, responses to a few questions, and speaking characteristics extracted from those responses can improve the reproducibility of a specific individual’s personality.</abstract>
      <url hash="6a66b9c7">2024.aiwolfdial-1.2</url>
      <bibkey>tsubota-kano-2024-text</bibkey>
    </paper>
    <paper id="3">
      <title>Werewolf Game Agent by Generative <fixed-case>AI</fixed-case> Incorporating Logical Information Between Players</title>
      <author><first>Neo</first><last>Watanabe</last></author>
      <author><first>Yoshinobu</first><last>Kano</last></author>
      <pages>21–29</pages>
      <abstract>In recent years, AI models based on GPT have advanced rapidly. These models are capable of generating text, translating between different languages, and answering questions with high accuracy. However, the process behind their outputs remains a black box, making it difficult to ascertain the data influencing their responses. These AI models do not always produce accurate outputs and are known for generating incorrect information, known as hallucinations, whose causes are hard to pinpoint. Moreover, they still face challenges in solving complex problems that require step-by-step reasoning, despite various improvements like the Chain-of-Thought approach. There’s no guarantee that these models can independently perform logical reasoning from scratch, raising doubts about the reliability and accuracy of their inferences. To address these concerns, this study proposes the incorporation of an explicit logical structure into the AI’s text generation process. As a validation experiment, a text-based agent capable of playing the Werewolf game, which requires deductive reasoning, was developed using GPT-4. By comparing the model combined with an external explicit logical structure and a baseline that lacks such a structure, the proposed method demonstrated superior reasoning capabilities in subjective evaluations, suggesting the effectiveness of adding an explicit logical framework to the conventional AI models.</abstract>
      <url hash="5df0f585">2024.aiwolfdial-1.3</url>
      <bibkey>watanabe-kano-2024-werewolf</bibkey>
    </paper>
    <paper id="4">
      <title>Enhancing Dialogue Generation in Werewolf Game Through Situation Analysis and Persuasion Strategies</title>
      <author><first>Zhiyang</first><last>Qi</last></author>
      <author><first>Michimasa</first><last>Inaba</last></author>
      <pages>30–39</pages>
      <abstract>Recent advancements in natural language processing, particularly with large language models (LLMs) like GPT-4, have significantly enhanced dialogue systems, enabling them to generate more natural and fluent conversations. Despite these improvements, challenges persist, such as managing continuous dialogues, memory retention, and minimizing hallucinations. The AIWolfDial2024 addresses these challenges by employing the Werewolf Game, an incomplete information game, to test the capabilities of LLMs in complex interactive environments. This paper introduces an LLM-based Werewolf Game AI, where each role is supported by situation analysis to aid response generation. Additionally, for the werewolf role, various persuasion strategies, including logical appeal, credibility appeal, and emotional appeal, are employed to effectively persuade other players to align with its actions.</abstract>
      <url hash="e6e7332a">2024.aiwolfdial-1.4</url>
      <bibkey>qi-inaba-2024-enhancing</bibkey>
    </paper>
    <paper id="5">
      <title>Verification of Reasoning Ability using <fixed-case>BDI</fixed-case> Logic and Large Language Model in <fixed-case>AIW</fixed-case>olf</title>
      <author><first>Hiraku</first><last>Gondo</last></author>
      <author><first>Hiroki</first><last>Sakaji</last></author>
      <author><first>Itsuki</first><last>Noda</last></author>
      <pages>40–47</pages>
      <abstract>We attempt to improve the reasoning capability of LLMs in the werewolf game by combining BDI logic with LLMs. While LLMs such as ChatGPT have been developed and used for various tasks, they still have several weaknesses, and logical reasoning is one of them. We therefore introduce BDI logic-based prompts to verify the logical reasoning ability of LLMs in werewolf-game dialogue. Experiments and evaluations were conducted using “AI-Werewolf,” a communication game for AIs with incomplete information. From the results of games played by five agents, we compare the logical reasoning ability of LLMs using the win rate and the vote rate against the werewolf.</abstract>
      <url hash="2921e4e7">2024.aiwolfdial-1.5</url>
      <attachment type="Supplementary_Attachment" hash="a718feb1">2024.aiwolfdial-1.5.Supplementary_Attachment.zip</attachment>
      <bibkey>gondo-etal-2024-verification</bibkey>
    </paper>
    <paper id="6">
      <title>Enhancing Consistency of Werewolf <fixed-case>AI</fixed-case> through Dialogue Summarization and Persona Information</title>
      <author><first>Yoshiki</first><last>Tanaka</last></author>
      <author><first>Takumasa</first><last>Kaneko</last></author>
      <author><first>Hiroki</first><last>Onozeki</last></author>
      <author><first>Natsumi</first><last>Ezure</last></author>
      <author><first>Ryuichi</first><last>Uehara</last></author>
      <author><first>Zhiyang</first><last>Qi</last></author>
      <author><first>Tomoya</first><last>Higuchi</last></author>
      <author><first>Ryutaro</first><last>Asahara</last></author>
      <author><first>Michimasa</first><last>Inaba</last></author>
      <pages>48–57</pages>
      <abstract>The Werewolf Game is a communication game where players’ reasoning and discussion skills are essential. In this study, we present a Werewolf AI agent developed for the AIWolfDial 2024 shared task, co-hosted with the 17th INLG. In recent years, large language models like ChatGPT have garnered attention for their exceptional response generation and reasoning capabilities. We thus develop the LLM-based agents for the Werewolf Game. This study aims to enhance the consistency of the agent’s utterances by utilizing dialogue summaries generated by LLMs and manually designed personas and utterance examples. By analyzing self-match game logs, we demonstrate that the agent’s utterances are contextually consistent and that the character, including tone, is maintained throughout the game.</abstract>
      <url hash="a58ee113">2024.aiwolfdial-1.6</url>
      <bibkey>tanaka-etal-2024-enhancing</bibkey>
    </paper>
    <paper id="7">
      <title>An Implementation of Werewolf Agent That does not Truly Trust <fixed-case>LLM</fixed-case>s</title>
      <author><first>Takehiro</first><last>Sato</last></author>
      <author><first>Shintaro</first><last>Ozaki</last></author>
      <author><first>Daisaku</first><last>Yokoyama</last></author>
      <pages>58–67</pages>
      <abstract>Werewolf is an incomplete information game, which has several challenges when creating a computer agent as a player given the lack of understanding of the situation and individuality of utterance (e.g., computer agents are not capable of characterful utterance or situational lying). We propose a werewolf agent that solves some of those difficulties by combining a Large Language Model (LLM) and a rule-based algorithm. In particular, our agent uses a rule-based algorithm to select an output either from an LLM or a template prepared beforehand based on the results of analyzing conversation history using an LLM. It allows the agent to refute in specific situations, identify when to end the conversation, and behave with persona. This approach mitigated conversational inconsistencies and facilitated logical utterance as a result. We also conducted a qualitative evaluation, which resulted in our agent being perceived as more human-like compared to an unmodified LLM. The agent is freely available for contributing to advance the research in the field of Werewolf game.</abstract>
      <url hash="5fbbb6e4">2024.aiwolfdial-1.7</url>
      <attachment type="Supplementary_Attachment" hash="c84e91a7">2024.aiwolfdial-1.7.Supplementary_Attachment.pdf</attachment>
      <bibkey>sato-etal-2024-implementation</bibkey>
    </paper>
  </volume>
</collection>
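
Note: the entries added above appear to follow the ACL Anthology volume schema (a collection holding a volume, with meta, frontmatter, and paper elements). Below is a minimal sketch of how such a volume file could be read with Python's standard library; the local filename 2024.aiwolfdial.xml is an assumption for illustration, not something taken from this commit.

import xml.etree.ElementTree as ET

# Assumed local copy of the volume file shown in this diff (illustrative name).
tree = ET.parse("2024.aiwolfdial.xml")
collection = tree.getroot()

for volume in collection.findall("volume"):
    # The workshop title lives under <meta>.
    print(volume.findtext("meta/booktitle"))
    for paper in volume.findall("paper"):
        # <title> may contain nested <fixed-case> elements, so itertext()
        # is used to recover the full title string.
        title = "".join(paper.find("title").itertext())
        authors = ", ".join(
            f'{a.findtext("first")} {a.findtext("last")}'
            for a in paper.findall("author")
        )
        print(f'  {paper.get("id")}. {title} ({authors}), pp. {paper.findtext("pages")}')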