Lab 10 Peer Review #9

Open
LeoDardanello opened this issue Jan 6, 2024 · 0 comments

Comments

@LeoDardanello

Regarding your code for Lab 10, I've noticed the following traits:

  • Code cleanliness: the code is well subdivided into cells, each covering a specific topic or function, and the comments properly explain the various parameters and goals.

  • AI training: the idea of creating two agents, one for when the agent plays X and one for when it plays O, has its pros and cons. On the pro side, the reward process is simpler because each agent only has to consider a single symbol, and the Q-table of a single agent is smaller than that of an agent handling both symbols. The main con is the high specialization of each agent: a single agent will not be able to play adequately with both X and O (see the sketch after this list).

  • Statistics: it could have been useful to print or plot the size of the Q-table for each agent (also shown in the sketch after this list).

  • Interactive interface: in my implementation I also let the user play against a trained agent, but unfortunately I ran into some glitches, possibly caused by the way I was representing the grid. I'll try to change my implementation to fix those glitches, using your interactive_interface() function as a guide (a rough sketch of the kind of play loop I mean is included below).
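
To make the Q-table comparison and the size statistic concrete, here is a minimal, hypothetical sketch (not your actual code) of a single tabular Q-learning agent whose table is keyed on the symbol as well as the board state; with two separate agents, each table simply drops the symbol from the key, so it is smaller but only usable for that one symbol. The names and hyperparameters below are illustrative assumptions, not your implementation:

```python
from collections import defaultdict
import random

# Hypothetical single-agent Q-table keyed on (board_state, symbol, action),
# so one agent can play as either X or O. With two specialized agents the
# symbol would be dropped from the key, giving two smaller tables.
q_table = defaultdict(float)

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # illustrative values

def choose_action(state, symbol, free_cells):
    """Epsilon-greedy choice over the free cells of the board."""
    if random.random() < EPSILON:
        return random.choice(free_cells)
    return max(free_cells, key=lambda a: q_table[(state, symbol, a)])

def q_update(state, symbol, action, reward, next_state, next_free_cells):
    """Standard tabular Q-learning update."""
    best_next = max((q_table[(next_state, symbol, a)] for a in next_free_cells),
                    default=0.0)
    key = (state, symbol, action)
    q_table[key] += ALPHA * (reward + GAMMA * best_next - q_table[key])

# The statistic suggested above: just print the number of stored entries.
print(f"Q-table entries: {len(q_table)}")
```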
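
And here is a rough, self-contained sketch of the kind of interactive play loop I mean, assuming a 9-cell board stored as a tuple of characters and a trained agent exposed through a choose_action() helper like the one above; this is only an illustration, not your interactive_interface():

```python
def play_against_agent(agent_symbol="X", human_symbol="O"):
    """Minimal text loop: the trained agent moves first as X, the human
    types a cell index 0-8. ' ' marks a free cell."""
    board = tuple(" " * 9)
    turn = "X"
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    while True:
        free = [i for i in range(9) if board[i] == " "]
        if not free:
            print("Draw."); break
        if turn == agent_symbol:
            # for evaluation you would typically disable exploration (EPSILON = 0)
            move = choose_action(board, agent_symbol, free)
        else:
            move = int(input(f"Your move ({human_symbol}), cell 0-8: "))
            if move not in free:
                print("Cell already taken, try again."); continue
        board = board[:move] + (turn,) + board[move + 1:]
        print("\n".join(" ".join(board[r*3:(r+1)*3]) for r in range(3)), "\n")
        if any(all(board[i] == turn for i in line) for line in lines):
            print(f"{turn} wins."); break
        turn = "O" if turn == "X" else "X"
```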

Overall, your code shows a solid understanding of Q-learning. Good job! Keep it up!
