The customers have dropped their money and gone on a rampage to find whoever took it. That is the premise of this project, in which we designed an intelligent robot agent inspired by Mr. Krabs from SpongeBob SquarePants that collects money dropped by customers at the Krusty Krab restaurant. Mr. Krabs and his robot clone must gather the money quickly before the angry customers catch them. To achieve this, our agents map their environment, implement key AI path-planning algorithms, and handle dynamic rerouting, obstacle avoidance, and multi-agent coordination.
- Introduction
- Objectives
- Environment Design
- Design Approach
- Implementation Details
- Challenges and Solutions
- Testing and Results
- Installation
This project simulates Mr. Krabs and his robot clone collecting money in the Krusty Krab restaurant while avoiding angry customers. The agents must map their environment, implement path-planning algorithms, and coordinate to efficiently gather the money while avoiding dynamic obstacles.
- Implement inverse range sensor algorithm for environment mapping
- Develop dynamic navigation using A* and Dijkstra's algorithms
- Handle real-time dynamic obstacle avoidance
- Create inter-agent communication protocol for task coordination
- Implement value-based prioritization for target collection
The environment is designed as a lore-accurate replica of the Krusty Krab restaurant, featuring:
- Custom-designed assets using Ibis Paint X
- Dynamic obstacles (angry customers)
- Static obstacles (tables, chairs, walls)
- Environment dimensions: 2616 × 1816 pixels (scaled by 2.2)
- Cell-based grid system for navigation
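To illustrate the cell-based grid system, a minimal sketch of converting between pixel and grid-cell coordinates (the `CELL_SIZE` value and function names here are illustrative assumptions, not the project's actual parameters):

```python
# Hypothetical pixel-to-cell conversion over the 2616 x 1816 px environment.
CELL_SIZE = 40  # px per grid cell (illustrative value, not the project's setting)

def pixel_to_cell(x_px, y_px, cell_size=CELL_SIZE):
    """Map a pixel coordinate to its (row, col) grid cell."""
    return int(y_px // cell_size), int(x_px // cell_size)

def cell_to_pixel(row, col, cell_size=CELL_SIZE):
    """Map a grid cell back to the pixel coordinate of its centre."""
    return (col + 0.5) * cell_size, (row + 0.5) * cell_size
```

Keeping navigation in cell space while drawing in pixel space lets the planner work on a small grid regardless of the sprite resolution.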
- 2D occupancy grid representation
- Inverse sensor model for probability calculation
- Real-time grid updates based on robot sensing
- Visual feedback of explored areas and obstacles
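The mapping bullets above can be sketched as a log-odds occupancy-grid update driven by an inverse sensor model. The increment constants and helper names below are illustrative assumptions, not the project's API:

```python
import numpy as np

# Log-odds increments for occupied/free evidence (assumed tuning values).
L_OCC, L_FREE = 0.85, -0.4

def update_cell(grid, cell, hit):
    """Shift a cell's log-odds toward occupied (hit) or free (miss)."""
    r, c = cell
    grid[r, c] += L_OCC if hit else L_FREE
    return grid

def occupancy_prob(grid):
    """Convert log-odds back to occupancy probabilities in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + np.exp(grid))

grid = np.zeros((4, 4))            # log-odds 0 == probability 0.5 (unknown)
update_cell(grid, (1, 2), hit=True)   # sensor return: obstacle sensed here
update_cell(grid, (1, 1), hit=False)  # cell observed as free
```

Log-odds accumulation is the standard trick behind inverse sensor models: repeated observations push each cell's probability toward 0 or 1 while unexplored cells stay at 0.5.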
- Heuristic-based pathfinding
- Dynamic obstacle avoidance capabilities
- Priority queue implementation for path exploration
- Real-time path recalculation
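As a sketch of the heuristic-based pathfinding and priority-queue exploration above, here is a compact grid A* assuming a 4-connected grid and a Manhattan heuristic (not the project's exact implementation):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid; cells equal to 1 are blocked."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no path exists
```

Real-time recalculation then amounts to re-running the search whenever a dynamic obstacle invalidates the current path.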
- Complete path search implementation
- Cost-based path optimization
- Dynamic environment handling
- Obstacle avoidance integration
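The complete, cost-based search above can be sketched as Dijkstra's algorithm on the same grid, which for uniform step costs is equivalent to A* with a zero heuristic (names here are illustrative, not the project's code):

```python
import heapq

def dijkstra(grid, start, goal):
    """Return the minimum step cost from start to goal on a 4-connected grid."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    frontier = [(0, start)]
    while frontier:
        d, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return d
        if d > dist[(r, c)]:        # stale queue entry, skip
            continue
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                if d + 1 < dist.get(nb, float("inf")):
                    dist[nb] = d + 1
                    heapq.heappush(frontier, (d + 1, nb))
    return float("inf")             # goal unreachable
```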
- Dynamic obstacle movement patterns with 90-degree rotations
- Random direction changes for unpredictability
- Static obstacle detection and avoidance
- Agent sensing with configured maximum range
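The customer movement pattern described above (90-degree rotations plus random direction changes) might look like the following sketch; the turn probability and helper names are assumptions for illustration:

```python
import random

# Headings as (dr, dc) offsets; stepping the index rotates by 90 degrees.
DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # E, S, W, N

def step_customer(pos, dir_idx, blocked, turn_prob=0.2, rng=random):
    """Advance a customer one cell; return (new_pos, new_dir_idx)."""
    if rng.random() < turn_prob:            # random 90-degree turn
        dir_idx = (dir_idx + rng.choice((1, -1))) % 4
    dr, dc = DIRS[dir_idx]
    nxt = (pos[0] + dr, pos[1] + dc)
    if nxt in blocked:                      # static obstacle ahead: rotate 90
        return pos, (dir_idx + 1) % 4
    return nxt, dir_idx
```

Restricting turns to 90-degree increments keeps customers aligned with the grid while the random component makes their routes unpredictable for the agents.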
- Priority-based target assignment
- Cost-based path optimization
- Multi-agent workload distribution
- Coordinated multi-agent operation
- Reward division system
- Collision avoidance between agents
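The priority-based assignment and workload distribution above can be sketched as a greedy value-per-cost split between the two agents. The cost model (Manhattan distance) and all names here are illustrative assumptions:

```python
def manhattan(a, b):
    """Grid distance used as an assumed stand-in for path cost."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def assign_targets(agents, coins):
    """Assign each coin (pos, value) to the agent with the best
    value-to-distance ratio, visiting high-value coins first."""
    assignments = {i: [] for i in range(len(agents))}
    for pos, value in sorted(coins, key=lambda c: -c[1]):
        best = max(range(len(agents)),
                   key=lambda i: value / (1 + manhattan(agents[i], pos)))
        assignments[best].append((pos, value))
    return assignments
```

A true path cost from the planner could replace `manhattan` without changing the structure; the ratio simply trades reward against travel effort.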
- Python with PyGame for simulation
- NumPy for occupancy grid management
- Priority Queues for pathfinding algorithms
- Custom asset design with Ibis Paint X
- Robot class for agent behavior
- Environment mapping system
- Pathfinding algorithm implementations
- Dynamic obstacle management
- Multi-agent coordination system
| Challenge | Solution |
|---|---|
| Slow mapping | Optimized agent speed and sensing range parameters |
| Agent getting stuck | Implemented retry mechanism with maximum attempts |
| Agent collisions | Treated other agents as obstacles in pathfinding |
Performance comparison between A* and Dijkstra's algorithms in different scenarios:
| Scenario | A* (seconds) | Dijkstra (seconds) |
|---|---|---|
| Normal (4 obstacles) | 12.29 | 11.74 |
| High Traffic (8 obstacles) | 14.08 | 13.76 |
| Emergency (12 obstacles) | 16.00 | 16.27 |
The timings indicate that Dijkstra's algorithm completes marginally faster in the lower-density scenarios, while A* gains a slight edge as the obstacle count grows (the Emergency scenario).
Make sure you have Python installed.
Follow these steps to set up the environment and run the application:
- Clone the repository:

  ```bash
  git clone https://github.com/Sambonic/krusty-krabs-navigator
  cd krusty-krabs-navigator
  ```

- Create a Python virtual environment:

  ```bash
  python -m venv env
  ```

- Activate the virtual environment:

  On Windows:

  ```bash
  env\Scripts\activate
  ```

  On macOS and Linux:

  ```bash
  source env/bin/activate
  ```

- Ensure pip is up to date:

  ```bash
  python -m pip install --upgrade pip
  ```

- Install dependencies:

  ```bash
  pip install .
  ```

  Or simply run the pip install cell in the notebook.