- User approaches the station with bags of mixed waste containing textiles, e-waste, paper, cardboard, glass, plastics, and metal.
- The station welcomes the user with an audio prompt while the display shows the sorting categories and basic instructions.
- The station’s camera with AI image recognition scans each item as the user presents it in front of the camera or places it on the sorting surface.
- User presents the item by holding it under the camera or placing it on the General Item Sorting Table.
- The camera and AI system analyze the object, identifying its shape, material, and category (plastic, metal, textile, etc.).
- If the item is ambiguous, the system may ask the user to place it in the designated area for further analysis or for inspection by additional sensors.
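The recognize-then-decide step above can be sketched as a simple confidence check. This is a minimal illustration, not the station's actual implementation: the threshold value, the `decide` function, and the returned action names are all assumptions.

```python
# Assumed cutoff below which an item counts as "ambiguous".
CONFIDENCE_THRESHOLD = 0.85

def decide(predictions):
    """predictions: list of (category, confidence) pairs, best match first."""
    category, confidence = predictions[0]
    if confidence >= CONFIDENCE_THRESHOLD:
        # Confident match: tell the station which bin to direct the item to.
        return {"action": "sort", "category": category}
    # Ambiguous item: ask the user to place it in the designated area
    # for further analysis, and keep the top candidates for the prompt.
    return {"action": "request_placement",
            "candidates": [c for c, _ in predictions[:3]]}

print(decide([("plastic", 0.96), ("metal", 0.03)]))
# → {'action': 'sort', 'category': 'plastic'}
```

A real station would feed `predictions` from its image-recognition model; the same decision shape also covers the additional-sensor path, since a second pass just produces a new `predictions` list.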
- The station provides feedback after each item is sorted, confirming the correct action (e.g., "Thank you for sorting metal!" or "Please remove the cap from this bottle").
- If the system is uncertain about the material (e.g., complex electronics), it may ask the user to confirm the item type or offer additional guidance.
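The two feedback cases above (confirmation vs. a request for user action or clarification) can be expressed as a small lookup; the table contents and function name here are illustrative assumptions based on the example messages in the text.

```python
# Hypothetical feedback table: per-category confirmations, plus
# follow-up instructions for items that need user action.
FEEDBACK = {
    "metal": "Thank you for sorting metal!",
    "plastic_bottle": "Please remove the cap from this bottle.",
}

def feedback_for(category, confident=True):
    if not confident:
        # Uncertain material (e.g. complex electronics): ask the user to confirm.
        return f"Is this {category.replace('_', ' ')}? Please confirm on the display."
    # Default confirmation for categories without a special message.
    return FEEDBACK.get(category, f"Thank you for sorting {category}!")

print(feedback_for("metal"))
# → Thank you for sorting metal!
```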
- Once all items have been sorted, the station thanks the user and shows a summary of the sorted waste (optionally including its environmental impact). The user is rewarded with eco/green points in a Pfand-like scheme; in parallel, the station sends data on the amount and types of collected resources to the central server.
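The end-of-session step can be sketched as counting the sorted items, awarding points, and building the report for the central server. The point values, field names, and the idea of weighting points by category are assumptions for illustration:

```python
import json
from collections import Counter

# Assumed point values per category; unlisted categories earn 1 point.
POINTS_PER_ITEM = {"metal": 3, "plastic": 2, "glass": 2, "paper": 1}

def close_session(sorted_items):
    """sorted_items: list of category names, one per item the user sorted."""
    counts = Counter(sorted_items)
    points = sum(POINTS_PER_ITEM.get(cat, 1) * n for cat, n in counts.items())
    report = {"counts": dict(counts), "eco_points": points}
    # In the real station this JSON payload would be sent to the central
    # server; here we just return it.
    return json.dumps(report)

print(close_session(["metal", "plastic", "plastic"]))
# → {"counts": {"metal": 1, "plastic": 2}, "eco_points": 7}
```

Keeping the summary and the server payload as one structure means the user-facing display and the central statistics can never disagree about what was collected.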
- Optional feedback: the user can be invited to leave feedback on the process.
- At any moment, users who feel stuck or don't understand can simply ask the machine about the process or the next step. The system listens to the question through its microphones, records it, and sends it to either an AGI or a human operator; it then receives the answer, shows it on the display, and voices it through the speakers.
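The help flow above can be sketched as a small routing function: transcribe the question, try the automated backend, and fall back to a human operator. The backends here are stand-in lambdas, and every function name is a placeholder, not a real API:

```python
def handle_question(audio, transcribe, ask_assistant, forward_to_operator):
    """Route a spoken question to an answer; backends are injected."""
    question = transcribe(audio)               # speech-to-text
    answer = ask_assistant(question)           # AGI backend; None if unsure
    if answer is None:
        answer = forward_to_operator(question)  # human-operator fallback
    # The station would show `answer` on the display and voice it through
    # the speakers; here we just return both strings.
    return question, answer

# Stand-in backends for illustration:
q, a = handle_question(
    b"...",  # raw microphone audio (placeholder bytes)
    transcribe=lambda audio: "Where does glass go?",
    ask_assistant=lambda q: None,                      # AGI declines to answer
    forward_to_operator=lambda q: "Use the green bin.",
)
print(a)  # → Use the green bin.
```

Injecting the backends keeps the routing logic testable and makes it easy to swap the AGI service or operator channel without touching the station code.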