A-MYCO: Prototyping the Eyes of a Myco-Robot

AMYCO is a project dedicated to building neural network-based tools for detecting and classifying mushrooms. The project includes a suite of Python scripts and shell utilities to preprocess data, train models, and deploy the system on a Raspberry Pi.


Table of Contents

  1. Overview
  2. File Descriptions
  3. Installation and Setup
  4. Steps to Set Up Automatic Environment on Raspberry Pi
  5. Contributors
  6. License

Overview

AMYCO leverages YOLO and FasterViT architectures to process images of mushrooms for detection and classification tasks. The pipeline includes tools for downloading datasets, preprocessing images, augmenting data, and training models. This README provides a brief description of each file and instructions for setting up and running the system.


File Descriptions

args.yaml

YOLO11 training settings.
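
args.yaml is consumed by the Ultralytics trainer. As a minimal sketch, a run can be launched from Python with a few of the settings it controls; the checkpoint name and the data.yaml path below are placeholders, assuming the standard Ultralytics API:

    from ultralytics import YOLO

    model = YOLO("yolo11n.pt")  # placeholder YOLO11 checkpoint
    # epochs, imgsz and batch mirror the kind of values args.yaml stores.
    model.train(data="data.yaml", epochs=100, imgsz=640, batch=16)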

converter.py

Converts segmentation data to bounding box format for use in detection models.
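
At its core, converting a segmentation label to a detection label means taking the extent of the normalized polygon. A minimal sketch of that step (the actual label-file handling in converter.py is not reproduced here):

    def polygon_to_bbox(coords):
        """coords: flat list [x1, y1, x2, y2, ...] of normalized polygon points."""
        xs, ys = coords[0::2], coords[1::2]
        x_min, x_max, y_min, y_max = min(xs), max(xs), min(ys), max(ys)
        # YOLO detection format: x_center, y_center, width, height (normalized).
        return ((x_min + x_max) / 2, (y_min + y_max) / 2,
                x_max - x_min, y_max - y_min)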

cropper.py

Uses a YOLO model to crop images, preparing them for training with FasterViT.
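
Cropping detections out of an image with a trained YOLO model typically looks like the sketch below; the model weights, input folder, and output naming are assumptions, not necessarily what cropper.py uses:

    from pathlib import Path
    from PIL import Image
    from ultralytics import YOLO

    model = YOLO("best.pt")          # hypothetical trained mushroom detector
    Path("crops").mkdir(exist_ok=True)

    for img_path in Path("images").glob("*.jpg"):
        result = model(img_path)[0]  # run detection on a single image
        img = Image.open(img_path)
        for i, box in enumerate(result.boxes.xyxy.tolist()):
            x1, y1, x2, y2 = map(int, box)
            img.crop((x1, y1, x2, y2)).save(f"crops/{img_path.stem}_{i}.jpg")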

csv_downloader.sh

A shell script to download CSV files from the Mushroom Observer website using links stored in links.txt.

environment.yaml

Defines the amyco Conda environment, listing all dependencies required to run the project.

fastervit_training.py

Trains the FasterViT model using the prepared dataset.
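
The internals of fastervit_training.py are not reproduced here; at its core it is a standard PyTorch classification loop. The sketch below uses a torchvision ResNet as a stand-in for the actual FasterViT model, with placeholder dataset path, class count, and hyperparameters:

    import torch
    from torch import nn, optim
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    NUM_CLASSES = 10                                   # placeholder: number of species
    model = models.resnet18(num_classes=NUM_CLASSES)   # stand-in for FasterViT

    tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    train_ds = datasets.ImageFolder("dataset/train", transform=tfm)  # hypothetical path
    loader = DataLoader(train_ds, batch_size=32, shuffle=True)

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.AdamW(model.parameters(), lr=1e-4)

    model.train()
    for epoch in range(10):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()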

get_images.py

Downloads images from the Mushroom Observer site. It filters out deprecated species and keeps the shortest name among synonyms.
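
The filtering logic can be pictured as follows; the CSV column names ("deprecated", "synonym_id", "text_name") and separator are assumptions, so check the headers of the files actually downloaded from Mushroom Observer:

    import pandas as pd

    names = pd.read_csv("names.csv", sep="\t")
    names = names[names["deprecated"] == 0]            # drop deprecated names

    has_syn = names["synonym_id"].notna()
    shortest = (names[has_syn]
                .sort_values("text_name", key=lambda s: s.str.len())
                .groupby("synonym_id")
                .head(1))                              # shortest name per synonym group
    names = pd.concat([names[~has_syn], shortest])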

links.txt

Contains links to CSV files used by csv_downloader.sh.

mixup.py

Performs mixup augmentation within a species, increasing the number of images by 10%.
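
Mixup blends two images with a weight drawn from a Beta distribution; applied within a single species, only the pixels need to be mixed. A rough sketch of that operation (the alpha value and file handling are assumptions):

    import numpy as np
    from PIL import Image

    def mixup_pair(path_a, path_b, alpha=0.2):
        """Blend two same-species images with a Beta(alpha, alpha) weight."""
        lam = np.random.beta(alpha, alpha)
        img_a = Image.open(path_a).convert("RGB")
        img_b = Image.open(path_b).convert("RGB").resize(img_a.size)
        mixed = (lam * np.asarray(img_a, dtype=np.float32)
                 + (1 - lam) * np.asarray(img_b, dtype=np.float32))
        return Image.fromarray(mixed.astype(np.uint8))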

remove_duplicates.py

Checks the uniqueness of images by comparing MD5 hashes, removing duplicates to ensure data integrity.
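
Hash-based deduplication keeps only the first occurrence of each digest. A minimal sketch (the directory layout and file extension are assumptions):

    import hashlib
    from pathlib import Path

    seen = set()
    for path in Path("dataset").rglob("*.jpg"):
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        if digest in seen:
            path.unlink()        # byte-identical duplicate
        else:
            seen.add(digest)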

run.py

The main script for running the AMYCO pipeline on a Raspberry Pi 400.

size.py

Calculates the median dimensions of the images in the dataset to guide the choice of model input size.
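
A minimal sketch of that computation with PIL (the dataset path and extension are assumptions):

    import statistics
    from pathlib import Path
    from PIL import Image

    widths, heights = [], []
    for path in Path("dataset").rglob("*.jpg"):
        with Image.open(path) as img:
            widths.append(img.width)
            heights.append(img.height)

    print("median size:", statistics.median(widths), "x", statistics.median(heights))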

splitter.py

Splits the dataset into training (80%) and validation (20%) folders.
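
A split of this kind is usually done per species so both folders keep the same class distribution. A minimal sketch (folder names are assumptions):

    import random
    import shutil
    from pathlib import Path

    random.seed(0)
    for species_dir in Path("dataset").iterdir():
        if not species_dir.is_dir():
            continue
        images = sorted(species_dir.glob("*.jpg"))
        random.shuffle(images)
        cut = int(0.8 * len(images))
        for split, subset in (("train", images[:cut]), ("val", images[cut:])):
            dest = Path(split) / species_dir.name
            dest.mkdir(parents=True, exist_ok=True)
            for img in subset:
                shutil.copy(img, dest / img.name)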

table.py

Generates a table comparing the accuracies of various works, including this project’s results.

update_label_file.py

Converts all YOLO class labels in the dataset to a single class (mushroom).
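
YOLO label files store one detection per line with the class id as the first token, so collapsing everything to a single class means rewriting that token to 0. A minimal sketch (the label directory is an assumption):

    from pathlib import Path

    for label_file in Path("labels").rglob("*.txt"):
        lines = label_file.read_text().splitlines()
        # Replace the class id (first token) with 0, keep the box coordinates.
        new_lines = ["0 " + " ".join(line.split()[1:]) for line in lines if line.strip()]
        label_file.write_text("\n".join(new_lines) + "\n")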


Installation and Setup

  1. Clone the repository:

    git clone https://github.com/Penzo00/AMYCO-Robot.git
    cd AMYCO-Robot
  2. Install Miniconda (if not already installed):

    wget https://repo.anaconda.com/miniconda/Miniconda3-py310_24.9.2-0-Linux-aarch64.sh -O ~/miniconda.sh
    bash ~/miniconda.sh -b -p $HOME/miniconda3
    rm ~/miniconda.sh
    echo 'export PATH="$HOME/miniconda3/bin:$PATH"' >> ~/.bashrc
    source ~/.bashrc
    conda init
  3. Create and activate the amyco environment:

    conda env create -f environment.yaml
    conda activate amyco
  4. Add automatic environment activation:

    echo 'conda activate amyco' >> ~/.bashrc
    source ~/.bashrc

Steps to Set Up Automatic Environment on Raspberry Pi

  1. Download and install Miniconda:

    wget https://repo.anaconda.com/miniconda/Miniconda3-py310_24.9.2-0-Linux-aarch64.sh -O ~/miniconda.sh
    bash ~/miniconda.sh -b -p $HOME/miniconda3
    rm ~/miniconda.sh
    echo 'export PATH="$HOME/miniconda3/bin:$PATH"' >> ~/.bashrc
    source ~/.bashrc
    conda init
  2. Create the Conda environment:

    conda env create -f environment.yaml
    conda activate amyco
  3. Enable HDMI audio output (optional):

    sudo nano /boot/firmware/config.txt

    Uncomment and edit the following lines:

    hdmi_group=1
    hdmi_mode=1
    hdmi_drive=2
  4. Reboot the Raspberry Pi to apply changes.

  5. Be sure to download the Paola and Cory voice files (both the ONNX model and its JSON config) from https://huggingface.co/rhasspy/piper-voices/tree/main.

  6. Install any remaining Linux packages required by the scripts using the system package manager (e.g., apt).


Contributors

  • Chiara D'Amato
  • Edoardo Torre
  • Lorenzo Vergata

License

This project is licensed under the MIT License. See the LICENSE file for details.