AMYCO is a project dedicated to building neural network-based tools for detecting and classifying mushrooms. The project includes a suite of Python scripts and shell utilities to preprocess data, train models, and deploy the system on a Raspberry Pi.
- Overview
- File Descriptions
- Installation and Setup
- Steps to Set Up Automatic Environment on Raspberry Pi
- Contributors
- License
## Overview

AMYCO leverages YOLO and FasterViT architectures to process images of mushrooms for detection and classification tasks. The pipeline includes tools for downloading datasets, preprocessing images, augmenting data, and training models. This README provides a brief description of each file and instructions for setting up and running the system.
## File Descriptions

YOLO11 training settings.
Converts segmentation data to bounding box format for use in detection models.
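The segmentation-to-box conversion can be sketched as follows. This is a minimal illustration in plain Python, assuming YOLO's normalized-polygon label format (`class x1 y1 x2 y2 ...`); the function name is hypothetical, not the script's actual API:

```python
def seg_to_bbox(label_line: str) -> str:
    """Convert a YOLO segmentation label (class id followed by a normalized
    polygon) into a YOLO detection label (class cx cy w h)."""
    parts = label_line.split()
    cls, coords = parts[0], [float(v) for v in parts[1:]]
    xs, ys = coords[0::2], coords[1::2]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    w, h = x_max - x_min, y_max - y_min
    return f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"
```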
Uses a YOLO model to crop images, preparing them for training with FasterViT.
A shell script to download CSV files from the Mushroom Observer website, using links stored in `links.txt`.
Defines the `amyco` Conda environment, listing all dependencies required to run the project.
Trains the FasterViT model using the prepared dataset.
Downloads images from the Mushroom Observer site. It filters out deprecated species and keeps the shortest name among synonyms.
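The synonym-resolution rule described above (drop deprecated names, keep the shortest remaining synonym) can be sketched like this. The data shapes and function name are assumptions for illustration:

```python
def resolve_name(synonyms, deprecated):
    """Drop deprecated species names, then keep the shortest remaining
    synonym (ties broken alphabetically for determinism)."""
    valid = [n for n in synonyms if n not in deprecated]
    if not valid:
        return None
    return min(valid, key=lambda n: (len(n), n))
```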
Contains links to the CSV files used by `csv_downloader.sh`.
Performs mixup augmentation within a species, increasing the number of images by 10%.
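Within-species mixup of this kind can be sketched with NumPy. This is a minimal illustration; the fixed mixing coefficient is an assumption, since the script's actual sampling of the coefficient is not shown here:

```python
import numpy as np

def mixup_pair(img_a: np.ndarray, img_b: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """Blend two same-species images: lam * a + (1 - lam) * b."""
    mixed = lam * img_a.astype(np.float32) + (1.0 - lam) * img_b.astype(np.float32)
    return mixed.clip(0, 255).astype(np.uint8)
```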
Checks the uniqueness of images by comparing MD5 hashes, removing duplicates to ensure data integrity.
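The MD5-based deduplication can be sketched as follows, using only the standard library; the directory layout and function name are assumptions for illustration:

```python
import hashlib
from pathlib import Path

def remove_duplicates(image_dir: str) -> list:
    """Delete files whose MD5 digest was already seen; return removed names."""
    seen, removed = set(), []
    for path in sorted(Path(image_dir).iterdir()):
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        if digest in seen:
            path.unlink()          # duplicate content: remove the file
            removed.append(path.name)
        else:
            seen.add(digest)
    return removed
```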
The main script for running the AMYCO pipeline on a Raspberry Pi 400.
Calculates the median dimensions of images in the dataset for better model configuration.
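The median-dimension computation reduces to taking the median of widths and heights separately; a minimal sketch (input shape and function name are assumptions):

```python
from statistics import median

def median_dimensions(sizes):
    """sizes: iterable of (width, height) pairs; returns (median_w, median_h)."""
    widths = [w for w, _ in sizes]
    heights = [h for _, h in sizes]
    return median(widths), median(heights)
```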
Splits the dataset into training (80%) and validation (20%) folders.
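An 80/20 split of this kind can be sketched as follows; the deterministic seed and function name are assumptions, not the script's actual interface:

```python
import random

def split_dataset(filenames, train_frac=0.8, seed=42):
    """Shuffle deterministically, then split into train/val lists."""
    files = sorted(filenames)
    random.Random(seed).shuffle(files)
    cut = int(len(files) * train_frac)
    return files[:cut], files[cut:]
```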
Generates a table comparing the accuracies of various works, including this project’s results.
Converts all YOLO class labels in the dataset to a single class (mushroom).
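Collapsing all class ids to a single class amounts to rewriting the first token of every YOLO label line; a minimal sketch (the function name is hypothetical):

```python
def to_single_class(label_lines):
    """Rewrite the class id at the start of each YOLO label line to 0."""
    out = []
    for line in label_lines:
        parts = line.split()
        if parts:
            parts[0] = "0"  # every object becomes class 0 ("mushroom")
        out.append(" ".join(parts))
    return out
```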
## Installation and Setup

- Clone the repository:

  ```bash
  git clone https://github.com/Penzo00/AMYCO-Robot.git
  cd AMYCO-Robot
  ```
- Install Miniconda (if not already installed):

  ```bash
  wget https://repo.anaconda.com/miniconda/Miniconda3-py310_24.9.2-0-Linux-aarch64.sh -O ~/miniconda.sh
  bash ~/miniconda.sh -b -p $HOME/miniconda3
  rm ~/miniconda.sh
  echo 'export PATH="$HOME/miniconda3/bin:$PATH"' >> ~/.bashrc
  source ~/.bashrc
  conda init
  ```
- Create and activate the `amyco` environment:

  ```bash
  conda env create -f environment.yaml
  conda activate amyco
  ```
- Add automatic environment activation:

  ```bash
  echo 'conda activate amyco' >> ~/.bashrc
  source ~/.bashrc
  ```
## Steps to Set Up Automatic Environment on Raspberry Pi

- Download and install Miniconda:

  ```bash
  wget https://repo.anaconda.com/miniconda/Miniconda3-py310_24.9.2-0-Linux-aarch64.sh -O ~/miniconda.sh
  bash ~/miniconda.sh -b -p $HOME/miniconda3
  rm ~/miniconda.sh
  echo 'export PATH="$HOME/miniconda3/bin:$PATH"' >> ~/.bashrc
  source ~/.bashrc
  conda init
  ```
- Create the Conda environment:

  ```bash
  conda env create -f environment.yaml
  conda activate amyco
  ```
- Enable HDMI audio output (optional):

  ```bash
  sudo nano /boot/firmware/config.txt
  ```

  Uncomment and edit the following lines:

  ```
  hdmi_group=1
  hdmi_mode=1
  hdmi_drive=2
  ```
- Reboot the Raspberry Pi to apply the changes.
- Download the Paola and Cory voice files (ONNX and JSON) from https://huggingface.co/rhasspy/piper-voices/tree/main.
- Install any missing Linux system packages.
## Contributors

- Chiara D'Amato
- Edoardo Torre
- Lorenzo Vergata
## License

This project is licensed under the MIT License. See the LICENSE file for details.