Live-Vision is an advanced web application for real-time object detection, tracking, and classification. It provides robust capabilities for monitoring environments through video streams, recording footage, and triggering sound alerts based on detected objects.
- Real-Time Object Detection: Detect objects in real time using the TensorFlow.js COCO-SSD model (MobileNetV2 base); a minimal detection sketch follows this feature list.
- Object Tracking: Track detected objects dynamically as they move across the camera’s field of view.
- Object Classification: Classify detected objects with high accuracy using pre-trained models.
- Video Recording: Record live video streams and save them for later review.
- Sound Alerts: Trigger sound alerts based on specific object detections for real-time notifications.
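For orientation, here is a minimal sketch of how detection and sound alerts could be wired together in the browser with the `@tensorflow-models/coco-ssd` package. The element passed in, the alert class list, the confidence threshold, and the audio file path are illustrative assumptions, not the project's actual names:

```ts
import * as cocoSsd from "@tensorflow-models/coco-ssd";
import "@tensorflow/tfjs"; // registers the TensorFlow.js backend the model runs on

// Classes that should trigger a sound alert (illustrative; the app makes this configurable).
const ALERT_CLASSES = new Set(["person", "dog"]);
const alertSound = new Audio("/sounds/alert.mp3"); // hypothetical asset path

async function runDetection(video: HTMLVideoElement) {
  const model = await cocoSsd.load({ base: "mobilenet_v2" });

  const detectFrame = async () => {
    // Each prediction carries a bounding box, class name, and confidence score.
    const predictions = await model.detect(video);

    for (const p of predictions) {
      if (p.score > 0.6 && ALERT_CLASSES.has(p.class)) {
        // Autoplay may be blocked until the user interacts with the page.
        alertSound.play().catch(() => undefined);
      }
    }

    requestAnimationFrame(detectFrame); // keep detecting frame by frame
  };

  detectFrame();
}
```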
- Next.js: A React framework used for building the front end and server-side rendering.
- Tailwind CSS: A utility-first CSS framework for styling the application.
- ShadCN UI: A modern UI component library for building responsive and interactive interfaces.
- MySQL: A relational database management system used for storing user data, recorded videos, and detected objects.
- NextAuth: A complete open-source authentication solution for Next.js applications, used here for managing user sessions (a minimal configuration sketch follows this list).
- TensorFlow.js (COCO-SSD with MobileNetV2): A machine learning model used for object detection and classification.
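As a rough sketch of how NextAuth might be wired up with a credentials provider, the snippet below uses NextAuth's standard API; the file location, providers, callbacks, and the user lookup in Live-Vision may differ:

```ts
// pages/api/auth/[...nextauth].ts -- hypothetical location; an App Router project
// would use a route handler instead.
import NextAuth from "next-auth";
import CredentialsProvider from "next-auth/providers/credentials";

export default NextAuth({
  providers: [
    CredentialsProvider({
      name: "Credentials",
      credentials: {
        email: { label: "Email", type: "email" },
        password: { label: "Password", type: "password" },
      },
      async authorize(credentials) {
        // Look up the user in MySQL here (e.g. via Prisma) and verify the password.
        // Returning null rejects the sign-in attempt.
        if (!credentials?.email || !credentials.password) return null;
        return { id: "1", email: credentials.email }; // placeholder user object
      },
    }),
  ],
  session: { strategy: "jwt" },
  secret: process.env.NEXTAUTH_SECRET,
});
```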
To get started with Live-Vision, follow these steps:

- Clone the repository:

  ```bash
  git clone https://github.com/your-username/live-vision.git
  cd live-vision
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Set up the environment variables. Create a `.env.local` file in the root of your project and add the following variables:

  ```env
  DATABASE_URL=mysql://username:password@localhost:3306/livevision
  NEXTAUTH_URL=http://localhost:3000
  NEXTAUTH_SECRET=your_secret_key
  ```

- Run database migrations (a sketch of using the generated Prisma Client follows these steps):

  ```bash
  npx prisma migrate dev
  ```

- Start the development server:

  ```bash
  npm run dev
  ```

The application will be available at `http://localhost:3000`.
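Once the migrations have run, the Prisma Client generated from the schema can persist detections and recordings to MySQL. The `detection` model and its fields below are hypothetical stand-ins for whatever the actual schema defines:

```ts
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Persist a single detection result; the model name and fields are assumptions
// for illustration and must match your schema.prisma definitions.
export async function saveDetection(label: string, score: number, videoId: number) {
  return prisma.detection.create({
    data: { label, score, videoId, detectedAt: new Date() },
  });
}
```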
- Login/Register: Use the authentication system provided by NextAuth to create an account and log in.
- Start Object Detection: Access the object detection interface, where you can start real-time detection and tracking.
- Review Recordings: View, manage, and download recorded videos from your sessions (see the recording sketch after this list).
- Configure Alerts: Set up sound alerts for specific object detections to be notified in real time.
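As a rough sketch, recording the live stream in the browser can be done with the standard MediaRecorder API; the MIME type, duration handling, and what happens to the resulting blob are assumptions rather than Live-Vision's actual implementation:

```ts
// Record the camera stream so a session can be reviewed later.
async function recordSession(durationMs: number): Promise<Blob> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: false });
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  const chunks: Blob[] = [];

  recorder.ondataavailable = (event) => {
    if (event.data.size > 0) chunks.push(event.data);
  };

  return new Promise((resolve) => {
    recorder.onstop = () => {
      stream.getTracks().forEach((track) => track.stop()); // release the camera
      resolve(new Blob(chunks, { type: "video/webm" }));   // ready to upload or download
    };
    recorder.start();
    setTimeout(() => recorder.stop(), durationMs);
  });
}
```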
If you want to contribute to Live-Vision, please follow these steps:
- Fork the repository.
- Create a new branch (`git checkout -b feature/YourFeature`).
- Commit your changes (`git commit -m 'Add new feature'`).
- Push to the branch (`git push origin feature/YourFeature`).
- Open a pull request.
Feel free to modify this project as needed. Happy coding! 😊