EventInfer

EventInfer is a real-time inference engine written entirely in C++ that leverages an event-driven architecture to maximize performance. Designed to handle AI workloads efficiently, it uses multithreading to reduce latency and improve scalability in real-time applications.

Key Features

  • Event-driven architecture for efficient inference management (a minimal sketch of this pattern follows the list).
  • Multithreading support to maximize throughput across CPU cores.
  • Optimized for real-time applications, minimizing latency.
  • Lightweight and modular, easy to integrate into existing AI pipelines.
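
EventInfer's internals are not documented in this README; purely as an illustration of the pattern the first two bullets describe, a thread-safe event queue fanned out to one worker thread per CPU core might look like the sketch below. All names here (EventQueue, the stand-in inference lambda) are illustrative, not part of EventInfer's actual API.

```cpp
#include <algorithm>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A minimal thread-safe event queue: producers push inference events,
// worker threads block until work arrives. Illustrative only -- not
// taken from the EventInfer sources.
class EventQueue {
public:
    void push(std::function<void()> event) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            events_.push(std::move(event));
        }
        cv_.notify_one();
    }

    // Blocks until an event is available; returns false once the
    // queue has been closed and fully drained.
    bool pop(std::function<void()>& event) {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return closed_ || !events_.empty(); });
        if (events_.empty()) return false;
        event = std::move(events_.front());
        events_.pop();
        return true;
    }

    void close() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            closed_ = true;
        }
        cv_.notify_all();
    }

private:
    std::queue<std::function<void()>> events_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool closed_ = false;
};

int main() {
    EventQueue queue;

    // One worker per hardware core pulls events and runs them.
    std::vector<std::thread> workers;
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    for (unsigned i = 0; i < n; ++i) {
        workers.emplace_back([&queue] {
            std::function<void()> event;
            while (queue.pop(event)) event();  // run inference jobs as they arrive
        });
    }

    // A producer (e.g. a frame grabber) would push events like this;
    // the lambda stands in for a real model invocation.
    for (int frame = 0; frame < 8; ++frame) {
        queue.push([frame] {
            std::cout << "inference on frame " << frame << "\n";
        });
    }

    queue.close();
    for (auto& w : workers) w.join();
}
```

Blocking on a condition variable keeps idle workers off the CPU, which is what lets an event-driven design keep latency low without busy-waiting.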

Purpose

The goal of EventInfer is to provide a fast, lightweight infrastructure for real-time inference tasks. Unlike proprietary solutions such as NVIDIA DeepStream, which is tightly integrated with NVIDIA hardware and its ecosystem, EventInfer is designed to be flexible and adaptable across platforms and environments.

Prerequisites

Install the following package:

  • inference-cpp

Build

```bash
mkdir build
cd build
cmake ..
make
```

Run

```bash
./artificialy_anomaly_detection ../config/run-app.json
```
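
The schema of run-app.json is not documented in this README. Purely as an illustration of the kind of settings a real-time inference config typically carries, a hypothetical file might look like the following; every field name below is an assumption, not taken from the repository:

```json
{
  "model_path": "models/anomaly_detector.onnx",
  "input_source": "rtsp://camera.local/stream",
  "num_worker_threads": 4,
  "event_queue_capacity": 256
}
```

Consult config/run-app.json in the repository for the actual fields.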

Contributions

Contributions are welcome! Feel free to fork the repository, create issues, or submit pull requests to enhance the project.

License

This project is licensed under the MIT License. See the LICENSE file for details.
