Chen-Yuan-Lai/FalconEye

Say goodbye to your sly bugs.

What's FalconEye?

FalconEye is a developer-first error tracking platform that helps developers continuously manage and analyze error logs from their applications.

Features

Capture error logs with custom-developed SDK

  • Encapsulates user validation and error-upload APIs in user-friendly SDK functions.
  • Users can selectively capture error logs for a specific function, or capture them across an entire Express application via middleware.
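The middleware-based capture works by funneling errors into Express's `next` so an error-handling layer can upload them. A minimal sketch of the selective-capture idea, with a hypothetical `reportError` standing in for the SDK's upload call (not the actual SDK internals):

```javascript
// Hypothetical stand-in for the SDK's upload call; the real SDK would
// validate the user key and send the error log to the API host.
const captured = [];
function reportError(err) {
  captured.push(err.message);
}

// Wrap a single route handler so that only its errors are captured,
// while still passing them on to Express's error pipeline via next().
function captureHandler(handler) {
  return async (req, res, next) => {
    try {
      await handler(req, res, next);
    } catch (err) {
      reportError(err);
      next(err);
    }
  };
}
```

Wrapping individual handlers gives per-route control; the middleware approach described above applies the same capture step to every route at once.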

Automate source map file uploads via custom-developed CLI tool

  • A simple interactive command-line user interface is provided, where setting up the configuration requires answering only a few questions.
  • Source map files are automatically built from the source code and uploaded by GitHub Actions.

Personalized analytics dashboard

  • Simplify and organize error logs by classifying them into distinct issues to reduce duplicate information.
  • Time-series plots of errors and alerts help users manage projects more effectively.

Source code location mapping

  • Provides a precise code block for each error, enabling users to easily locate and fix bugs.
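Locating the code block for an error starts from the file, line, and column in the raw stack trace. A hedged sketch of extracting the top frame from a V8-style stack (simplified for illustration; not the actual implementation):

```javascript
// Parse the first "at fn (file:line:column)" frame from a V8-style stack.
// Returns null if no frame matches; a real parser handles more formats
// (anonymous frames, eval frames, Windows paths, etc.).
function parseTopFrame(stack) {
  const m = /at\s+(?:.*\s+\()?(.+?):(\d+):(\d+)\)?/.exec(stack);
  if (!m) return null;
  return { file: m[1], line: Number(m[2]), column: Number(m[3]) };
}
```

The extracted position in the minified bundle, together with the uploaded source map, is what allows mapping back to the original source location.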

Customized alert system

  • With configurable rules, thresholds, and checking intervals, users can create alerts tailored to each application's specific needs.
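As an illustration, a threshold rule evaluated over a checking interval could look like this (the rule shape and field names here are hypothetical, not FalconEye's actual schema):

```javascript
// Evaluate one alert rule against recent error timestamps (ms since epoch).
// rule = { threshold, intervalMs } — hypothetical field names.
// Fires when at least `threshold` errors occurred within the interval.
function shouldAlert(rule, errorTimestamps, now = Date.now()) {
  const windowStart = now - rule.intervalMs;
  const recent = errorTimestamps.filter((t) => t >= windowStart);
  return recent.length >= rule.threshold;
}
```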

Getting Started

  1. Sign up for an account on FalconEye, or use the test account below:

    Testing account
    Email: a186235@gmail.com
    Password: 1234
    • You can browse some example issues and dashboards in this account.
    • Try installing the example project to see how FalconEye captures errors.
    • If you want to capture errors yourself, follow the steps below.
  2. Create a project on the projects page, and:

    • Get user key & client token

  3. Set up FalconEye SDK in your application runtime

    • Install

      npm i @falconeye-tech/sdk
      
    • Configure

      import fe from "@falconeye-tech/sdk";
      
      const er = new fe();
      
      await er.init({
        apiHost: "https://handsomelai.shop",
        userKey: "",
        clientToken: "",
      });
    • Usage

      app.use(er.requestHandler());
      
      app.get("/typeError", async (req, res, next) => {
        try {
          console.logg("Hi"); // intentional typo to demonstrate error capture
        } catch (e) {
          next(e);
        }
      });
      
      // ... routes
      
      app.use(er.errorHandler());
      
      // Global error handler
      app.use((err, req, res, next) => {
        res.status(err.statusCode || 500).json({
          error: err.message,
        });
      });
  4. Build and upload the source map file with the FalconEye wizard (optional)

    If you want to know the actual location of an error in your source code, follow the steps below:

    • Run the wizard
    npx @falconeye-tech/wizard
    

Architecture

System design

To enhance scalability and fault tolerance, FalconEye decomposes its monolithic server into multiple services, each focusing on a specific, simple function.

  1. Gateway server: receives and authenticates client requests, and also acts as a producer, enqueuing the corresponding tasks into Apache Kafka.

  2. Kafka service: serves as a mediator between the gateway server and the other services, storing error-log data and tasks for scheduled alert-rule checks.

  3. Event service: a consumer that pulls event data from Kafka and processes it.

  4. Notification service: a consumer that pulls alert-rule check tasks from Kafka and evaluates them.
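To make the producer role concrete, here is a sketch of how the gateway might wrap an error event as a Kafka message (the topic name and payload shape are assumptions, not FalconEye's actual schema). Keying by project ID keeps one project's events ordered on a single partition:

```javascript
// Build a Kafka message envelope for one error event.
// "error-logs" and the payload fields are hypothetical names.
function toKafkaMessage(projectId, error) {
  return {
    topic: "error-logs",
    messages: [
      {
        key: String(projectId), // same project → same partition → ordered
        value: JSON.stringify({
          projectId,
          message: error.message,
          stack: error.stack,
          timestamp: Date.now(),
        }),
      },
    ],
  };
}

// With a client such as kafkajs, the gateway would then send it with:
//   await producer.send(toKafkaMessage(projectId, err));
```

The event and notification services then consume from their respective topics at their own pace, which is what decouples ingestion throughput from processing capacity.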

Features

  • Enhanced scalability by decomposing the notification and event-processing features into distinct Fastify services, optimizing resource allocation.
  • Improved system scalability by setting up AWS EventBridge and Lambda for automated task handling and alert checks in the notification service.
  • Optimized system efficiency and job-handling capacity by integrating Apache Kafka to asynchronously process high-throughput event and notification tasks.
  • All services are containerized with Docker, increasing system portability.

Load test

To test the system's stability under long-term data-upload and write scenarios, I used k6 to run the following test plan.

Test program

Warm-up phase: ramp up to 100 virtual users over 1 minute.

Phase 1: hold 100 virtual users for 3 minutes.

Increase load: ramp up to N virtual users over 1 minute.

Phase 2: hold N virtual users for 3 minutes.

Cool-down phase: ramp down to 0 virtual users over 1 minute.
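The phases above map directly onto k6 `stages`. A sketch of the load profile as a plain object, with N = 400 as in the first run (in a real k6 script this would be `export const options`):

```javascript
// k6 load profile matching the test plan, with N = 400 as an example.
const N = 400;
const options = {
  stages: [
    { duration: "1m", target: 100 }, // warm-up: ramp to 100 VUs
    { duration: "3m", target: 100 }, // phase 1: hold 100 VUs
    { duration: "1m", target: N },   // increase load: ramp to N VUs
    { duration: "3m", target: N },   // phase 2: hold N VUs
    { duration: "1m", target: 0 },   // cool-down: ramp back to 0
  ],
};
```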

Result

| VUs (phase 1 / phase 2) | Machine type (gateway / Kafka) | Failed rate | RPS | Gateway CPU % (min/max) | Consumer + Kafka CPU % (min/max) |
| --- | --- | --- | --- | --- | --- |
| 100/400 | t2.micro / t2.small | 0.00% | 151 | 70/100 | 40/92 |
| 100/400 | 2 × t2.micro / t2.small | 0.00% | 160 | 50/100 | 45/100 |
| 100/600 | t2.micro / t2.small | 2.42% | 131 | 70/100 | 40/100 |
| 100/600 | t3.micro / t2.small | 0.16% | 198.5 | 70/100 | 40/100 |
| 100/600 | t3.micro / t2.small | NA | NA | 50/- | 40/- |

Monitoring


  • Attained high availability by monitoring system metrics (CPU, RAM, and disk usage) and Kafka-specific metrics for each EC2 instance via Prometheus and Grafana.

Schema

[Table schema diagram]

License

This project is licensed under the MIT License - see the LICENSE file for details.
