
Zinx RequestPool


1. Object Pool

sync.Pool

Starting from Go 1.3, the language introduced a mechanism for object reuse, namely sync.Pool. sync.Pool is scalable and concurrently safe, with its size limited only by the memory available. It is used to store values that have been allocated but not yet used, and may be used in the future. This avoids the need for repeated memory allocation, allowing direct reuse of existing objects, reducing the pressure on garbage collection, and thus improving system performance. Details can be found in the official documentation.
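
For readers who have not used the API before, here is a minimal, self-contained example of the typical sync.Pool pattern; the bytes.Buffer type here is just an illustration:

package main

import (
    "bytes"
    "fmt"
    "sync"
)

// A pool of bytes.Buffer objects; New is called only when the pool is empty
var bufPool = sync.Pool{
    New: func() interface{} {
        return new(bytes.Buffer)
    },
}

func main() {
    // Get reuses a previously Put buffer, or calls New if none is available
    buf := bufPool.Get().(*bytes.Buffer)
    buf.Reset() // always re-initialize a pooled object before use
    buf.WriteString("hello")
    fmt.Println(buf.String())
    // Return the buffer so later Gets can reuse it
    bufPool.Put(buf)
}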

sync.Pool can be used to store large numbers of transient objects that are used repeatedly. In Zinx, every time a route is triggered, a Request object (hereinafter referred to as a request object) is produced. Apart from the connection information and the context data generated inside the route, request objects are structurally identical. Once all route functions have finished executing, the object is not passed any further and is reclaimed by Go's garbage collector. This fits the definition of a large number of transient, reusable objects perfectly.

The latest version of the Zinx code adds support for an object pool mode that stores these request objects, without changing any of the previous usage. However, it also brings some implicit issues. The benefit is obvious: reusing objects eliminates repeated creation and disposal, reducing the pressure on garbage collection. The implicit issues can lurk in specific route implementations, as detailed below.

2. Using Request Pool Mode in Zinx

Zinx exposes the object pool as a simple switch: just configure it when starting the service, and what could be more wonderful than that? Once pooling mode is enabled, Zinx manages both retrieving request objects from the pool before a route executes and returning them to the pool afterward.

// Enable Request object pool mode
server := znet.NewUserConfServer(&zconf.Config{RequestPoolMode: true})

The specific implementation of the object pool in Zinx is quite simple, using the sync.Pool from the standard library as mentioned above.

// Global pool object
var RequestPool = new(sync.Pool)

// Register the pool's construction function (called when the pool has no objects available)
func init() {
    RequestPool.New = func() interface{} {
        return allocateRequest()
    }
}

// Method to obtain a request object, called before executing the route function
func GetRequest(conn ziface.IConnection, msg ziface.IMessage) ziface.IRequest {
    // Determine whether to use the object pool based on the current mode
    if zconf.GlobalObject.RequestPoolMode {
        // Get a Request object from the pool; if there are no available objects in the pool, allocate a new one using the allocateRequest function
        r := RequestPool.Get().(*Request)
        // Regardless of whether the retrieved Request object already exists or is newly constructed, it should be initialized before returning for use
        r.Reset(conn, msg)
        return r
    }
    // If object pooling mode is not enabled, simply return a new object
    return NewRequest(conn, msg)
}
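
The return side is symmetric. Zinx calls PutRequest after the routes finish (shown in the next section); a minimal sketch of how it could look, assuming the same RequestPoolMode check as GetRequest, is:

// Return a request object to the pool after all route functions have run
func PutRequest(request ziface.IRequest) {
    // Only return objects to the pool when pooling mode is enabled
    if zconf.GlobalObject.RequestPoolMode {
        RequestPool.Put(request)
    }
    // In non-pooling mode the object is simply left to the garbage collector
}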

3. Differences from Before

Since objects must be both retrieved from and returned to the pool, what is the point of a pool whose objects are taken but never Put back? As mentioned earlier, Zinx manages both the retrieval and the return. The timing of retrieval is obvious: right before the route starts. What about the return? Thanks to the design of routes in Zinx, where routes are abstract functions executed in sequence while passing the request along, returning the object after every route function has finished executing is straightforward.

func (mh *MsgHandle) doMsgHandlerSlices(request ziface.IRequest, workerID int) {
    // Recovery mechanism for potential panics
    defer func() {
        if err := recover(); err != nil {
            zlog.Ins().ErrorF("workerID: %d doMsgHandler panic: %v", workerID, err)
        }
    }()

    msgId := request.GetMsgID()
    handlers, ok := mh.RouterSlices.GetHandlers(msgId)
    if !ok {
        zlog.Ins().ErrorF("api msgID = %d is not FOUND!", request.GetMsgID())
        return
    }

    request.BindRouterSlices(handlers) // Bind the route
    request.RouterSlicesNext() // Execute the route
    // After execution is complete, return the Request object to the object pool
    PutRequest(request)
}

Magic comes with a price. While Zinx reclaims objects automatically, the timing may not always suit your code. What if you need to start a new goroutine within a route function to perform some business logic, and that goroutine requires access to the request context or needs the request object itself? This is where the implicit issues mentioned earlier come into play.

3.1 Non-Pooling State

Before pooling, usage was straightforward: we wrote route functions, registered them with Zinx, and they executed normally whenever a client sent a request. The NoPoll4 function may look strange here; it exists only for contrast, because in non-pooling mode different routes are indeed independent, and NoPoll4 writing to "num" cannot affect NoPoll1's request object.

func NoPoll1(request ziface.IRequest) {
    request.Set("num", 1)
    go NoPoll2(request)
}

func NoPoll2(request ziface.IRequest) {
    // Read after a delay
    time.Sleep(time.Second * 3)
    get, _ := request.Get("num")
    fmt.Printf("num:%v \n", get)
}

func NoPoll4(request ziface.IRequest) {
    // Will not affect the original Request in non-pooling mode
    request.Set("num", 3)
}

func main() {
    // Request object pooling mode disabled
    server := znet.NewUserConfServer(&zconf.Config{RouterSlicesMode: true, TCPPort: 8999, Host: "127.0.0.1", RequestPoolMode: false})
    server.AddRouterSlices(1, NoPoll1)
    server.AddRouterSlices(2, NoPoll4)
    server.Serve()
}

3.2 Pooling State

This is where the problem with pooling arises. The purpose of pooling is to reuse request objects as much as possible, so the request object you pass to a new goroutine may already have been returned to the pool by Zinx after the previous group of routes finished, then retrieved and given new values by a new request. Seeing this, you might wonder what happened to the promised automation: wasn't the return timing supposed to be appropriate? Had you returned the object to the pool manually, at least it would not be tampered with.

This situation can still be handled. When a request object needs to be passed to a new goroutine, Zinx provides the Copy method. Copy gives you a new request body carrying the same context information as the original, but without the connection and the route (a sketch of this behavior follows below).

The absence of the route is easy to explain: a request should execute its route exactly once, and starting a new goroutine inside a route to execute another route is not normal usage. But why withhold the connection information? First, the connection is the same either way: messages should be sent during route execution, and starting a new goroutine to send messages does not affect the execution of the original route; if the original route goroutine must stop executing, the Abort() method can be used. Second, if the new goroutine really does need the connection, it can be written into the context, as in the Poll1 example below; store it only when you are sure you need the connection.
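
To make these semantics concrete, here is a rough sketch of what a Copy implementation can look like. This is an illustration of the behavior described above, not the actual Zinx source; in particular, the keys field holding the context data is an assumption.

// Sketch only: illustrates Copy's described behavior; the keys field is assumed
func (r *Request) Copy() ziface.IRequest {
    newReq := allocateRequest().(*Request)
    // Carry over the user context key/value pairs
    for k, v := range r.keys {
        newReq.Set(k, v)
    }
    // conn and the bound route handlers are deliberately left empty:
    // a copy must not send on the connection or execute routes
    return newReq
}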

func Poll1(request ziface.IRequest) {
    // If connection information is needed
    request.Set("conn", request.GetConnection())
    request.Set("num", 1)
    fmt.Printf("request 1 addr:%p,conn:%p \n", &request, request.GetConnection())

    // If a new goroutine is needed and the context is also required, call Copy to create a copy
    cp := request.Copy()
    go Poll2(cp)

    // If the object is passed without calling Copy, its values may change underneath this goroutine
    go Poll3(request)
}

func Poll2(request ziface.IRequest) {
    defer func() {
        if err := recover(); err != nil {
            // Handle panic
            fmt.Println(err)
        }

    }()
    get_conn, ok := request.Get("conn")
    if ok {
        // Calling request.GetConnection() directly on a copy would panic with a nil
        // pointer, because Copy deliberately carries no connection:
        // request.GetConnection().GetConnID()
        conn := get_conn.(ziface.IConnection)
        // The address of the copied Request differs from the original's
        fmt.Printf("request copy addr:%p, conn:%p \n", request, conn)
        // conn can now be used from this goroutine, e.g. to send messages
    }
}

// If there are many requests, enabling object pooling and passing the Request without copying may result in inconsistent values
func Poll3(request ziface.IRequest) {
    time.Sleep(time.Second * 3)
    get, _ := request.Get("num")
    // If the object is passed directly and influenced by other routes, it may randomly print the modified value 3
    fmt.Printf("num:%v \n", get)
}

func Poll4(request ziface.IRequest) {
    // Affect the original request object
    request.Set("num", 3)
}

func main() {
    // Enable Request object pooling mode
    server := znet.NewUserConfServer(&zconf.Config{RouterSlicesMode: true, TCPPort: 8999, Host: "127.0.0.1", RequestPoolMode: true})
    server.AddRouterSlices(1, Poll1)
    server.AddRouterSlices(2, Poll4)
    server.Serve()
}

When the above code runs with a large number of requests in a short time, NoPoll2 in non-pooling mode always prints 1 as expected, while Poll3 in pooling mode may randomly print 1 or 3, depending on whether the underlying object has already been modified at the moment of printing.
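
To summarize, whenever a route function in pooling mode hands the request to a new goroutine, the safe pattern is the one used in Poll1 above (SafeHandler and backgroundWork are placeholder names for illustration):

func SafeHandler(request ziface.IRequest) {
    // Store the connection in the context first if the goroutine will need it
    request.Set("conn", request.GetConnection())
    // Copy detaches the context data from the pool-managed object
    cp := request.Copy()
    go backgroundWork(cp)
}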

4. Benefits of Pooling

With all this said, and despite the hidden risk, how much improvement does pooling actually bring? We can test it simply. Add a counter to the request-object constructors (the two modes use different construction functions, and the object pool only calls its construction function when a request arrives and the pool is empty), run the server in both modes, send two hundred requests using the client mentioned above, and check the final count.

// Used to count the actual number of instances created
var NumCreated int32

// Method for creating instances in the object pool
func allocateRequest() ziface.IRequest {
    // Atomically increment the counter when creating an instance
    atomic.AddInt32(&NumCreated, 1)
    
    req := new(Request)
    req.steps = PRE_HANDLE
    req.needNext = true
    req.index = -1
    return req
}

// Non-pooling code
func NewRequest(conn ziface.IConnection, msg ziface.IMessage) ziface.IRequest {
    // Atomically increment the counter when creating an instance
    atomic.AddInt32(&NumCreated, 1)
    
    req := new(Request)
    req.steps = PRE_HANDLE
    req.conn = conn
    req.msg = msg
    req.stepLock = sync.RWMutex{}
    req.needNext = true
    req.index = -1
    return req
}

Simply print the count when stopping the server in the non-pooling/pooling code. In these very informal tests, a pooling-to-non-pooling ratio of 9:200 was observed: under a burst of requests, pooling created only 9 object instances. The exact numbers depend on how simple the route functions are, but pooling clearly relieves a considerable amount of garbage-collection pressure.

// Non-pooling
func main() {
    // Request object pooling mode disabled
    server := znet.NewUserConfServer(&zconf.Config{RouterSlicesMode: true, TCPPort: 8999, Host: "127.0.0.1", RequestPoolMode: false})
    server.AddRouterSlices(1, NoPoll1)
    server.AddRouterSlices(2, NoPoll4)
    server.Serve()
    fmt.Println(znet.NumCreated) // 200
}

// Pooling
func main() {
    // Enable Request object pooling mode
    server := znet.NewUserConfServer(&zconf.Config{RouterSlicesMode: true, TCPPort: 8999, Host: "127.0.0.1", RequestPoolMode: true})
    server.AddRouterSlices(1, Poll1)
    server.AddRouterSlices(2, Poll4)
    server.Serve()
    fmt.Println(znet.NumCreated) // 9
}