It is possible to leak a service. When replicating a Server, we strongly retain it under the assumption that the remote side will successfully reify the Server into a Client.
If replication of a Service in the transitive closure fails before the Client is created, we can end up with a strongly retained Server that has no associated Client. That Server would then never be eligible for garbage collection.
This situation is recoverable if the Server is later part of a successful replication.
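For concreteness, here is a minimal sketch of the leak scenario. The language and every name in it (`Server`, `Service`, `ReplicationContext`, `replicateService`) are assumptions chosen for illustration; they only mirror the roles described above, not the actual implementation.

```swift
// Hypothetical stand-ins for the roles described in the issue.
final class Server {
    let service: Service
    init(service: Service) { self.service = service }
}

struct Service { let name: String }

final class ReplicationContext {
    // Servers are strongly retained once replication starts, on the
    // assumption that the remote side will reify a matching Client.
    private var retainedServers: [ObjectIdentifier: Server] = [:]

    func replicate(_ server: Server, closure: [Service]) throws {
        retainedServers[ObjectIdentifier(server)] = server   // strong retain

        for service in closure {
            // If any Service in the transitive closure fails to replicate
            // before the remote Client is created, we throw here...
            try replicateService(service)
        }
        // ...and never reach the point where a Client would balance the
        // strong retain, so `server` stays in `retainedServers` forever.
    }

    private func replicateService(_ service: Service) throws {
        // Placeholder: assume this can fail, e.g. for an incompatible Service.
    }
}
```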
We send a list of Service instances that need to be reified. When a Server is replicated, we strongly retain it, assuming a Client will be successfully reified remotely. If an error occurs before the associated Client is reified, for instance because a Service is not compatible remotely, we abort the process and a Client will never have been created.
It isn't as simple as sending a packet back in those cases. We need to retain information about whether the Service had previously been replicated. If it had, we don't want to weakly retain the Server. If it had not, we may still have already created the Client remotely.
The replication process needs to be made more sophisticated to properly handle these cases.
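A rough sketch of that bookkeeping, reusing the hypothetical `Service` type from the sketch above; `ClientStatus`, `ServiceReplicationRecord`, and `ReplicationBookkeeping` are likewise invented names. It only records the two facts the comment says need to be tracked, whether the Service was previously replicated and whether a Client may already exist remotely, and leaves the actual unwinding policy to the caller.

```swift
// Whether the remote side has (or may have) reified a Client for a Service.
enum ClientStatus: Equatable {
    case notCreated          // replication aborted before the Client was reified
    case possiblyCreated     // the remote side may already hold a Client
    case created             // Client reified; the strong retain is justified
}

struct ServiceReplicationRecord {
    let service: Service
    var previouslyReplicated: Bool
    var clientStatus: ClientStatus
}

final class ReplicationBookkeeping {
    private var records: [String: ServiceReplicationRecord] = [:]

    func record(for service: Service) -> ServiceReplicationRecord {
        records[service.name] ?? ServiceReplicationRecord(
            service: service, previouslyReplicated: false, clientStatus: .notCreated)
    }

    func update(_ record: ServiceReplicationRecord) {
        records[record.service.name] = record
    }

    // The strong retain on the Server can only safely be dropped when the
    // Service was never replicated before and no Client can exist remotely.
    func canReleaseServer(for service: Service) -> Bool {
        let r = record(for: service)
        return !r.previouslyReplicated && r.clientStatus == .notCreated
    }
}
```

What to do when the Client status is only "possibly created" is exactly the ambiguity described above; making that state explicit is the prerequisite for handling it.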