Just stopping by to say hello.
I have been creating a node.js module, shm-typed-lru, for a shared cache on really small machines. I put a hopscotch hash table into shared memory; it mainly maps a hash to an offset into the LRU, a doubly linked list that resides in a shared section with a pre-allocated free list. So this is for a few processes sharing tokens on things like ZeroPis. The aim is small traffic: maybe up to 100K sessions recorded on one box, 1K connections per node.js service at most.
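Roughly, the free list and LRU mechanics work like the sketch below. To be clear, this is illustrative only: the field order, word widths, and the use of index 0 as a null sentinel are assumptions for the example, not the module's actual binary format.

```js
// Illustrative layout of the shared segment (assumed, not the real format):
//   hopscotch region: buckets of { hash, nodeOffset, hopBitmap }
//   LRU region:       nodes of  { prev, next, time, valueOffset }
//   free list:        a head word, free nodes threaded through `next`
// `shm` is a Uint32Array over the shared segment; word 0 is reserved as null.

function allocNode(shm, freeHeadIdx) {
  // Pop a node off the pre-allocated free list (no malloc in the hot path).
  const head = shm[freeHeadIdx];
  if (head === 0) return 0;          // free list exhausted; caller must evict
  shm[freeHeadIdx] = shm[head + 1];  // head = head.next
  return head;
}

function touch(shm, lruHeadIdx, node) {
  // Move `node` to the front of the doubly linked LRU (eviction at the
  // tail is omitted here for brevity).
  if (shm[lruHeadIdx] === node) return;
  const prev = shm[node], next = shm[node + 1];
  if (prev) shm[prev + 1] = next;                    // prev.next = next
  if (next) shm[next] = prev;                        // next.prev = prev
  shm[node] = 0;                                     // node.prev = null
  shm[node + 1] = shm[lruHeadIdx];                   // node.next = old head
  if (shm[lruHeadIdx]) shm[shm[lruHeadIdx]] = node;  // old head.prev = node
  shm[lruHeadIdx] = node;
}
```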
It is, in some sense, a stopgap for some other, yet-to-be-developed token share. It seems like a good idea for the closely operating processes, the ones that verify with each other often on one box, to have a low-overhead way of looking things up.
In a grander scheme, the little boxes would occasionally burst out across some global range to verify session tokens or to start clustering more active remote interactions. So there might be some throughway systems which are built out at Redis size (not a small footprint).
But for the moment, I dropped my module into a derived communication node which is part of a JSON network. It's nice to use node.js, since I can solve module issues fairly easily; yet node.js is not an endpoint for stability per se. Some guy in Florence, Italy wrote a nice C++ JSON interchange that outperforms anything else, and it would be nice to embed a true in-memory cache in that. For now, I went past some of my design goals to set up a small interchange which can capture pub/sub messages on the way to secondary cache boxes (little boxes offering some small amount of redundancy). The capture drops the token into a store on the interchange, in case a fresh service nearby goes looking for a token outside its own box.
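In code, the capture is just a tap on the relay path, something like the sketch below; the `relay` and `localStore` names and methods are assumptions for illustration, not an actual API.

```js
// Hypothetical tap on the interchange's relay path: stash session tokens
// locally while forwarding to the secondary cache boxes. `relay` and
// `localStore` (and their methods) are assumed names, not a real API.
relay.on('publish', (topic, msg) => {
  if (topic.startsWith('session/')) {
    const { sessionId, token } = JSON.parse(msg);
    localStore.set(sessionId, token);  // e.g. backed by the shm-typed-lru table
  }
  relay.forward(topic, msg);           // continue on to the secondary boxes
});
```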
So the nice thing will be the ability to swap out the current stopgap table for a better one just by changing the npm dependencies (hence the C++ code is part of the process). At a later time, a new C++ wrapper, replacing the node.js layer, could go around the highly performant table.
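One cheap way to keep that swap painless is a thin façade over the table, so only the require line changes. The method names below are placeholders for whatever the table actually exposes, and the replacement module name is hypothetical.

```js
// Thin façade so swapping tables is one require() change. Method names are
// placeholders, not shm-typed-lru's actual exports; the replacement module
// name is hypothetical.
const table = require('shm-typed-lru');  // later: require('faster-shared-table')

module.exports = {
  set: (key, value) => table.set(key, value),
  get: (key) => table.get(key),
  del: (key) => table.del(key),
};
```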
These alterations of the runtime might go along with hardware changes. I have seen some GPU hash tables (not sure how realistic those are). Will optronics change how we hash or how we allocate memory? Would the same pub/sub messages still make sense?
At any rate, I have to get something running or just wash out; no paying work at the moment. The sessions will be established by elliptic-curve derived keys from interfaces that I am also testing. Damnable distractions, like optronic chips recognizing images at the speed of light (https://www.nature.com/articles/s41586-022-04714-0). I must test some code instead of being daydreamy.
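The key setup itself is the easy part; node's built-in crypto covers the basic exchange, something like the following (the curve choice here is just for illustration, since my interfaces are still in testing):

```js
// Plain Node ECDH session-key derivation (the curve is illustrative; the
// actual interfaces are still in testing).
const crypto = require('crypto');

const alice = crypto.createECDH('prime256v1');
const bob = crypto.createECDH('prime256v1');
alice.generateKeys();
bob.generateKeys();

// Each side computes the same shared secret from the other's public key,
// then hashes it down to a fixed-size session key.
const secret = alice.computeSecret(bob.getPublicKey());
const sessionKey = crypto.createHash('sha256').update(secret).digest();
```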
What sort of easy modularity could I find here? Or, is it possible to contribute such modularity?