Allow using a shared cache between multiple instances of DeviceDetector #10
joelwurtz wants to merge 4 commits into simplecastapps:main
Conversation
Hum, forgot there was an FFI export of this lib; not sure it works well with futures then. There is a solution for that. Just close this if this is not wanted behavior (but I can make the changes too).
I saw you made this change to the cache in your fork some time ago and I almost reached out out of curiosity. I like the change in general, but using an async moka cache means that every function that does anything has to become async, even if you don't use a cache in the first place. It is not clear to me that there are any benefits either: the sync moka cache is thread safe and very fast, so it is unlikely to be a bottleneck. Did you find otherwise? get-size is interesting and that's a good feature to add. I had wondered if there was a way to automatically choose a cache entry size.
I guess we could use that. It's just that I'm using this lib in an async context, and it's generally better to avoid blocking in those cases, but I can make the changes.
Not sure you want this, but we have done this in our fork to match our behavior.
We mainly use this library in a binary on our server that reads logs from a message queue and parses them, including device detection.
We have multiple threads running to parallelize the log parsing, and since this library consumes most of the CPU (no blame here, I totally understand why it's slow, but it's a fact), we use a separate device detector in each thread (so the other threads are not blocked by it).
To avoid having a separate cache per thread, we added this code: even if we have multiple instances of the device detector, they all use the same cache, which avoids consuming too much memory for the same data.
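The idea above (one cache handle shared by a per-thread detector instance) can be sketched with std types only. This is a hedged illustration, not the PR's actual code: `Device`, `SharedCache`, `DeviceDetector`, and `parse` here are hypothetical stand-ins (the real crate uses moka and richer result types); only the sharing pattern via `Arc` is what the PR is about.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical stand-in for a parsed device; the real library returns a richer type.
#[derive(Clone, PartialEq, Debug)]
struct Device {
    name: String,
}

// One Arc'd map handed to every detector instance (the PR uses a moka cache instead).
type SharedCache = Arc<Mutex<HashMap<String, Device>>>;

// Hypothetical detector holding a handle to the shared cache,
// mirroring the PR's idea of passing one cache to multiple instances.
struct DeviceDetector {
    cache: SharedCache,
}

impl DeviceDetector {
    fn with_cache(cache: SharedCache) -> Self {
        DeviceDetector { cache }
    }

    // The expensive parse is simulated; on a cache hit we skip it entirely.
    fn parse(&self, user_agent: &str) -> Device {
        if let Some(hit) = self.cache.lock().unwrap().get(user_agent) {
            return hit.clone();
        }
        let device = Device { name: format!("parsed:{user_agent}") }; // pretend this is slow
        self.cache
            .lock()
            .unwrap()
            .insert(user_agent.to_string(), device.clone());
        device
    }
}

fn main() {
    let cache: SharedCache = Arc::new(Mutex::new(HashMap::new()));
    // One detector per thread, all pointing at the same cache.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let detector = DeviceDetector::with_cache(Arc::clone(&cache));
            thread::spawn(move || detector.parse("Mozilla/5.0"))
        })
        .collect();
    for h in handles {
        let d = h.join().unwrap();
        assert_eq!(d.name, "parsed:Mozilla/5.0");
    }
    // All four threads shared one cache, so only one entry was stored.
    println!("shared entries: {}", cache.lock().unwrap().len());
}
```

With a dedicated cache per thread, the same user agent would be parsed and stored once per thread; sharing the handle stores it once overall, which is the memory saving described above.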
We also added the get_size library, which allows correctly guessing the size of a cached structure for moka, so we can set the cache limit in terms of memory rather than number of items.
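To show what a byte-bounded (rather than item-bounded) cache limit means, here is a minimal std-only sketch. It is not the PR's implementation: moka takes a weigher closure on its builder and get_size derives per-entry sizes, whereas this toy uses a hypothetical `entry_size` function and FIFO eviction purely to illustrate the budget-in-bytes idea.

```rust
use std::collections::VecDeque;

// Hypothetical stand-in for a get_size-style size report: the approximate
// heap footprint of one cache entry in bytes. The real crate derives this.
fn entry_size(key: &str, value: &str) -> usize {
    key.len() + value.len()
}

// Minimal FIFO cache bounded by total bytes instead of item count,
// illustrating the same goal as moka's weigher + get_size in the PR.
struct ByteBoundedCache {
    max_bytes: usize,
    current_bytes: usize,
    entries: VecDeque<(String, String)>,
}

impl ByteBoundedCache {
    fn new(max_bytes: usize) -> Self {
        ByteBoundedCache { max_bytes, current_bytes: 0, entries: VecDeque::new() }
    }

    fn insert(&mut self, key: String, value: String) {
        self.current_bytes += entry_size(&key, &value);
        self.entries.push_back((key, value));
        // Evict the oldest entries until we are back under the byte budget.
        while self.current_bytes > self.max_bytes {
            if let Some((k, v)) = self.entries.pop_front() {
                self.current_bytes -= entry_size(&k, &v);
            } else {
                break;
            }
        }
    }

    fn len(&self) -> usize {
        self.entries.len()
    }
}

fn main() {
    let mut cache = ByteBoundedCache::new(20);
    cache.insert("ua1".into(), "device1".into()); // 10 bytes, fits
    cache.insert("ua2".into(), "device2".into()); // 20 bytes total, still fits
    cache.insert("ua3".into(), "device3".into()); // 30 bytes -> oldest entry evicted
    println!("entries: {}, bytes: {}", cache.len(), cache.current_bytes);
}
```

The point of sizing by bytes is that device-detection results vary a lot in size, so a limit of N items can mean wildly different memory use; a byte budget keeps memory predictable regardless of entry shape.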