Sven Verdoolaege authored
We usually allocate a large number of small objects, but occasionally we may allocate one or more big objects. The original caching code could reuse a cached huge object for a small allocation. This is fine if the object grows later, but if it remains small, then we may end up consuming and wasting a lot of memory, especially if these objects are long-lived.

Now we only reuse a memory block if it is at most twice as big as the desired size. This may result in some large objects sticking around in the cache, so we evict objects from the cache after a while. Finally, we don't reuse any cache elements for initially zero-sized allocations, but instead check the cache when the object first grows to a non-zero size.

Signed-off-by: Sven Verdoolaege <skimo@kotnet.org>
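The reuse policy described above can be sketched as follows. This is a minimal illustrative sketch, not the actual patch: the names `blk_alloc`, `blk_free`, `struct blk`, and the fixed-size cache array are all hypothetical, and eviction is omitted. It only shows the two rules from the message: a cached block is handed out only when it is large enough but at most twice the requested size, and zero-sized requests never consult the cache.

```c
#include <stddef.h>
#include <stdlib.h>

#define CACHE_SIZE 16 /* hypothetical cache capacity */

struct blk { size_t size; void *data; };

static struct blk cache[CACHE_SIZE];
static int n_cached = 0;

/* Hand out a cached block only if it fits and is not more than
 * twice as big as the desired size; otherwise allocate fresh.
 * Zero-sized requests skip the cache entirely: the cache is only
 * consulted once the object first grows to a non-zero size. */
void *blk_alloc(size_t size)
{
	int i;

	if (size == 0)
		return NULL;
	for (i = 0; i < n_cached; ++i) {
		if (cache[i].size >= size && cache[i].size <= 2 * size) {
			void *data = cache[i].data;
			cache[i] = cache[--n_cached];
			return data;
		}
	}
	return malloc(size);
}

/* Return a block to the cache, or free it if the cache is full. */
void blk_free(void *data, size_t size)
{
	if (n_cached < CACHE_SIZE) {
		cache[n_cached].size = size;
		cache[n_cached].data = data;
		++n_cached;
	} else {
		free(data);
	}
}
```

With this policy, a cached 100-byte block satisfies a 60-byte request (100 <= 2 * 60) but not a 10-byte one (100 > 2 * 10), so a huge cached object can no longer be consumed by a small allocation.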