This is a question for PHP/Laravel + Redis, but I'm sure it can be extrapolated to other languages/frameworks.
I'm working on an app that displays a list of results (either from a user-initiated search or a category listing). By default, we paginate at 30 results per page.
I use Redis to cache all the results, but I've run into a couple of problems while optimizing. At first, I cached each result set with the objects (Products) fully embedded, so each list of results was one giant cache entry of 30 data objects. This worked, but memory usage spiked because the same object was duplicated across many cache entries: a product can appear in multiple search results as well as in multiple categories, and each object was also individually cached by default.
The other problem is that, since we allow different page sizes, we have to cache a separate result set for every page size as well.
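Roughly, that first approach looked like this (the key scheme and Product::search() here are simplified illustrations, not our exact code). Note how every distinct (query, page, page size) combination becomes its own entry full of complete objects:

    use Illuminate\Support\Facades\Cache;

    // One giant entry per (query, page, page size), each holding full
    // Product objects -- the same product gets copied into many entries.
    $key = "results:{$query}:page:{$page}:per:{$perPage}";

    $products = Cache::remember($key, 60, function () use ($query, $page, $perPage) {
        return Product::search($query)->forPage($page, $perPage)->get();
    });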
So what I tried next was caching just a list of object IDs for each page. This decreased memory usage significantly; however, each time a page is loaded, we have to loop through the 30 IDs, retrieve each object from cache, and recreate it. At approximately 50 ms per object (which seems high), 30 objects add up to 1.5 seconds of page load. Even if we further optimize object creation, it still adds a fixed cost to page load and rendering.
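Here's the shape of the ID-list approach (again with an illustrative key scheme and a made-up Product::search()). Even batching the 30 lookups into a single Redis round trip via Cache::many() (one MGET instead of 30 sequential GETs) only removes network overhead; every hit still has to be unserialized back into a Product, which is where the per-object cost goes:

    use Illuminate\Support\Facades\Cache;

    // Cache only the IDs for the page (cheap)...
    $ids = Cache::remember("results:{$query}:page:{$page}", 60, function () use ($query, $page) {
        return Product::search($query)->forPage($page, 30)->pluck('id')->all();
    });

    // ...then fetch all 30 cached objects in one round trip.
    $keys = array_map(function ($id) {
        return "product:{$id}";
    }, $ids);

    $products = Cache::many($keys);  // misses come back as null
    // ...reload the nulls from the database, Cache::putMany() them, render.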
Our next foray is HTML caching (Cloudflare, Varnish, etc.), which will require us to redesign certain aspects of the app, and that's fine. But I'm wondering: without an HTML cache, is there a way to optimize this (or what is the optimal method for doing what we're trying to do)? The other issue I have is that, while I know PHP scripts are executed fresh on every request, why can we not maintain objects between executions as POPOs (Plain Old PHP Objects)? It seems silly to me that we're still serializing and deserializing objects in 2017. I'd love to have a background PHP app that maintains the objects required and is able to pass them to each script as required.
For instance, a Product will not change much from page load to page load. Why recreate the same Product object hundreds of times a minute -- even if it is from cache?
"I'd love to have a background PHP app that maintains the objects required and is able to pass them to each script as required."
You can't do that, because separate applications don't share scope (they don't even share memory). So your background app would need to pass the objects in some intermediate form. Yep, you guessed it: serialization.
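To make that concrete, this is all "passing an object" between two PHP processes can ever mean (a plain sketch using the phpredis extension; the key name is made up):

    // In the background process: the object is flattened to a byte string.
    $redis = new Redis();
    $redis->connect('127.0.0.1');
    $redis->set('product:123', serialize($product));

    // In the web request: a brand-new object is rebuilt from those bytes.
    // However the handoff is dressed up, this step cannot be skipped.
    $product = unserialize($redis->get('product:123'));

Even APCu, which keeps data in shared memory within a single server, stores objects in serialized form, so it doesn't escape this either.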
What you can do is separate some services, or the HTML-generating part, so that they are associated with one specific object only. At that point you might be able to have a background application that acts as an object broker: an object is rendered by calling the broker with the object's ID and asking for its HTML, and the broker can cache either the object or the rendered HTML, whichever works best.
Since the HTML is a plain string and needs no serialization, this method should be quite efficient.
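A minimal Laravel-flavoured sketch of such a broker (the route, view name, and cache key are mine, purely for illustration): because the cached value is a plain string, a hit costs one Redis GET and no object rehydration at all:

    use App\Product;
    use Illuminate\Support\Facades\Cache;
    use Illuminate\Support\Facades\Route;

    // Broker endpoint: given a product ID, return its rendered HTML
    // fragment. On a hit, neither the database nor unserialization is touched.
    Route::get('/fragments/product/{id}', function ($id) {
        $html = Cache::remember("product:html:{$id}", 60, function () use ($id) {
            return view('partials.product', [
                'product' => Product::findOrFail($id),
            ])->render();
        });

        return response($html);
    });

Invalidation stays simple too: whenever a Product changes, Cache::forget("product:html:{$id}") and the next request re-renders the fragment.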
You might even be able to do this through AJAX: an object is "pre-rendered" as a placeholder of the appropriate size with a "please wait" animation,
<div class="Placeholder Product"></div>
and then a series of parallel jQuery calls fetches the real objects and animates them in.
I did this (for a pagination thingy, go figure) and got very good results with a crazy animation: it started as a terribly unfocused search result that gradually came into focus (no cycling, of course). This happened slowly enough that the AJAX call resolved in the meantime, and the real object was blended in. The placeholders were so similar that you only rarely spotted that the unfocused image and the final object weren't actually related, and on average the user didn't notice that the objects past the first two or three hadn't rendered yet: they saw the whole results page appear instantly and assumed all the results were there. The objects were also cached locally, so paging back and forth didn't stress the server more than once.