Last week, Konstantin Kovshenin released Surge, a new page caching plugin for WordPress. Its main feature is that it has no configuration settings. Zero. Simply install and activate the plugin. That sounded too good to be true, so I installed the plugin and did some quick testing.
This blog is currently hosted on a modest VPS server at Linode. It uses a lightweight WordPress theme and I’ve got NGINX set up to cache pages for 10 seconds using fastcgi_cache (“microcache”). This helps protect the server from traffic surges, but does not intelligently invalidate cache when content is updated. This makes it a poor option for real page caching. Fortunately, Surge works well as a second layer of caching, “behind” microcache.
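For reference, a microcache setup like the one described above looks roughly like this in NGINX. This is a sketch, not my actual config: the cache path, zone name, and PHP-FPM socket path are placeholders you'd adjust for your own server.

```nginx
# Hypothetical NGINX "microcache" sketch -- paths and names are placeholders.
# Cache metadata lives in the shared memory zone; page bodies on the filesystem.
fastcgi_cache_path /var/run/nginx-cache levels=1:2
                   keys_zone=microcache:10m max_size=100m inactive=60s;

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;

        fastcgi_cache microcache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10s;       # cache successful pages for 10 seconds
        fastcgi_cache_use_stale updating;  # serve stale copies while refreshing
        add_header X-Cache $upstream_cache_status;  # HIT/MISS, handy for debugging
    }
}
```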
I’ve previously used Batcache, WP Super Cache and W3 Total Cache, and found most of them cumbersome to set up. In stark contrast, Surge is literally as easy as it gets. Install and activate. There isn’t even a settings screen.
To test how effective Surge is in my case, I loaded and refreshed a number of pages on this blog, and checked the “x-cache” response headers to see whether each page was served by NGINX’s cache or by Surge. For each request, I recorded the TTFB value.
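Checking which cache layer answered can be done from the command line. A sketch, with a placeholder URL; the exact header names depend on your setup (my NGINX config adds its own X-Cache header, and Surge adds one as well):

```shell
# Fetch response headers only (-I) and look for cache-status headers.
# https://example.com/some-post/ is a placeholder, not a real test URL.
curl -sI https://example.com/some-post/ | grep -i 'x-cache'
```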
| No cache (ms) | NGINX cache (ms) | Surge cache (ms) |
|---|---|---|
Unfortunately, I was only able to test from one location, since I needed to manually inspect the request headers to see which cache the page was being returned from. It would appear that the “Time To First Byte” values for pages served from Surge’s cache are incredibly low. As far as I know, NGINX caches in memory, so it’s impressive to see Surge come so close.
These tests are far from scientific, but my first impression is that Surge is very, very fast. And considering how easy it is to use, this is probably going to be my recommended page caching plugin from now on.
Update: Some more tests
As requested by Konstantin in the comments, I ran some HTTP benchmarks using hey. The “no caching” test caused high Linux sysload values, whereas the other tests ran relatively smoothly. Here is the number of requests per second my server managed to handle. All requests returned status code 200 (success).
| Test setup | 1 min test (req/s) | 5 min test (req/s) |
|---|---|---|
| Surge + NGINX microcache | 1277 | 1517 |
There’s not a lot of difference between the results for microcache with and without Surge active. But please keep in mind that microcache only caches a page for a few seconds, primarily to deal with traffic spikes. Also, the test requests the same page over and over again. In the wild, traffic will likely be distributed over several pages. Surge’s longer cache time (10 minutes) means more pages will be served from cache in such a scenario.
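For anyone wanting to reproduce these numbers, a hey run along these lines would produce comparable per-duration results. The URL is a placeholder and the concurrency is hey’s default; the exact flags I used may have differed:

```shell
# 1-minute benchmark: -z sets the test duration, -c the number of
# concurrent workers (50 is hey's default). URL is a placeholder.
hey -z 1m -c 50 https://example.com/

# 5-minute variant of the same test.
hey -z 5m -c 50 https://example.com/
```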
8 thoughts on “Some quick performance tests with Surge, a new WordPress caching plugin”
Hey Roy, thanks for trying out Surge!
Great to see that you’ve adopted Nginx’s cache together with Surge, it’s truly a great combo. I’m using the same configuration across some production sites but with a 1 second expiry in Nginx, still works like a charm!
> As far as I know, NGINX caches in memory
If you’re referring to fastcgi_cache or proxy_cache, then Nginx and Surge are very much alike here. They both use the filesystem to store the cache data, relying heavily on the Linux kernel page cache (disk cache) for that data to constantly be available in-memory for quick access.
The notable difference though is that the metadata about the cache (when it expires, etc.) in Nginx is stored directly in a shared memory zone, while Surge stores this data in the same file as the cache itself, which might be slightly less efficient. However the 1-2 ms overhead is likely coming from just the need to execute PHP and communicate over the FastCGI protocol, rather than reading the metadata (which is hopefully also cached in the Linux kernel page cache).
Anyway, thanks again for taking Surge for a spin, and I really appreciate your bug findings!
Hi Konstantin. Thanks for your comment.
In my setup (which is essentially unmodified Ubuntu 20.04), NGINX’s ‘fastcgi_cache_path’ setting is set to ‘/var/run/nginx-cache’. The /var/run (and /run) folders use tmpfs, which as far as I can tell should mean caching is done in RAM, with disk swap as backup?
Ah, interesting! I don’t think there’s any added benefit to forcing the OS to keep your cache files in memory (physical or swap) with tmpfs, versus letting the kernel take care of paging data in and out as necessary (I read this as free LRU). I’d be curious to see some fresh benchmarks though! 🙂
What kind of benchmarks would you have in mind? I did my tests with this setup.
I guess with tmpfs being RAM + disk swap, SSDs having DRAM caches and OSs caching stuff there’s really no telling where exactly data is from.
What the benchmarks do seem to indicate is that it’s all very fast.
If Surge would provide microcache-like levels of performance on systems where microcache isn’t available (such as shared hosting), that alone would make it a valuable tool.
> What kind of benchmarks would you have in mind?
I mean tmpfs versus a regular ext4 for Nginx fastcgi_cache, see if that affects the actual requests per second with something like ab or hey.
I don’t really have the setup (or the time) to run proper comparative tests, but I’m pretty sure someone will do some proper benchmarking.
Had some time today after all, so I updated the post with some hey benchmarks.
I’ll see about changing the NGINX cache folder location. Since this is my live site, I’m hesitant to mess with the config.
Wow, thanks for running the additional tests Roy, the numbers look great! You’re definitely right about different URLs vs hammering the same URL, and it definitely shows on larger sites with a huge amount of posts.
I appreciate your testing and feedback!