Scaling WordPress to thousands of users

While WordPress is generally the ideal tool with which to build most of the websites I work on, it has several limitations when serving a large number of users. The following are just some of the things you can do to improve response time.


Transients

The WordPress Transient API allows us to store any type of temporary data so that it can easily be retrieved later. It's perfect for storing the results of complex database queries, or PHP arrays that we've had to sort or modify. Ultimately it's just a key-value store with an expiration time, and each transient translates to two rows in the WordPress options table: one for the value and one for its timeout.

On a recent project, I used transients to store product search results, the key being the search term used and the value the 9 results our query returned. The expiration time was set to reflect how often the product data is updated (24 hours). Storing our result sets in a transient cut the request time from about 600ms to 250ms. At the time, this was substantially better than the results I got from Redis, and of course it eliminated the dependency.
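A minimal sketch of that pattern looks something like the following. `get_transient()`, `set_transient()` and `DAY_IN_SECONDS` are core WordPress; the function and the expensive query it wraps are hypothetical stand-ins for the real project code.

```php
// Cache product search results keyed on the search term.
function myprefix_get_product_results( $search_term ) {
    $key     = 'product_search_' . md5( $search_term );
    $results = get_transient( $key );

    if ( false === $results ) {
        // Cache miss: run the expensive query (hypothetical helper),
        // then store the results for 24 hours to match how often
        // the product data is updated.
        $results = myprefix_run_product_query( $search_term );
        set_transient( $key, $results, DAY_IN_SECONDS );
    }

    return $results;
}
```

Hashing the search term keeps the option key a fixed, safe length regardless of what the user typed.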



InnoDB

The InnoDB MySQL storage engine offers us significant gains over something like MyISAM, not least row-level rather than table-level locking under concurrent traffic. It also provides a number of fine-grained settings through which we can optimise our tables for efficiency, including 'innodb_buffer_pool_size', which determines how much memory is allocated to caching table and index data. Another benefit is that, as of MySQL 5.6, InnoDB supports full-text indexing, which is useful for searching on words or phrases.
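Switching an existing custom table over is a one-line change; the table and column names here are hypothetical.

```sql
-- Convert a custom table to InnoDB.
ALTER TABLE wp_product_index ENGINE = InnoDB;

-- Add a full-text index (supported on InnoDB from MySQL 5.6).
ALTER TABLE wp_product_index ADD FULLTEXT INDEX ft_description (description);
```

'innodb_buffer_pool_size' itself is set in the server's my.cnf, e.g. 'innodb_buffer_pool_size = 1G'; on a dedicated database server it's commonly given a large share of the available RAM.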

Optimise Tables

To ensure that we're querying our tables the right way, it's incredibly important that we're using the correct column types. For the tables that WordPress generates this isn't so much of an issue, but it matters when creating additional ones. Dates should use the DATETIME or TIMESTAMP column types, and integer-only fields should use 'INT' (or a variation of it), not 'VARCHAR' or 'TEXT'.
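As an illustration, a hypothetical custom table following those rules might look like this:

```sql
-- Appropriate column types: integers stay integers, dates stay dates.
CREATE TABLE wp_event_log (
    id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    user_id    BIGINT UNSIGNED NOT NULL,   -- INT family, not VARCHAR
    event_type VARCHAR(50)     NOT NULL,
    created_at DATETIME        NOT NULL,   -- DATETIME, not TEXT
    PRIMARY KEY (id)
) ENGINE = InnoDB;
```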

MySQL also conveniently provides us with the 'OPTIMIZE TABLE' command, which can be used to defragment storage space and improve I/O efficiency. The command-line utility 'mysqlcheck' can do this, and can also run 'CHECK TABLE', 'REPAIR TABLE' and 'ANALYZE TABLE' to identify any issues or inefficiencies in our current configuration. It's worth noting that tables are locked while these commands run, so for large tables, running a process like this in production is no small task.
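The invocations below are a sketch; the database name 'wordpress' and credentials are placeholders, and because of the table locking mentioned above they belong in a low-traffic maintenance window.

```shell
# Defragment and optimise every table in the database.
mysqlcheck --optimize --user=root --password wordpress

# Or check, analyse and repair instead:
mysqlcheck --check   --user=root --password wordpress
mysqlcheck --analyze --user=root --password wordpress
mysqlcheck --repair  --user=root --password wordpress
```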


Indexing

When querying custom tables, it's important that we index our columns correctly. For example, searching on an indexed integer ID is far more efficient than searching on an unindexed text string. MySQL maintains the tie between index and row automatically as data changes, which also makes filtered updates much faster, though each index adds a small cost to every write.

Where possible, I'd also advise sorting, joining and mapping data in your database queries. It's far more performant than bringing raw data into PHP and looping or filtering there.
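For instance, the query below (with hypothetical table and column names) returns the ten most recent orders with their product titles already joined and sorted, rather than fetching both tables into PHP and stitching them together in a loop:

```sql
SELECT o.id, o.created_at, p.title
FROM wp_orders AS o
INNER JOIN wp_products AS p ON p.id = o.product_id
ORDER BY o.created_at DESC
LIMIT 10;
```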

Server-Side Caching

Since PHP 5.5 we've had the ability to use OPcache, a tool that stores precompiled PHP bytecode in shared memory. It ultimately means that scripts don't need to be loaded and parsed afresh on each request. While I can't pretend to have used OPcache myself, there's no shortage of evidence to suggest that it drastically improves response time.
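Enabling it is a matter of a few php.ini directives; the values below are illustrative starting points, not tuned recommendations.

```ini
; php.ini — enable OPcache and allocate shared memory for compiled scripts.
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=4000
; Check file timestamps so edited scripts are recompiled.
opcache.validate_timestamps=1
```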

Request Size

The speed at which we can make server-side requests is largely dictated by the size of the packets of data we send and receive. In order to reduce request time, especially when making Ajax calls, it’s important that we send to the server only the information that’s critical to the way our site works. Likewise we can reduce the time it takes for the server to respond by returning only the data we need.
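A sketch of a lean Ajax handler following that principle: it returns only the fields the front end needs rather than whole post objects. The action name, prefix and fields are hypothetical; the hooks, `get_posts()` and `wp_send_json_success()` are core WordPress.

```php
add_action( 'wp_ajax_get_products', 'myprefix_get_products' );
add_action( 'wp_ajax_nopriv_get_products', 'myprefix_get_products' );

function myprefix_get_products() {
    // Fetch IDs only rather than full post rows.
    $ids = get_posts( array(
        'post_type'      => 'product',
        'posts_per_page' => 9,
        'fields'         => 'ids',
    ) );

    // Build a minimal payload: just what the front end renders.
    $payload = array_map( function ( $id ) {
        return array(
            'id'    => $id,
            'title' => get_the_title( $id ),
        );
    }, $ids );

    wp_send_json_success( $payload );
}
```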

Front-End Optimisations

While there are several improvements we can make on the back-end to our initial page response time, there’s a multitude of adjustments we can make on the front-end.


Images

From personal experience, serving images at the correct dimensions and an appropriate file size has had the biggest effect on front-end performance on the sites I've worked on. In WordPress we can define image sizes using the 'add_image_size()' function. By pulling through the correct thumbnail, we save the browser from downloading oversized files and resizing them, work that delays rendering.
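Registering and using a size looks roughly like this; the size name and dimensions are hypothetical, while `add_theme_support()`, `add_image_size()` and `the_post_thumbnail()` are core WordPress.

```php
// In the theme's functions.php:
add_action( 'after_setup_theme', function () {
    add_theme_support( 'post-thumbnails' );
    // 400x300, hard-cropped to the exact dimensions.
    add_image_size( 'product-card', 400, 300, true );
} );

// In the template, pull through that exact size:
// the_post_thumbnail( 'product-card' );
```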

In addition, we can lazy-load images using a script like blazy.js, which won't load an image until the user scrolls to a point just before it appears in the page. By reducing the number of images fetched on initial page load we're essentially deferring that cost, which in theory has a smaller impact on the user.
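A minimal bLazy setup looks like the following; the image paths are placeholders. The real image lives in 'data-src' and is swapped in as it nears the viewport, while 'src' holds a lightweight placeholder.

```html
<img class="b-lazy" src="placeholder.gif" data-src="product-photo.jpg" alt="Product">

<script src="blazy.min.js"></script>
<script>
  // offset: start loading 100px before the image enters the viewport.
  var bLazy = new Blazy({ offset: 100 });
</script>
```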


Content Delivery Networks

A CDN serves a website's static files from the server nearest the location from which a user is requesting a page. Services like Cloudflare and Fastly do this for us and much more, acting as a proxy between the server and users. Because of this we can do things like minify our files and defer scripts to the footer, and even implement SSL and add DNS records. I can't recommend Cloudflare enough, and its free tier makes it incredibly accessible.