Memcached is an open source, distributed memory object caching system that alleviates database load to speed up dynamic Web applications.
The system caches data and objects in memory to minimize the frequency with which an external database or API (application programming interface) must be accessed.
In the Memcached system, each item comprises a key, an expiration time, optional flags, and raw data. When an item is requested, Memcached checks the expiration time to see if the item is still valid before returning it to the client. The cache can be seamlessly integrated with the application by ensuring that the cache is updated at the same time as the database.
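The item model described above can be sketched in a few lines. This is a hypothetical in-process simulation for illustration only (real Memcached is a network daemon, and the class and method names here are assumptions, not its API); it shows how a key, optional flags, raw data, and an expiration time fit together, and how an expired item is treated as a cache miss.

```python
import time

class MiniCache:
    """Illustrative sketch of memcached's item model (not the real server)."""

    def __init__(self):
        self._items = {}

    def set(self, key, data, flags=0, expire=0):
        # expire=0 means "never expires", matching Memcached's convention
        deadline = time.time() + expire if expire else None
        self._items[key] = (flags, deadline, data)

    def get(self, key):
        entry = self._items.get(key)
        if entry is None:
            return None  # cache miss
        flags, deadline, data = entry
        if deadline is not None and time.time() >= deadline:
            del self._items[key]  # expired: discard and report a miss
            return None
        return data

cache = MiniCache()
cache.set("user:42", b"alice", expire=60)
print(cache.get("user:42"))  # still fresh, so the raw data comes back
```

In application code, the same check happens behind a real client library's `get` call: the server validates the expiration time before returning the item.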
By default, Memcached acts as a least-recently-used (LRU) cache combined with expiration timeouts. If the server runs out of memory, it first looks for expired items to replace. If additional memory is needed after replacing all the expired items, Memcached evicts items that have not been requested for the longest time, keeping more recently requested information in memory.
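The eviction policy just described can be sketched with an ordered dictionary. This is a simplified, hypothetical model (Memcached's real LRU is implemented per slab class in C): when the cache is full, it first reuses an expired slot, and only then evicts the least recently used item.

```python
import time
from collections import OrderedDict

class LRUCache:
    """Illustrative sketch of LRU eviction plus expiration timeouts."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()  # key -> (deadline, value), oldest first

    def get(self, key):
        entry = self._items.get(key)
        if entry is None:
            return None
        deadline, value = entry
        if deadline is not None and time.time() >= deadline:
            del self._items[key]  # expired item counts as a miss
            return None
        self._items.move_to_end(key)  # mark as most recently used
        return value

    def set(self, key, value, expire=0):
        if key in self._items:
            del self._items[key]
        elif len(self._items) >= self.capacity:
            self._evict()
        deadline = time.time() + expire if expire else None
        self._items[key] = (deadline, value)

    def _evict(self):
        now = time.time()
        # First preference: replace an item that has already expired.
        for key, (deadline, _) in self._items.items():
            if deadline is not None and now >= deadline:
                del self._items[key]
                return
        # Otherwise drop the least recently used item (front of the dict).
        self._items.popitem(last=False)

cache = LRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")         # touching "a" makes "b" the least recently used
cache.set("c", 3)      # cache is full and nothing is expired: "b" is evicted
print(cache.get("b"))  # None
```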
Memcached consists of four fundamental components:

- Client software, which is given a list of available Memcached servers.
- A client-based hashing algorithm, which chooses a server based on the key.
- Server software, which stores values with their keys in an internal hash table.
- An LRU algorithm, which determines when to throw out old data or reuse memory.
Users of Memcached include Bebo, Craigslist, Digg, Flickr, LiveJournal, Mixi, Twitter, Typepad, Wikipedia, WordPress, Yellowbot, and YouTube.
Memcached allows you to take memory from parts of your system where you have more than you need and make it accessible to areas where you have less than you need.

Memcached also allows you to make better use of your memory. If you consider the diagram to the right, you can see two deployment scenarios:
The first scenario illustrates the classic deployment strategy. However, it is wasteful in two ways: the total cache size is only a fraction of the actual capacity of your web farm, and considerable effort is required to keep the cache consistent across all of those nodes.
With Memcached, you can see that all of the servers are looking into the same virtual pool of memory. This means that a given item is always stored in, and always retrieved from, the same location in your entire web cluster.
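The "same location for every item" property comes from client-side hashing: each client hashes the key and maps it onto the server list, so every web server in the cluster independently picks the same home for a given key. The sketch below is a simplified, hypothetical illustration (server addresses are made up, and production clients typically use consistent hashing rather than a plain modulo, so that adding a server remaps fewer keys).

```python
import hashlib

# Hypothetical pool of memcached servers shared by the whole web cluster.
servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

def server_for(key, servers):
    """Map a cache key to one server deterministically via hashing."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# Every client that asks for "session:42" computes the same answer,
# so the item is stored and retrieved from a single location.
print(server_for("session:42", servers))
```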
Also, as the demand for your application grows to the point where you need more servers, the amount of data that must be regularly accessed generally grows as well. A deployment strategy where these two aspects of your system scale together just makes sense.
The illustration to the right only shows two web servers for simplicity, but the property remains the same as the number increases. If you had fifty web servers, you’d still have a usable cache size of 64MB in the first example, but in the second, you’d have 3.2GB of usable cache.
Of course, you aren’t required to use your web server’s memory for cache. Many Memcached users have dedicated machines that are built to only be Memcached servers.
The key features of Memcached are as follows:

- It is open source.
- The Memcached server is a big hash table.
- It significantly reduces the database load.
- It is well suited to websites with a high database load.
- It is distributed under the BSD license.
Memcached is not:

- a persistent data store
- a database
- application-specific
- a large object cache
- fault-tolerant or highly available