Memcached is a widely adopted, in-memory object caching system known for its simplicity and efficiency. It is designed for speed and scalability, particularly in read-heavy workloads.
ElastiCache for Memcached offers serverless caching, which simplifies adding and operating a Memcached-based cache for your application. ElastiCache Serverless for Memcached enables you to create a highly available cache in under a minute and eliminates the need to provision instances or to configure nodes or clusters.
ElastiCache Serverless also removes the need to plan and manage caching capacity. ElastiCache constantly monitors the memory and compute your application uses and automatically scales capacity to meet its needs. ElastiCache offers developers a simple endpoint experience by abstracting the underlying cache infrastructure and software. It manages hardware provisioning, monitoring, node replacements, and software patching automatically and transparently, so that you can focus on application development rather than on operating the cache.
Key Benefits:
No capacity planning: ElastiCache Serverless removes the need for you to plan for capacity. It continuously monitors the memory, compute, and network bandwidth utilization of your cache and scales both vertically and horizontally: a cache node can grow in size while, in parallel, a scale-out operation is initiated, so the cache can meet your application's requirements at all times.
Pay-per-use: With ElastiCache Serverless, you pay for the data stored and the compute used by your workload on the cache. See Amazon ElastiCache pricing for details.
High availability: ElastiCache Serverless automatically replicates your data across multiple Availability Zones (AZs) for high availability. It automatically monitors the underlying cache nodes and replaces them in case of failure, and it offers a 99.99% availability SLA for every cache.
Automatic software upgrades: ElastiCache Serverless automatically upgrades your cache to the latest minor and patch software version without any availability impact on your application. When a new Memcached major version is available, ElastiCache will send you a notification.
Security: ElastiCache Serverless always encrypts data in transit and at rest. You can use a service-managed key or your own customer managed key (CMK) to encrypt data at rest.
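Because ElastiCache Serverless exposes a single endpoint and always encrypts traffic in transit, clients must connect over TLS. The following is a minimal sketch, assuming a hypothetical serverless endpoint name and the pymemcache client (one of several Memcached clients with TLS support).

    import ssl
    from pymemcache.client.base import Client

    # Placeholder: substitute your cache's serverless endpoint.
    # ElastiCache Serverless requires encryption in transit, hence the TLS context.
    ENDPOINT = ("my-cache-xxxxxx.serverless.use1.cache.amazonaws.com", 11211)

    client = Client(ENDPOINT, tls_context=ssl.create_default_context(),
                    connect_timeout=3, timeout=3)

    # Basic set/get round trip against the single serverless endpoint.
    client.set("greeting", "hello", expire=300)
    print(client.get("greeting"))  # b'hello'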
The following characteristics apply to self-designed (node-based) ElastiCache for Memcached clusters:
Simplest Model: Easy to set up and use, making it an excellent choice for basic caching needs.
Scalability:
Horizontal Scaling: Scales out/in by adding or removing nodes.
Vertical Scaling: Scales up/down by changing the node family/type.
Multi-Threaded: Supports multiple cores/threads for handling high traffic.
Ideal Use Cases:
Caching database contents.
Caching dynamically generated web page data.
Storing transient session data.
High-frequency counters for admission control in high-volume web applications.
Node Configuration:
Up to 100 nodes per region.
1-20 nodes per cluster (soft limits).
Integration and Management:
Integrates with SNS for node failure/recovery notifications.
Supports auto discovery of nodes added to or removed from the cluster (see the discovery sketch after this list).
Limitations:
No multi-AZ failover or replication support.
No snapshot support.
Each node holds a partition of the data; nodes can be placed in different AZs, but the data is not replicated between them.
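To make the auto discovery point above concrete, the sketch below shows the handshake an ElastiCache-aware Memcached client performs: it connects to the cluster's configuration endpoint and issues the config get cluster command (Memcached 1.4.14 and later) to obtain the current node list. The endpoint name is a placeholder and the response parsing is deliberately simplified.

    import socket

    # Placeholder configuration endpoint of a self-designed Memcached cluster.
    CONFIG_ENDPOINT = ("my-cluster.xxxxxx.cfg.use1.cache.amazonaws.com", 11211)

    def discover_nodes():
        """Ask the configuration endpoint for the cluster's current node list."""
        with socket.create_connection(CONFIG_ENDPOINT, timeout=3) as sock:
            sock.sendall(b"config get cluster\r\n")
            response = b""
            while b"END\r\n" not in response:
                response += sock.recv(4096)

        # Simplified parsing: the reply contains a config version line followed by
        # a space-separated node list in the form hostname|ip-address|port.
        lines = response.decode().splitlines()
        nodes = []
        for entry in lines[2].split():  # assumes the standard response layout
            host, ip, port = entry.split("|")
            nodes.append((host or ip, int(port)))
        return nodes

    print(discover_nodes())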
Redis is an open-source, in-memory key-value store that supports more complex data structures and offers persistence, making it suitable for both caching and as a primary data store.
Key Features:
Complex Data Structures: Supports sorted sets, lists, hashes, and more (see the sketch after this feature list).
Persistence: Data can be persistent, enabling it to be used as a primary data store.
Replication and High Availability:
Supports primary/replica (formerly master/slave) replication.
Multi-AZ deployment for cross-AZ redundancy.
Automatic failover and backup/restore.
Scalability:
Sharding: Scales by adding shards (node groups) rather than individual nodes.
Each shard can have a primary node and 0-5 read replicas.
Snapshots and Backups:
Supports automatic and manual snapshots.
Backups include cluster data and metadata.
Automated backups are enabled by default.
Cluster Modes:
Clustering Mode Disabled:
One shard with one primary node and up to 5 read replicas.
Replicas can be distributed over multiple AZs.
Clustering Mode Enabled:
Up to 15 shards, each with one primary node and up to 5 read replicas.
Suitable for large-scale, distributed caching solutions.
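As a rough illustration of the data structures and cluster modes listed above, the sketch below uses the redis-py cluster client against a hypothetical cluster-mode-enabled configuration endpoint (with cluster mode disabled you would use redis.Redis against the primary endpoint instead). Endpoint and key names are placeholders, and in-transit encryption is assumed to be enabled.

    from redis.cluster import RedisCluster

    # Placeholder configuration endpoint of a cluster-mode-enabled replication group.
    rc = RedisCluster(
        host="my-redis.xxxxxx.clustercfg.use1.cache.amazonaws.com",
        port=6379,
        ssl=True,                # assumes in-transit encryption is enabled
        decode_responses=True,
    )

    # Sorted set as a leaderboard: a single key lives on one shard; different
    # keys are distributed across the cluster's shards.
    rc.zadd("leaderboard", {"alice": 120, "bob": 95})
    print(rc.zrange("leaderboard", 0, -1, desc=True, withscores=True))

    # Hash caching a database row as a field/value map, with a TTL.
    rc.hset("user:42", mapping={"name": "Alice", "plan": "pro"})
    rc.expire("user:42", 300)
    print(rc.hgetall("user:42"))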
Some applications, such as media catalog updates, require high-frequency reads and consistent throughput. For such applications, customers often complement S3 with an in-memory cache, such as Amazon ElastiCache for Redis, to reduce S3 retrieval costs and improve performance.
ElastiCache for Redis is a fully managed, in-memory data store that provides sub-millisecond latency performance with high throughput. ElastiCache for Redis complements S3 in the following ways:
Redis stores data in memory, so it provides sub-millisecond latency and supports very high request rates.
It supports key/value based operations that map well to S3 operations (for example, GET/SET => GET/PUT), making it easy to write code for both S3 and ElastiCache.
It can be implemented as an application side cache. This allows you to use S3 as your persistent store and benefit from its durability, availability, and low cost. Your applications decide what objects to cache, when to cache them, and how to cache them.
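A minimal sketch of this application-side (cache-aside) pattern follows, using boto3 for S3 and redis-py for ElastiCache. The bucket name, cache endpoint, and TTL are hypothetical; the application decides which objects to cache and for how long.

    import boto3
    import redis

    s3 = boto3.client("s3")

    # Placeholder ElastiCache for Redis endpoint; ssl=True assumes in-transit encryption.
    cache = redis.Redis(host="my-redis.xxxxxx.use1.cache.amazonaws.com",
                        port=6379, ssl=True)

    BUCKET = "my-media-catalog-bucket"   # hypothetical bucket name
    TTL_SECONDS = 300                    # how long a cached object stays valid

    def get_object(key: str) -> bytes:
        """Cache-aside read: try Redis first, fall back to S3 and populate the cache."""
        cached = cache.get(key)
        if cached is not None:
            return cached
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        cache.set(key, body, ex=TTL_SECONDS)   # Redis SET mirrors the S3 GET/PUT pair
        return body

    def put_object(key: str, body: bytes) -> None:
        """Persist to S3 (the durable store), then refresh the cached copy."""
        s3.put_object(Bucket=BUCKET, Key=key, Body=body)
        cache.set(key, body, ex=TTL_SECONDS)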
When deciding whether to use ElastiCache for Redis or Amazon MemoryDB for Redis, consider the following comparison:
ElastiCache for Redis is a service that is commonly used to cache data from other databases and data stores using Redis. You should consider ElastiCache for Redis for caching workloads where you want to accelerate data access with your existing primary database or data store (microsecond read and write performance). You should also consider ElastiCache for Redis for use cases where you want to use the Redis data structures and APIs to access data stored in a primary database or data store.
Amazon MemoryDB for Redis is a durable, in-memory database for workloads that require an ultra-fast primary database. You should consider using MemoryDB if your workload requires a durable database that provides ultra-fast performance (microsecond reads and single-digit millisecond write latency). MemoryDB may also be a good fit if you want to build an application using Redis data structures and APIs on top of a primary, durable database. Finally, you should consider MemoryDB to simplify your application architecture and lower costs by replacing the combination of a separate database and cache with a single service that provides both durability and performance.
Authenticating with the Redis AUTH command
Redis authentication tokens or passwords enable Redis to require a password before allowing clients to run commands, thereby improving data security. Redis AUTH is available for self-designed clusters only.
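As an illustration, a client presents the AUTH token when it connects; with redis-py this is the password parameter. In ElastiCache, an AUTH token can only be used on clusters with in-transit encryption enabled, hence ssl=True. The endpoint and token below are placeholders.

    import redis

    # Placeholder endpoint and AUTH token for a self-designed cluster with
    # in-transit encryption enabled (required when an AUTH token is set).
    client = redis.Redis(
        host="my-auth-redis.xxxxxx.use1.cache.amazonaws.com",
        port=6379,
        password="my-strong-auth-token-1234567890",  # the Redis AUTH token
        ssl=True,
    )

    client.ping()  # raises redis.exceptions.AuthenticationError if the token is wrong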