# Redis CLI Cheatsheet
Redis is an in-memory data structure store used as a database, cache, and message broker. This reference progresses from basic key operations to advanced patterns used in production systems.
## Connecting to Redis

```
# Connect to local instance (default: 127.0.0.1:6379)
redis-cli

# Connect to a remote host with authentication
redis-cli -h <host> -p <port> -a <password>

# Test the connection
PING
# Response: PONG

# Select a database (0–15, default is 0)
SELECT 1

# Show server info and stats
INFO
INFO memory
INFO replication
```

## Beginner

### Key Management

Every value in Redis is stored under a key. Keys are binary-safe strings with a recommended naming convention of object:id:field (e.g., user:42:email).
| Command | Description |
|---|---|
SET key value | Set the string value of a key |
GET key | Get the value of a key |
DEL key [key ...] | Delete one or more keys |
EXISTS key [key ...] | Returns the count of existing keys |
RENAME key newkey | Rename a key |
TYPE key | Returns the data type of a key |
KEYS pattern | Find all keys matching a pattern (avoid in production) |
SCAN cursor [MATCH pattern] [COUNT n] | Iterate keys safely without blocking |
RANDOMKEY | Return a random key from the keyspace |
FLUSHDB | Delete all keys in the current database |
FLUSHALL | Delete all keys in all databases |
Warning: `KEYS *` scans the entire keyspace and blocks the server. Use `SCAN` in production.
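As a sketch of safe iteration, SCAN is called repeatedly, feeding each returned cursor into the next call until the cursor comes back as 0 (the cursor value and key names below are illustrative):

```
# First call starts at cursor 0
SCAN 0 MATCH user:* COUNT 100
# => 1) "1152"               (cursor for the next call)
#    2) 1) "user:42:email"   (a batch of matching keys)
SCAN 1152 MATCH user:* COUNT 100
# ...repeat until the returned cursor is "0"

# redis-cli can drive the cursor loop for you
redis-cli --scan --pattern 'user:*'
```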
### Strings

The simplest type. A key maps to a single string value (up to 512 MB).
| Command | Description |
|---|---|
SET key value | Set a value |
GET key | Get a value |
MSET k1 v1 k2 v2 | Set multiple keys at once |
MGET k1 k2 | Get multiple values at once |
APPEND key value | Append to an existing string |
STRLEN key | Get the length of the value |
INCR key | Increment integer value by 1 |
INCRBY key n | Increment by a specific integer |
DECR key | Decrement integer value by 1 |
DECRBY key n | Decrement by a specific integer |
GETSET key value | Set a new value and return the old one |
SETNX key value | Set only if key does not exist |
```
SET counter 0
INCR counter     # => 1
INCRBY counter 5 # => 6
```

### Key Expiry

Expiry controls how long a key lives in memory. Expired keys are removed lazily and via a background sweep.
| Command | Description |
|---|---|
EXPIRE key seconds | Set expiry in seconds |
PEXPIRE key milliseconds | Set expiry in milliseconds |
EXPIREAT key timestamp | Set expiry as a Unix timestamp |
TTL key | Get remaining time to live (seconds); -1 = no expiry, -2 = does not exist |
PTTL key | Remaining TTL in milliseconds |
PERSIST key | Remove the expiry from a key |
```
SET session:abc "token"
EXPIRE session:abc 3600   # expires in 1 hour
TTL session:abc           # => 3598 (or similar)
```

You can also combine SET with expiry in one command:

```
SET key value EX 60            # expire in 60 seconds
SET key value PX 5000          # expire in 5000 milliseconds
SET key value EXAT 1800000000  # expire at Unix timestamp
SET key value NX EX 60         # set only if not exists, then expire
```

## Intermediate

### Lists

Ordered sequences of strings, implemented internally as a quicklist (a linked list of packed nodes). Use for queues, stacks, and activity feeds.
| Command | Description |
|---|---|
LPUSH key val [val ...] | Prepend one or more values |
RPUSH key val [val ...] | Append one or more values |
LPOP key [count] | Remove and return from the left |
RPOP key [count] | Remove and return from the right |
LLEN key | Get the length of the list |
LRANGE key start stop | Get a range of elements (0-indexed, -1 = last) |
LINDEX key index | Get element at a specific index |
LSET key index value | Set a specific index to a value |
LREM key count value | Remove occurrences of a value |
LTRIM key start stop | Trim a list to a range |
LINSERT key BEFORE\|AFTER pivot value | Insert before or after a pivot |
BLPOP key timeout | Blocking pop from the left |
BRPOP key timeout | Blocking pop from the right |
```
# Queue (FIFO): push right, pop left
RPUSH jobs "task1" "task2"
BLPOP jobs 0   # blocks until an item is available

# Stack (LIFO): push and pop from the same end
LPUSH stack "a" "b"
LPOP stack     # => "b"
```

### Hashes

A map of field-value pairs stored under a single key. Ideal for representing objects like user profiles.
| Command | Description |
|---|---|
HSET key field value [field value ...] | Set one or more fields |
HGET key field | Get a single field |
HMGET key f1 f2 | Get multiple fields |
HGETALL key | Get all fields and values |
HDEL key field [field ...] | Delete fields |
HEXISTS key field | Check if a field exists |
HKEYS key | Get all field names |
HVALS key | Get all values |
HLEN key | Number of fields in the hash |
HINCRBY key field n | Increment a numeric field |
HSCAN key cursor [MATCH p] [COUNT n] | Iterate fields safely |
```
HSET user:1 name "Alice" email "alice@example.com" age 30
HGET user:1 name     # => "Alice"
HGETALL user:1
HINCRBY user:1 age 1 # age becomes 31
```

### Sets

Unordered collections of unique strings. Useful for tracking membership, tags, and relationships.
| Command | Description |
|---|---|
SADD key member [member ...] | Add members to a set |
SREM key member [member ...] | Remove members |
SMEMBERS key | Return all members |
SISMEMBER key member | Test if a value is in the set |
SMISMEMBER key m1 m2 | Test multiple members at once |
SCARD key | Count of members |
SPOP key [count] | Remove and return random members |
SRANDMEMBER key [count] | Return random members without removing |
SUNION k1 k2 | Union of multiple sets |
SINTER k1 k2 | Intersection of multiple sets |
SDIFF k1 k2 | Difference: members in k1 not in k2 |
SUNIONSTORE dest k1 k2 | Store union result in a new key |
SINTERSTORE dest k1 k2 | Store intersection result |
```
SADD page:visitors "user:1" "user:2" "user:3"
SADD vip:users "user:2" "user:4"
SINTER page:visitors vip:users   # => "user:2"
```

### Sorted Sets

Like sets, but each member has an associated floating-point score. Members are kept sorted by score. Use for leaderboards, priority queues, and range queries.
| Command | Description |
|---|---|
ZADD key score member [score member ...] | Add members with scores |
ZSCORE key member | Get the score of a member |
ZRANK key member | 0-based rank (ascending) |
ZREVRANK key member | Rank in descending order |
ZRANGE key start stop [WITHSCORES] | Range by rank (ascending) |
ZREVRANGE key start stop [WITHSCORES] | Range by rank (descending) |
ZRANGEBYSCORE key min max | Range by score |
ZRANGEBYLEX key min max | Range by lexicographic order (equal scores) |
ZCARD key | Number of members |
ZCOUNT key min max | Count members within a score range |
ZINCRBY key increment member | Increment a member’s score |
ZREM key member [member ...] | Remove members |
ZREMRANGEBYSCORE key min max | Remove members within a score range |
ZREMRANGEBYRANK key start stop | Remove members within a rank range |
ZUNIONSTORE dest n k1 k2 | Union of sorted sets |
ZINTERSTORE dest n k1 k2 | Intersection of sorted sets |
```
ZADD leaderboard 1500 "alice" 2200 "bob" 1800 "carol"
ZREVRANGE leaderboard 0 2 WITHSCORES  # top 3 players
ZINCRBY leaderboard 100 "alice"       # alice's score becomes 1600
ZRANGEBYSCORE leaderboard 1500 2000   # players in score range
```

### Pub/Sub

A publish/subscribe messaging pattern. Publishers send messages to channels; subscribers receive them. Note that Pub/Sub is fire-and-forget — messages are not persisted.
| Command | Description |
|---|---|
SUBSCRIBE channel [channel ...] | Subscribe to one or more channels |
UNSUBSCRIBE [channel ...] | Unsubscribe from channels |
PUBLISH channel message | Send a message to a channel |
PSUBSCRIBE pattern | Subscribe using a glob pattern (e.g., news.*) |
PUNSUBSCRIBE [pattern ...] | Unsubscribe from patterns |
PUBSUB CHANNELS [pattern] | List active channels |
PUBSUB NUMSUB [channel ...] | Subscriber count per channel |
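Pattern subscriptions deliver messages from every channel matching a glob; a small sketch (channel names are illustrative):

```
# Terminal 1: subscribe to all channels matching a pattern
PSUBSCRIBE news.*

# Terminal 2: both messages reach the pattern subscriber
PUBLISH news.sports "kickoff"
PUBLISH news.weather "rain expected"
```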
```
# Terminal 1 — subscriber
SUBSCRIBE notifications

# Terminal 2 — publisher
PUBLISH notifications "Deploy completed successfully"
```

### Persistence

Redis offers two persistence mechanisms that can be used independently or together.
| Command | Description |
|---|---|
SAVE | Synchronous RDB snapshot (blocks the server) |
BGSAVE | Asynchronous RDB snapshot in a forked process |
BGREWRITEAOF | Rewrite the AOF file in the background |
LASTSAVE | Unix timestamp of the last successful BGSAVE |
DEBUG RELOAD | Reload the RDB file for testing |
RDB (Snapshotting): Periodic point-in-time snapshots. Fast restarts, smaller files, but potential data loss between snapshots.
AOF (Append-Only File): Logs every write operation. More durable, but slower and larger. Can be configured to fsync every second, every command, or never.
Configure in redis.conf:
```
# RDB
save 900 1      # save if 1 key changed in 900 seconds
save 60 10000   # save if 10000 keys changed in 60 seconds

# AOF
appendonly yes
appendfsync everysec
```

## Advanced

### Transactions

Redis transactions allow a group of commands to execute atomically. No other client's commands are interleaved between MULTI and EXEC.
| Command | Description |
|---|---|
MULTI | Begin a transaction block |
EXEC | Execute all queued commands |
DISCARD | Abort the transaction |
WATCH key [key ...] | Watch keys for optimistic locking |
UNWATCH | Cancel all WATCH commands |
```
WATCH account:balance
MULTI
DECRBY account:balance 100
INCRBY account:target 100
EXEC
# If account:balance was modified by another client between WATCH and EXEC,
# EXEC returns nil and the transaction is aborted.
```

Note: Redis transactions do not roll back on command errors within the block. A command that hits the wrong data type mid-transaction fails, but the other queued commands still execute. Use WATCH for optimistic concurrency control.
### Scripting with Lua

Lua scripts run atomically on the server. Prefer scripts over transactions for complex conditional logic.
| Command | Description |
|---|---|
EVAL script numkeys key [key ...] arg [arg ...] | Execute a Lua script |
EVALSHA sha1 numkeys key [key ...] arg [arg ...] | Execute a cached script by SHA |
SCRIPT LOAD script | Load a script into cache, returns SHA |
SCRIPT EXISTS sha1 [sha1 ...] | Check if scripts are cached |
SCRIPT FLUSH | Remove all cached scripts |
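To avoid resending the script body on every call, a script can be loaded once and then invoked by its SHA; a quick sketch (the SHA shown is illustrative, use the value SCRIPT LOAD actually returns):

```
SCRIPT LOAD "return redis.call('GET', KEYS[1])"
# => "a5260dd66ce02462c5b5e49a19df58c4f371f12a"

EVALSHA a5260dd66ce02462c5b5e49a19df58c4f371f12a 1 mykey
```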
```
# Atomic get-and-increment pattern
EVAL "local val = redis.call('GET', KEYS[1]) if val then return redis.call('INCR', KEYS[1]) else return redis.call('SET', KEYS[1], 1) end" 1 mycounter
```

### Streams

Redis Streams (introduced in Redis 5.0) provide a persistent, append-only log structure for event sourcing and message queuing with consumer groups.
| Command | Description |
|---|---|
XADD stream * field value [...] | Append an entry (auto-generate ID) |
XLEN stream | Number of entries |
XRANGE stream - + | Read all entries |
XREAD COUNT n STREAMS stream id | Read new entries since an ID |
XREVRANGE stream + - | Read in reverse order |
XGROUP CREATE stream group $ MKSTREAM | Create a consumer group |
XREADGROUP GROUP g consumer STREAMS s > | Read undelivered messages |
XACK stream group id | Acknowledge a message |
XPENDING stream group | List unacknowledged messages |
XTRIM stream MAXLEN n | Trim stream to a maximum length |
XDEL stream id | Delete a specific entry |
```
# Producer
XADD events * type "order.placed" order_id "42"

# Consumer group setup
XGROUP CREATE events processors $ MKSTREAM

# Consumer reads and acknowledges
XREADGROUP GROUP processors worker1 COUNT 10 STREAMS events >
XACK events processors <entry-id>
```

### HyperLogLog

A probabilistic data structure for counting unique elements with approximately 0.81% standard error, using a fixed 12 KB of memory regardless of set size.
| Command | Description |
|---|---|
PFADD key element [element ...] | Add elements to the HyperLogLog |
PFCOUNT key [key ...] | Approximate count of unique elements |
PFMERGE dest src [src ...] | Merge multiple HyperLogLogs |
```
PFADD unique:visitors "user:1" "user:2" "user:1"
PFCOUNT unique:visitors   # => 2 (approximate)
```

### Geospatial

Store and query geographic coordinates. Internally uses Sorted Sets with encoded scores.
| Command | Description |
|---|---|
GEOADD key lng lat member [...] | Add locations |
GEOPOS key member [member ...] | Get coordinates |
GEODIST key m1 m2 [unit] | Distance between two members (m, km, mi, ft) |
GEORADIUS key lng lat radius unit | Members within radius (deprecated in 6.2) |
GEOSEARCH key FROMMEMBER m BYRADIUS r unit | Search from a member (6.2+) |
GEOHASH key member [member ...] | Geohash encoding of coordinates |
```
GEOADD locations 13.361389 38.115556 "Palermo"
GEOADD locations 15.087269 37.502669 "Catania"
GEODIST locations Palermo Catania km   # => 166.27
GEOSEARCH locations FROMMEMBER Palermo BYRADIUS 200 km ASC
```

### Server Administration

```
# Monitor all commands in real time (use only in development)
MONITOR

# Get configuration values
CONFIG GET maxmemory
CONFIG GET save

# Set configuration at runtime
CONFIG SET maxmemory 2gb
CONFIG SET maxmemory-policy allkeys-lru

# Rewrite redis.conf with current runtime config
CONFIG REWRITE

# Slow query log
SLOWLOG GET 10
SLOWLOG RESET
SLOWLOG LEN

# Client management
CLIENT LIST
CLIENT SETNAME my-service
CLIENT KILL ID <id>

# Debug and diagnostics
DEBUG SLEEP 0
OBJECT ENCODING key   # internal encoding (ziplist, hashtable, etc.)
OBJECT IDLETIME key   # seconds since key was last accessed
OBJECT FREQ key       # access frequency (LFU policy)
OBJECT REFCOUNT key
MEMORY USAGE key      # memory in bytes for a key
MEMORY DOCTOR         # recommendations for memory issues
```

### Eviction Policies

When Redis reaches maxmemory, it uses the eviction policy to decide which keys to remove.
| Policy | Behavior |
|---|---|
noeviction | Return error when memory is full (default) |
allkeys-lru | Evict the least recently used key from all keys |
volatile-lru | Evict LRU from keys with an expiry set |
allkeys-lfu | Evict the least frequently used key |
volatile-lfu | Evict LFU from keys with an expiry set |
allkeys-random | Evict a random key from all keys |
volatile-random | Evict a random key with an expiry |
volatile-ttl | Evict the key with the shortest TTL |
```
CONFIG SET maxmemory 1gb
CONFIG SET maxmemory-policy allkeys-lru
```

### Replication

```
# On a replica, point to the primary
REPLICAOF <host> <port>

# Remove replication (promote to primary)
REPLICAOF NO ONE

# Check replication status
INFO replication
```

### Cluster

```
# Node info
CLUSTER INFO
CLUSTER NODES
CLUSTER MYID

# Slot management
CLUSTER SLOTS
CLUSTER KEYSLOT key         # which slot a key belongs to
CLUSTER COUNTKEYSINSLOT n   # number of keys in a slot
```
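Cluster assigns every key to one of 16384 slots as CRC16(key) mod 16384, hashing only the substring inside a non-empty {hash tag} when one is present, so related keys can be forced into the same slot. A minimal Python sketch of the calculation behind CLUSTER KEYSLOT (Redis uses the XModem/CCITT CRC16 variant, polynomial 0x1021, zero initial value):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem), the variant Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Return the cluster hash slot for a key, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # only a non-empty tag counts
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Keys sharing a hash tag land in the same slot,
# so they can appear together in one MULTI/EXEC on a cluster.
print(key_slot("{user:1000}.following") == key_slot("{user:1000}.followers"))  # True
```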
```
# Manual failover
CLUSTER FAILOVER
```

## Client Libraries
Section titled “Client Libraries”=== “Python (redis-py)” ```python import redis
r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)
# Stringr.set("key", "value", ex=60)val = r.get("key")
# Hashr.hset("user:1", mapping={"name": "Alice", "age": "30"})user = r.hgetall("user:1")
# Pipeline (batches commands in one round trip)pipe = r.pipeline()pipe.incr("counter")pipe.expire("counter", 3600)pipe.execute()
# Connection pool (recommended for production)pool = redis.ConnectionPool(host="localhost", port=6379, max_connections=20)r = redis.Redis(connection_pool=pool)```=== “Node.js (ioredis)” ```javascript import Redis from “ioredis”;
const redis = new Redis({ host: "localhost", port: 6379 });
// Stringawait redis.set("key", "value", "EX", 60);const val = await redis.get("key");
// Hashawait redis.hset("user:1", "name", "Alice", "age", "30");const user = await redis.hgetall("user:1");
// Pipelineconst pipeline = redis.pipeline();pipeline.incr("counter");pipeline.expire("counter", 3600);await pipeline.exec();```=== “Go (go-redis)” ```go import “github.com/redis/go-redis/v9”
rdb := redis.NewClient(&redis.Options{ Addr: "localhost:6379",})
ctx := context.Background()
// Stringerr := rdb.Set(ctx, "key", "value", time.Hour).Err()val, err := rdb.Get(ctx, "key").Result()
// Pipelinepipe := rdb.Pipeline()pipe.Incr(ctx, "counter")pipe.Expire(ctx, "counter", time.Hour)_, err = pipe.Exec(ctx)```Common Patterns
Section titled “Common Patterns”Distributed Lock (Redlock pattern)
```
# Acquire lock: SET with NX (not exists) and PX (millisecond expiry)
SET lock:resource <unique-token> NX PX 30000

# Release lock (Lua script ensures atomicity)
EVAL "if redis.call('GET', KEYS[1]) == ARGV[1] then return redis.call('DEL', KEYS[1]) else return 0 end" 1 lock:resource <unique-token>
```

### Rate Limiting
```
# Fixed window counter
INCR rate:user:42:2024010112
EXPIRE rate:user:42:2024010112 3600
```
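A fixed window can admit a burst at the window boundary. One common refinement, sketched here with a sorted set (key names and the 60-second limit are illustrative), scores each request by timestamp and counts only recent entries:

```
# <now> is the current time in milliseconds; use a unique member per request
ZADD rate:user:42 <now> "<now>:<request-id>"
ZREMRANGEBYSCORE rate:user:42 0 <now-minus-60000>   # drop entries older than 60 s
ZCARD rate:user:42                                  # compare against the limit
EXPIRE rate:user:42 60                              # let idle keys expire
```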
### Caching with Automatic Expiry

```
SET cache:product:99 "<json>" EX 300      # cache for 5 minutes
SET cache:product:99 "<json>" EX 300 XX   # only update if key exists
```

### Session Store

```
HSET session:<token> user_id 42 created_at 1700000000
EXPIRE session:<token> 86400   # 24-hour session
```