Redis CLI commands and data structure reference.

Redis is an in-memory data structure store used as a database, cache, and message broker. This reference progresses from basic key operations to advanced patterns used in production systems.


```shell
# Connect to the local instance (default: 127.0.0.1:6379)
redis-cli

# Connect to a remote host with authentication
redis-cli -h <host> -p <port> -a <password>

# Test the connection
PING
# Response: PONG

# Select a database (0-15, default is 0)
SELECT 1

# Show server info and stats
INFO
INFO memory
INFO replication
```

Every value in Redis is stored under a key. Keys are binary-safe strings with a recommended naming convention of object:id:field (e.g., user:42:email).
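
The convention can be captured with a tiny helper (a sketch; the `redis_key` name is ours, not part of any Redis client):

```python
def redis_key(*parts):
    """Join key segments with ':' per the object:id:field convention."""
    return ":".join(str(p) for p in parts)

redis_key("user", 42, "email")  # => "user:42:email"
```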

| Command | Description |
| --- | --- |
| SET key value | Set the string value of a key |
| GET key | Get the value of a key |
| DEL key [key ...] | Delete one or more keys |
| EXISTS key [key ...] | Return the count of existing keys |
| RENAME key newkey | Rename a key |
| TYPE key | Return the data type of a key |
| KEYS pattern | Find all keys matching a pattern (avoid in production) |
| SCAN cursor [MATCH pattern] [COUNT n] | Iterate keys safely without blocking |
| RANDOMKEY | Return a random key from the keyspace |
| FLUSHDB | Delete all keys in the current database |
| FLUSHALL | Delete all keys in all databases |

Warning: KEYS * scans the entire keyspace and blocks the server. Use SCAN in production.
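
The cursor contract behind SCAN can be sketched in-process (a pure-Python illustration of the iteration pattern, not the redis-py API; the real server uses a reverse-binary cursor over its hash table):

```python
def scan(keys, cursor, count=2):
    """Mimic SCAN's contract: return (next_cursor, batch).

    A returned cursor of 0 means the iteration is complete. Each call
    touches only a small batch, which is why SCAN never blocks the
    server for the whole keyspace the way KEYS does.
    """
    batch = keys[cursor:cursor + count]
    next_cursor = cursor + count
    if next_cursor >= len(keys):
        next_cursor = 0  # iteration finished
    return next_cursor, batch

keys = ["user:1", "user:2", "user:3", "user:4", "user:5"]
cursor, seen = 0, []
while True:
    cursor, batch = scan(keys, cursor)
    seen.extend(batch)
    if cursor == 0:
        break
```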


The simplest type. A key maps to a single string value (up to 512 MB).

| Command | Description |
| --- | --- |
| SET key value | Set a value |
| GET key | Get a value |
| MSET k1 v1 k2 v2 | Set multiple keys at once |
| MGET k1 k2 | Get multiple values at once |
| APPEND key value | Append to an existing string |
| STRLEN key | Get the length of the value |
| INCR key | Increment an integer value by 1 |
| INCRBY key n | Increment by a specific integer |
| DECR key | Decrement an integer value by 1 |
| DECRBY key n | Decrement by a specific integer |
| GETSET key value | Set a new value and return the old one |
| SETNX key value | Set only if the key does not exist |

```shell
SET counter 0
INCR counter     # => 1
INCRBY counter 5 # => 6
```

Expiry controls how long a key lives in memory. Expired keys are removed lazily and via a background sweep.

| Command | Description |
| --- | --- |
| EXPIRE key seconds | Set expiry in seconds |
| PEXPIRE key milliseconds | Set expiry in milliseconds |
| EXPIREAT key timestamp | Set expiry as a Unix timestamp |
| TTL key | Remaining time to live in seconds; -1 = no expiry, -2 = key does not exist |
| PTTL key | Remaining TTL in milliseconds |
| PERSIST key | Remove the expiry from a key |

```shell
SET session:abc "token"
EXPIRE session:abc 3600 # expires in 1 hour
TTL session:abc         # => 3598 (or similar)
```

You can also combine SET with expiry in one command:

```shell
SET key value EX 60           # expire in 60 seconds
SET key value PX 5000         # expire in 5000 milliseconds
SET key value EXAT 1800000000 # expire at a Unix timestamp
SET key value NX EX 60        # set only if not exists, then expire
```

Ordered sequences of strings, conceptually a doubly linked list (implemented internally as a quicklist in modern Redis). Use for queues, stacks, and activity feeds.

| Command | Description |
| --- | --- |
| LPUSH key val [val ...] | Prepend one or more values |
| RPUSH key val [val ...] | Append one or more values |
| LPOP key [count] | Remove and return from the left |
| RPOP key [count] | Remove and return from the right |
| LLEN key | Get the length of the list |
| LRANGE key start stop | Get a range of elements (0-indexed, -1 = last) |
| LINDEX key index | Get the element at a specific index |
| LSET key index value | Set the element at a specific index |
| LREM key count value | Remove occurrences of a value |
| LTRIM key start stop | Trim the list to a range |
| LINSERT key BEFORE\|AFTER pivot value | Insert before or after a pivot |
| BLPOP key timeout | Blocking pop from the left |
| BRPOP key timeout | Blocking pop from the right |

```shell
# Queue (FIFO): push right, pop left
RPUSH jobs "task1" "task2"
BLPOP jobs 0 # blocks until an item is available

# Stack (LIFO): push and pop from the same end
LPUSH stack "a" "b"
LPOP stack # => "b"
```

A map of field-value pairs stored under a single key. Ideal for representing objects like user profiles.

| Command | Description |
| --- | --- |
| HSET key field value [field value ...] | Set one or more fields |
| HGET key field | Get a single field |
| HMGET key f1 f2 | Get multiple fields |
| HGETALL key | Get all fields and values |
| HDEL key field [field ...] | Delete fields |
| HEXISTS key field | Check if a field exists |
| HKEYS key | Get all field names |
| HVALS key | Get all values |
| HLEN key | Number of fields in the hash |
| HINCRBY key field n | Increment a numeric field |
| HSCAN key cursor [MATCH p] [COUNT n] | Iterate fields safely |

```shell
HSET user:1 name "Alice" email "alice@example.com" age 30
HGET user:1 name     # => "Alice"
HGETALL user:1
HINCRBY user:1 age 1 # age becomes 31
```

Unordered collections of unique strings. Useful for tracking membership, tags, and relationships.

| Command | Description |
| --- | --- |
| SADD key member [member ...] | Add members to a set |
| SREM key member [member ...] | Remove members |
| SMEMBERS key | Return all members |
| SISMEMBER key member | Test whether a value is in the set |
| SMISMEMBER key m1 m2 | Test multiple members at once |
| SCARD key | Count of members |
| SPOP key [count] | Remove and return random members |
| SRANDMEMBER key [count] | Return random members without removing them |
| SUNION k1 k2 | Union of multiple sets |
| SINTER k1 k2 | Intersection of multiple sets |
| SDIFF k1 k2 | Difference: members of k1 not in k2 |
| SUNIONSTORE dest k1 k2 | Store the union result in a new key |
| SINTERSTORE dest k1 k2 | Store the intersection result |

```shell
SADD page:visitors "user:1" "user:2" "user:3"
SADD vip:users "user:2" "user:4"
SINTER page:visitors vip:users # => "user:2"
```

Like sets, but each member has an associated floating-point score. Members are kept sorted by score. Use for leaderboards, priority queues, and range queries.

| Command | Description |
| --- | --- |
| ZADD key score member [score member ...] | Add members with scores |
| ZSCORE key member | Get the score of a member |
| ZRANK key member | 0-based rank (ascending) |
| ZREVRANK key member | Rank in descending order |
| ZRANGE key start stop [WITHSCORES] | Range by rank (ascending) |
| ZREVRANGE key start stop [WITHSCORES] | Range by rank (descending) |
| ZRANGEBYSCORE key min max | Range by score |
| ZRANGEBYLEX key min max | Range by lexicographic order (equal scores) |
| ZCARD key | Number of members |
| ZCOUNT key min max | Count members within a score range |
| ZINCRBY key increment member | Increment a member's score |
| ZREM key member [member ...] | Remove members |
| ZREMRANGEBYSCORE key min max | Remove members within a score range |
| ZREMRANGEBYRANK key start stop | Remove members within a rank range |
| ZUNIONSTORE dest n k1 k2 | Union of sorted sets |
| ZINTERSTORE dest n k1 k2 | Intersection of sorted sets |

```shell
ZADD leaderboard 1500 "alice" 2200 "bob" 1800 "carol"
ZREVRANGE leaderboard 0 2 WITHSCORES # top 3 players
ZINCRBY leaderboard 100 "alice"      # alice's score becomes 1600
ZRANGEBYSCORE leaderboard 1500 2000  # players in a score range
```

A publish/subscribe messaging pattern. Publishers send messages to channels; subscribers receive them. Note that Pub/Sub is fire-and-forget — messages are not persisted.

| Command | Description |
| --- | --- |
| SUBSCRIBE channel [channel ...] | Subscribe to one or more channels |
| UNSUBSCRIBE [channel ...] | Unsubscribe from channels |
| PUBLISH channel message | Send a message to a channel |
| PSUBSCRIBE pattern | Subscribe using a glob pattern (e.g., news.*) |
| PUNSUBSCRIBE [pattern ...] | Unsubscribe from patterns |
| PUBSUB CHANNELS [pattern] | List active channels |
| PUBSUB NUMSUB [channel ...] | Subscriber count per channel |

```shell
# Terminal 1: subscriber
SUBSCRIBE notifications

# Terminal 2: publisher
PUBLISH notifications "Deploy completed successfully"
```

Redis offers two persistence mechanisms that can be used independently or together.

| Command | Description |
| --- | --- |
| SAVE | Synchronous RDB snapshot (blocks the server) |
| BGSAVE | Asynchronous RDB snapshot in a forked child process |
| BGREWRITEAOF | Rewrite the AOF file in the background |
| LASTSAVE | Unix timestamp of the last successful save |
| DEBUG RELOAD | Reload the RDB file (for testing) |

RDB (Snapshotting): Periodic point-in-time snapshots. Fast restarts, smaller files, but potential data loss between snapshots.

AOF (Append-Only File): Logs every write operation. More durable, but slower and larger. Can be configured to fsync every second, every command, or never.

Configure in redis.conf:

```shell
# RDB
save 900 1     # snapshot if at least 1 key changed in 900 seconds
save 60 10000  # snapshot if at least 10000 keys changed in 60 seconds

# AOF
appendonly yes
appendfsync everysec
```

Redis transactions execute a group of commands as a single atomic unit: commands are queued after MULTI and run together at EXEC, with no other client's commands interleaved during execution.

| Command | Description |
| --- | --- |
| MULTI | Begin a transaction block |
| EXEC | Execute all queued commands |
| DISCARD | Abort the transaction |
| WATCH key [key ...] | Watch keys for optimistic locking |
| UNWATCH | Cancel all WATCH calls |

```shell
WATCH account:balance
MULTI
DECRBY account:balance 100
INCRBY account:target 100
EXEC
# If account:balance was modified by another client between WATCH and EXEC,
# EXEC returns nil and the transaction is aborted.
```

Note: Redis transactions do not roll back on runtime errors inside the block. A command that fails mid-transaction (e.g., against the wrong data type) leaves the effects of the commands that already ran in place. Use WATCH for optimistic concurrency control.
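
The WATCH pattern is essentially a compare-and-set retry loop. An in-process sketch of the idea (plain Python, no Redis client; the `transfer` helper and version counter are our stand-ins for WATCH/EXEC):

```python
store = {"account:balance": 500}
version = {"account:balance": 0}  # bumped on every write

def transfer(key, amount, max_retries=5):
    """Optimistic concurrency: re-read and retry if the watched key
    changed between the read and the commit, as WATCH/EXEC do."""
    for _ in range(max_retries):
        seen_version = version[key]   # WATCH the key
        balance = store[key]          # read
        new_balance = balance - amount
        # MULTI/EXEC: commit only if nobody wrote in between
        if version[key] == seen_version:
            store[key] = new_balance
            version[key] = seen_version + 1
            return True
        # the key was modified by someone else: retry from the top
    return False

transfer("account:balance", 100)
```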


Lua scripts run atomically on the server. Prefer scripts over transactions for complex conditional logic.

| Command | Description |
| --- | --- |
| EVAL script numkeys key [key ...] arg [arg ...] | Execute a Lua script |
| EVALSHA sha1 numkeys key [key ...] arg [arg ...] | Execute a cached script by SHA |
| SCRIPT LOAD script | Load a script into the cache; returns its SHA |
| SCRIPT EXISTS sha1 [sha1 ...] | Check whether scripts are cached |
| SCRIPT FLUSH | Remove all cached scripts |

```shell
# Atomic get-and-increment pattern
EVAL "
local val = redis.call('GET', KEYS[1])
if val then
  return redis.call('INCR', KEYS[1])
else
  return redis.call('SET', KEYS[1], 1)
end
" 1 mycounter
```

Redis Streams (introduced in Redis 5.0) provide a persistent, append-only log structure for event sourcing and message queuing with consumer groups.

| Command | Description |
| --- | --- |
| XADD stream * field value [...] | Append an entry (auto-generated ID) |
| XLEN stream | Number of entries |
| XRANGE stream - + | Read all entries |
| XREAD COUNT n STREAMS stream id | Read new entries after an ID |
| XREVRANGE stream + - | Read in reverse order |
| XGROUP CREATE stream group $ MKSTREAM | Create a consumer group |
| XREADGROUP GROUP g consumer STREAMS s > | Read undelivered messages |
| XACK stream group id | Acknowledge a message |
| XPENDING stream group | List unacknowledged messages |
| XTRIM stream MAXLEN n | Trim the stream to a maximum length |
| XDEL stream id | Delete a specific entry |

```shell
# Producer
XADD events * type "order.placed" order_id "42"

# Consumer group setup
XGROUP CREATE events processors $ MKSTREAM

# Consumer reads and acknowledges
XREADGROUP GROUP processors worker1 COUNT 10 STREAMS events >
XACK events processors <entry-id>
```

A probabilistic data structure for counting unique elements with approximately 0.81% standard error, using a fixed 12 KB of memory regardless of set size.

| Command | Description |
| --- | --- |
| PFADD key element [element ...] | Add elements to the HyperLogLog |
| PFCOUNT key [key ...] | Approximate count of unique elements |
| PFMERGE dest src [src ...] | Merge multiple HyperLogLogs |

```shell
PFADD unique:visitors "user:1" "user:2" "user:1"
PFCOUNT unique:visitors # => 2 (approximate)
```

Store and query geographic coordinates. Internally uses Sorted Sets with encoded scores.

| Command | Description |
| --- | --- |
| GEOADD key lng lat member [...] | Add locations |
| GEOPOS key member [member ...] | Get coordinates |
| GEODIST key m1 m2 [unit] | Distance between two members (m, km, mi, ft) |
| GEORADIUS key lng lat radius unit | Members within a radius (deprecated since 6.2) |
| GEOSEARCH key FROMMEMBER m BYRADIUS r unit | Search from a member (6.2+) |
| GEOHASH key member [member ...] | Geohash encoding of coordinates |

```shell
GEOADD locations 13.361389 38.115556 "Palermo"
GEOADD locations 15.087269 37.502669 "Catania"
GEODIST locations Palermo Catania km # => 166.27
GEOSEARCH locations FROMMEMBER Palermo BYRADIUS 200 km ASC
```

```shell
# Monitor all commands in real time (use only in development)
MONITOR

# Get configuration values
CONFIG GET maxmemory
CONFIG GET save

# Set configuration at runtime
CONFIG SET maxmemory 2gb
CONFIG SET maxmemory-policy allkeys-lru

# Rewrite redis.conf with the current runtime config
CONFIG REWRITE

# Slow query log
SLOWLOG GET 10
SLOWLOG RESET
SLOWLOG LEN

# Client management
CLIENT LIST
CLIENT SETNAME my-service
CLIENT KILL ID <id>

# Debug and diagnostics
DEBUG SLEEP 0
OBJECT ENCODING key # internal encoding (ziplist, hashtable, etc.)
OBJECT IDLETIME key # seconds since the key was last accessed
OBJECT FREQ key     # access frequency (requires an LFU policy)
OBJECT REFCOUNT key
MEMORY USAGE key    # memory in bytes used by a key
MEMORY DOCTOR       # recommendations for memory issues
```

When Redis reaches maxmemory, it uses the eviction policy to decide which keys to remove.

| Policy | Behavior |
| --- | --- |
| noeviction | Return an error when memory is full (default) |
| allkeys-lru | Evict the least recently used key from all keys |
| volatile-lru | Evict the LRU key among keys with an expiry set |
| allkeys-lfu | Evict the least frequently used key |
| volatile-lfu | Evict the LFU key among keys with an expiry set |
| allkeys-random | Evict a random key from all keys |
| volatile-random | Evict a random key among keys with an expiry |
| volatile-ttl | Evict the key with the shortest TTL |

```shell
CONFIG SET maxmemory 1gb
CONFIG SET maxmemory-policy allkeys-lru
```
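
The allkeys-lru behavior can be sketched with an ordered map (a pure-Python illustration of the eviction idea; note that Redis actually uses approximate LRU by sampling keys rather than keeping an exact recency order):

```python
from collections import OrderedDict

class LRUCache:
    """Toy allkeys-lru store: on overflow, drop the least recently used key."""

    def __init__(self, maxkeys):
        self.maxkeys = maxkeys
        self.data = OrderedDict()

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)     # refresh recency
        self.data[key] = value
        if len(self.data) > self.maxkeys:
            self.data.popitem(last=False)  # evict the LRU key

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # a read also refreshes recency
        return self.data[key]

cache = LRUCache(maxkeys=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")    # touch "a", so "b" is now least recently used
cache.set("c", 3) # over capacity: evicts "b"
```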

```shell
# On a replica, point to the primary
REPLICAOF <host> <port>

# Remove replication (promote to primary)
REPLICAOF NO ONE

# Check replication status
INFO replication
```

```shell
# Node info
CLUSTER INFO
CLUSTER NODES
CLUSTER MYID

# Slot management
CLUSTER SLOTS
CLUSTER KEYSLOT key       # which slot a key belongs to
CLUSTER COUNTKEYSINSLOT n # number of keys in a slot

# Manual failover
CLUSTER FAILOVER
```

=== "Python (redis-py)"

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)

# String
r.set("key", "value", ex=60)
val = r.get("key")

# Hash
r.hset("user:1", mapping={"name": "Alice", "age": "30"})
user = r.hgetall("user:1")

# Pipeline (batches commands in one round trip)
pipe = r.pipeline()
pipe.incr("counter")
pipe.expire("counter", 3600)
pipe.execute()

# Connection pool (recommended for production)
pool = redis.ConnectionPool(host="localhost", port=6379, max_connections=20)
r = redis.Redis(connection_pool=pool)
```

=== "Node.js (ioredis)"

```javascript
import Redis from "ioredis";

const redis = new Redis({ host: "localhost", port: 6379 });

// String
await redis.set("key", "value", "EX", 60);
const val = await redis.get("key");

// Hash
await redis.hset("user:1", "name", "Alice", "age", "30");
const user = await redis.hgetall("user:1");

// Pipeline
const pipeline = redis.pipeline();
pipeline.incr("counter");
pipeline.expire("counter", 3600);
await pipeline.exec();
```

=== "Go (go-redis)"

```go
import (
    "context"
    "time"

    "github.com/redis/go-redis/v9"
)

rdb := redis.NewClient(&redis.Options{
    Addr: "localhost:6379",
})
ctx := context.Background()

// String
err := rdb.Set(ctx, "key", "value", time.Hour).Err()
val, err := rdb.Get(ctx, "key").Result()

// Pipeline
pipe := rdb.Pipeline()
pipe.Incr(ctx, "counter")
pipe.Expire(ctx, "counter", time.Hour)
_, err = pipe.Exec(ctx)
```

Distributed Lock (Redlock pattern)

```shell
# Acquire the lock: SET with NX (only if not exists) and PX (millisecond expiry)
SET lock:resource <unique-token> NX PX 30000

# Release the lock (a Lua script makes the check-and-delete atomic)
EVAL "if redis.call('GET', KEYS[1]) == ARGV[1] then return redis.call('DEL', KEYS[1]) else return 0 end" 1 lock:resource <unique-token>
```
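
The release script's check-and-delete logic, sketched in plain Python (an in-process stand-in for GET/DEL; in Redis the Lua script is what makes the two steps a single atomic unit):

```python
import secrets

store = {}

def acquire(resource):
    """SET ... NX: succeed only if the lock key is absent."""
    if resource in store:
        return None  # someone else holds the lock
    token = secrets.token_hex(16)
    store[resource] = token
    return token

def release(resource, token):
    """Delete the lock only if we still own it. Deleting unconditionally
    could remove a lock that expired and was re-acquired by another client."""
    if store.get(resource) == token:
        del store[resource]
        return True
    return False

t = acquire("lock:resource")
denied = release("lock:resource", "wrong-token")  # refused: not the owner
ok = release("lock:resource", t)                  # succeeds
```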

Rate Limiting

```shell
# Fixed window counter: one counter per user per hour bucket
INCR rate:user:42:2024010112
EXPIRE rate:user:42:2024010112 3600
```
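
The fixed-window idea in plain Python (an in-process sketch of the INCR-per-bucket pattern; the `allow` helper and the limit value are illustrative, not part of any client library):

```python
import time
from collections import defaultdict

WINDOW = 3600  # seconds per bucket, mirroring the EXPIRE above
LIMIT = 100    # max requests per window

counters = defaultdict(int)

def allow(user_id, now=None):
    """Increment the counter for the current time bucket (INCR) and
    compare against the limit; a fresh bucket starts each window."""
    now = time.time() if now is None else now
    bucket = int(now // WINDOW)
    key = f"rate:{user_id}:{bucket}"
    counters[key] += 1
    return counters[key] <= LIMIT

# The first LIMIT requests in a window pass; the rest are rejected
results = [allow("user:42", now=0) for _ in range(LIMIT + 1)]
```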

Caching with Automatic Expiry

```shell
SET cache:product:99 "<json>" EX 300    # cache for 5 minutes
SET cache:product:99 "<json>" EX 300 XX # only update if the key exists
```

Session Store

```shell
HSET session:<token> user_id 42 created_at 1700000000
EXPIRE session:<token> 86400 # 24-hour session
```
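
The expiring-session behavior in plain Python (an in-process sketch of HSET plus EXPIRE; the helper names are ours, and the field names match the commands above):

```python
sessions = {}  # token -> (fields, expires_at)

TTL = 86400  # 24-hour session, as in the EXPIRE above

def create_session(token, user_id, now):
    """HSET + EXPIRE: store the session fields with an absolute deadline."""
    fields = {"user_id": user_id, "created_at": int(now)}
    sessions[token] = (fields, now + TTL)

def get_session(token, now):
    """Return the fields, treating an expired entry as missing.
    Redis similarly removes expired keys lazily on access."""
    entry = sessions.get(token)
    if entry is None:
        return None
    fields, expires_at = entry
    if now >= expires_at:
        del sessions[token]  # lazy expiry
        return None
    return fields

create_session("abc", 42, now=1_700_000_000)
live = get_session("abc", now=1_700_000_000 + 100)     # still valid
gone = get_session("abc", now=1_700_000_000 + 90_000)  # past the TTL
```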