# Caching Guidelines
| Field | Details |
|---|---|
| Status | Active |
| Last Updated | 05-11-2026 |
## Purpose
To optimize application performance through effective caching strategies while maintaining data consistency.
## Scope
Applies to: all backend services using Redis, Memcached, or in-memory caching.
Does not apply to: browser caching, CDN caching.
## When to Cache
### Good Candidates for Caching
- Data that's expensive to compute
- Data that's read frequently but changes rarely
- Database query results that are accessed often
- External API responses
- Session data
### Don't Cache
- Personal/sensitive data (unless properly secured)
- Data that changes frequently
- Large datasets that won't fit in memory
## Cache Key Naming
### Format
```
<service>:<entity>:<identifier>:<version>
```
Examples:
```
user:profile:123:v1
api:weather:london:2024-01-01
product:details:sku-789:v2
```
### Rules
- Use colons (`:`) as separators
- Always include a version for easy cache busting
- Keep keys short but descriptive
- Use lowercase
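These rules can be captured in a small helper so every service builds keys the same way; a minimal sketch in Python, where the function name and the `v1` default are illustrative:

```python
def cache_key(service: str, entity: str, identifier: str, version: str = "v1") -> str:
    """Build a key in the <service>:<entity>:<identifier>:<version> format,
    lowercased and colon-separated per the naming rules above."""
    parts = [service, entity, str(identifier), version]
    return ":".join(p.lower() for p in parts)
```

Bumping the `version` argument is then the single place to bust stale entries for an entity.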
## TTL (Time To Live) Strategy
| Data Type | TTL |
|---|---|
| User sessions | 30 minutes |
| API responses | 5 minutes |
| Static configurations | 1 hour |
| Product catalog | 10 minutes |
| User profiles | 5 minutes |
```
SET user:profile:123:v1 "{...}" EX 300   # TTL of 300 seconds (5 minutes)
```
## Cache Patterns
### Cache-Aside (Lazy Loading)
The most common pattern: load data into the cache only when it is needed.
1. Check cache for data
2. If miss, fetch from database
3. Store in cache with TTL
4. Return data
Pseudocode:
```
data = cache.get(key)
if (!data) {
    data = database.query(...)
    cache.set(key, data, ttl=300)
}
return data
```
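The same flow as a runnable sketch in Python. The `TTLCache` class here is a hypothetical in-memory stand-in for Redis, used only to keep the example self-contained; `get_profile` and the key format follow the naming rules above:

```python
import time

class TTLCache:
    """In-memory stand-in for a Redis client, so the sketch is self-contained."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired entry: treat it as a miss
            return None
        return value

    def set(self, key, value, ttl=300):
        self._store[key] = (value, time.monotonic() + ttl)

def get_profile(cache, db, user_id):
    """Cache-aside: check the cache first, fall back to the database on a miss."""
    key = f"user:profile:{user_id}:v1"
    data = cache.get(key)
    if data is None:
        data = db[user_id]              # stand-in for database.query(...)
        cache.set(key, data, ttl=300)   # store with TTL so it expires naturally
    return data
```

Note the trade-off: until the TTL expires, reads keep returning the cached copy even if the database row changes underneath it.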
### Write-Through
Update the cache whenever the database is updated:
1. Update database
2. Update cache with new data
3. Return success
Good for: frequently read data that must stay consistent with the database after writes
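The write-through steps above can be sketched with plain dicts standing in for the cache and the database (the function and key names are illustrative; a real version would also set a TTL on the cache entry):

```python
def update_profile(cache: dict, db: dict, user_id: str, data: dict) -> bool:
    """Write-through: the database write and the cache update happen together."""
    db[user_id] = data                            # 1. update the database
    cache[f"user:profile:{user_id}:v1"] = data    # 2. update the cache with the new data
    return True                                   # 3. return success
```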
## Cache Invalidation
### Strategies
- TTL-based: Let cache expire naturally
- Event-based: Invalidate on specific events
- Manual: Provide admin endpoint to clear cache
Note that Redis `DEL` does not expand wildcards; find matching keys with `SCAN ... MATCH` and delete them explicitly:
```
SCAN 0 MATCH user:profile:123:*              # find all versions of the key
DEL user:profile:123:v1 user:profile:123:v2  # delete the keys found
FLUSHDB                                      # clear the entire database (use carefully!)
```
## Cache Versioning
Increment the version number when the data structure changes:
`user:profile:123:v1` → `user:profile:123:v2`
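One way to make a version bump a one-line change is to keep the version in a single constant; a sketch where `PROFILE_SCHEMA_VERSION` and `profile_key` are illustrative names:

```python
PROFILE_SCHEMA_VERSION = "v2"  # bump this when the cached structure changes

def profile_key(user_id: str) -> str:
    # Old v1 entries are simply never read again and age out via their TTL,
    # so no explicit invalidation pass is needed after a bump.
    return f"user:profile:{user_id}:{PROFILE_SCHEMA_VERSION}"
```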
## Monitoring
### Key Metrics to Track
- Cache hit rate (target: >80%)
- Cache miss rate
- Eviction rate
- Memory usage
- Response time improvement
- Log cache hits/misses for analysis
- Set up alerts for low hit rates
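Hit/miss counting can be done with a thin wrapper around any cache client that exposes `get`; a minimal sketch (the class name is illustrative):

```python
class InstrumentedCache:
    """Wraps a cache client to count hits and misses for the metrics above."""
    def __init__(self, inner):
        self.inner = inner
        self.hits = 0
        self.misses = 0

    def get(self, key):
        value = self.inner.get(key)
        if value is None:
            self.misses += 1   # a miss; candidate for logging/alerting
        else:
            self.hits += 1
        return value

    def hit_rate(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

An alert could fire when `hit_rate()` drops below the 80% target over some window.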
## Redis-Specific Best Practices
- Use Redis data types appropriately (String, Hash, List, Set)
- Enable persistence for critical data
- Set a maxmemory policy (recommended: `allkeys-lru`)
- Use pipelining for batch operations
```
HSET user:123 name "John" email "john@example.com"
EXPIRE user:123 300
```
## Common Pitfalls
- Cache Stampede: multiple processes rebuilding the cache simultaneously
  - Solution: use locks or probabilistic early expiration
- Memory Overflow: cache growing unbounded
  - Solution: set `maxmemory` and an eviction policy
- Stale Data: serving outdated information
  - Solution: implement a proper invalidation strategy
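The lock-based fix for cache stampede can be sketched with a per-key lock; this is a single-process sketch in Python (an assumption for illustration: a multi-instance deployment would need a distributed lock instead, e.g. one built on Redis `SET` with `NX`):

```python
import threading

_locks = {}                      # one lock per cache key (single-process only)
_locks_guard = threading.Lock()  # protects the _locks dict itself

def get_or_rebuild(cache: dict, key, rebuild):
    """Cache-aside with a per-key lock so only one thread rebuilds a missing entry."""
    value = cache.get(key)
    if value is not None:
        return value
    with _locks_guard:
        lock = _locks.setdefault(key, threading.Lock())
    with lock:
        value = cache.get(key)   # re-check: another thread may have rebuilt it while we waited
        if value is None:
            value = rebuild()    # expensive computation runs at most once per miss
            cache[key] = value
    return value
```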
## Exceptions
Real-time data (stock prices, live scores) may require different caching strategies, or no caching at all.
## Related Documents
## Changelog
| Version | Date | Author | Change |
|---|---|---|---|
| 1.0.0 | 05-11-2026 | Tibin Sunny | Initial version |