# Cache
Aerostack Cache is a globally replicated key-value store with sub-millisecond read latency at the edge.
## Quick start
```typescript
import { sdk } from '@aerostack/sdk'

// Set a value with TTL
await sdk.cache.set('user:profile:123', userData, { ttl: 3600 }) // 1 hour

// Get a value
const cached = await sdk.cache.get('user:profile:123')
if (!cached) {
  // Cache miss — fetch from DB
}

// Delete
await sdk.cache.delete('user:profile:123')
```

## Methods
| Method | Description |
|---|---|
| `sdk.cache.set(key, value, options?)` | Store a value |
| `sdk.cache.get(key)` | Retrieve a value (`null` if missing or expired) |
| `sdk.cache.delete(key)` | Remove a key |
| `sdk.cache.has(key)` | Check if a key exists |
## Options
```typescript
await sdk.cache.set(key, value, {
  ttl: 300,     // seconds until expiry (default: no expiry)
  metadata: {   // optional metadata stored alongside the value
    userId: '123',
    source: 'db'
  }
})
```

## Cache-aside pattern
```typescript
async function getUser(userId: string) {
  const cacheKey = `user:${userId}`

  // 1. Try cache
  const cached = await sdk.cache.get(cacheKey)
  if (cached) return cached

  // 2. Fetch from DB
  const user = await sdk.db.queryOne(
    'SELECT * FROM users WHERE id = ?',
    [userId]
  )

  // 3. Cache the result
  if (user) {
    await sdk.cache.set(cacheKey, user, { ttl: 300 })
  }
  return user
}
```

## Cache invalidation
```typescript
// On profile update, invalidate the cached profile
async function updateUser(userId: string, updates: { name: string }) {
  await sdk.db.query(
    'UPDATE users SET name = ? WHERE id = ?',
    [updates.name, userId]
  )
  // Invalidate cache
  await sdk.cache.delete(`user:${userId}`)
}
```

The cache is eventually consistent: writes may take up to 60 seconds to propagate globally. For strong consistency requirements, read from the database directly.
## Use Cases
### Session storage
Store user session data (preferences, cart contents, recent activity) in the cache with a TTL matching your session duration. Cache reads are sub-millisecond at the edge, so session lookups never become a bottleneck.
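As a minimal sketch, session reads and writes are plain `get`/`set` calls keyed by session ID. The `CacheLike` interface below is an illustrative stand-in for `sdk.cache`, and the helper names and 30-minute TTL are assumptions, not part of the SDK:

```typescript
// Minimal stand-in for the documented sdk.cache interface (illustrative)
interface CacheLike {
  get(key: string): Promise<unknown>
  set(key: string, value: unknown, options?: { ttl?: number }): Promise<void>
}

const SESSION_TTL = 1800 // 30 minutes; match your session duration

async function saveSession(cache: CacheLike, sessionId: string, data: object) {
  await cache.set(`session:${sessionId}`, data, { ttl: SESSION_TTL })
}

async function loadSession(cache: CacheLike, sessionId: string) {
  // Returns null once the TTL has elapsed, forcing a fresh session
  return cache.get(`session:${sessionId}`)
}
```

Because expiry is handled by the TTL, there is no separate cleanup job: stale sessions simply return `null` on the next read.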
### API response caching
Cache expensive database queries or third-party API responses. For example, cache a leaderboard query for 30 seconds so that thousands of concurrent readers all hit the cache instead of re-running the aggregation.
```typescript
const cacheKey = 'leaderboard:top100'

let leaderboard = await sdk.cache.get(cacheKey)
if (!leaderboard) {
  leaderboard = await sdk.db.query(
    'SELECT user_id, score FROM scores ORDER BY score DESC LIMIT 100'
  )
  await sdk.cache.set(cacheKey, leaderboard, { ttl: 30 })
}
```

### Rate limit counters
Track per-user or per-IP request counts with short TTLs. Increment a counter on each request and reject requests that exceed the threshold. The eventual consistency window is acceptable for rate limiting since brief overages are tolerable.
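The increment-and-check flow can be sketched with the documented `get`/`set` calls and a fixed window: the first request in a window creates the counter with a short TTL, and the counter expires on its own when the window ends. The `CacheLike` type, key format, and parameters are illustrative; note that get-then-set is not atomic, so concurrent requests may briefly exceed the limit, which the text above treats as acceptable:

```typescript
// Minimal stand-in for the documented sdk.cache interface (illustrative)
interface CacheLike {
  get(key: string): Promise<unknown>
  set(key: string, value: unknown, options?: { ttl?: number }): Promise<void>
}

async function allowRequest(
  cache: CacheLike,
  clientId: string,
  limit: number,
  windowSeconds: number
): Promise<boolean> {
  const key = `ratelimit:${clientId}`
  const count = ((await cache.get(key)) as number | null) ?? 0
  if (count >= limit) return false
  // get-then-set is not atomic; brief overages are tolerated here
  await cache.set(key, count + 1, { ttl: windowSeconds })
  return true
}
```

A caller would reject the request (for example, with HTTP 429) whenever `allowRequest` returns `false`.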
### Feature flags
Store feature flags as cache keys (`feature:dark-mode`, `feature:new-checkout`) with JSON values describing who gets access. Your app reads the flag on each request with sub-millisecond latency, and you update flags from your admin panel without redeploying.
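A flag check might look like the sketch below. The docs only say the value is JSON describing who gets access, so the `FlagConfig` shape and `CacheLike` type are illustrative assumptions to adapt to your own rollout rules:

```typescript
// Minimal stand-in for the documented sdk.cache interface (illustrative)
interface CacheLike {
  get(key: string): Promise<unknown>
}

// Illustrative flag shape; adapt the fields to your rollout rules
interface FlagConfig {
  enabled: boolean
  allowedUserIds?: string[] // omit to enable for everyone
}

async function isFeatureEnabled(
  cache: CacheLike,
  flag: string,
  userId: string
): Promise<boolean> {
  const config = (await cache.get(`feature:${flag}`)) as FlagConfig | null
  if (!config || !config.enabled) return false
  // No allow-list means the flag is on for all users
  return !config.allowedUserIds || config.allowedUserIds.includes(userId)
}
```

Missing or disabled flags fail closed, so a deleted key simply turns the feature off everywhere.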
### Hot data layer
Keep frequently accessed reference data (pricing tiers, configuration, translation strings) in the cache so your functions never need to query the database for data that rarely changes. Set a generous TTL and invalidate on updates.