Content Store
The Content Store (CS) is the in-router cache in CCNP that stores data packets for future use. As one of the three primary data structures in CCNP routers (alongside the Forwarding Information Base and Pending Interest Table), the Content Store is fundamental to CCNP's caching model—enabling the network itself to function as a distributed cache and reducing both latency and bandwidth consumption.
Function and Purpose
The Content Store serves as a local cache for content that has traversed the router. When interest packets arrive requesting content, the router checks its Content Store first. If the requested content is cached, the router immediately satisfies the interest from cache without forwarding it further.
This design represents a fundamental departure from traditional IP networking, where routers merely forward packets without caching content. In CCNP, every router can potentially serve as a cache node, creating a distributed storage network that scales with demand. Popular content is naturally replicated throughout the network, close to users who request it.
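The cache-first lookup described above can be sketched as follows. This is a minimal illustration, not code from any CCNP implementation; the class and function names (ContentStore, on_interest, forward) are invented for the example.

```python
class ContentStore:
    """Toy in-router cache mapping content names to data packets."""

    def __init__(self):
        self._cache = {}  # content name -> data packet

    def lookup(self, name):
        return self._cache.get(name)

    def insert(self, name, data):
        self._cache[name] = data


def on_interest(cs, interest_name, forward):
    """Check the Content Store first; only forward the interest on a miss."""
    data = cs.lookup(interest_name)
    if data is not None:
        return data  # cache hit: satisfy the interest locally
    return forward(interest_name)  # cache miss: forward toward the origin
```

The key property is that a hit never leaves the router: the interest is consumed and answered from the local cache.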
Cache Operation
Content enters the Content Store through several mechanisms:
Cache-on-fetch: The most common mechanism—content is cached as it returns from an origin server. When a data packet traverses a router, the router stores a copy before forwarding it to the requester. This ensures that the next request for the same content can be served locally.
Prefetching: Advanced routers can predict content needs and proactively cache content before explicit requests. Prefetching uses patterns from previous requests and FIB hints about content popularity.
Push-based caching: Content publishers can push content to specific routers or regions, ensuring early availability. This mechanism is used for time-sensitive content like news or live updates.
Each cached entry includes the content name, the content itself, metadata such as size and creation time, and a freshness value indicating how long the content should be considered valid.
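A cache entry with these fields, plus a cache-on-fetch helper that stores a copy of a passing data packet before it is forwarded, might look like the sketch below. The field names and the 60-second default freshness are illustrative assumptions, not values taken from a CCNP specification.

```python
import time
from dataclasses import dataclass, field


@dataclass
class CacheEntry:
    """One Content Store entry: name, content, metadata, freshness."""
    name: str
    content: bytes
    size: int
    created: float = field(default_factory=time.time)
    freshness_s: float = 60.0  # assumed default validity window

    def is_fresh(self, now=None):
        now = time.time() if now is None else now
        return (now - self.created) < self.freshness_s


def cache_on_fetch(store, name, content, freshness_s=60.0):
    """Store a copy of a data packet as it traverses the router, then
    return the content unchanged so it can be forwarded to the requester."""
    store[name] = CacheEntry(name, content, len(content),
                             freshness_s=freshness_s)
    return content
```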
Eviction Policies
The Content Store has finite capacity and must manage cache space through eviction policies. Common approaches include:
Least Recently Used (LRU): Evicts content that has not been accessed for the longest time. This simple policy performs well for many workloads.
Least Frequently Used (LFU): Evicts content with the lowest access frequency. Works well when access patterns follow power laws.
Size-aware eviction: Prefers evicting larger content so that each eviction reclaims more space, at the cost of discarding items that are more expensive to re-fetch.
Freshness-aware eviction: Prioritizes refreshing or evicting content approaching its freshness expiration.
Most routers support configurable eviction policies, allowing operators to tune performance for their specific traffic patterns.
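As one concrete example, LRU with a byte-capacity bound can be sketched with an ordered map. This is a generic illustration of the policy, not a CCNP router's actual eviction code.

```python
from collections import OrderedDict


class LRUContentStore:
    """Content Store with a byte-capacity bound and LRU eviction."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self._entries = OrderedDict()  # name -> content, oldest first

    def lookup(self, name):
        content = self._entries.get(name)
        if content is not None:
            self._entries.move_to_end(name)  # mark as recently used
        return content

    def insert(self, name, content):
        if name in self._entries:
            self.used -= len(self._entries.pop(name))
        # Evict least-recently-used entries until the new content fits.
        while self._entries and self.used + len(content) > self.capacity:
            _, evicted = self._entries.popitem(last=False)
            self.used -= len(evicted)
        if len(content) <= self.capacity:
            self._entries[name] = content
            self.used += len(content)
```

Swapping in LFU or size-aware eviction only changes which entry the `while` loop removes; the capacity accounting stays the same.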
Integration with Forwarding
The Content Store integrates with the Forwarding Information Base to optimize content delivery:
- When content exists in cache, the FIB is not consulted—the interest is served immediately
- The FIB can provide hints about content cached at neighboring routers
- Cached content can be used to satisfy interests forwarded from other routers
This tight integration between cache and forwarding is what makes CCNP's caching model so effective. The network learns where content is popular and automatically positions cache copies accordingly.
Content Name and Immutability
CCNP's immutability guarantee—content at a given name never changes—is essential to Content Store operation. Without this guarantee, cached content might become stale, requiring complex coherency mechanisms. With immutability, any cached copy of content with a given name is guaranteed to be identical to any other copy with the same name.
Applications that need mutable content embed version identifiers in the content name: /com.example/doc/v1 and /com.example/doc/v2 are different content names, each independently cacheable. This pattern shifts cache coherency from a network problem to an application design decision.
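The versioned-name pattern is simple to demonstrate: each version gets its own immutable name, and both can sit in the same cache. The name-building convention below is an assumption for illustration; CCNP does not mandate a particular version syntax.

```python
def versioned_name(prefix, version):
    """Build a versioned content name (illustrative convention)."""
    return f"{prefix}/v{version}"


# Two versions of the same document are two distinct, immutable names;
# each is independently cacheable and never goes stale.
cache = {}
cache[versioned_name("/com.example/doc", 1)] = b"first draft"
cache[versioned_name("/com.example/doc", 2)] = b"revised draft"
```

A reader pins a specific version by requesting its exact name; "updating" content means publishing under a new name, never mutating an old one.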
Scalability and Distribution
Individual router Content Stores are necessarily limited in capacity. Large-scale caching requires coordination across routers:
Hierarchical caching: Content Stores at different network levels have different characteristics. Edge routers have smaller caches but serve many requests with strong local popularity. Core routers have larger caches but see more diverse aggregate traffic, so any individual content item makes up a smaller share of their requests.
Explicit caching: Content publishers can specify caching requirements through publication metadata, controlling how aggressively content is cached.
Cache hierarchies: Routers can be organized into cache hierarchies where child routers consult parent caches before forwarding further. This reduces origin requests while limiting overall cache capacity requirements.
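A parent-before-origin lookup chain can be sketched as below. This is a simplified model of the hierarchy, assuming each tier caches content on the way back down; the class name and origin-fetch hook are invented for the example.

```python
class TieredCache:
    """Cache tier that consults its parent before resorting to the origin."""

    def __init__(self, parent=None, fetch_origin=None):
        self._cache = {}
        self._parent = parent
        self._fetch_origin = fetch_origin

    def get(self, name):
        if name in self._cache:
            return self._cache[name]          # local hit
        if self._parent is not None:
            content = self._parent.get(name)  # consult the parent tier
        else:
            content = self._fetch_origin(name)  # last resort: origin
        self._cache[name] = content           # cache on the way back
        return content
```

Because every tier caches the response as it flows back, repeated requests are absorbed as close to the requester as possible and the origin is contacted at most once per content name (until eviction).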
Performance Impact
The Content Store dramatically improves CCNP performance:
- Latency reduction: Cached content can be served in microseconds versus milliseconds for origin retrieval
- Bandwidth reduction: Popular content served from cache never traverses expensive links
- Origin load reduction: Caches absorb significant request volume, reducing load on origin servers
Studies show that properly configured Content Stores satisfy 70-90% of requests for popular content, dramatically reducing network-wide traffic and improving user experience.
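The latency benefit follows directly from the hit ratio. The arithmetic below is a sketch with assumed timings (50 microseconds for a cache hit, 20 milliseconds for origin retrieval), chosen only to match the microseconds-versus-milliseconds contrast above.

```python
def expected_latency_us(hit_ratio, cache_us=50.0, origin_us=20_000.0):
    """Expected per-request latency given a cache hit ratio.

    cache_us and origin_us are illustrative assumptions, not measurements.
    """
    return hit_ratio * cache_us + (1.0 - hit_ratio) * origin_us


def origin_load_fraction(hit_ratio):
    """Fraction of requests that still reach the origin."""
    return 1.0 - hit_ratio
```

At the 70-90% hit ratios cited above, origin servers see only 10-30% of raw demand, and expected latency falls by a factor of several relative to the no-cache case.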
Content Store in Different Node Types
The Content Store is implemented differently across node types:
Edge routers: Limited capacity (gigabytes to terabytes), optimized for low-latency serving of popular local content
Core routers: Larger capacity (terabytes), optimized for throughput and handling aggregate traffic
Residential routers: Small capacity (megabytes), modest caching for household content
Data centers: Substantial cache capacity serving as regional content hubs