Message-ID: <7C9F38DB-6164-4ACB-A717-1699ACC9DCB0@gmail.com>
Date: Fri, 18 Oct 2019 17:10:58 -0700
From: "Jonathan Lemon" <jonathan.lemon@...il.com>
To: "Saeed Mahameed" <saeedm@...lanox.com>
Cc: ilias.apalodimas@...aro.org, "Tariq Toukan" <tariqt@...lanox.com>,
brouer@...hat.com, Netdev <netdev@...r.kernel.org>,
kernel-team <kernel-team@...com>
Subject: Re: [PATCH 01/10 net-next] net/mlx5e: RX, Remove RX page-cache
I was running the updated patches on machines with various workloads, and
have a bunch of different results.
For the following numbers,
Effective = hit / (hit + empty + stall) * 100
In other words, show the hit rate for every trip to the cache,
and the cache full stat is ignored.
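Roughly, the eff script boils down to something like the following (a
simplified sketch only - the real script and the interface name may
differ; the formula is the one above):

#!/usr/bin/env python
# Sketch of ./eff: dump the rx_pool_* counters from ethtool and compute
# hit / (hit + empty + stall) * 100.  A missing rx_pool_cache_stall
# counter is treated as 0, and rx_pool_cache_full is ignored.
import subprocess

stats = {}
out = subprocess.check_output(['ethtool', '-S', 'eth0']).decode()
for line in out.splitlines():
    line = line.strip()
    if line.startswith('rx_pool'):
        name, value = [s.strip() for s in line.split(':')]
        stats[name] = int(value)
        print((name + ':', value))

hit = stats.get('rx_pool_cache_hit', 0)
empty = stats.get('rx_pool_cache_empty', 0)
stall = stats.get('rx_pool_cache_stall', 0)
trips = hit + empty + stall
print('cache effectiveness: %s' % (100.0 * hit / trips if trips else 0.0))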
On a webserver:
[web] # ./eff
('rx_pool_cache_hit:', '360127643')
('rx_pool_cache_full:', '0')
('rx_pool_cache_empty:', '6455735977')
('rx_pool_ring_produce:', '474958')
('rx_pool_ring_consume:', '0')
('rx_pool_ring_return:', '474958')
('rx_pool_flush:', '144')
('rx_pool_node_change:', '0')
cache effectiveness: 5.28
On a proxygen:
# ethtool -S eth0 | grep rx_pool
rx_pool_cache_hit: 1646798
rx_pool_cache_full: 0
rx_pool_cache_empty: 15723566
rx_pool_ring_produce: 474958
rx_pool_ring_consume: 0
rx_pool_ring_return: 474958
rx_pool_flush: 144
rx_pool_node_change: 0
cache effectiveness: 9.48
On both of these, only pages with refcount = 1 are being kept.
I changed things around in the page pool so:
1) the cache behaves like a ring instead of a stack, which
   sacrifices temporal locality.
2) it caches all pages returned regardless of refcount, but
only returns pages with refcount=1.
This is the same behavior as the mlx5 cache. Some gains
would come about if the sojourn time through the cache is
greater than the lifetime of the page usage by the networking
stack, as it provides a fixed working set of mapped pages.
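To make the ring/stall behavior concrete, here's a toy model of the
modified cache (a simplified sketch, not the actual page pool code):
produce recycles every returned page into a FIFO ring regardless of
refcount, and consume only pops the head page once its refcount has
dropped back to 1, otherwise counting a stall and letting the caller
allocate a fresh page.

from collections import deque, namedtuple

Page = namedtuple('Page', ['id', 'refcount'])

class RingCache(object):
    def __init__(self, size):
        self.ring = deque(maxlen=size)
        self.hit = self.empty = self.stall = self.full = 0

    def produce(self, page):
        # cache every returned page, even if the stack still holds a ref
        if len(self.ring) == self.ring.maxlen:
            self.full += 1
            return False
        self.ring.append(page)
        return True

    def consume(self):
        if not self.ring:
            self.empty += 1
            return None              # caller allocates a new page
        head = self.ring[0]
        if head.refcount != 1:
            self.stall += 1          # HOL blocking: head page still in use
            return None              # caller allocates a new page
        self.hit += 1
        return self.ring.popleft()

    def effectiveness(self):
        trips = self.hit + self.empty + self.stall
        return 100.0 * self.hit / trips if trips else 0.0

If page lifetimes in the stack routinely exceed the sojourn time through
the ring, the head page is usually still busy, and the stall counter
dominates - which is what shows up on the web server below.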
On the web server, this is a net loss:
[web] # ./eff
('rx_pool_cache_hit:', '6052662')
('rx_pool_cache_full:', '156355415')
('rx_pool_cache_empty:', '409600')
('rx_pool_cache_stall:', '302787473')
('rx_pool_ring_produce:', '156633847')
('rx_pool_ring_consume:', '9925520')
('rx_pool_ring_return:', '278788')
('rx_pool_flush:', '96')
('rx_pool_node_change:', '0')
cache effectiveness: 1.95720846778
For proxygen on the other hand, it's a win:
[proxy] # ./eff
('rx_pool_cache_hit:', '69235177')
('rx_pool_cache_full:', '35404387')
('rx_pool_cache_empty:', '460800')
('rx_pool_cache_stall:', '42932530')
('rx_pool_ring_produce:', '35717618')
('rx_pool_ring_consume:', '27879469')
('rx_pool_ring_return:', '404800')
('rx_pool_flush:', '108')
('rx_pool_node_change:', '0')
cache effectiveness: 61.4721608624
So the correct behavior isn't quite clear-cut here - caching a
working set of mapped pages is beneficial in spite of the HOL
blocking stalls for some workloads, but I'm sure it wouldn't
be too difficult for a workload to exceed the working set size.
Thoughts?
--
Jonathan