Message-ID: <b73c8c15-3f10-d539-f648-5a0c772d9fc0@mellanox.com>
Date: Sun, 20 Oct 2019 07:29:07 +0000
From: Tariq Toukan <tariqt@...lanox.com>
To: Jonathan Lemon <jonathan.lemon@...il.com>,
Saeed Mahameed <saeedm@...lanox.com>
CC: "ilias.apalodimas@...aro.org" <ilias.apalodimas@...aro.org>,
Tariq Toukan <tariqt@...lanox.com>,
"brouer@...hat.com" <brouer@...hat.com>,
Netdev <netdev@...r.kernel.org>, kernel-team <kernel-team@...com>
Subject: Re: [PATCH 01/10 net-next] net/mlx5e: RX, Remove RX page-cache
On 10/19/2019 3:10 AM, Jonathan Lemon wrote:
> I've been running the updated patches on machines with various workloads,
> and got a range of different results.
>
> For the following numbers,
> Effective = hit / (hit + empty + stall) * 100
>
> In other words, this shows the hit rate for every trip to the cache;
> the cache-full stat is ignored.
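
A minimal C rendering of that formula, for reference (the helper itself
is hypothetical, not taken from the eff script):

    static double cache_effectiveness(unsigned long long hit,
                                      unsigned long long empty,
                                      unsigned long long stall)
    {
            unsigned long long trips = hit + empty + stall;

            /* Percentage of cache trips that produced a usable page. */
            return trips ? 100.0 * (double)hit / (double)trips : 0.0;
    }
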
>
> On a webserver:
>
> [web] # ./eff
> ('rx_pool_cache_hit:', '360127643')
> ('rx_pool_cache_full:', '0')
> ('rx_pool_cache_empty:', '6455735977')
> ('rx_pool_ring_produce:', '474958')
> ('rx_pool_ring_consume:', '0')
> ('rx_pool_ring_return:', '474958')
> ('rx_pool_flush:', '144')
> ('rx_pool_node_change:', '0')
> cache effectiveness: 5.28
>
> On a proxygen:
> # ethtool -S eth0 | grep rx_pool
> rx_pool_cache_hit: 1646798
> rx_pool_cache_full: 0
> rx_pool_cache_empty: 15723566
> rx_pool_ring_produce: 474958
> rx_pool_ring_consume: 0
> rx_pool_ring_return: 474958
> rx_pool_flush: 144
> rx_pool_node_change: 0
> cache effectiveness: 9.48
>
> On both of these, only pages with refcount = 1 are being kept.
>
>
> I changed things around in the page pool so:
>
> 1) the cache behaves like a ring instead of a stack, which
> sacrifices temporal locality.
>
> 2) it accepts all returned pages regardless of refcount, but
> only hands out pages with refcount=1.
>
> This is the same behavior as the mlx5 cache. Some gains would
> come about if the sojourn time through the cache is greater than
> the lifetime of the page's use by the networking stack, as the
> cache then provides a fixed working set of mapped pages; a rough
> sketch of this behavior follows below.
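
A rough sketch of the consume/recycle path those two changes describe
(illustrative only; the names, sizes, and lack of locking are not the
actual patch):

    #include <linux/mm.h>           /* struct page, page_ref_count() */

    #define CACHE_SIZE 128          /* illustrative */

    /* FIFO ring: pages are recycled to the tail regardless of
     * refcount; the consumer pops the head only once its refcount
     * has dropped back to 1, otherwise it stalls (HOL blocking).
     */
    struct cache_ring {
            struct page *pages[CACHE_SIZE];
            unsigned int head, tail;        /* head == tail: empty */
    };

    static struct page *cache_get(struct cache_ring *c)
    {
            struct page *page;

            if (c->head == c->tail)
                    return NULL;            /* rx_pool_cache_empty */

            page = c->pages[c->head % CACHE_SIZE];
            if (page_ref_count(page) != 1)
                    return NULL;            /* rx_pool_cache_stall */

            c->head++;
            return page;                    /* rx_pool_cache_hit */
    }

    static bool cache_put(struct cache_ring *c, struct page *page)
    {
            if (c->tail - c->head >= CACHE_SIZE)
                    return false;           /* rx_pool_cache_full */

            c->pages[c->tail % CACHE_SIZE] = page;  /* any refcount */
            c->tail++;
            return true;
    }
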
>
> On the web server, this is a net loss:
> [web] # ./eff
> ('rx_pool_cache_hit:', '6052662')
> ('rx_pool_cache_full:', '156355415')
> ('rx_pool_cache_empty:', '409600')
> ('rx_pool_cache_stall:', '302787473')
> ('rx_pool_ring_produce:', '156633847')
> ('rx_pool_ring_consume:', '9925520')
> ('rx_pool_ring_return:', '278788')
> ('rx_pool_flush:', '96')
> ('rx_pool_node_change:', '0')
> cache effectiveness: 1.95720846778
>
> For proxygen on the other hand, it's a win:
> [proxy] # ./eff
> ('rx_pool_cache_hit:', '69235177')
> ('rx_pool_cache_full:', '35404387')
> ('rx_pool_cache_empty:', '460800')
> ('rx_pool_cache_stall:', '42932530')
> ('rx_pool_ring_produce:', '35717618')
> ('rx_pool_ring_consume:', '27879469')
> ('rx_pool_ring_return:', '404800')
> ('rx_pool_flush:', '108')
> ('rx_pool_node_change:', '0')
> cache effectiveness: 61.4721608624
>
> So the correct behavior isn't quite clear-cut here: for some
> workloads, caching a working set of mapped pages is beneficial in
> spite of the HOL-blocking stalls, but I'm sure it wouldn't be too
> difficult to exceed the working-set size.
>
> Thoughts?
>
We have a WIP that avoids the HOL blocking by returning pages to the
available-queue only once their refcnt has dropped back to 1. This
requires catching that transition in the page/skb release path.
See:
https://github.com/xdp-project/xdp-project/tree/master/areas/mem
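
A minimal sketch of that idea (the pool type and queue helper are
hypothetical, not the WIP code):

    #include <linux/mm.h>   /* struct page, page_ref_dec_return() */

    struct avail_pool;      /* hypothetical pool type */
    void avail_queue_push(struct avail_pool *pool, struct page *page);

    /* Called from the page/skb release path for pool pages: drop
     * one reference, and if only the pool's own reference remains,
     * recycle the page (DMA mapping intact) instead of stalling on
     * the ring head.
     */
    static void pool_page_put(struct avail_pool *pool, struct page *page)
    {
            if (page_ref_dec_return(page) == 1)
                    avail_queue_push(pool, page);
    }
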