Message-ID: <56c49677-8b55-b337-7f5a-c297a11c82b0@gmail.com>
Date: Wed, 11 Jul 2018 04:32:20 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Li RongQing <lirongqing@...du.com>, netdev@...r.kernel.org
Subject: Re: [PATCH] net: convert gro_count to bitmask
On 07/11/2018 02:15 AM, Li RongQing wrote:
> gro_hash is 192 bytes in size and spans 3 cache lines. When there are
> only a few flows, gro_hash may not be fully used, so iterating over all
> of gro_hash in napi_gro_flush() touches cache lines unnecessarily.
>
> Convert gro_count to a bitmask and rename it gro_bitmask; each bit
> represents one element of gro_hash. Only flush a gro_hash element if
> the corresponding bit is set, which speeds up napi_gro_flush().
>
> Also, update gro_bitmask only when its value actually changes, to
> avoid unnecessary cache-line updates.
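
For illustration, here is a minimal userspace sketch of that bucket-skipping
idea (NUM_BUCKETS and flush_bucket() are made up for the example, and
__builtin_ctzl() stands in for the kernel's __ffs()):

/*
 * Minimal userspace sketch: keep one bit per hash bucket and visit only
 * the buckets whose bit is set, instead of scanning every bucket.
 * Names here are illustrative, not taken from the patch.
 */
#include <stdio.h>

#define NUM_BUCKETS 8			/* fits easily in one unsigned long */

static void flush_bucket(unsigned int index)
{
	printf("flushing bucket %u\n", index);
}

int main(void)
{
	unsigned long bitmask = 0;

	/* pretend packets were queued into buckets 1 and 5 */
	bitmask |= 1UL << 1;
	bitmask |= 1UL << 5;

	/* walk only the populated buckets */
	while (bitmask) {
		unsigned int i = __builtin_ctzl(bitmask); /* gcc/clang builtin: lowest set bit */

		bitmask &= ~(1UL << i);
		flush_bucket(i);
	}
	return 0;
}
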
>
> Suggested-by: Eric Dumazet <edumazet@...gle.com>
> Signed-off-by: Li RongQing <lirongqing@...du.com>
> ---
> include/linux/netdevice.h | 2 +-
> net/core/dev.c | 35 +++++++++++++++++++++++------------
> 2 files changed, 24 insertions(+), 13 deletions(-)
>
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index b683971e500d..df49b36ef378 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -322,7 +322,7 @@ struct napi_struct {
> 
>  	unsigned long		state;
>  	int			weight;
> -	unsigned int		gro_count;
> +	unsigned long		gro_bitmask;
>  	int			(*poll)(struct napi_struct *, int);
>  #ifdef CONFIG_NETPOLL
>  	int			poll_owner;
> diff --git a/net/core/dev.c b/net/core/dev.c
> index d13cddcac41f..a08dbdd217a6 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -5171,9 +5171,11 @@ static void __napi_gro_flush_chain(struct napi_struct *napi, u32 index,
>  			return;
>  		list_del_init(&skb->list);
>  		napi_gro_complete(skb);
> -		napi->gro_count--;
>  		napi->gro_hash[index].count--;
>  	}
> +
> +	if (!napi->gro_hash[index].count)
> +		clear_bit(index, &napi->gro_bitmask);
I suggest you not add an atomic operation here; the current CPU owns
this NAPI after all.

Same remark for the whole patch: use __clear_bit(), __set_bit() and
similar non-atomic operators.
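
For example, the hunk quoted above could become (sketch only, not the
final patch; __clear_bit() is the non-atomic variant from linux/bitops.h):

	if (!napi->gro_hash[index].count)
		__clear_bit(index, &napi->gro_bitmask);	/* non-atomic: this CPU owns the NAPI */
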
Ideally you should also provide TCP_RR numbers with busy polling enabled,
so that any regressions can be caught.
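
For reference, busy polling can be enabled system-wide via the
net.core.busy_read / net.core.busy_poll sysctls, or per socket; a minimal
per-socket sketch (the fd and the 50 usec budget are placeholders):

#include <sys/socket.h>

#ifndef SO_BUSY_POLL
#define SO_BUSY_POLL 46		/* asm-generic value, in case old headers lack it */
#endif

/* Enable busy polling on an already-created TCP socket 'fd';
 * 50 usec is an arbitrary example budget.
 */
static int enable_busy_poll(int fd)
{
	int usecs = 50;

	return setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &usecs, sizeof(usecs));
}
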
Thanks.