Open Source and information security mailing list archives
 
Date:   Thu, 12 Jul 2018 02:31:58 +0000
From:   "Li,Rongqing" <lirongqing@...du.com>
To:     Eric Dumazet <eric.dumazet@...il.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH] net: convert gro_count to bitmask



> -----Original Message-----
> From: Eric Dumazet [mailto:eric.dumazet@...il.com]
> Sent: July 11, 2018 19:32
> To: Li,Rongqing <lirongqing@...du.com>; netdev@...r.kernel.org
> Subject: Re: [PATCH] net: convert gro_count to bitmask
> 
> 
> 
> On 07/11/2018 02:15 AM, Li RongQing wrote:
> > gro_hash is 192 bytes and spans 3 cache lines. If there are only a
> > few flows, gro_hash may not be fully used, so iterating over every
> > gro_hash element in napi_gro_flush() needlessly touches those cache
> > lines.
> >
> > Convert gro_count to a bitmask, renamed gro_bitmask, in which each
> > bit represents one element of gro_hash. Flush a gro_hash element
> > only if its bit is set, to speed up napi_gro_flush().
> >
> > Also, update gro_bitmask only when it actually changes, to reduce
> > cache-line updates.
> >
> > Suggested-by: Eric Dumazet <edumazet@...gle.com>
> > Signed-off-by: Li RongQing <lirongqing@...du.com>
> > ---
> >  include/linux/netdevice.h |  2 +-
> >  net/core/dev.c            | 35 +++++++++++++++++++++++------------
> >  2 files changed, 24 insertions(+), 13 deletions(-)
> >
> > diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> > index b683971e500d..df49b36ef378 100644
> > --- a/include/linux/netdevice.h
> > +++ b/include/linux/netdevice.h
> > @@ -322,7 +322,7 @@ struct napi_struct {
> >
> >  	unsigned long		state;
> >  	int			weight;
> > -	unsigned int		gro_count;
> > +	unsigned long		gro_bitmask;
> >  	int			(*poll)(struct napi_struct *, int);
> >  #ifdef CONFIG_NETPOLL
> >  	int			poll_owner;
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index d13cddcac41f..a08dbdd217a6 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -5171,9 +5171,11 @@ static void __napi_gro_flush_chain(struct napi_struct *napi, u32 index,
> >  			return;
> >  		list_del_init(&skb->list);
> >  		napi_gro_complete(skb);
> > -		napi->gro_count--;
> >  		napi->gro_hash[index].count--;
> >  	}
> > +
> > +	if (!napi->gro_hash[index].count)
> > +		clear_bit(index, &napi->gro_bitmask);
> 
> I suggest you not add an atomic operation here.
> 
> Current cpu owns this NAPI after all.
> 
> Same remark for the whole patch.
> 
> ->  __clear_bit(), __set_bit() and similar operators
> 
> Ideally you should provide TCP_RR numbers with busy polling enabled,
> to catch eventual regressions.
> 

I will make those changes and run the test.
Thank you.

-RongQing

> Thanks.
