Date:	Mon, 02 Mar 2015 20:02:05 -0800
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	David Miller <davem@...emloft.net>
Cc:	fw@...len.de, netdev@...r.kernel.org, johannes@...solutions.net,
	linux-wireless@...r.kernel.org
Subject: Re: [PATCH RFC 00/14] shrink skb cb to 44 bytes

On Mon, 2015-03-02 at 17:17 -0500, David Miller wrote:
> From: Eric Dumazet <eric.dumazet@...il.com>
> Date: Mon, 02 Mar 2015 11:49:23 -0800
> 
> > Size of skb->cb[] is not the major factor. Trying to gain 4 or 8 bytes
> > is not going to improve performance a lot.
> > 
> > The real problem is that we clear it in alloc_skb()/build_skb(),
> > instead of each layer doing so on demand, clearing only the part
> > that matters for that layer.
> > 
> > Basically, skb->cb[] could be 80 or 160 bytes instead of 48, and we
> > should not care, as long as no layer does a stupid/lazy 
> > 
> > memset(skb->cb, 0, sizeof(skb->cb))
> > 
> > Presumably skb_clone() could be extended to receive the length of
> > skb->cb[] that the current layer cares about.
> 
> Regardless, I think Florian's work has value.

Of course. I hope my answer did not imply the contrary!

The 48 -> 44 cb change, with the 8-byte alignment and the various
__packed tricks that might confuse compilers, will be hard to
quantify in terms of performance across all arches.
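
To make the quoted suggestion above concrete, here is a rough sketch
(the helper is invented, nothing like it exists in the tree) of each
layer zeroing only its own view of cb[], instead of alloc-time code
doing memset(skb->cb, 0, sizeof(skb->cb)) for everyone:

	/* Hypothetical helper: clear only the cb bytes this layer uses. */
	static inline void skb_cb_clear(struct sk_buff *skb, size_t len)
	{
		BUG_ON(len > sizeof(skb->cb));
		memset(skb->cb, 0, len);
	}

	/* e.g. TCP, on entry to its layer: */
	skb_cb_clear(skb, sizeof(struct tcp_skb_cb));

With something like that in place, skb->cb[] could grow without every
allocation paying for the full clear.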

About the GRO layout change: the reason 'struct sk_buff *last;' is at
the end of struct napi_gro_cb is that this field is not used in the
fast path.
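
For reference, a trimmed sketch of the layout (see struct napi_gro_cb
in include/linux/netdevice.h for the full definition; only a few
fields are shown here):

	struct napi_gro_cb {
		void		*frag0;		/* fast path: header area */
		unsigned int	frag0_len;
		int		data_offset;	/* relative to skb->data */
		u16		flush;		/* non-zero: cannot merge */
		u16		count;		/* segments aggregated */
		/* ... more fast-path fields ... */

		/* Used only in the skb_gro_receive() slow path, so it
		 * sits in the last (coldest) part of the struct.
		 */
		struct sk_buff	*last;
	};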

Note: we could try to use one bit in the skb to advertise a zeroed
shinfo(skb).

Many skbs have a zeroed shinfo() (but with shinfo->dataref == 1), and
dereferencing skb_shinfo() adds a cache line miss.
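
The miss comes from where the shared info lives: skb_shinfo() is
defined in include/linux/skbuff.h roughly as

	/* shinfo sits at the end of the data buffer, i.e. in a
	 * different cache line from the sk_buff head itself.
	 */
	#define skb_shinfo(SKB)	((struct skb_shared_info *)(skb_end_pointer(SKB)))

so merely asking "is this shinfo all zero?" touches a line that the
fast path might otherwise never bring in.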

-> We could then avoid the

	memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
	atomic_set(&shinfo->dataref, 1);

in alloc_skb() and friends completely.
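
A very rough sketch of the idea; 'shinfo_zeroed' is an invented bit in
struct sk_buff, and real code would also have to keep clones and
dataref readers correct:

	/*
	 * While the (hypothetical) bit is set, shinfo is logically all
	 * zero with dataref == 1, but has never been written: alloc
	 * skipped the memset, and readers can skip the cache line.
	 * The first writer initializes it for real.
	 */
	static inline struct skb_shared_info *
	skb_shinfo_for_write(struct sk_buff *skb)
	{
		struct skb_shared_info *shinfo = skb_shinfo(skb);

		if (skb->shinfo_zeroed) {	/* invented flag */
			memset(shinfo, 0,
			       offsetof(struct skb_shared_info, dataref));
			atomic_set(&shinfo->dataref, 1);
			skb->shinfo_zeroed = 0;
		}
		return shinfo;
	}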

Unfortunately this kind of change would be quite invasive...



