Date:	Fri, 6 Jan 2012 13:29:45 +0000
From:	Ian Campbell <Ian.Campbell@...rix.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	David Miller <davem@...emloft.net>
Subject: Re: [PATCH 2/6] net: pad skb data and shinfo as a whole rather than
 individually

On Fri, 2012-01-06 at 12:33 +0000, Eric Dumazet wrote:
> On Friday, 06 January 2012 at 11:20 +0000, Ian Campbell wrote:
> 
> > It doesn't fit in a single cache line today.
> 
> It really does, thanks to your (net: pack skb_shared_info more
> efficiently) previous patch.
> 
> I don't understand your numbers; they are very hard to read.
> 
> Current net-next :
> 
> offsetof(struct skb_shared_info, nr_frags)=0x0
> offsetof(struct skb_shared_info, frags[1])=0x40   (0x30 on 32bit arches)
> 
> So _all_ fields, including one frag, are included in a single cache line
> on most machines (64-bytes cache line),

BTW, this also holds with my patch plus moving destructor_arg to the
front of the struct (at least for all the interesting fields; I chose
destructor_arg specifically because it did not seem interesting for
these purposes -- do you disagree?)

(gdb) print &((struct skb_shared_info *)0)->frags[1]
$1 = (skb_frag_t *) 0x48
but there is a cacheline boundary just before nr_frags:
(gdb) print &((struct skb_shared_info *)0)->nr_frags
$3 = (unsigned char *) 0x8

So the interesting fields total 0x48-0x8 = 0x40 bytes and the alignment
is such that this is a single cache line.

>  IF struct skb_shared_info is
> aligned.

Obviously the conditions for the above are a little different, but they
are, AFAIK, met.

Ian.

> 
> Your patch obviously breaks this on 64bit arches, unless you make sure
> sizeof(struct skb_shared_info) is a multiple of cache line.
> 
> [BTW, it is incidentaly the case after your 1/6 patch]
> 
> fields reordering is not going to change anything on this.
> 
> Or maybe I misread your patch ?
> 
> At least you claimed in Changelog : 
> 
> <quote>
>  Reducing this overhead means that sometimes the tail end of
>  the data can end up in the same cache line as the beginning of the shinfo but
>  in many cases the allocation slop means that there is no overlap.
> </quote>
> 
> 
> 

