Date:	Fri, 6 Jan 2012 13:20:01 +0000
From:	Ian Campbell <Ian.Campbell@...rix.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	David Miller <davem@...emloft.net>
Subject: Re: [PATCH 2/6] net: pad skb data and shinfo as a whole rather than
 individually

On Fri, 2012-01-06 at 12:33 +0000, Eric Dumazet wrote:
> Le vendredi 06 janvier 2012 à 11:20 +0000, Ian Campbell a écrit :
> 
> > It doesn't fit in a single cache line today.
> 
> It really does, thanks to your (net: pack skb_shared_info more
> efficiently) previous patch.
> 
> I don't understand your numbers, very hard to read.
> 
> Current net-next :
> 
> offsetof(struct skb_shared_info, nr_frags)=0x0
> offsetof(struct skb_shared_info, frags[1])=0x40   (0x30 on 32bit arches)

I see 0x48 here at cset 6386994e03ebbe60338ded3d586308a41e81c0dc:
(gdb) print &((struct skb_shared_info *)0)->nr_frags
$1 = (short unsigned int *) 0x0
(gdb) print &((struct skb_shared_info *)0)->frags[1]
$2 = (skb_frag_t *) 0x48
(gdb) print &((struct skb_shared_info *)0)->frags[0]
$3 = (skb_frag_t *) 0x38

(it's 0x34 and 0x2c on 32-bit), and these numbers match what I posted
for v3.1 (and, I imagine, earlier, since this stuff doesn't seem to
change very often).

I provided the offsets of each field in struct skb_shared_info; which
one do you think is wrong?

Remember that the end of the shared info is explicitly aligned (to the
end of the allocation, which is a cache line boundary) while the front
just ends up wherever the size dictates, so you can't measure the
alignment of the fields from the beginning of the struct; you need to
measure from the end.

Ian.

> So _all_ fields, including one frag, are included in a single cache line
> on most machines (64-bytes cache line), IF struct skb_shared_info is
> aligned.
> 
> Your patch obviously breaks this on 64bit arches, unless you make sure
> sizeof(struct skb_shared_info) is a multiple of cache line.
> 
> [BTW, it is incidentaly the case after your 1/6 patch]
> 
> fields reordering is not going to change anything on this.
> 
> Or maybe I misread your patch ?
> 
> At least you claimed in Changelog : 
> 
> <quote>
>  Reducing this overhead means that sometimes the tail end of
>  the data can end up in the same cache line as the beginning of the shinfo but
>  in many cases the allocation slop means that there is no overlap.
> </quote>
> 
> 
> 

