Date:	Mon, 30 Nov 2015 23:00:35 -0500
From:	Waiman Long <waiman.long@....com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
	Scott J Norton <scott.norton@....com>,
	Douglas Hatch <doug.hatch@....com>,
	Paul Turner <pjt@...gle.com>, Ben Segall <bsegall@...gle.com>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Yuyang Du <yuyang.du@...el.com>
Subject: Re: [RFC PATCH 3/3] sched/fair: Use different cachelines for readers
 and writers of load_avg

On 11/30/2015 05:29 PM, Peter Zijlstra wrote:
> On Mon, Nov 30, 2015 at 02:13:32PM -0500, Waiman Long wrote:
>>> This would only work if the structure itself is allocated with cacheline
>>> alignment, and looking at sched_create_group(), we use a plain kzalloc()
>>> for this, which doesn't guarantee any sort of alignment beyond machine
>>> word size IIRC.
>> With a RHEL 6 derived .config file, the size of the task_group structure was
>> 460 bytes on a 32-bit x86 kernel. Adding a ____cacheline_aligned tag
>> increased the size to 512 bytes. So it did make the structure a multiple of
>> the cacheline size. With both SLUB and SLAB, the allocated task_group
>> pointers from kzalloc() in sched_create_group() were all multiples of 0x200.
>> So they were properly aligned for the ____cacheline_aligned tag to work.
> Not sure we should rely on sl*b doing the right thing here.
> KMALLOC_MIN_ALIGN is explicitly set to sizeof(long long). If you want
> explicit alignment, one should use KMEM_CACHE().
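A minimal sketch of that KMEM_CACHE() route, for reference (not from the
patch; the cache variable and the init/alloc helpers below are illustrative
names, and it assumes struct task_group is visible as in kernel/sched):

#include <linux/cache.h>
#include <linux/init.h>
#include <linux/slab.h>

static struct kmem_cache *task_group_cachep __read_mostly;

void __init task_group_cache_init(void)
{
	/*
	 * KMEM_CACHE() passes __alignof__(struct task_group) as the slab
	 * alignment, so a ____cacheline_aligned member in the structure is
	 * honoured for every object, independent of kmalloc() size classes.
	 */
	task_group_cachep = KMEM_CACHE(task_group, SLAB_PANIC);
}

struct task_group *alloc_task_group(void)
{
	return kmem_cache_zalloc(task_group_cachep, GFP_KERNEL);
}

With that, the alignment would be guaranteed by the allocator itself rather
than implied by the kmalloc() size class.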

I think the current kernel uses power-of-2 kmem caches to satisfy kmalloc()
requests, except when the size is less than or equal to 192 bytes, where
some non-power-of-2 kmem caches are available. Given that the task_group
structure is large enough with FAIR_GROUP_SCHED enabled, we shouldn't hit
the case where the allocated buffer is not cacheline aligned.
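
For illustration only (hypothetical module code, not part of the patch),
the kind of check being relied on looks like this: with power-of-2 kmalloc
caches, a kzalloc() of a 512-byte ____cacheline_aligned structure is
expected to come back cache-line aligned, even though kmalloc() itself only
promises the minimum kmalloc alignment.

#include <linux/bug.h>
#include <linux/cache.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>

/*
 * Stand-in for task_group: ____cacheline_aligned pads sizeof() up to a
 * multiple of the cache line (512 bytes for the ~460-byte case above).
 */
struct tg_like {
	unsigned long pad[115];		/* roughly 460 bytes on 32-bit */
} ____cacheline_aligned;

static int __init align_check_init(void)
{
	struct tg_like *p = kzalloc(sizeof(*p), GFP_KERNEL);

	if (!p)
		return -ENOMEM;
	/*
	 * Expected to hold with the SLUB/SLAB power-of-2 kmalloc caches,
	 * but not guaranteed by the kmalloc() API itself.
	 */
	WARN_ON(!IS_ALIGNED((unsigned long)p, SMP_CACHE_BYTES));
	kfree(p);
	return 0;
}
module_init(align_check_init);
MODULE_LICENSE("GPL");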

Cheers,
Longman
