Open Source and information security mailing list archives
Message-ID: <57A94D7A.6080203@hpe.com>
Date:	Mon, 8 Aug 2016 23:26:50 -0400
From:	Waiman Long <waiman.long@....com>
To:	Christoph Lameter <cl@...ux.com>
CC:	Tejun Heo <tj@...nel.org>,
	Alexander Viro <viro@...iv.linux.org.uk>,
	Jan Kara <jack@...e.com>,
	Jeff Layton <jlayton@...chiereds.net>,
	"J. Bruce Fields" <bfields@...ldses.org>,
	<linux-fsdevel@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
	Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Andi Kleen <andi@...stfloor.org>,
	Dave Chinner <dchinner@...hat.com>,
	Boqun Feng <boqun.feng@...il.com>,
	Scott J Norton <scott.norton@....com>,
	Douglas Hatch <doug.hatch@....com>
Subject: Re: [PATCH v4 0/5] vfs: Use dlock list for SB's s_inodes list

On 07/27/2016 11:12 AM, Christoph Lameter wrote:
> On Mon, 25 Jul 2016, Tejun Heo wrote:
>
>> I don't get it.  What's the harm of using percpu memory here?  Other
>> percpu data structures have remote access too.  They're to a lower
>> degree but I don't see a clear demarcation line and making additions
>> per-cpu seems to have significant benefits here.  If there's a better
>> way of splitting the list and locking, sure, let's try that but short
>> of that I don't see anything wrong with doing this per-cpu.
> For the regular global declarations we have separate areas for "SHARED"
> per cpu data like this. See DECLARE_PER_CPU_SHARED* and friends.
>
> Even if you align a percpu_alloc() there is still the possibility that
> other percpu variables defined after this will suffer from aliasing.
> The aligning causes space to be wasted for performance critical areas
> where you want to minimize cache line usage. The variables cannot be
> packed as densely as before. I think allocations like this need to be
> separate. Simply allocate an array of these structs using
>
> 	kcalloc(nr_cpu_ids, sizeof(my_struct), GFP_KERNEL)?
>
> Why bother with percpu_alloc() if it's not per cpu data?
>
> Well if we do not care about that detail that much then lets continue going down this patch.
>

I think that makes sense. The various lists don't really need to be in 
the percpu area. Allocating them as a plain array may increase contention 
a bit when multiple CPUs try to access list heads that happen to share a 
cacheline. However, it can also speed up dlock list iteration, since 
fewer cachelines need to be traversed. I will change the code to allocate 
the head array with kcalloc() instead of percpu_alloc().

Thanks for the suggestion.

Cheers,
Longman
