Date:	Fri, 25 Jun 2010 01:00:23 +1000
From:	Nick Piggin <npiggin@...e.de>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	John Stultz <johnstul@...ibm.com>,
	Frank Mayhar <fmayhar@...gle.com>
Subject: Re: [patch 06/52] fs: scale files_lock

On Thu, Jun 24, 2010 at 09:52:17AM +0200, Peter Zijlstra wrote:
> On Thu, 2010-06-24 at 13:02 +1000, npiggin@...e.de wrote:
> > 
> > One difficulty with this approach is that a file can be removed from the list
> > by another CPU. We must track which per-cpu list the file is on.  Scalability
> > could suffer if files are frequently removed from different cpu's list.
> 
> 
> Is this really a lot less complex than what I did with my fine-grained
> locked list?

http://www.mail-archive.com/linux-kernel@vger.kernel.org/msg115071.html

Honestly, the filevec code seemed like overkill to me, and yes, it was a
bit complex. The only reason to consider it, AFAIKS, would be if the space
overhead of the per-cpu structures or the slowpath cost of the brlock
were unbearable.

filevecs probably don't perform as well in the fastpath. My patch doesn't
add any extra atomics: the cost of adding or removing a file from its
list is one atomic, for the spinlock.

The cost of adding a file with filevecs is a spinlock to put it on the
vec, a spinlock to take it off the vec, a spinlock to put it on the
lock-list. 3 atomics. A heap more icache and branches.

Removing a file with filevecs is a spinlock to check the vec, and 1 or 2
spinlocks to take it off the list (common case).

Scalability will be improved, but it will still hit the global list about
1 in 15 times (and there is not even lock batching on the list, though I
assume that could be fixed). Compare that with never for my patch (unless
there is a cross-CPU removal, in which case both approaches need to hit a
remote CPU's cacheline).

But before we even get to scalability, I think filevecs already lose on
complexity and single-threaded performance.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/