Message-ID: <20090330122712.GF31000@wotan.suse.de>
Date:	Mon, 30 Mar 2009 14:27:12 +0200
From:	Nick Piggin <npiggin@...e.de>
To:	Andi Kleen <andi@...stfloor.org>
Cc:	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 12/14] fs: dcache per-bucket dcache hash locking

On Mon, Mar 30, 2009 at 02:14:08PM +0200, Andi Kleen wrote:
> npiggin@...e.de writes:
> 
> > We can turn the dcache hash locking from a global dcache_hash_lock into
> > per-bucket locking.
> 
> Per bucket locking is typically a bad idea because you get far too
> many locks and you increase cache footprint with all of them. It's
> typically better to use a second much smaller hash table that only has
> locks (by just shifting the hash value down some more).
> Just need to be careful to avoid too much false sharing.

It's interesting. I suspect that, given the size of the dcache hash,
and assuming a fairly random distribution of access patterns, we would
be unlikely to share many cache lines (OK, the birthday paradox says
we'll get a few common cachelines, but how many?). So if we then have
to go through a 2nd lock hash table as well, that might actually
increase our cacheline footprint.

Of course, the RAM footprint will be higher.
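
To make the comparison concrete, below is a rough userspace sketch of
the second-lock-table scheme as I read it, with pthread mutexes
standing in for spinlocks; the names and sizes are made up for
illustration and are not from the patch.

/*
 * Sketch only: a large object hash protected by a much smaller table
 * of locks.  The lock index is derived from the bucket index by
 * shifting it down further, so a contiguous run of buckets shares one
 * lock.  pthread mutexes stand in for kernel spinlocks.
 */
#include <pthread.h>
#include <stddef.h>

#define HASH_BITS       20                      /* 1M object buckets */
#define LOCK_BITS       10                      /* but only 1K locks */
#define HASH_SIZE       (1UL << HASH_BITS)
#define LOCK_SIZE       (1UL << LOCK_BITS)

struct entry {                                  /* stand-in for a dentry */
        struct entry *next;
        unsigned long hash;
};

static struct entry *hash_table[HASH_SIZE];
static pthread_mutex_t lock_table[LOCK_SIZE];

static void tables_init(void)
{
        unsigned long i;

        for (i = 0; i < LOCK_SIZE; i++)
                pthread_mutex_init(&lock_table[i], NULL);
}

/* Map a bucket index to the (shared) lock that protects it. */
static pthread_mutex_t *bucket_lock(unsigned long bucket)
{
        return &lock_table[bucket >> (HASH_BITS - LOCK_BITS)];
}

static void hash_insert(struct entry *e)
{
        unsigned long bucket = e->hash & (HASH_SIZE - 1);
        pthread_mutex_t *lock = bucket_lock(bucket);

        pthread_mutex_lock(lock);
        e->next = hash_table[bucket];
        hash_table[bucket] = e;
        pthread_mutex_unlock(lock);
}

The point being that a lookup then touches the cacheline of the shared
lock in addition to the bucket head, which is the extra footprint I am
wondering about above.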

Anyway, I did think of this and it is something to discuss in the
future, but for now at least it demonstrates how easy it becomes to
change the locking once we have broken it up into these components.
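
For reference, per-bucket locking in the sense of this patch looks
roughly like the sketch below; again this is only a userspace
approximation with pthread mutexes, not the patch code, and the names
are illustrative.

/*
 * Sketch only: each hash bucket carries its own lock, so the lock that
 * protects a chain can sit next to the chain head.  The cost is one
 * lock per bucket, hence the larger RAM footprint mentioned above.
 */
#include <pthread.h>
#include <stddef.h>

#define HASH_BITS       20
#define HASH_SIZE       (1UL << HASH_BITS)

struct entry {                                  /* stand-in for a dentry */
        struct entry *next;
        unsigned long hash;
};

struct bucket {
        pthread_mutex_t lock;                   /* protects .head and its chain */
        struct entry *head;
};

static struct bucket hash_table[HASH_SIZE];

static void table_init(void)
{
        unsigned long i;

        for (i = 0; i < HASH_SIZE; i++)
                pthread_mutex_init(&hash_table[i].lock, NULL);
}

/* Insert under the bucket's own lock; no global dcache_hash_lock. */
static void hash_insert(struct entry *e)
{
        struct bucket *b = &hash_table[e->hash & (HASH_SIZE - 1)];

        pthread_mutex_lock(&b->lock);
        e->next = b->head;
        b->head = e;
        pthread_mutex_unlock(&b->lock);
}

Here the lock lives alongside the chain head, at the cost of one lock
per bucket, which is where the larger RAM footprint comes from.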
