Date:	Mon, 30 Mar 2009 14:59:46 +0200
From:	Nick Piggin <npiggin@...e.de>
To:	Andi Kleen <andi@...stfloor.org>
Cc:	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 12/14] fs: dcache per-bucket dcache hash locking

On Mon, Mar 30, 2009 at 02:47:35PM +0200, Andi Kleen wrote:
> On Mon, Mar 30, 2009 at 02:27:12PM +0200, Nick Piggin wrote:
> > It's interesting. I suspect that with the size of the dcache hash,
> > if we assume pretty random distribution of access patterns, then
> > it might be unlikely to get much common cache lines (ok, birthday
> 
> The problem is that you increase the cache foot print overall
> because these hash tables are gigantic. And because it's random
> there will not be much locality. That is your hash table
> might still fit when you're lucky, but then if the rest
> of your workload needs a lot of cache too you might
> end up with a cache miss on every access.

Hmm, I disagree in general: because the hash table is so big, it is
very unlikely we get much sharing whether or not we double its size.
Even if the workload only uses a few dentries, they will be scattered
all over the table, and each lookup will touch one cacheline regardless
of the bucket head size.

Whereas if we have to go to a separate lock table each time, then we
have to touch 2 cachelines per lookup.

Actually I have patches floating around to dynamically resize the
dcache hash table, and in that case we could make it very small, so it
fits in cache for workloads that don't have too many dentries.

But anyway let's not worry too much about this yet. I agree it
has downsides whatever direction we go, so we can discuss or
measure after the basics of the patchset are more mature.


> False sharing is not the issue with the big lock hash typically, that was 
> more as an issue for a potential separate hash table design 
> (I guess my original sentence was a bit confusing)
> 
> BTW the alternative would be to switch the hash table to some
> large fan out tree indexed by the string hash value and then use
> the standard lockless algorithms on that.

Well yes that's the other thing we could try.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
