Date:	Wed, 16 Jun 2010 14:27:38 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Nick Piggin <npiggin@...e.de>
Cc:	netdev@...r.kernel.org
Subject: Re: rt hash table / rt hash locks question

On Wednesday, 16 June 2010 at 20:46 +1000, Nick Piggin wrote:
> I'm just converting this scalable dentry/inode hash table to a more
> compact form. I was previously using a dumb spinlock per bucket,
> but this doubles the size of the tables, so it isn't production quality.
> 

Yes, we had this in the past (one rwlock or spinlock per hash chain),
and it was not very good with LOCKDEP on.

> What I've done at the moment is to use a bit_spinlock in bit 0 of each
> list pointer of the table. Bit spinlocks are now pretty nice because
> we can do __bit_spin_unlock(), which gives a non-atomic store with release
> ordering, so it should be almost as fast as spinlock.
> 
> But I look at rt hash and it seems you use a small hash on the side
> for spinlocks. So I wonder, pros for each:
> 
> - bitlocks have effectively zero storage
    Yes, but a mask is needed to get the head pointer. Special care must
also be taken when inserting/deleting a node in the chain, to keep this
bit set.
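
A minimal sketch of that scheme, assuming hlist-style chains; the bl_*
names are made up for illustration, while bit_spin_lock() and
__bit_spin_unlock() are the real kernel primitives:

#include <linux/bit_spinlock.h>
#include <linux/list.h>

struct bl_head {
	struct hlist_node *first;	/* bit 0 doubles as the bucket lock */
};

static inline void bl_lock(struct bl_head *h)
{
	bit_spin_lock(0, (unsigned long *)&h->first);
}

static inline void bl_unlock(struct bl_head *h)
{
	/* non-atomic store with release ordering */
	__bit_spin_unlock(0, (unsigned long *)&h->first);
}

/* the mask mentioned above: strip the lock bit to get the real head */
static inline struct hlist_node *bl_first(struct bl_head *h)
{
	return (struct hlist_node *)((unsigned long)h->first & ~1UL);
}

/* an insert under the lock must keep bit 0 set in the new head pointer,
 * otherwise the plain store would silently drop the lock
 * (pprev fixup omitted for brevity) */
static inline void bl_add_head(struct hlist_node *n, struct bl_head *h)
{
	n->next = bl_first(h);
	h->first = (struct hlist_node *)((unsigned long)n | 1UL);
}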

> - bitlocks hit the same cacheline that the hash walk hits.
    yes
> - in an RCU list, locked hash walks are usually followed by hash
>   modification, so the bitlock should have brought in the line in
>   exclusive state.
    But we usually perform a read-only lookup, _then_ take the lock to
perform a new lookup before the insert. So at the time we would take the
bitlock, the cache line is in shared state. With spinlocks, we always use
exclusive mode, but on a separate cache line...
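
In rough kernel C, the pattern looks like this (struct obj, struct bucket
and __obj_find() are hypothetical names, and refcounting is omitted):

#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct obj {
	struct hlist_node node;
	u32 key;
};

struct bucket {
	spinlock_t lock;	/* hashed spinlock: separate cache line */
	struct hlist_head chain;
};

static struct obj *__obj_find(struct bucket *b, u32 key); /* chain walk */

static struct obj *obj_lookup_or_insert(struct bucket *b, u32 key,
					struct obj *new)
{
	struct obj *o;

	rcu_read_lock();
	o = __obj_find(b, key);	/* read-only walk: line ends up Shared */
	rcu_read_unlock();
	if (o)
		return o;	/* real code takes a reference before this */

	spin_lock(&b->lock);	/* a bitlock here would hit the Shared line */
	o = __obj_find(b, key);	/* re-lookup under the lock */
	if (!o) {
		hlist_add_head_rcu(&new->node, &b->chain);
		o = new;
	}
	spin_unlock(&b->lock);
	return o;
}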

> - with bitlocks, the number of locks scales with the hash size
    Yes, but concurrency is more a function of online CPUs, given we use
jhash.
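
The side table is roughly of this shape (an abridged sketch of the scheme,
not the verbatim net/ipv4/route.c code): a small power-of-two array of
spinlocks shared by many buckets, sized from the CPU count rather than
from the table size.

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

#define HASH_LOCK_SZ	256	/* scaled from NR_CPUS, not table size */
static spinlock_t *hash_locks;

static inline spinlock_t *hash_lock_addr(unsigned int slot)
{
	return &hash_locks[slot & (HASH_LOCK_SZ - 1)];
}

static int __init hash_locks_init(void)
{
	int i;

	/* one contiguous chunk: see the NUMA remark further down */
	hash_locks = kmalloc(HASH_LOCK_SZ * sizeof(spinlock_t), GFP_KERNEL);
	if (!hash_locks)
		return -ENOMEM;
	for (i = 0; i < HASH_LOCK_SZ; i++)
		spin_lock_init(&hash_locks[i]);
	return 0;
}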

> - spinlocks may be slightly better at the cacheline level (bitops
>   sometimes require an explicit load which may not acquire the exclusive
>   line on some archs). On x86 and ll/sc architectures, this shouldn't
>   be a problem.
    Yes, you can add fairness (if the ticket spinlock variant is used), but
on the route cache I really doubt it makes a difference.

> - spinlocks give better debugging (this could be overcome with a LOCKDEP
>   option to revert to spinlocks, but that's a bit ugly).
	Definitely a good thing.

> - in practice, contention due to aliasing in the bucket-to-lock mapping
>   is probably fairly minor.
     Agreed
> 
> Net code is obviously tested and tuned well, but instinctively I would
> have thought bitlocks are the better way to go. Any comments on this?

Well, to be honest, this code is rather old, and at the time I wrote it,
bitlocks were probably not available.

You can add:

- One downside of the hashed spinlocks is X86_INTERNODE_CACHE_SHIFT being
12 on X86_VSMP: all the locks probably land in the same internode block :(

- Another downside is that all the locks currently sit on a single NUMA
node, since we kmalloc() them in one contiguous chunk.
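
One way around that single-node placement, purely as an illustrative
sketch (it assumes dense node ids, leaves out internode-block padding,
and adds a pointer indirection per lock):

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/nodemask.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

#define HASH_LOCK_SZ	256	/* same size as in the sketch above */
static spinlock_t *hash_lock_ptrs[HASH_LOCK_SZ];

static int __init hash_locks_init_numa(void)
{
	unsigned int i;

	for (i = 0; i < HASH_LOCK_SZ; i++) {
		int node = i % num_online_nodes();	/* round-robin */
		spinlock_t *l = kmalloc_node(sizeof(*l), GFP_KERNEL, node);

		if (!l)
			return -ENOMEM;
		spin_lock_init(l);
		hash_lock_ptrs[i] = l;
	}
	return 0;
}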

So I guess it would be worth a try :)


