Message-ID: <20070305044200.GA31098@wotan.suse.de>
Date:	Mon, 5 Mar 2007 05:42:00 +0100
From:	Nick Piggin <npiggin@...e.de>
To:	David Miller <davem@...emloft.net>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [rfc][patch] dynamic resizing dentry hash using RCU

On Mon, Mar 05, 2007 at 05:27:24AM +0100, Nick Piggin wrote:
> On Sun, Mar 04, 2007 at 08:11:42PM -0800, David Miller wrote:
> > One minor nit:
> > 
> > > +struct dentry_hash {
> > > +	unsigned int shift;
> > > +	unsigned long mask;
> > > +	struct hlist_head *table;
> > > +};
> > 
> > I don't see any reason to make 'mask' an unsigned long
> > and this makes this structure take up 8 bytes more than
> > necessary on 64-bit.
> 
> You're right, the mask is currently just an int, so my patch should
> not be messing with that. Thanks.
> 
> The other thing you'll have to be careful of when looking at doing
> an implementation, is that I think I forgot to use the RCU accessors
> (rcu_assign_pointer, rcu_dereference) when assigning and loading
> the new/old hash table pointers.
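
For illustration, something like the below (an untested sketch;
cur_hash, alloc_table() and d_hash_fn() are made-up names, and the
mask is narrowed back to an int per the nit above):

	struct dentry_hash {
		unsigned int shift;
		unsigned int mask;
		struct hlist_head *table;
	};

	static struct dentry_hash *cur_hash;

	/* Resize/publish side: rcu_assign_pointer() orders the
	 * initializing stores before the pointer becomes visible
	 * to readers. */
	new = kmalloc(sizeof(*new), GFP_KERNEL);
	new->shift = shift;
	new->mask = (1U << shift) - 1;
	new->table = alloc_table(1U << shift);	/* made-up helper */
	rcu_assign_pointer(cur_hash, new);

	/* Read side, under rcu_read_lock(), pairing with the above: */
	hash = rcu_dereference(cur_hash);
	head = &hash->table[d_hash_fn(parent, name) & hash->mask];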

Also, now that I think of it, if resize performance is far, far less
important than read-side performance, then you should be able to get
away without the additional smp_rmb() that I add to the read side in
the algorithm I described.

All you have to do is introduce an additional RCU grace period between
assigning the old pointer to the current table and assigning the
current pointer to the new table. This will still ensure that an
(old_table == NULL && cur_table == new_table) condition is impossible
(that condition would mean that one of our active tables is invisible
to the reader).
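
Concretely, the resize side would then go something like this (again
an untested sketch; old_hash/cur_hash are the two pointers readers
consult, new is the already-populated replacement table, and
free_table() is a made-up helper):

	/* Resizer, serialized against other resizers by a lock
	 * not shown here. */
	old = cur_hash;
	rcu_assign_pointer(old_hash, old);
	synchronize_rcu();	/* the extra grace period: any reader
				 * able to see the new cur_hash below
				 * must also see old_hash != NULL */
	rcu_assign_pointer(cur_hash, new);

	/* ... migrate entries from old->table to new->table ... */

	rcu_assign_pointer(old_hash, NULL);
	synchronize_rcu();	/* wait out readers of the old table */
	free_table(old);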

This should truly reduce read-side fastpath overhead to just a single,
predictable branch, an extra read_mostly load or two, and the rcu_blah()
bits.
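
That is, the reader fastpath could end up looking roughly like this
(same caveats as above; lookup_in_table() is a made-up stand-in for
the usual hash-chain walk):

	struct dentry_hash *hash, *old;

	rcu_read_lock();
	hash = rcu_dereference(cur_hash);
	old = rcu_dereference(old_hash);  /* NULL except during resize */
	dentry = lookup_in_table(hash, parent, name);
	if (unlikely(old) && !dentry)	  /* the one predictable branch */
		dentry = lookup_in_table(old, parent, name);
	rcu_read_unlock();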
