Message-ID: <20150310181720.GB13155@casper.infradead.org>
Date:	Tue, 10 Mar 2015 18:17:20 +0000
From:	Thomas Graf <tgraf@...g.ch>
To:	Herbert Xu <herbert@...dor.apana.org.au>
Cc:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
	josh@...htriplett.org,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [PATCH 2/2] rhashtable: Add arbitrary rehash function

On 03/10/15 at 09:27am, Herbert Xu wrote:
> This patch adds a rehash function that supports the use of any
> hash function for the new table.  This is needed to support changing
> the random seed value during the lifetime of the hash table.
> 
> However for now the random seed value is still constant and the
> rehash function is simply used to replace the existing expand/shrink
> functions.
> 
> Signed-off-by: Herbert Xu <herbert.xu@...hat.com>

> +static int rhashtable_rehash_one(struct rhashtable *ht, unsigned old_hash)
> +	struct bucket_table *new_tbl = rht_dereference(ht->future_tbl, ht);
> +	struct bucket_table *old_tbl = rht_dereference(ht->tbl, ht);
> +	struct rhash_head __rcu **pprev = &old_tbl->buckets[old_hash];

[...]

I absolutely love the simplification this brings. Looks great. I now
also understand what you meant by entry-by-entry rehashing.
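For anyone else following along, the entry-by-entry relink can be modeled
in plain C roughly like this. This is a userspace sketch with illustrative
names (struct entry, struct table, rehash_one), not the kernel rhashtable
API: each step unlinks the head entry of an old-table bucket and relinks
it into the new table at the position given by the new table's seed.

```c
#include <stddef.h>

/* Illustrative userspace model of entry-by-entry rehashing; names
 * and the toy hash are assumptions, not the kernel code. */

struct entry {
	unsigned int key;
	struct entry *next;
};

struct table {
	struct entry **buckets;
	unsigned int size;     /* power of two */
	unsigned int hash_rnd; /* per-table seed, as in the patch */
};

static unsigned int hash(const struct table *t, unsigned int key)
{
	/* toy hash: mix in the per-table random seed */
	return (key ^ t->hash_rnd) & (t->size - 1);
}

/* Move one entry from old_tbl bucket old_hash into new_tbl.
 * Returns 0 on success, -1 if the bucket is already empty. */
static int rehash_one(struct table *old_tbl, struct table *new_tbl,
		      unsigned int old_hash)
{
	struct entry *e = old_tbl->buckets[old_hash];
	unsigned int new_hash;

	if (!e)
		return -1;

	/* unlink from the old bucket ... */
	old_tbl->buckets[old_hash] = e->next;

	/* ... and relink at the head of the new bucket */
	new_hash = hash(new_tbl, e->key);
	e->next = new_tbl->buckets[new_hash];
	new_tbl->buckets[new_hash] = e;
	return 0;
}

/* Drain one old bucket, one entry at a time. */
static void rehash_bucket(struct table *old_tbl, struct table *new_tbl,
			  unsigned int old_hash)
{
	while (rehash_one(old_tbl, new_tbl, old_hash) == 0)
		;
}
```

The key property is that each individual relink is a small, self-contained
step, which is what lets the real code interleave it with concurrent
lookups under RCU.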

> +static void rhashtable_rehash(struct rhashtable *ht,
> +			      struct bucket_table *new_tbl)
>  {
> -	ASSERT_BUCKET_LOCK(ht, new_tbl, new_hash);
> +	struct bucket_table *old_tbl = rht_dereference(ht->tbl, ht);
> +	unsigned old_hash;
> +
> +	get_random_bytes(&new_tbl->hash_rnd, sizeof(new_tbl->hash_rnd));
> +
> +	/* Make insertions go into the new, empty table right away. Deletions
> +	 * and lookups will be attempted in both tables until we synchronize.
> +	 * The synchronize_rcu() guarantees for the new table to be picked up
> +	 * so no new additions go into the old table while we relink.
> +	 */
> +	rcu_assign_pointer(ht->future_tbl, new_tbl);

I think you need a synchronize_rcu() here. rhashtable_remove() must
either be guaranteed to see tbl != old_tbl and thus consider both
tables, or the rehashing must be guaranteed not to start while a
removal is already in progress.
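To spell out the window I'm worried about: during a rehash an entry can
sit in either the old table or the future table, so a remover that only
walks the table it saw at lookup time can miss a relinked entry. A
userspace sketch (illustrative names such as remove_both, not the kernel
API; no locking shown):

```c
#include <stddef.h>

/* Illustrative userspace model: why removal must consult both the
 * current and the future table during a rehash. */

struct node {
	unsigned int key;
	struct node *next;
};

struct tbl {
	struct node **buckets;
	unsigned int size;     /* power of two */
	unsigned int hash_rnd;
};

static unsigned int bucket_of(const struct tbl *t, unsigned int key)
{
	return (key ^ t->hash_rnd) & (t->size - 1);
}

/* Unlink the first entry matching key; -1 if it is not in this table. */
static int remove_from(struct tbl *t, unsigned int key)
{
	struct node **pprev = &t->buckets[bucket_of(t, key)];
	struct node *n;

	for (n = *pprev; n; pprev = &n->next, n = *pprev) {
		if (n->key == key) {
			*pprev = n->next;
			return 0;
		}
	}
	return -1;
}

/* A remove racing with rehashing must also try the future table;
 * without a synchronize_rcu() after publishing future_tbl, a remover
 * may never observe it and miss an already-relinked entry. */
static int remove_both(struct tbl *t, struct tbl *future, unsigned int key)
{
	if (remove_from(t, key) == 0)
		return 0;
	return future != t ? remove_from(future, key) : -1;
}
```

The synchronize_rcu() after rcu_assign_pointer(ht->future_tbl, new_tbl)
is what guarantees every remover still in flight either started after
the new table was visible or has already finished against the old one.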

> -static void __rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj,
> -				struct bucket_table *tbl,
> -				const struct bucket_table *old_tbl, u32 hash)
> +bool __rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj,
> +			 bool (*compare)(void *, void *), void *arg)

Why make this non-static? We already have rhashtable_lookup_compare_insert()
exported.

> +bool __rhashtable_remove(struct rhashtable *ht, struct bucket_table *tbl,
> +			 struct rhash_head *obj)

Same here.
