Date:	Mon, 23 Feb 2015 15:07:20 -0800
From:	Alexei Starovoitov <alexei.starovoitov@...il.com>
To:	David Miller <davem@...emloft.net>
Cc:	Paul McKenney <paulmck@...ux.vnet.ibm.com>,
	Thomas Graf <tgraf@...g.ch>,
	Josh Triplett <josh@...htriplett.org>,
	Herbert Xu <herbert@...dor.apana.org.au>,
	Patrick McHardy <kaber@...sh.net>,
	"ying.xue" <ying.xue@...driver.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	netfilter-devel <netfilter-devel@...r.kernel.org>
Subject: Re: Ottawa and slow hash-table resize

On Mon, Feb 23, 2015 at 2:34 PM, David Miller <davem@...emloft.net> wrote:
> From: Alexei Starovoitov <alexei.starovoitov@...il.com>
> Date: Mon, 23 Feb 2015 14:17:06 -0800
>
>> I'm not sure all of these counting optimizations will help in
>> the end. Say we have a small table and a lot of inserts arrive
>> at the same time. rhashtable_expand kicks in and all new inserts
>> go into the future table while the expansion is happening.
>> Since expand kicks in quickly, the old table will not have long
>> per-bucket chains, so only a few unzips and their corresponding
>> synchronize_rcu calls are needed and the expand finishes.
>> Now the future table becomes the only table, but it already has
>> a lot of entries, since insertions kept happening, and its
>> per-bucket chains are long, so the next expand will need a lot
>> of synchronize_rcu calls and will take a very long time.
>> So whether we count while inserting or not, and whether we grow
>> by 2x or by 8x, we still have the underlying problem of a very
>> large number of synchronize_rcu calls.
>> A malicious user who knows this can stall the whole system.
>> Please tell me I'm missing something.
>
> This is why I have just suggested that we make inserts block,
> and have the expander look at the count of pending inserts so
> that it can keep expanding the table further, if necessary,
> before releasing the blocked insertion threads.

Yes, blocking inserts would solve the problem, but it would
limit rhashtable's applicability.
Also, the 'count of pending inserts' is going to be very small
and largely meaningless, since it only counts the number of
threads sleeping on insert, which is not very useful for
predicting future expansions.
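
Just to make the cost concrete, the unzip-based expand we are
talking about is roughly the following (heavily simplified sketch
with made-up helper names, not the literal lib/rhashtable.c code):

	/*
	 * Sketch only: each old chain may need several unzip passes,
	 * and every pass that unlinks entries from the old chain has
	 * to wait for a grace period before the stale links are gone.
	 */
	for (i = 0; i < old_tbl->size; i++) {
		while (!chain_fully_unzipped(old_tbl, new_tbl, i)) {
			unzip_one_pass(old_tbl, new_tbl, i);
			synchronize_rcu();	/* one grace period per pass */
		}
	}

The number of passes is exactly what grows while the previous
expand is still running, so the grace period count can explode.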
As an alternative we could store two chain pointers per element
(one chain for the old table and another for the future table).
That would avoid all of the unzipping logic at the cost of extra
memory. Maybe there are other solutions.
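
The two-chain layout would look something like this (purely
illustrative, made-up name; struct rhash_head is the existing
per-element link):

	/*
	 * Hypothetical layout: each element is linked into the old
	 * table and the future table independently, so a resize never
	 * has to unzip a shared chain and never has to wait for a
	 * grace period per bucket. Costs one extra pointer per element.
	 */
	struct rhash_two_heads {
		struct rhash_head old_link;	/* chain in the current table */
		struct rhash_head new_link;	/* chain in the future table  */
	};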
I think, as a minimum, something like 'blocking inserts' is
needed for stable. IMO the current situation with nft_hash is
not just annoying, it's a bug.
