Date:	Fri, 20 Mar 2015 10:27:01 +0000
From:	Patrick McHardy <kaber@...sh.net>
To:	Herbert Xu <herbert@...dor.apana.org.au>
Cc:	Thomas Graf <tgraf@...g.ch>, David Miller <davem@...emloft.net>,
	netdev@...r.kernel.org, Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [v1 PATCH 7/14] netfilter: Use rhashtable_lookup instead of
 lookup_compare

On 20.03, Herbert Xu wrote:
> On Fri, Mar 20, 2015 at 09:59:09AM +0000, Patrick McHardy wrote:
> >
> > Regarding the chain length as trigger - I'm sorry, but this doesn't work
> > for us. I don't see why you would have to look at chain length. That
> > implies that you don't trust your hash function - why not fix that
> > instead?
> 
> Any hash function can be attacked.  That's why we need to be able
> to rehash it.  And the best way to decide when to rehash is based
> on chain length (otherwise you'd waste time rehashing periodically
> like we used to do).  With name spaces these days anyone could be
> an adversary.

We already had this discussion. I strongly do not believe this is
the right way to fix namespace problems. There are millions of ways
of creating CPU intensive workloads. You need to be able to put
bounds on the entire namespace. Fixing individual spots will not
solve that problem.
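
For illustration, a minimal sketch of the chain-length trigger described
above: count the bucket chain on insert and flag a rehash once it exceeds a
bound, rather than rehashing periodically. All names and the threshold are
hypothetical; this is a standalone userspace sketch, not the kernel
rhashtable code.

#include <stdbool.h>
#include <stddef.h>

#define MAX_CHAIN_LEN 16                /* hypothetical bound on chain length */

struct node {
	struct node *next;
	unsigned long key;
};

struct table {
	struct node **buckets;
	size_t nbuckets;
	unsigned long seed;             /* a rehash picks a fresh seed */
};

static size_t hash_bucket(const struct table *t, unsigned long key)
{
	/* placeholder mix; a real table would use a seeded hash (e.g. jhash) */
	return ((key ^ t->seed) * 2654435761UL) % t->nbuckets;
}

/*
 * Insert and report whether the chain we landed on is suspiciously long,
 * which is the signal to schedule a rehash under a new seed.
 */
static bool insert_needs_rehash(struct table *t, struct node *n)
{
	size_t b = hash_bucket(t, n->key);
	size_t chain_len = 0;
	struct node *p;

	for (p = t->buckets[b]; p; p = p->next)
		chain_len++;

	n->next = t->buckets[b];
	t->buckets[b] = n;

	return chain_len + 1 > MAX_CHAIN_LEN;
}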

> Besides, putting multiple objects with the same key into a hash
> table defeats the whole point of hashing.

They exist only for (very) short periods of time. It's simply not a
problem in our case. We could even put hard bounds on them, meaning
an element will exist at most twice during that period.

> > > Of course many hash table users need to be able to keep multiple
> > > objects under the same key.  My suggestion would be to allocate
> > > your own linked list and have the linked list be the object that
> > > is inserted into the hash table.
> > 
> > That would require a huge amount of extra memory per element and having
> > millions of them is not unrealistic for our use case.
> 
> You should be able to do it with just 8 extra bytes per unique
> hash table key.

That's something like 25% more memory usage for us in common cases. We try
very hard to keep the active memory size small. I don't want to waste
that amount of memory just for the very short periods while transactions
are unconfirmed.
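
For illustration, one way to read the proposal being discussed: a single
head object per unique key is what gets inserted into the hash table, and
the short-lived duplicate elements hang off a singly linked list behind it,
so the per-unique-key overhead is one extra pointer (8 bytes on 64-bit).
Names are hypothetical and this is only a sketch of the idea, not the
nftables set implementation.

#include <stddef.h>

struct elem {
	struct elem *same_key_next;     /* next element carrying the same key */
	unsigned long key;
	/* ... payload ... */
};

/* The object actually inserted into the hash table, one per unique key. */
struct key_head {
	struct key_head *hash_next;     /* bucket chaining in the hash table */
	struct elem *first;             /* the extra pointer: same-key list head */
};

/* Attach another (short-lived) element under an already-hashed key. */
static void key_head_add(struct key_head *h, struct elem *e)
{
	e->same_key_next = h->first;
	h->first = e;
}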
