Date:   Wed, 16 Jun 2021 13:42:17 +0000
From:   David Laight <David.Laight@...LAB.COM>
To:     'Amit Klein' <aksecurity@...il.com>
CC:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "stable@...r.kernel.org" <stable@...r.kernel.org>,
        Eric Dumazet <edumazet@...gle.com>, Willy Tarreau <w@....eu>,
        "David S. Miller" <davem@...emloft.net>,
        Sasha Levin <sashal@...nel.org>
Subject: RE: [PATCH 5.4 175/244] inet: use bigger hash table for IP ID
 generation

From: Amit Klein
> Sent: 16 June 2021 14:20
> 
> On Wed, Jun 16, 2021 at 1:19 PM David Laight <David.Laight@...lab.com> wrote:
> 
> > Can someone explain why this is a good idea for a 'normal' system?
> 
> This patch mitigates some techniques that leak internal state due to
> table hash collisions.

As you say below, it isn't really effective...

> > Why should my desktop system 'waste' 2MB of memory on a massive
> > hash table that I don't need?
> 
> In the patch's defense, it only consumes 2MB when the physical RAM is >= 16GB.

Right, but I've 16GB of memory because I run some very large
applications that need all the memory they can get (and more).

In any case, for many architectures 8GB is 'entry level'.
That includes some of the 'small' ARM CPUs.
They are unlikely to have tens of connections, never mind
thousands.
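
(For scale, a purely illustrative sketch of the kind of RAM-proportional
sizing being discussed - the constants and the ~8 bytes per slot
assumption are made up here, not taken from the patch; only the 2MB
at >= 16GB figure comes from the message above.)

#include <stdint.h>

/*
 * Illustrative only: pick a power-of-two table size that grows with
 * physical RAM, clamped between an arbitrary floor and ceiling.
 * Assuming ~8 bytes per slot, a 256K-entry ceiling is roughly the
 * 2MB mentioned above for a >= 16GB machine.
 */
static uint32_t pick_table_size(uint64_t ram_bytes)
{
	uint64_t target = ram_bytes >> 16;	/* one slot per 64KB of RAM (arbitrary) */
	uint32_t size = 2048;			/* arbitrary floor for small systems */

	while (size < target && size < (256u * 1024))
		size <<= 1;			/* keep it a power of two */
	return size;
}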

> > It might be needed by systems that handle massive numbers
> > of concurrent connections - but that isn't 'most systems'.
> >
> > Surely it would be better to detect when the number of entries
> > is comparable to the table size and then resize the table.
> 
> Security-wise, this approach is not effective. The table size was
> increased to reduce the likelihood of hash collisions. These start
> happening when you have ~N^(1/2) elements (for table size N), so
> you'll need to resize pretty quickly anyway.

You really have to live with some hash collisions.
Resizing at N^(1/2) just doesn't scale.
Not only that, the hash function will stop generating non-colliding
values once the table starts getting big.
(A very expensive hash function might help - but I wouldn't count on it.)

In any case, the chance of a new entry hitting a hash collision is
slightly less than (active_entries / table_size).
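
(As a back-of-the-envelope check of both claims - collisions become
likely around ~N^(1/2) live entries, and a single new entry collides
with probability a bit under active_entries/table_size - here is a
small standalone program; the two table sizes are just examples.)

#include <math.h>
#include <stdio.h>

int main(void)
{
	const double sizes[] = { 2048, 256 * 1024 };

	for (int i = 0; i < 2; i++) {
		double n = sizes[i];
		/* Birthday bound: ~50% chance of at least one collision
		 * once roughly 1.177 * sqrt(n) entries are live. */
		double birthday = 1.177 * sqrt(n);
		/* Chance that one new entry lands in an occupied slot
		 * with k live entries: 1 - (1 - 1/n)^k, slightly below k/n. */
		double k = n / 4;
		double p_new = 1.0 - pow(1.0 - 1.0 / n, k);

		printf("n=%.0f: ~50%% collision by ~%.0f entries; with %.0f live, "
		       "a new entry collides with p=%.3f (k/n=%.3f)\n",
		       n, birthday, k, p_new, k / n);
	}
	return 0;
}

(Compile with -lm.)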

What you want to avoid is the ELF symbol hash situation, where the
average chain length is about 1.5 and you have to separately check
the hash table of every shared library.
That quickly balloons to a lot of string compares.
(I looked at that for Mozilla many years ago - it was horrid.)
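
(For reference, the SysV ELF symbol hash alluded to is the standard
published function below - this comes from the System V ABI, not from
the patch. Each shared object carries its own bucket/chain arrays,
so a symbol that is not found has its chain walked - one string
compare per entry - in every library on the search path.)

#include <stdint.h>

/* Classic SysV ELF symbol hash as specified in the System V ABI. */
static uint32_t elf_hash(const unsigned char *name)
{
	uint32_t h = 0;

	while (*name) {
		h = (h << 4) + *name++;
		uint32_t g = h & 0xf0000000u;
		if (g)
			h ^= g >> 24;
		h &= ~g;
	}
	return h;
}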

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
