Date:   Sat, 1 Oct 2022 23:15:29 +0200
From:   Willy Tarreau <w@....eu>
To:     Eric Dumazet <eric.dumazet@...il.com>
Cc:     "David S . Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        netdev <netdev@...r.kernel.org>,
        Eric Dumazet <edumazet@...gle.com>,
        Christophe Leroy <christophe.leroy@...roup.eu>
Subject: Re: [PATCH net-next] once: add DO_ONCE_SLOW() for sleepable contexts

Hi Eric,

On Sat, Oct 01, 2022 at 01:51:02PM -0700, Eric Dumazet wrote:
> From: Eric Dumazet <edumazet@...gle.com>
> 
> Christophe Leroy reported a ~80ms latency spike
> happening at first TCP connect() time.

Seeing Christophe's message also made me wonder if we didn't break
something back then :-/

> This is because __inet_hash_connect() uses get_random_once()
> to populate a perturbation table which became quite big
> after commit 4c2c8f03a5ab ("tcp: increase source port perturb table to 2^16")
> 
> get_random_once() uses DO_ONCE(), which blocks hard irqs for the
> duration of the operation.
> 
> This patch adds DO_ONCE_SLOW() which uses a mutex instead of a spinlock
> for operations where we prefer to stay in process context.
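
(For context, the gist of DO_ONCE_SLOW() is simply to serialize the
one-time initialization with a mutex, which may sleep, instead of the
spinlock taken with hard irqs off in DO_ONCE(). A rough, simplified
sketch of that idea follows; names are illustrative and this is not
the actual patch, which is built as a macro around a static key:)

    /*
     * Rough sketch only, not the actual patch: serialize the one-time
     * init with a mutex (sleepable) instead of DO_ONCE()'s spinlock
     * taken with hard irqs disabled. The real once.h machinery is
     * macro-based and keeps a cheap fast path; names are illustrative.
     */
    #include <linux/mutex.h>
    #include <linux/random.h>

    static DEFINE_MUTEX(once_mutex);

    static void get_random_once_slow(void *buf, size_t len, bool *done)
    {
        if (READ_ONCE(*done))           /* already initialized */
            return;

        mutex_lock(&once_mutex);        /* may sleep: process context only */
        if (!*done) {                   /* re-check under the lock */
            get_random_bytes(buf, len);
            WRITE_ONCE(*done, true);
        }
        mutex_unlock(&once_mutex);
    }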

That's a nice improvement, I think. I was wondering if, for this special
case, we *really* need an exclusive DO_ONCE(). I mean, we're getting
random bytes; we really do not care if two CPUs change them in parallel,
provided that nobody uses them before the table is entirely filled. Thus
that could probably end up as something like:

    if (!atomic_read(&done)) {
        get_random_bytes(array, sizeof(array));
        atomic_set(&done, 1);
    }
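
(To really guarantee that last condition, that the table is entirely
filled before anyone uses it, the flag would want release/acquire
ordering rather than plain atomic_read()/atomic_set(). A sketch, with
illustrative names, assuming the check sits at the use site like
get_random_once() does:)

    /*
     * Sketch only: concurrent fillers are harmless since the bytes are
     * random anyway, but a reader that observes array_filled set must
     * also observe the table contents, hence the release/acquire pair.
     */
    #include <linux/random.h>
    #include <asm/barrier.h>

    static u32 array[1 << 16];
    static int array_filled;

    static void use_random_array(void)
    {
        if (!smp_load_acquire(&array_filled)) {
            get_random_bytes(array, sizeof(array));
            smp_store_release(&array_filled, 1);
        }
        /* ... array[] can safely be used here ... */
    }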

In any case, your solution remains cleaner and more robust.

Thanks,
Willy
