Message-ID: <CANn89iL=j38rdsKhAm8_4pMbf=vyAZ8SVoUkUgEVUF0GEXRwRg@mail.gmail.com>
Date:   Thu, 12 Nov 2020 15:38:42 +0100
From:   Eric Dumazet <edumazet@...gle.com>
To:     Björn Töpel <bjorn.topel@...il.com>
Cc:     netdev <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
        Björn Töpel <bjorn.topel@...el.com>,
        magnus.karlsson@...el.com, Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        maciej.fijalkowski@...el.com,
        "Samudrala, Sridhar" <sridhar.samudrala@...el.com>,
        Jesse Brandeburg <jesse.brandeburg@...el.com>,
        qi.z.zhang@...el.com, Jakub Kicinski <kuba@...nel.org>,
        Jonathan Lemon <jonathan.lemon@...il.com>, maximmi@...dia.com
Subject: Re: [PATCH bpf-next 1/9] net: introduce preferred busy-polling

On Thu, Nov 12, 2020 at 12:41 PM Björn Töpel <bjorn.topel@...il.com> wrote:
>
> From: Björn Töpel <bjorn.topel@...el.com>
>
> The existing busy-polling mode, enabled by the SO_BUSY_POLL socket
> option or system-wide using the /proc/sys/net/core/busy_read knob, is
> opportunistic. That means that if the NAPI context is not scheduled,
> it will be polled. If, after busy-polling, the budget is exceeded,
> the busy-polling logic will schedule the NAPI onto the regular
> softirq handling.
>
> One implication of the behavior above is that a busy/heavily loaded
> NAPI context will never enter/allow for busy-polling. Some
> applications prefer that most NAPI processing be done by busy-polling.
>
> This series adds a new socket option, SO_PREFER_BUSY_POLL, that works
> in concert with the napi_defer_hard_irqs and gro_flush_timeout
> knobs. The napi_defer_hard_irqs and gro_flush_timeout knobs were
> introduced in commit 6f8b12d661d0 ("net: napi: add hard irqs deferral
> feature"), and allow a user to defer the re-enabling of interrupts
> and instead schedule the NAPI context from a watchdog timer. When a
> user enables SO_PREFER_BUSY_POLL, again with the other knobs enabled,
> and the NAPI context is being processed by a softirq, the softirq
> NAPI processing will exit early to allow busy-polling to be performed.
>
> If the application stops performing busy-polling via system calls,
> the watchdog timer defined by gro_flush_timeout will expire, and
> regular softirq handling will resume.
>
> In summary: heavy-traffic applications that prefer busy-polling over
> softirq processing should use this option.
>
> Example usage:
>
>   $ echo 2 | sudo tee /sys/class/net/ens785f1/napi_defer_hard_irqs
>   $ echo 200000 | sudo tee /sys/class/net/ens785f1/gro_flush_timeout
>
> Note that the timeout should be larger than the userspace processing
> window, otherwise the watchdog will time out and fall back to regular
> softirq processing.
>
> Enable the SO_BUSY_POLL/SO_PREFER_BUSY_POLL options on your socket.
>
> Signed-off-by: Björn Töpel <bjorn.topel@...el.com>
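
For reference, a minimal userspace sketch of enabling the options the
cover letter describes. SO_PREFER_BUSY_POLL is the new option added by
this series, so it may not be in installed headers yet; the fallback
value below is an assumption taken from the series' uapi change, and
the socket type and timings are purely illustrative:

/* Illustrative only: enable busy polling plus the new "preferred"
 * mode on a UDP socket. Assumes a kernel carrying this series. */
#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_PREFER_BUSY_POLL
#define SO_PREFER_BUSY_POLL 69  /* value assumed from the series' uapi header */
#endif

int main(void)
{
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int busy_usecs = 100;   /* per-socket equivalent of busy_read, in usecs */
        int prefer = 1;

        if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
                       &busy_usecs, sizeof(busy_usecs)) < 0)
                perror("SO_BUSY_POLL");

        /* Needs CAP_NET_ADMIN, matching the check in the hunk below. */
        if (setsockopt(fd, SOL_SOCKET, SO_PREFER_BUSY_POLL,
                       &prefer, sizeof(prefer)) < 0)
                perror("SO_PREFER_BUSY_POLL");

        return 0;
}

Run it with CAP_NET_ADMIN (or as root), otherwise the second
setsockopt() returns EPERM as the patch intends.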

...

> diff --git a/net/core/sock.c b/net/core/sock.c
> index 727ea1cc633c..248f6a763661 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -1159,6 +1159,12 @@ int sock_setsockopt(struct socket *sock, int level, int optname,
>                                 sk->sk_ll_usec = val;
>                 }
>                 break;
> +       case SO_PREFER_BUSY_POLL:
> +               if (valbool && !capable(CAP_NET_ADMIN))
> +                       ret = -EPERM;
> +               else
> +                       sk->sk_prefer_busy_poll = valbool;

                            WRITE_ONCE(sk->sk_prefer_busy_poll, valbool);

So that KCSAN is happy when readers access this field while the socket
is not locked; see the annotation sketch after the quoted hunk.

> +               break;
>  #endif
>
>
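
The WRITE_ONCE() suggested above pairs with READ_ONCE() on the
lockless reader side. A minimal sketch of the pattern, in kernel
context (assumes <net/sock.h>); the helper names are illustrative and
not taken from this series:

/* Writer side: runs with the socket lock held in sock_setsockopt(),
 * but annotated so concurrent lockless readers are not a data race. */
static void example_set_prefer_busy_poll(struct sock *sk, bool val)
{
        WRITE_ONCE(sk->sk_prefer_busy_poll, val);
}

/* Reader side: e.g. the busy-poll fast path, which does not take the
 * socket lock. */
static bool example_prefer_busy_poll(const struct sock *sk)
{
        return READ_ONCE(sk->sk_prefer_busy_poll);
}

Without the READ_ONCE()/WRITE_ONCE() pair, KCSAN would report the
plain concurrent load and store as a data race.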
