Message-ID: <55917726-33d0-7a1f-ea4e-0ed0c76ee039@intel.com>
Date: Thu, 12 Nov 2020 15:43:01 +0100
From: Björn Töpel <bjorn.topel@...el.com>
To: Eric Dumazet <edumazet@...gle.com>,
Björn Töpel <bjorn.topel@...il.com>
Cc: netdev <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
magnus.karlsson@...el.com, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
maciej.fijalkowski@...el.com,
"Samudrala, Sridhar" <sridhar.samudrala@...el.com>,
Jesse Brandeburg <jesse.brandeburg@...el.com>,
qi.z.zhang@...el.com, Jakub Kicinski <kuba@...nel.org>,
Jonathan Lemon <jonathan.lemon@...il.com>, maximmi@...dia.com
Subject: Re: [PATCH bpf-next 1/9] net: introduce preferred busy-polling
On 2020-11-12 15:38, Eric Dumazet wrote:
> On Thu, Nov 12, 2020 at 12:41 PM Björn Töpel <bjorn.topel@...il.com> wrote:
>>
>> From: Björn Töpel <bjorn.topel@...el.com>
>>
>> The existing busy-polling mode, enabled by the SO_BUSY_POLL socket
>> option or system-wide using the /proc/sys/net/core/busy_read knob, is
>> opportunistic. That means that if the NAPI context is not scheduled,
>> busy-polling will poll it. If, after busy-polling, the budget is
>> exceeded, the busy-polling logic will schedule the NAPI context onto
>> regular softirq handling.
>>
>> One implication of the behavior above is that a busy/heavily loaded
>> NAPI context will never enter/allow for busy-polling. Some
>> applications prefer that most NAPI processing be done by busy-polling.
>>
>> This series adds a new socket option, SO_PREFER_BUSY_POLL, that works
>> in concert with the napi_defer_hard_irqs and gro_flush_timeout
>> knobs. These two knobs were introduced in commit 6f8b12d661d0 ("net:
>> napi: add hard irqs deferral feature"), and allow a user to defer the
>> re-enabling of hardware interrupts and instead schedule the NAPI
>> context from a watchdog timer. When a user enables SO_PREFER_BUSY_POLL,
>> again with the other knobs enabled, and the NAPI context is being
>> processed by a softirq, the softirq NAPI processing will exit early to
>> allow busy-polling to be performed.
>>
>> If the application stops performing busy-polling via a system call,
>> the watchdog timer defined by gro_flush_timeout will time out, and
>> regular softirq handling will resume.
>>
>> In summary: heavy-traffic applications that prefer busy-polling over
>> softirq processing should use this option.
>>
>> Example usage:
>>
>> $ echo 2 | sudo tee /sys/class/net/ens785f1/napi_defer_hard_irqs
>> $ echo 200000 | sudo tee /sys/class/net/ens785f1/gro_flush_timeout
>>
>> Note that the timeout should be longer than the userspace processing
>> window, otherwise the watchdog will time out and fall back to regular
>> softirq processing.
>>
>> Enable the SO_BUSY_POLL/SO_PREFER_BUSY_POLL options on your socket.
>>
>> Signed-off-by: Björn Töpel <bjorn.topel@...el.com>
>
> ...
>
>> diff --git a/net/core/sock.c b/net/core/sock.c
>> index 727ea1cc633c..248f6a763661 100644
>> --- a/net/core/sock.c
>> +++ b/net/core/sock.c
>> @@ -1159,6 +1159,12 @@ int sock_setsockopt(struct socket *sock, int level, int optname,
>> sk->sk_ll_usec = val;
>> }
>> break;
>> + case SO_PREFER_BUSY_POLL:
>> + if (valbool && !capable(CAP_NET_ADMIN))
>> + ret = -EPERM;
>> + else
>> + sk->sk_prefer_busy_poll = valbool;
>
> WRITE_ONCE(sk->sk_prefer_busy_poll, valbool);
>
> So that KCSAN is happy while readers read this field while socket is not locked.
>
Thanks Eric, I'll fix that!
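
I.e., the hunk would become something along these lines (same logic as
above, just with the annotated store; comment mine):

	case SO_PREFER_BUSY_POLL:
		if (valbool && !capable(CAP_NET_ADMIN))
			ret = -EPERM;
		else
			/* Paired with the READ_ONCE() in the lockless
			 * readers of this flag; keeps KCSAN happy.
			 */
			WRITE_ONCE(sk->sk_prefer_busy_poll, valbool);
		break;
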
Also, in patch 5, READ_ONCE is missing. I'll address that as well.
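
The read side from patch 5 is not quoted here, but the pairing would be
the usual one, roughly (illustrative helper, not the actual patch 5
code):

	/* Lockless read of the flag stored with WRITE_ONCE() in
	 * sock_setsockopt(); pairs with the store above.
	 */
	static inline bool sk_prefers_busy_poll(const struct sock *sk)
	{
		return READ_ONCE(sk->sk_prefer_busy_poll);
	}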