Message-ID: <1d2e781e-b26d-4cf0-0178-25b8835dbe26@intel.com>
Date: Mon, 7 Sep 2020 15:37:40 +0200
From: Björn Töpel <bjorn.topel@...el.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Jesper Dangaard Brouer <brouer@...hat.com>,
Björn Töpel <bjorn.topel@...il.com>,
Eric Dumazet <eric.dumazet@...il.com>, ast@...nel.org,
daniel@...earbox.net, netdev@...r.kernel.org, bpf@...r.kernel.org,
magnus.karlsson@...el.com, davem@...emloft.net,
john.fastabend@...il.com, intel-wired-lan@...ts.osuosl.org
Subject: Re: [PATCH bpf-next 0/6] xsk: exit NAPI loop when AF_XDP Rx ring is
full
On 2020-09-05 01:58, Jakub Kicinski wrote:
> On Fri, 4 Sep 2020 16:32:56 +0200 Björn Töpel wrote:
>> On 2020-09-04 16:27, Jesper Dangaard Brouer wrote:
>>> On Fri, 4 Sep 2020 15:53:25 +0200
>>> Björn Töpel <bjorn.topel@...il.com> wrote:
>>>
>>>> On my machine the "one core scenario Rx drop" performance went from
>>>> ~65Kpps to 21Mpps. In other words, from "not usable" to
>>>> "usable". YMMV.
>>>
>>> We have observed this kind of dropping off an edge before with softirq
>>> (when userspace process runs on same RX-CPU), but I thought that Eric
>>> Dumazet solved it in 4cd13c21b207 ("softirq: Let ksoftirqd do its
>>> job").
>>>
>>> I wonder what makes AF_XDP different, or if the problem has come back?
>>>
>>
>> I would say this is not the same issue. The problem is that the softirq
>> is busy dropping packets since the AF_XDP Rx ring is full. So, the cycles
>> *are* split 50/50, which is not what we want in this case. :-)
>>
>> This issue is more of an "Intel AF_XDP ZC drivers do stupid work" problem
>> than a fairness one. If the Rx ring is full, there is really no point in
>> letting the NAPI loop continue.
>>
>> Would you agree, or am I rambling? :-P
>
> I wonder if ksoftirqd never kicks in because we are able to discard
> the entire ring before we run out of softirq "slice".
>
This is exactly what's happening, so we're getting "busy poll like"
behavior: syscall, softirq/NAPI on return from the syscall, then back to
userland.
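
Just to spell out the idea of the series at a high level: the drivers'
NAPI poll exits early once the AF_XDP Rx ring is full. Very roughly, a
hand-wavy sketch (not the actual i40e/ice patches; the names marked
hypothetical are made up, and re-arming/need_wakeup handling is
simplified):

#include <linux/netdevice.h>

static int xsk_zc_napi_poll(struct napi_struct *napi, int budget)
{
	struct my_rx_ring *rx = container_of(napi, struct my_rx_ring, napi); /* hypothetical */
	int done = 0;

	while (done < budget) {
		/*
		 * If the AF_XDP Rx ring has no room, every additional frame
		 * would just be dropped. Stop early and give the core back
		 * to the application instead of burning the whole budget.
		 */
		if (!xsk_rx_ring_has_space(rx->pool))	/* hypothetical check */
			break;

		if (!rx_one_frame(rx))			/* hypothetical: one descriptor */
			break;
		done++;
	}

	if (done < budget && napi_complete_done(napi, done))
		rx_irq_enable(rx);			/* hypothetical: re-arm the IRQ */

	return done;
}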
>
> I've been pondering the exact problem you're solving with Maciej
> recently. The efficiency of AF_XDP on one core with the NAPI processing.
>
> Your solution (even though it admittedly helps, and is quite simple)
> still has the application potentially not able to process packets
> until the queue fills up. This will be bad for latency.
>
> Why don't we move closer to application polling? Never re-arm the NAPI
> after RX, let the application ask for packets, re-arm if 0 polled.
> You'd get max batching, min latency.
>
> Who's the rambling one now? :-D
>
:-D No, these are all very good ideas! We actually experimented with this
in the busy-poll series a while back -- NAPI busy-polling does exactly
this kind of "application polling".
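
From the application's point of view the model would look roughly like
the snippet below. This is a sketch with assumptions: it presumes the
AF_XDP fd honoured SO_BUSY_POLL the way TCP/UDP sockets do, which is
precisely the kernel-side hookup being discussed and does not exist
today; the recvfrom() wakeup-style call is what AF_XDP applications
already issue.

#include <sys/socket.h>

static int enable_app_polling(int xsk_fd)
{
	int busy_poll_usecs = 100;	/* illustrative value */

	return setsockopt(xsk_fd, SOL_SOCKET, SO_BUSY_POLL,
			  &busy_poll_usecs, sizeof(busy_poll_usecs));
}

static void rx_loop(int xsk_fd)
{
	for (;;) {
		/* Ask the kernel for packets; with busy polling the NAPI
		 * poll would run here, in process context, on this core.
		 */
		recvfrom(xsk_fd, NULL, 0, MSG_DONTWAIT, NULL, NULL);

		/* ... drain the AF_XDP Rx ring with the usual
		 * xsk_ring_cons__peek()/release() helpers (omitted) ...
		 */
	}
}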
However, I wonder whether busy-polling, i.e. executing the NAPI poll
*explicitly* from the syscall, would actually perform better than the
scenario above, where ksoftirqd never kicks in and the poll runs
implicitly from the softirq.
Hmm, thinking out loud here. A simple(r) patch enabling busy poll:
export the napi_id to the AF_XDP socket (xdp->rxq->napi_id to
sk->sk_napi_id), and do the sk_busy_poll_loop() in sendmsg.
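
Something along these lines, perhaps. Plumbing sketch only: it assumes
the driver fills in xdp->rxq->napi_id, reuses the existing
sk_can_busy_loop()/sk_busy_loop() helpers, and the xsk_mark_napi_id()
name plus the trimmed-down xsk_sendmsg() body are just for illustration.

#include <net/busy_poll.h>
#include <net/xdp_sock.h>

/* Rx path: remember which NAPI instance feeds this socket, mirroring
 * what sk_mark_napi_id() does for skb-based sockets.
 */
static void xsk_mark_napi_id(struct xdp_sock *xs, struct xdp_buff *xdp)
{
	WRITE_ONCE(xs->sk.sk_napi_id, xdp->rxq->napi_id);
}

static int xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
{
	struct sock *sk = sock->sk;

	/* Drive the NAPI loop directly from the syscall when busy polling
	 * is enabled on the socket, instead of waiting for the softirq.
	 */
	if (sk_can_busy_loop(sk))
		sk_busy_loop(sk, m->msg_flags & MSG_DONTWAIT);

	/* ... existing Tx descriptor processing (elided) ... */
	return 0;
}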
Or did you have something completely different in mind?
As for this patch set, I think it would make sense to pull it in, since
it makes the single-core scenario *much* better and it is pretty simple.
The application polling could then be done as a potential follow-up
improvement series.
Thoughts? Thanks a lot for the feedback!
Björn