Message-ID: <4f49d319-bd12-4e81-9516-afd1f1a1d345@intel.com>
Date: Tue, 3 Dec 2024 12:01:16 +0100
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: Daniel Xu <dxu@...uu.xyz>, Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
Lorenzo Bianconi <lorenzo@...nel.org>, "bpf@...r.kernel.org"
<bpf@...r.kernel.org>, Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann
<daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>, John Fastabend
<john.fastabend@...il.com>, Jesper Dangaard Brouer <hawk@...nel.org>, "Martin
KaFai Lau" <martin.lau@...ux.dev>, David Miller <davem@...emloft.net>, "Eric
Dumazet" <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
<netdev@...r.kernel.org>
Subject: Re: [RFC/RFT v2 0/3] Introduce GRO support to cpumap codebase
From: Jakub Kicinski <kuba@...nel.org>
Date: Mon, 2 Dec 2024 14:47:39 -0800
> On Tue, 26 Nov 2024 11:36:53 +0100 Alexander Lobakin wrote:
>>> tcp_rr results were unaffected.
>>
>> @ Jakub,
>
> Context? What doesn't work and why?
My tests show the same performance as Lorenzo's series, but I test with a
UDP trafficgen. Daniel tests TCP, and his results are much worse than
with Lorenzo's implementation.
I suspect this is related to how NAPI performs flushes / decides whether
to repoll again or exit, vs how the kthread does that (even though I
also try to flush only every 64 frames or when the ring is empty). Or
maybe to the fact that part of the kthread work happens in process
context outside any softirq, while with NAPI the whole loop runs inside
the RX softirq.
Jesper said that he'd like cpumap to keep using its own kthread, so that
its priority can be boosted separately from the backlog. That's why we
asked you whether it would be fine to have cpumap as threaded NAPI with
regard to all this :D
Thanks,
Olek