Message-ID: <qvqweduae55u.fsf@dev0134.prn3.facebook.com>
Date: Thu, 10 Nov 2022 15:36:34 -0800
From: Stefan Roesch <shr@...kernel.io>
To: Jakub Kicinski <kuba@...nel.org>
Cc: kernel-team@...com, axboe@...nel.dk, olivier@...llion01.com,
netdev@...r.kernel.org, io-uring@...r.kernel.org
Subject: Re: [RFC PATCH v2 1/2] io_uring: add napi busy polling support
Jakub Kicinski <kuba@...nel.org> writes:
> On Mon, 7 Nov 2022 09:52:39 -0800 Stefan Roesch wrote:
>> This adds the napi busy polling support in io_uring.c. It adds a new
>> napi_list to the io_ring_ctx structure. This list contains the list of
>> napi_id's that are currently enabled for busy polling. The list is
>> synchronized by the new napi_lock spin lock. The current default napi
>> busy polling time is stored in napi_busy_poll_to. If napi busy polling
>> is not enabled, the value is 0.
>>
>> The busy poll timeout is also stored as part of the io_wait_queue. This
>> is necessary as for sq polling the poll interval needs to be adjusted
>> and the napi callback allows only to pass in one value.
>>
>> Testing has shown that the round-trip times are reduced to 38us from
>> 55us by enabling napi busy polling with a busy poll timeout of 100us.
>
> What's the test, exactly? What's the network latency? Did you busy poll
> on both ends?
>
The test programs are part of the liburing patches. They consist of a
client and a server program. The client sends a request with a timestamp
in its payload, and the server replies with the same payload. The client
computes the round-trip time and records it to calculate the results.
The client runs on host 1 and the server runs on host 2. The measured
times below are round-trip times, averaged over 10 runs each:

If no napi busy polling is used                       : 55us
If napi busy polling is used on the client only       : 44us
If napi busy polling is used on client and server     : 38us

If you think the numbers are not that useful, I can remove them from the
commit message.
> I reckon we should either find a real application or not include any
> numbers. Most of the quoted win likely comes from skipping IRQ
> coalescing. Which can just be set lower if latency of 30usec is
> a win in itself..
>
> Would it be possible to try to integrate this with Jonathan's WIP
> zero-copy work? I presume he has explicit NAPI/queue <> io_uring
> instance mapping which is exactly the kind of use case we should
> make a first-class citizen here.
>
I'll have a look at Jonathan's patches.
>> + spin_lock(&ctx->napi_lock);
>> + list_for_each_entry(ne, &ctx->napi_list, list) {
>> + if (ne->napi_id == napi_id) {
>> + ne->timeout = jiffies + NAPI_TIMEOUT;
>
> What's the NAPI_TIMEOUT thing? I don't see it mentioned in
> the commit msg.
>
To make sure that the napi ids are cleaned up, each entry has a timeout.
The function io_napi_check_entry_timeout checks whether the timeout has
expired. This was added to make sure the list does not grow without bound.
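A rough userspace model of that cleanup is below. The names are hypothetical
stand-ins: the kernel code walks ctx->napi_list with list_for_each_entry_safe
under napi_lock and uses jiffies, whereas this sketch uses a plain singly
linked list and an integer clock.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for the kernel's NAPI_TIMEOUT (in clock units). */
#define NAPI_TIMEOUT 60

struct napi_entry {
	unsigned int napi_id;
	unsigned long timeout;		/* absolute expiry time */
	struct napi_entry *next;
};

/* Refresh the expiry each time the napi id is seen, as the patch does. */
static void napi_touch(struct napi_entry *ne, unsigned long now)
{
	ne->timeout = now + NAPI_TIMEOUT;
}

/* Drop every entry whose expiry has passed, so the list stays bounded. */
static struct napi_entry *napi_reap(struct napi_entry *head, unsigned long now)
{
	struct napi_entry **pp = &head;

	while (*pp) {
		if ((*pp)->timeout < now) {
			struct napi_entry *dead = *pp;

			*pp = dead->next;
			free(dead);
		} else {
			pp = &(*pp)->next;
		}
	}
	return head;
}
```

An entry that keeps being busy polled keeps having its expiry pushed out;
an entry whose socket went away stops being touched and is reaped.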
>> + list_for_each_entry_safe(ne, n, napi_list, list) {
>> + napi_busy_loop(ne->napi_id, NULL, NULL, true, BUSY_POLL_BUDGET);
>
> You can't opt the user into prefer busy poll without the user asking
> for it. Default to false and add an explicit knob like patch 2.
>
The above code is from the function io_napi_blocking_busy_loop().
However this function is only called when a busy poll timeout has been
configured.
#ifdef CONFIG_NET_RX_BUSY_POLL
	if (iowq.busy_poll_to)
		io_napi_blocking_busy_loop(&local_napi_list, &iowq);
#endif
However we don't have that check for sqpoll, so we should add a check
for the napi busy poll timeout in __io_sq_thread.
Do we really need a knob to store whether napi busy polling is enabled,
or is it sufficient to store a napi busy poll timeout value?
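The idea can be modeled like this. This is a hypothetical userspace sketch:
ring_ctx, napi_busy_loop_stub and maybe_busy_poll are stand-ins, not the
patch's actual structures or functions.

```c
#include <assert.h>

/* Hypothetical stand-in for io_ring_ctx: only the field discussed here. */
struct ring_ctx {
	unsigned int napi_busy_poll_to;	/* 0 means napi busy polling is off */
};

static int busy_loop_calls;

/* Stand-in for the busy-loop entry point. */
static void napi_busy_loop_stub(struct ring_ctx *ctx)
{
	busy_loop_calls++;
}

/*
 * Both the cqring-wait path and the sqpoll path would gate on the
 * timeout value itself: a non-zero timeout means "enabled", so no
 * separate on/off knob is required.
 */
static void maybe_busy_poll(struct ring_ctx *ctx)
{
	if (ctx->napi_busy_poll_to)
		napi_busy_loop_stub(ctx);
}
```

With this convention, setting the timeout to 0 is the same as disabling
the feature, which is what the current patch relies on.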
>> timeout = ktime_add_ns(timespec64_to_ktime(ts), ktime_get_ns());
>> }
>> +#ifdef CONFIG_NET_RX_BUSY_POLL
>> + else if (!list_empty(&local_napi_list)) {
>> + iowq.busy_poll_to = READ_ONCE(ctx->napi_busy_poll_to);
>> + }
>> +#endif
>
> You don't have to break the normal bracket placement for an ifdef:
>
> if (something) {
> boring_code();
>
> #ifdef CONFIG_WANT_CHEESE
> } else if (is_gouda) {
> /* mmm */
> nom_nom();
> #endif
> }
I'll fix the above with the next version of the patch.