Message-ID: <fcb3cf6d-7d5c-407b-aa20-63e2590cf56f@intel.com>
Date: Thu, 13 Mar 2025 17:50:33 +0100
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
CC: <intel-wired-lan@...ts.osuosl.org>, Michal Kubiak
<michal.kubiak@...el.com>, Tony Nguyen <anthony.l.nguyen@...el.com>, "Przemek
Kitszel" <przemyslaw.kitszel@...el.com>, Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, "Alexei
Starovoitov" <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
"Jesper Dangaard Brouer" <hawk@...nel.org>, John Fastabend
<john.fastabend@...il.com>, Simon Horman <horms@...nel.org>,
<bpf@...r.kernel.org>, <netdev@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net-next 09/16] idpf: remove SW marker handling from NAPI
From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Date: Fri, 7 Mar 2025 12:42:10 +0100
> On Wed, Mar 05, 2025 at 05:21:25PM +0100, Alexander Lobakin wrote:
>> From: Michal Kubiak <michal.kubiak@...el.com>
>>
>> SW marker descriptors on completion queues are used only when a queue
>> is about to be destroyed. It's far from hotpath and handling it in the
>> hotpath NAPI poll makes no sense.
[...]
>> +/**
>> + * idpf_wait_for_sw_marker_completion - wait for SW marker of disabled Tx queue
>> + * @txq: disabled Tx queue
>> + */
>> +void idpf_wait_for_sw_marker_completion(struct idpf_tx_queue *txq)
>> +{
>> + struct idpf_compl_queue *complq = txq->txq_grp->complq;
>> + struct idpf_splitq_4b_tx_compl_desc *tx_desc;
>> + s16 ntc = complq->next_to_clean;
>> + unsigned long timeout;
>> + bool flow, gen_flag;
>> + u32 pos = ntc;
>> +
>> + if (!idpf_queue_has(SW_MARKER, txq))
>> + return;
>> +
>> + flow = idpf_queue_has(FLOW_SCH_EN, complq);
>> + gen_flag = idpf_queue_has(GEN_CHK, complq);
>> +
>> + timeout = jiffies + msecs_to_jiffies(IDPF_WAIT_FOR_MARKER_TIMEO);
>> + tx_desc = flow ? &complq->comp[pos].common : &complq->comp_4b[pos];
>> + ntc -= complq->desc_count;
>
> could we stop this logic? It was introduced back in the days because
> comparing against 0 for the wrap case was faster; here, as you said, it
> doesn't have much in common with the hot path.
+1
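
Something along these lines would do, i.e. a plain unsigned index
instead of the negative-ntc trick, which also makes the separate @pos go
away (untested sketch; idpf_tx_update_complq_indexes() would then take
the plain index as well):

	u32 ntc = complq->next_to_clean;

	/* ... */

	do {
		/* parse the descriptor as above */

		if (unlikely(++ntc == complq->desc_count)) {
			ntc = 0;
			gen_flag = !gen_flag;
		}

		tx_desc = flow ? &complq->comp[ntc].common :
				 &complq->comp_4b[ntc];
		prefetch(tx_desc);
	} while (time_before(jiffies, timeout));
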
>
>> +
>> + do {
>> + struct idpf_tx_queue *tx_q;
>> + int ctype;
>> +
>> + ctype = idpf_parse_compl_desc(tx_desc, complq, &tx_q,
>> + gen_flag);
>> + if (ctype == IDPF_TXD_COMPLT_SW_MARKER) {
>> + idpf_queue_clear(SW_MARKER, tx_q);
>> + if (txq == tx_q)
>> + break;
>> + } else if (ctype == -ENODATA) {
>> + usleep_range(500, 1000);
>> + continue;
>> + }
>> +
>> + pos++;
>> + ntc++;
>> + if (unlikely(!ntc)) {
>> + ntc -= complq->desc_count;
>> + pos = 0;
>> + gen_flag = !gen_flag;
>> + }
>> +
>> + tx_desc = flow ? &complq->comp[pos].common :
>> + &complq->comp_4b[pos];
>> + prefetch(tx_desc);
>> + } while (time_before(jiffies, timeout));
>
> what if the timeout expires and you didn't find the marker desc? why do you
Then we'll print "failed to receive marker" and that's it. Usually that
happens only if the HW went out for cigarettes and won't come back until
a full power cycle. In that case, the timeout prevents the kernel from
hanging.
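
For reference, detecting that case after the loop boils down to
something like this (just a sketch; the actual warning is printed
elsewhere in the driver, and using txq->netdev / txq->idx here is an
assumption):

	/* the loop above breaks once the marker for @txq is found and the
	 * flag is cleared, so a still-set flag means the timeout expired
	 */
	if (idpf_queue_has(SW_MARKER, txq))
		netdev_warn(txq->netdev,
			    "Tx queue %u: timed out waiting for SW marker\n",
			    txq->idx);
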
> need a timer? couldn't you scan the whole ring instead?
The queue destroy marker is always the last written descriptor, so
there's no point in scanning the whole ring.
The marker arrives once the CP receives the virtchnl message, queues the
queue (lol) for destroying and sends the marker. This may take up to
several msecs, but you never know.
So you need a loop anyway, with some sane sleeps (here it's 500-1000 usec
and it usually takes 2-3 iterations).
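
IOW, stripped of the ring details, the wait is just the usual bounded
poll (pure illustration; marker_seen() is a stand-in for the descriptor
parsing above, not a real helper, and the real loop only sleeps when no
new descriptor is pending):

	timeout = jiffies + msecs_to_jiffies(IDPF_WAIT_FOR_MARKER_TIMEO);

	do {
		if (marker_seen(txq))
			break;

		usleep_range(500, 1000);
	} while (time_before(jiffies, timeout));
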
>
>> +
>> + idpf_tx_update_complq_indexes(complq, ntc, gen_flag);
>> +}
Thanks,
Olek