Message-ID: <874j7nzejz.fsf@kurt.kurt.home>
Date: Wed, 14 Aug 2024 10:36:32 +0200
From: Kurt Kanzenbach <kurt@...utronix.de>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>, Tony Nguyen
<anthony.l.nguyen@...el.com>
Cc: davem@...emloft.net, kuba@...nel.org, pabeni@...hat.com,
edumazet@...gle.com, netdev@...r.kernel.org, Sriram Yagnaraman
<sriram.yagnaraman@....tech>, magnus.karlsson@...el.com, ast@...nel.org,
daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com,
bpf@...r.kernel.org, sriram.yagnaraman@...csson.com,
richardcochran@...il.com, benjamin.steinke@...s-audio.com,
bigeasy@...utronix.de, Chandan Kumar
Rout <chandanx.rout@...el.com>
Subject: Re: [PATCH net-next 4/4] igb: add AF_XDP zero-copy Tx support

On Sat Aug 10 2024, Maciej Fijalkowski wrote:
>> + nb_pkts = xsk_tx_peek_release_desc_batch(pool, budget);
>> + if (!nb_pkts)
>> + return true;
>> +
>> + while (nb_pkts-- > 0) {
>> + dma = xsk_buff_raw_get_dma(pool, descs[i].addr);
>> + xsk_buff_raw_dma_sync_for_device(pool, dma, descs[i].len);
>> +
>> + tx_buffer_info = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
>> + tx_buffer_info->bytecount = descs[i].len;
>> + tx_buffer_info->type = IGB_TYPE_XSK;
>> + tx_buffer_info->xdpf = NULL;
>> + tx_buffer_info->gso_segs = 1;
>> + tx_buffer_info->time_stamp = jiffies;
>> +
>> + tx_desc = IGB_TX_DESC(tx_ring, tx_ring->next_to_use);
>> + tx_desc->read.buffer_addr = cpu_to_le64(dma);
>> +
>> + /* put descriptor type bits */
>> + cmd_type = E1000_ADVTXD_DTYP_DATA | E1000_ADVTXD_DCMD_DEXT |
>> + E1000_ADVTXD_DCMD_IFCS;
>> + olinfo_status = descs[i].len << E1000_ADVTXD_PAYLEN_SHIFT;
>> +
>> + cmd_type |= descs[i].len | IGB_TXD_DCMD;
>
> This is also sub-optimal as you are setting the RS bit on each Tx
> descriptor, which will in turn raise a lot of irqs. See how ice sets
> the RS bit only on the last desc from a batch and then, on the cleaning
> side, how it finds the descriptor that is supposed to have the DD bit
> written by HW.

I see your point. That requires changes to the cleaning side. However,
igb_clean_tx_irq() is shared between the normal and the zero-copy path.
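
For illustration, a rough sketch of the batched RS scheme (modeled on
what you describe for ice; the next_rs_idx field and the helper names
below are made up for this sketch and are not part of the posted
patch). The per-descriptor cmd_type would lose E1000_ADVTXD_DCMD_RS,
and a single write-back would be requested for the whole batch:

/* Request one write-back per batch: set RS only on the last descriptor
 * of the batch and remember where it is. next_rs_idx would be a new
 * field in struct igb_ring.
 */
static void igb_xsk_request_batch_wb(struct igb_ring *tx_ring, u16 last_idx)
{
	union e1000_adv_tx_desc *tx_desc = IGB_TX_DESC(tx_ring, last_idx);

	tx_desc->read.cmd_type_len |= cpu_to_le32(E1000_ADVTXD_DCMD_RS);
	tx_ring->next_rs_idx = last_idx;
}

/* Cleaning side: HW sets DD only on the descriptor that carried RS, so
 * the whole batch is done once that one descriptor shows DD.
 */
static bool igb_xsk_batch_done(struct igb_ring *tx_ring)
{
	union e1000_adv_tx_desc *tx_desc =
		IGB_TX_DESC(tx_ring, tx_ring->next_rs_idx);

	return !!(tx_desc->wb.status & cpu_to_le32(E1000_TXD_STAT_DD));
}

The cleanup path would then release all buffers up to next_rs_idx in
one go, which is exactly where it starts to diverge from the shared
igb_clean_tx_irq().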

The number of irqs can also be controlled by irq coalescing or even by
using busy polling. So I'd rather keep this implementation as simple as
it is now.
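
As an example of the busy-polling route, preferred busy polling can be
enabled on the AF_XDP socket from user space (a sketch, assuming a
kernel and headers that provide SO_PREFER_BUSY_POLL and
SO_BUSY_POLL_BUDGET, i.e. 5.11+, and that xsk_fd is the AF_XDP socket
bound to the igb queue):

#include <sys/socket.h>

/* Let the application drive Tx completions via busy polling instead of
 * relying on Tx interrupts.
 */
static int xsk_enable_busy_poll(int xsk_fd)
{
	int opt;

	opt = 1;
	if (setsockopt(xsk_fd, SOL_SOCKET, SO_PREFER_BUSY_POLL,
		       &opt, sizeof(opt)))
		return -1;

	opt = 20;	/* busy poll for up to 20 us per syscall */
	if (setsockopt(xsk_fd, SOL_SOCKET, SO_BUSY_POLL,
		       &opt, sizeof(opt)))
		return -1;

	opt = 64;	/* packet budget per busy-poll round */
	return setsockopt(xsk_fd, SOL_SOCKET, SO_BUSY_POLL_BUDGET,
			  &opt, sizeof(opt));
}
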
Thanks,
Kurt