[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <Zrxw+FI7rbYHXN2d@boxer>
Date: Wed, 14 Aug 2024 10:55:20 +0200
From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
To: Kurt Kanzenbach <kurt@...utronix.de>
CC: Tony Nguyen <anthony.l.nguyen@...el.com>, <davem@...emloft.net>,
<kuba@...nel.org>, <pabeni@...hat.com>, <edumazet@...gle.com>,
<netdev@...r.kernel.org>, Sriram Yagnaraman <sriram.yagnaraman@....tech>,
<magnus.karlsson@...el.com>, <ast@...nel.org>, <daniel@...earbox.net>,
<hawk@...nel.org>, <john.fastabend@...il.com>, <bpf@...r.kernel.org>,
<sriram.yagnaraman@...csson.com>, <richardcochran@...il.com>,
<benjamin.steinke@...s-audio.com>, <bigeasy@...utronix.de>, "Chandan Kumar
Rout" <chandanx.rout@...el.com>
Subject: Re: [PATCH net-next 4/4] igb: add AF_XDP zero-copy Tx support
On Wed, Aug 14, 2024 at 10:36:32AM +0200, Kurt Kanzenbach wrote:
> On Sat Aug 10 2024, Maciej Fijalkowski wrote:
> >> + nb_pkts = xsk_tx_peek_release_desc_batch(pool, budget);
> >> + if (!nb_pkts)
> >> + return true;
> >> +
> >> + while (nb_pkts-- > 0) {
> >> + dma = xsk_buff_raw_get_dma(pool, descs[i].addr);
> >> + xsk_buff_raw_dma_sync_for_device(pool, dma, descs[i].len);
> >> +
> >> + tx_buffer_info = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
> >> + tx_buffer_info->bytecount = descs[i].len;
> >> + tx_buffer_info->type = IGB_TYPE_XSK;
> >> + tx_buffer_info->xdpf = NULL;
> >> + tx_buffer_info->gso_segs = 1;
> >> + tx_buffer_info->time_stamp = jiffies;
> >> +
> >> + tx_desc = IGB_TX_DESC(tx_ring, tx_ring->next_to_use);
> >> + tx_desc->read.buffer_addr = cpu_to_le64(dma);
> >> +
> >> + /* put descriptor type bits */
> >> + cmd_type = E1000_ADVTXD_DTYP_DATA | E1000_ADVTXD_DCMD_DEXT |
> >> + E1000_ADVTXD_DCMD_IFCS;
> >> + olinfo_status = descs[i].len << E1000_ADVTXD_PAYLEN_SHIFT;
> >> +
> >> + cmd_type |= descs[i].len | IGB_TXD_DCMD;
> >
> > This is also sub-optimal as you are setting RS bit on each Tx descriptor,
> > which will in turn raise a lot of irqs. See how ice sets RS bit only on
> > last desc from a batch and then, on cleaning side, how it finds a
> > descriptor that is supposed to have DD bit written by HW.
>
> I see your point. That requires changes to the cleaning side. However,
> igb_clean_tx_irq() is shared between normal and zero-copy path.
Ok if that's too much of a hassle then let's leave it as-is. I can address
that in some nearby future.
>
> The amount of irqs can be also controlled by irq coalescing or even
> using busy polling. So I'd rather keep this implementation as simple as
> it is now.
That has nothing to do with what I was describing.
>
> Thanks,
> Kurt
Powered by blists - more mailing lists