Message-ID: <ZryGUj7HBasW7aRI@boxer>
Date: Wed, 14 Aug 2024 12:26:26 +0200
From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
To: Kurt Kanzenbach <kurt@...utronix.de>
CC: Tony Nguyen <anthony.l.nguyen@...el.com>, <davem@...emloft.net>,
<kuba@...nel.org>, <pabeni@...hat.com>, <edumazet@...gle.com>,
<netdev@...r.kernel.org>, Sriram Yagnaraman <sriram.yagnaraman@....tech>,
<magnus.karlsson@...el.com>, <ast@...nel.org>, <daniel@...earbox.net>,
<hawk@...nel.org>, <john.fastabend@...il.com>, <bpf@...r.kernel.org>,
<sriram.yagnaraman@...csson.com>, <richardcochran@...il.com>,
<benjamin.steinke@...s-audio.com>, <bigeasy@...utronix.de>, "Chandan Kumar
Rout" <chandanx.rout@...el.com>
Subject: Re: [PATCH net-next 4/4] igb: add AF_XDP zero-copy Tx support
On Wed, Aug 14, 2024 at 11:12:30AM +0200, Kurt Kanzenbach wrote:
> On Wed Aug 14 2024, Maciej Fijalkowski wrote:
> > On Wed, Aug 14, 2024 at 10:36:32AM +0200, Kurt Kanzenbach wrote:
> >> On Sat Aug 10 2024, Maciej Fijalkowski wrote:
> >> >> + nb_pkts = xsk_tx_peek_release_desc_batch(pool, budget);
> >> >> + if (!nb_pkts)
> >> >> + return true;
> >> >> +
> >> >> + while (nb_pkts-- > 0) {
> >> >> + dma = xsk_buff_raw_get_dma(pool, descs[i].addr);
> >> >> + xsk_buff_raw_dma_sync_for_device(pool, dma, descs[i].len);
> >> >> +
> >> >> + tx_buffer_info = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
> >> >> + tx_buffer_info->bytecount = descs[i].len;
> >> >> + tx_buffer_info->type = IGB_TYPE_XSK;
> >> >> + tx_buffer_info->xdpf = NULL;
> >> >> + tx_buffer_info->gso_segs = 1;
> >> >> + tx_buffer_info->time_stamp = jiffies;
> >> >> +
> >> >> + tx_desc = IGB_TX_DESC(tx_ring, tx_ring->next_to_use);
> >> >> + tx_desc->read.buffer_addr = cpu_to_le64(dma);
> >> >> +
> >> >> + /* put descriptor type bits */
> >> >> + cmd_type = E1000_ADVTXD_DTYP_DATA | E1000_ADVTXD_DCMD_DEXT |
> >> >> + E1000_ADVTXD_DCMD_IFCS;
> >> >> + olinfo_status = descs[i].len << E1000_ADVTXD_PAYLEN_SHIFT;
> >> >> +
> >> >> + cmd_type |= descs[i].len | IGB_TXD_DCMD;
> >> >
> >> > This is also sub-optimal as you are setting the RS bit on each Tx
> >> > descriptor, which will in turn raise a lot of irqs. See how ice sets
> >> > the RS bit only on the last desc from a batch and then, on the
> >> > cleaning side, how it finds the descriptor that is supposed to have
> >> > the DD bit written by HW.
> >>
> >> I see your point. That requires changes to the cleaning side. However,
> >> igb_clean_tx_irq() is shared between the normal and zero-copy paths.
> >
> > Ok, if that's too much of a hassle then let's leave it as-is. I can
> > address that in the near future.
>
> How would you do that, by adding a dedicated igb_clean_tx_irq_zc()
> function? Or is there a simpler way?
Yes, that would be my first approach.
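
Roughly along these lines (completely untested sketch; the next_dd and
xsk_pool fields on the ring are assumptions borrowed from how ice tracks
its XDP rings, and stats/budget handling is left out):

static void igb_clean_tx_irq_zc(struct igb_ring *tx_ring)
{
	union e1000_adv_tx_desc *tx_desc;
	u16 ntc = tx_ring->next_to_clean;
	u32 xsk_frames = 0;
	bool done;

	/* RS is set only on the last descriptor of a batch, so HW writes
	 * DD back only there; check that single descriptor instead of
	 * polling every entry between next_to_clean and next_to_use.
	 */
	tx_desc = IGB_TX_DESC(tx_ring, tx_ring->next_dd);
	if (!(tx_desc->wb.status & cpu_to_le32(E1000_TXD_STAT_DD)))
		return;

	/* Everything up to and including next_dd has been sent. In the
	 * zero-copy path there is nothing to unmap or free per buffer,
	 * just count the frames and hand them back to the pool.
	 */
	do {
		xsk_frames++;
		done = ntc == tx_ring->next_dd;
		if (++ntc == tx_ring->count)
			ntc = 0;
	} while (!done);

	tx_ring->next_to_clean = ntc;

	if (xsk_frames)
		xsk_tx_completed(tx_ring->xsk_pool, xsk_frames);
}
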
>
> BTW: This needs to be addressed in igc too.
Argh!
>
> >
> >>
> >> The amount of irqs can also be controlled by irq coalescing or even by
> >> using busy polling, so I'd rather keep this implementation as simple as
> >> it is now.
> >
> > That has nothing to do with what I was describing.
>
> Ok, maybe I misunderstood your suggestion. It seemed to me that setting
> the RS bit only on the last frame of the burst would reduce the number of
> raised irqs.
You got it right, but I don't think it's related to any outer settings like
irq coalescing or busy polling. The main point here is that by doing what I
proposed you get much less PCIe traffic, which in turn yields better
performance.
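
To make it concrete, on the xmit side the per-descriptor cmd_type would keep
EOP but drop RS, and a single RS would be armed after the loop, something
like below (again untested; last_used stands for the index of the last
descriptor filled in the loop, and next_dd is the same made-up field as in
the cleaning sketch above):

	/* inside the loop: EOP on every frame, but no RS per descriptor */
	cmd_type = E1000_ADVTXD_DTYP_DATA | E1000_ADVTXD_DCMD_DEXT |
		   E1000_ADVTXD_DCMD_IFCS | E1000_ADVTXD_DCMD_EOP |
		   descs[i].len;

	/* after the loop: request one writeback for the whole batch and
	 * remember where the cleaning side has to look for the DD bit
	 */
	tx_desc = IGB_TX_DESC(tx_ring, last_used);
	tx_desc->read.cmd_type_len |= cpu_to_le32(E1000_ADVTXD_DCMD_RS);
	tx_ring->next_dd = last_used;
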
>
> Thanks,
> Kurt