Message-ID: <CALDO+Sbjweh5RS5Oqce3cqBhhyL1RbG7z40fe=mjNYXvhbFhqg@mail.gmail.com>
Date: Sun, 8 Jan 2023 06:27:40 -0800
From: William Tu <u9012063@...il.com>
To: Alexander H Duyck <alexander.duyck@...il.com>
Cc: netdev@...r.kernel.org, tuc@...are.com, gyang@...are.com,
doshir@...are.com, gerhard@...leder-embedded.com,
alexandr.lobakin@...el.com, bang@...are.com
Subject: Re: [RFC PATCH net-next v10] vmxnet3: Add XDP support.
Hi Alexander,
Thanks for your review!
On Fri, Jan 6, 2023 at 9:21 AM Alexander H Duyck
<alexander.duyck@...il.com> wrote:
>
> On Thu, 2023-01-05 at 14:31 -0800, William Tu wrote:
> > The patch adds native-mode XDP support: XDP DROP, PASS, TX, and REDIRECT.
<...>
> > static void
> > -vmxnet3_unmap_tx_buf(struct vmxnet3_tx_buf_info *tbi,
> > - struct pci_dev *pdev)
> > +vmxnet3_unmap_tx_buf(struct vmxnet3_tx_buf_info *tbi, struct pci_dev *pdev,
> > + struct xdp_frame_bulk *bq)
> > {
> > - if (tbi->map_type == VMXNET3_MAP_SINGLE)
> > + switch (tbi->map_type) {
> > + case VMXNET3_MAP_SINGLE:
> > + case VMXNET3_MAP_SINGLE | VMXNET3_MAP_XDP:
> > dma_unmap_single(&pdev->dev, tbi->dma_addr, tbi->len,
> > DMA_TO_DEVICE);
> > - else if (tbi->map_type == VMXNET3_MAP_PAGE)
> > + break;
> > + case VMXNET3_MAP_PAGE:
> > dma_unmap_page(&pdev->dev, tbi->dma_addr, tbi->len,
> > DMA_TO_DEVICE);
> > - else
> > + break;
> > + case VMXNET3_MAP_XDP:
> > + break;
> > + default:
> > BUG_ON(tbi->map_type != VMXNET3_MAP_NONE);
> > + }
> > +
> > + if (tbi->map_type & VMXNET3_MAP_XDP)
> > + xdp_return_frame_bulk(tbi->xdpf, bq);
> >
> > tbi->map_type = VMXNET3_MAP_NONE; /* to help debugging */
> > }
>
> This may not be the right place to be returning the XDP frame. More on
> that below. If that is the case it might be better to look at just
> replacing the == with an & check to see if the bit is set rather than
> using the switch statement. The else/BUG_ON might need to be tweaked to
> exclude MAP_XDP from the map_type via a "(& ~)".
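Good point. Maybe something like this (untested sketch; assuming the
map types stay bit flags with VMXNET3_MAP_NONE == 0 as in this patch,
and with the frame freeing moved out of this function per your comment
below, so the bq argument goes away again):

static void
vmxnet3_unmap_tx_buf(struct vmxnet3_tx_buf_info *tbi,
		     struct pci_dev *pdev)
{
	u32 map_type = tbi->map_type;

	if (map_type & VMXNET3_MAP_SINGLE)
		dma_unmap_single(&pdev->dev, tbi->dma_addr, tbi->len,
				 DMA_TO_DEVICE);
	else if (map_type & VMXNET3_MAP_PAGE)
		dma_unmap_page(&pdev->dev, tbi->dma_addr, tbi->len,
			       DMA_TO_DEVICE);
	else
		/* only the XDP bit may be left set here */
		BUG_ON(map_type & ~VMXNET3_MAP_XDP);

	tbi->map_type = VMXNET3_MAP_NONE; /* to help debugging */
}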
>
> > @@ -343,22 +354,29 @@ static int
> > vmxnet3_unmap_pkt(u32 eop_idx, struct vmxnet3_tx_queue *tq,
> > struct pci_dev *pdev, struct vmxnet3_adapter *adapter)
> > {
> > - struct sk_buff *skb;
> > + struct vmxnet3_tx_buf_info *tbi;
> > + struct sk_buff *skb = NULL;
> > + struct xdp_frame_bulk bq;
> > int entries = 0;
> >
> > /* no out of order completion */
> > BUG_ON(tq->buf_info[eop_idx].sop_idx != tq->tx_ring.next2comp);
> > BUG_ON(VMXNET3_TXDESC_GET_EOP(&(tq->tx_ring.base[eop_idx].txd)) != 1);
> >
> > - skb = tq->buf_info[eop_idx].skb;
> > - BUG_ON(skb == NULL);
> > - tq->buf_info[eop_idx].skb = NULL;
> > + tbi = &tq->buf_info[eop_idx];
> > + if (!(tbi->map_type & VMXNET3_MAP_XDP)) {
> > + skb = tq->buf_info[eop_idx].skb;
> > + BUG_ON(!skb);
> > + tq->buf_info[eop_idx].skb = NULL;
> > + }
> >
> > VMXNET3_INC_RING_IDX_ONLY(eop_idx, tq->tx_ring.size);
> >
> > + xdp_frame_bulk_init(&bq);
> > +
> > while (tq->tx_ring.next2comp != eop_idx) {
> > vmxnet3_unmap_tx_buf(tq->buf_info + tq->tx_ring.next2comp,
> > - pdev);
> > + pdev, &bq);
> >
> > /* update next2comp w/o tx_lock. Since we are marking more,
> > * instead of less, tx ring entries avail, the worst case is
> > @@ -369,7 +387,11 @@ vmxnet3_unmap_pkt(u32 eop_idx, struct vmxnet3_tx_queue *tq,
> > entries++;
> > }
> >
> > - dev_kfree_skb_any(skb);
> > + xdp_flush_frame_bulk(&bq);
> > +
> > + if (skb)
> > + dev_kfree_skb_any(skb);
> > +
> > return entries;
> > }
> >
>
> Based on the naming I am assuming vmxnet3_unmap_pkt is a per-packet
> call? If so we are defeating the bulk freeing by doing this here. I
> can't help but wonder if we have this operating at the correct level.
> It might make more sense to do the bulk_init and flush_frame_bulk in
> vmxnet3_tq_tx_complete and pass the bulk queue down to this function.
Yes, vmxnet3_unmap_pkt is per-packet, and I got your point: the
current code doesn't actually free in batches. I should move the bulk
init and flush up a level, into vmxnet3_tq_tx_complete.
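Something like this (rough, untested sketch; the unchanged parts of
the function are elided):

static int
vmxnet3_tq_tx_complete(struct vmxnet3_tx_queue *tq,
		       struct vmxnet3_adapter *adapter)
{
	union Vmxnet3_GenericDesc *gdesc;
	struct xdp_frame_bulk bq;
	int completed = 0;

	xdp_frame_bulk_init(&bq);
	rcu_read_lock(); /* xdp_return_frame_bulk() needs RCU */

	gdesc = tq->comp_ring.base + tq->comp_ring.next2proc;
	while (VMXNET3_TCD_GET_GEN(&gdesc->tcd) == tq->comp_ring.gen) {
		dma_rmb(); /* read tcd fields only after checking gen */
		completed += vmxnet3_unmap_pkt(VMXNET3_TCD_GET_TXIDX(
						       &gdesc->tcd), tq,
					       adapter->pdev, adapter, &bq);
		vmxnet3_comp_ring_adv_next2proc(&tq->comp_ring);
		gdesc = tq->comp_ring.base + tq->comp_ring.next2proc;
	}

	xdp_flush_frame_bulk(&bq);
	rcu_read_unlock();

	/* ... existing queue-wakeup logic ... */
	return completed;
}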
>
> Specifically, XDP frames are now capable of being multi-buffer. So it
> might make sense to have vmxnet3_unmap_tx_buf stick to just doing the
> dma unmapping and instead have the freeing of the XDP frame handled
> here where you would have handled dev_kfree_skb_any. You could then
> push the bulk_init and flush up one level to the caller so you
> actually get some bulking.
Got it, thanks!
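So vmxnet3_unmap_pkt would take the bulk queue from the caller and do
the freeing at EOP, where dev_kfree_skb_any() is today. A rough,
untested sketch, assuming skb and xdpf share the anonymous union you
describe below:

static int
vmxnet3_unmap_pkt(u32 eop_idx, struct vmxnet3_tx_queue *tq,
		  struct pci_dev *pdev, struct vmxnet3_adapter *adapter,
		  struct xdp_frame_bulk *bq)
{
	struct vmxnet3_tx_buf_info *tbi;
	int entries = 0;
	u32 map_type;

	/* ... existing out-of-order completion BUG_ONs ... */

	tbi = &tq->buf_info[eop_idx];
	BUG_ON(!tbi->skb); /* covers xdpf too via the anonymous union */
	map_type = tbi->map_type;

	VMXNET3_INC_RING_IDX_ONLY(eop_idx, tq->tx_ring.size);

	while (tq->tx_ring.next2comp != eop_idx) {
		vmxnet3_unmap_tx_buf(tq->buf_info + tq->tx_ring.next2comp,
				     pdev);
		vmxnet3_cmd_ring_adv_next2comp(&tq->tx_ring);
		entries++;
	}

	/* free the frame only once, at EOP, and in bulk for XDP */
	if (map_type & VMXNET3_MAP_XDP)
		xdp_return_frame_bulk(tbi->xdpf, bq);
	else
		dev_kfree_skb_any(tbi->skb);
	tbi->skb = NULL;

	return entries;
}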
>
>
> > @@ -414,26 +436,37 @@ static void
> > vmxnet3_tq_cleanup(struct vmxnet3_tx_queue *tq,
> > struct vmxnet3_adapter *adapter)
> > {
> > + struct xdp_frame_bulk bq;
> > int i;
> >
> > + xdp_frame_bulk_init(&bq);
> > +
> > while (tq->tx_ring.next2comp != tq->tx_ring.next2fill) {
> > struct vmxnet3_tx_buf_info *tbi;
> >
> > tbi = tq->buf_info + tq->tx_ring.next2comp;
> >
> > - vmxnet3_unmap_tx_buf(tbi, adapter->pdev);
> > - if (tbi->skb) {
> > + vmxnet3_unmap_tx_buf(tbi, adapter->pdev, &bq);
> > + switch (tbi->map_type) {
> > + case VMXNET3_MAP_SINGLE:
> > + case VMXNET3_MAP_PAGE:
> > + if (!tbi->skb)
> > + break;
> > dev_kfree_skb_any(tbi->skb);
> > tbi->skb = NULL;
> > + break;
> > + case VMXNET3_MAP_XDP:
> > + default:
> > + break;
> > }
>
> This can probably be simplified. Basically if tbi->skb && !(map_type &
> VMXNET3_MAP_XDP) then you have an skb to free. No need for the switch
> statement.
>
> This too could benefit from keeping the buffer freeing out of
> vmxnet3_unmap_tx_buf since we could just do something like:
> vmxnet3_unmap_tx_buf()
>
> /* xdpf and skb are in an anonymous union, if set we need to
>  * free a buffer.
>  */
> if (tbi->skb) {
> 	if (tbi->map_type & VMXNET3_MAP_XDP)
> 		xdp_return_frame_bulk(tbi->xdpf, bq);
> 	else
> 		dev_kfree_skb_any(tbi->skb);
> 	tbi->skb = NULL;
> }
Thanks, will do it.
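For vmxnet3_tq_cleanup the loop would then become roughly (untested,
following your pseudocode, with the bulk init/flush wrapped in RCU):

	xdp_frame_bulk_init(&bq);
	rcu_read_lock();

	while (tq->tx_ring.next2comp != tq->tx_ring.next2fill) {
		struct vmxnet3_tx_buf_info *tbi;

		tbi = tq->buf_info + tq->tx_ring.next2comp;

		vmxnet3_unmap_tx_buf(tbi, adapter->pdev);
		if (tbi->skb) {
			if (tbi->map_type & VMXNET3_MAP_XDP)
				xdp_return_frame_bulk(tbi->xdpf, &bq);
			else
				dev_kfree_skb_any(tbi->skb);
			tbi->skb = NULL;
		}
		vmxnet3_cmd_ring_adv_next2comp(&tq->tx_ring);
	}

	xdp_flush_frame_bulk(&bq);
	rcu_read_unlock();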
William