Message-ID: <e863a394-a2af-505b-5c5c-cbf8b4a1819f@redhat.com>
Date: Fri, 1 Jul 2022 09:11:50 +0200
From: Jesper Dangaard Brouer <jbrouer@...hat.com>
To: William Tu <u9012063@...il.com>, netdev@...r.kernel.org
Cc: brouer@...hat.com, doshir@...are.com, jbrouer@...hat.com,
lorenzo.bianconi@...hat.com, gyang@...are.com,
William Tu <tuc@...are.com>
Subject: Re: [RFC PATCH 1/2] vmxnet3: Add basic XDP support.
On 29/06/2022 03.49, William Tu wrote:
> The patch adds native-mode XDP support: XDP_DROP, XDP_PASS, and XDP_TX.
> The vmxnet3 rx consists of three rings: r0, r1, and dataring.
> Buffers in r0 are allocated using the alloc_skb APIs and DMA-mapped to
> the ring's descriptors. If LRO is enabled and the packet size is larger
> than VMXNET3_MAX_SKB_BUF_SIZE (3K), r1 is used to map the part of the
> buffer beyond VMXNET3_MAX_SKB_BUF_SIZE. Each buffer in r1 is allocated
> using alloc_page. So for LRO packets the payload ends up in one buffer
> from r0 plus multiple buffers from r1; for non-LRO packets smaller than
> 3K, only a single descriptor in r0 is used.
>
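
For readers new to the verdicts mentioned above: a minimal XDP program
exercising XDP_DROP, XDP_PASS and XDP_TX could look like the sketch below.
It is purely illustrative and not part of this patch; the program name and
the choice of bouncing ARP frames with XDP_TX are arbitrary.

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int xdp_verdict_demo(struct xdp_md *ctx)
{
	void *data     = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;

	/* Bounds check required by the verifier. */
	if (data + sizeof(*eth) > data_end)
		return XDP_DROP;	/* drop malformed frames */

	if (eth->h_proto == bpf_htons(ETH_P_ARP))
		return XDP_TX;		/* bounce back out the same port */

	return XDP_PASS;		/* everything else goes to the stack */
}

char _license[] SEC("license") = "GPL";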
[...]
>
> Need feedback:
[...]
> e. I should be able to move the run_xdp call before the
> netdev_alloc_skb_ip_align() in vmxnet3_rq_rx_complete(),
> thus avoiding the skb allocation overhead.
Yes please!
Generally speaking, allocating an SKB first and only afterwards invoking
the XDP BPF-prog goes against the principle of native XDP.
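
Roughly what I have in mind is sketched below, using the generic XDP
helpers (xdp_init_buff(), xdp_prepare_buff(), bpf_prog_run_xdp()). The
helper name vmxnet3_run_xdp_first() and its arguments are illustrative,
not taken from the patch; it assumes the frame sits in a page the driver
can hand to XDP before any SKB is built.

#include <linux/filter.h>	/* bpf_prog_run_xdp() */
#include <net/xdp.h>		/* xdp_init_buff(), xdp_prepare_buff() */

static u32 vmxnet3_run_xdp_first(struct bpf_prog *prog,
				 struct xdp_rxq_info *rxq,
				 unsigned char *hard_start,
				 u32 headroom, u32 len)
{
	struct xdp_buff xdp;

	/* Wrap the DMA'ed buffer in an xdp_buff; no SKB exists yet. */
	xdp_init_buff(&xdp, PAGE_SIZE, rxq);
	xdp_prepare_buff(&xdp, hard_start, headroom, len, false);

	/* The verdict is taken here; only XDP_PASS frames should ever
	 * reach netdev_alloc_skb_ip_align() in vmxnet3_rq_rx_complete().
	 */
	return bpf_prog_run_xdp(prog, &xdp);
}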
[...]
> Signed-off-by: William Tu <tuc@...are.com>
> ---
> drivers/net/vmxnet3/vmxnet3_drv.c | 360 +++++++++++++++++++++++++-
> drivers/net/vmxnet3/vmxnet3_ethtool.c | 10 +
> drivers/net/vmxnet3/vmxnet3_int.h | 16 ++
> 3 files changed, 382 insertions(+), 4 deletions(-)
>
--Jesper
(sorry for the short feedback)