Message-ID: <1706757660.3554723-2-xuanzhuo@linux.alibaba.com>
Date: Thu, 1 Feb 2024 11:21:00 +0800
From: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
To: Jason Wang <jasowang@...hat.com>
Cc: virtualization@...ts.linux.dev,
Richard Weinberger <richard@....at>,
Anton Ivanov <anton.ivanov@...bridgegreys.com>,
Johannes Berg <johannes@...solutions.net>,
"Michael S. Tsirkin" <mst@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Hans de Goede <hdegoede@...hat.com>,
Ilpo Järvinen <ilpo.jarvinen@...ux.intel.com>,
Vadim Pasternak <vadimp@...dia.com>,
Bjorn Andersson <andersson@...nel.org>,
Mathieu Poirier <mathieu.poirier@...aro.org>,
Cornelia Huck <cohuck@...hat.com>,
Halil Pasic <pasic@...ux.ibm.com>,
Eric Farman <farman@...ux.ibm.com>,
Heiko Carstens <hca@...ux.ibm.com>,
Vasily Gorbik <gor@...ux.ibm.com>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Sven Schnelle <svens@...ux.ibm.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
Benjamin Berg <benjamin.berg@...el.com>,
Yang Li <yang.lee@...ux.alibaba.com>,
linux-um@...ts.infradead.org,
netdev@...r.kernel.org,
platform-driver-x86@...r.kernel.org,
linux-remoteproc@...r.kernel.org,
linux-s390@...r.kernel.org,
kvm@...r.kernel.org,
bpf@...r.kernel.org
Subject: Re: [PATCH vhost 17/17] virtio_net: sq support premapped mode
On Wed, 31 Jan 2024 17:12:47 +0800, Jason Wang <jasowang@...hat.com> wrote:
> On Tue, Jan 30, 2024 at 7:43 PM Xuan Zhuo <xuanzhuo@...ux.alibaba.com> wrote:
> >
> > If xsk is enabled, the xsk tx path will share the send queue.
> > But xsk requires that the send queue use premapped mode, so the
> > send queue must support premapped mode.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
> > ---
> > drivers/net/virtio_net.c | 167 ++++++++++++++++++++++++++++++++++++++-
> > 1 file changed, 163 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 226ab830870e..cf0c67380b07 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -46,6 +46,7 @@ module_param(napi_tx, bool, 0644);
> > #define VIRTIO_XDP_REDIR BIT(1)
> >
> > #define VIRTIO_XDP_FLAG BIT(0)
> > +#define VIRTIO_DMA_FLAG BIT(1)
> >
> > /* RX packet size EWMA. The average packet size is used to determine the packet
> > * buffer size when refilling RX rings. As the entire RX ring may be refilled
> > @@ -140,6 +141,21 @@ struct virtnet_rq_dma {
> > u16 need_sync;
> > };
> >
> > +struct virtnet_sq_dma {
> > + union {
> > + struct virtnet_sq_dma *next;
> > + void *data;
> > + };
> > + dma_addr_t addr;
> > + u32 len;
> > + bool is_tail;
> > +};
> > +
> > +struct virtnet_sq_dma_head {
> > + struct virtnet_sq_dma *free;
> > + struct virtnet_sq_dma *head;
>
> Any reason the head must be a pointer instead of a simple index?
The head is used for kfree(); maybe I should rename it.

About using an index for the next field of virtnet_sq_dma:
if we use an index, the struct becomes:

	struct virtnet_sq_dma {
		dma_addr_t addr;
		u32 len;
		u32 next;
		void *data;
	};

The size of virtnet_sq_dma stays the same.
>
> > +};
> > +
> > /* Internal representation of a send virtqueue */
> > struct send_queue {
> > /* Virtqueue associated with this send _queue */
> > @@ -159,6 +175,8 @@ struct send_queue {
> >
> > /* Record whether sq is in reset state. */
> > bool reset;
> > +
> > + struct virtnet_sq_dma_head dmainfo;
> > };
> >
....
> > +
> > +static int virtnet_sq_init_dma_mate(struct send_queue *sq)
> > +{
> > + struct virtnet_sq_dma *d;
> > + int size, i;
> > +
> > + size = virtqueue_get_vring_size(sq->vq);
> > +
> > + size += MAX_SKB_FRAGS + 2;
>
> Is this enough for the case where an indirect descriptor is used?
This covers the case where xmit_skb() is called while the ring is
already full. I will add a comment.

Thanks.
>
> Thanks
>