Message-ID: <1687916061.7751381-1-xuanzhuo@linux.alibaba.com>
Date: Wed, 28 Jun 2023 09:34:21 +0800
From: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Jason Wang <jasowang@...hat.com>,
virtualization@...ts.linux-foundation.org,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
netdev@...r.kernel.org,
bpf@...r.kernel.org
Subject: Re: [PATCH vhost v10 02/10] virtio_ring: introduce virtqueue_set_premapped()
On Tue, 27 Jun 2023 10:56:54 -0400, "Michael S. Tsirkin" <mst@...hat.com> wrote:
> On Tue, Jun 27, 2023 at 04:50:01PM +0800, Xuan Zhuo wrote:
> > On Tue, 27 Jun 2023 16:03:23 +0800, Jason Wang <jasowang@...hat.com> wrote:
> > > On Fri, Jun 2, 2023 at 5:22 PM Xuan Zhuo <xuanzhuo@...ux.alibaba.com> wrote:
> > > >
> > > > This helper allows the driver to switch the DMA mode to premapped
> > > > mode. In premapped mode, the virtio core does not do DMA mapping
> > > > internally.
> > > >
> > > > This only works when use_dma_api is true. If use_dma_api is false,
> > > > the DMA operations do not go through the DMA APIs, which is not the
> > > > standard way in the Linux kernel.
> > > >
> > > > Signed-off-by: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
> > > > ---
> > > > drivers/virtio/virtio_ring.c | 40 ++++++++++++++++++++++++++++++++++++
> > > > include/linux/virtio.h | 2 ++
> > > > 2 files changed, 42 insertions(+)
> > > >
> > > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > > > index 72ed07a604d4..2afdfb9e3e30 100644
> > > > --- a/drivers/virtio/virtio_ring.c
> > > > +++ b/drivers/virtio/virtio_ring.c
> > > > @@ -172,6 +172,9 @@ struct vring_virtqueue {
> > > > /* Host publishes avail event idx */
> > > > bool event;
> > > >
> > > > + /* Do DMA mapping by driver */
> > > > + bool premapped;
> > > > +
> > > > /* Head of free buffer list. */
> > > > unsigned int free_head;
> > > > /* Number we've added since last sync. */
> > > > @@ -2059,6 +2062,7 @@ static struct virtqueue *vring_create_virtqueue_packed(
> > > > vq->packed_ring = true;
> > > > vq->dma_dev = dma_dev;
> > > > vq->use_dma_api = vring_use_dma_api(vdev);
> > > > + vq->premapped = false;
> > > >
> > > > vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
> > > > !context;
> > > > @@ -2548,6 +2552,7 @@ static struct virtqueue *__vring_new_virtqueue(unsigned int index,
> > > > #endif
> > > > vq->dma_dev = dma_dev;
> > > > vq->use_dma_api = vring_use_dma_api(vdev);
> > > > + vq->premapped = false;
> > > >
> > > > vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
> > > > !context;
> > > > @@ -2691,6 +2696,41 @@ int virtqueue_resize(struct virtqueue *_vq, u32 num,
> > > > }
> > > > EXPORT_SYMBOL_GPL(virtqueue_resize);
> > > >
> > > > +/**
> > > > + * virtqueue_set_premapped - set the vring premapped mode
> > > > + * @_vq: the struct virtqueue we're talking about.
> > > > + *
> > > > + * Enable the premapped mode of the vq.
> > > > + *
> > > > + * A vring in premapped mode does not do DMA mapping internally, so the
> > > > + * driver must do the DMA mapping in advance. The driver must pass the
> > > > + * DMA address through the dma_address field of the scatterlist. When the
> > > > + * driver gets a used buffer back from the vring, it has to unmap the DMA
> > > > + * address itself. So the driver must call
> > > > + * virtqueue_get_buf_premapped()/virtqueue_detach_unused_buf_premapped().
> > > > + *
> > > > + * This must be called before adding any buf to vring.
> > >
> > > And any old buffer should be detached?
> >
> > I mean before adding any buf, so there are no old buffers.
> >
>
> Oh. So put this in the same sentence:
>
> This function must be called immediately after creating the vq,
> or after vq reset, and before adding any buffers to it.
OK, thanks.
>
>
> > >
> > > > + * So this should be called immediately after vq init or vq reset.
>
> Do you really need to call this again after each reset?
YES
Thanks.
>
>
> > > Any way to detect and warn in this case? (not a must if it's too
> > > expensive to do the check)
> >
> >
> > I can try to check whether the queue is empty.
> >
> >
> > >
> > > > + *
> > > > + * Caller must ensure we don't call this with other virtqueue operations
> > > > + * at the same time (except where noted).
> > > > + *
> > > > + * Returns zero or a negative error.
> > > > + * 0: success.
> > > > + * -EINVAL: the vring does not use the DMA API, so premapped mode cannot be enabled.
> > > > + */
> > > > +int virtqueue_set_premapped(struct virtqueue *_vq)
> > > > +{
> > > > + struct vring_virtqueue *vq = to_vvq(_vq);
> > > > +
> > > > + if (!vq->use_dma_api)
> > > > + return -EINVAL;
> > > > +
> > > > + vq->premapped = true;
> > >
> > > I guess there should be a way to disable it. Would it be useful for
> > > the case when AF_XDP sockets were destroyed?
> >
> > Yes.
> >
> > When we reset the queue, vq->premapped will be set to 0 (false).
> >
> > This is called after find_vqs or after a vq reset.
> >
> > Thanks.
> >
> >
> >
> > >
> > > Thanks
> > >
> > >
> > > > +
> > > > + return 0;
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(virtqueue_set_premapped);
> > > > +
> > > > /* Only available for split ring */
> > > > struct virtqueue *vring_new_virtqueue(unsigned int index,
> > > > unsigned int num,
> > > > diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> > > > index b93238db94e3..1fc0e1023bd4 100644
> > > > --- a/include/linux/virtio.h
> > > > +++ b/include/linux/virtio.h
> > > > @@ -78,6 +78,8 @@ bool virtqueue_enable_cb(struct virtqueue *vq);
> > > >
> > > > unsigned virtqueue_enable_cb_prepare(struct virtqueue *vq);
> > > >
> > > > +int virtqueue_set_premapped(struct virtqueue *_vq);
> > > > +
> > > > bool virtqueue_poll(struct virtqueue *vq, unsigned);
> > > >
> > > > bool virtqueue_enable_cb_delayed(struct virtqueue *vq);
> > > > --
> > > > 2.32.0.3.g01195cf9f
> > > >
> > >
>