Message-ID: <CACGkMEumhkBShqXXbWXviS+xZA1aYrnZFoU_avdsWZ_9sBAwUQ@mail.gmail.com>
Date: Wed, 28 Jun 2023 14:51:08 +0800
From: Jason Wang <jasowang@...hat.com>
To: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
Cc: virtualization@...ts.linux-foundation.org,
"Michael S. Tsirkin" <mst@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
netdev@...r.kernel.org, bpf@...r.kernel.org
Subject: Re: [PATCH vhost v10 03/10] virtio_ring: split: support add premapped buf
On Wed, Jun 28, 2023 at 2:02 PM Xuan Zhuo <xuanzhuo@...ux.alibaba.com> wrote:
>
> On Wed, 28 Jun 2023 12:07:10 +0800, Jason Wang <jasowang@...hat.com> wrote:
> > On Tue, Jun 27, 2023 at 5:05 PM Xuan Zhuo <xuanzhuo@...ux.alibaba.com> wrote:
> > >
> > > On Tue, 27 Jun 2023 16:03:26 +0800, Jason Wang <jasowang@...hat.com> wrote:
> > > > On Fri, Jun 2, 2023 at 5:22 PM Xuan Zhuo <xuanzhuo@...ux.alibaba.com> wrote:
> > > > >
> > > > > If the vq is in premapped mode, use sg_dma_address() directly.
> > > > >
> > > > > Signed-off-by: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
> > > > > ---
> > > > > drivers/virtio/virtio_ring.c | 46 ++++++++++++++++++++++--------------
> > > > > 1 file changed, 28 insertions(+), 18 deletions(-)
> > > > >
> > > > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > > > > index 2afdfb9e3e30..18212c3e056b 100644
> > > > > --- a/drivers/virtio/virtio_ring.c
> > > > > +++ b/drivers/virtio/virtio_ring.c
> > > > > @@ -598,8 +598,12 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> > > > > for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> > > > > dma_addr_t addr;
> > > > >
> > > > > - if (vring_map_one_sg(vq, sg, DMA_TO_DEVICE, &addr))
> > > > > - goto unmap_release;
> > > > > + if (vq->premapped) {
> > > > > + addr = sg_dma_address(sg);
> > > > > + } else {
> > > > > + if (vring_map_one_sg(vq, sg, DMA_TO_DEVICE, &addr))
> > > > > + goto unmap_release;
> > > > > + }
> > > >
> > > > Btw, I wonder whether or not it would be simpler to implement the
> > > > vq->premapped check inside vring_map_one_sg(), assuming the
> > > > !use_dma_api check is done there as well.
> > >
> > >
> > > Yes,
> > >
> > > That will be simpler for the caller.
> > >
> > > But we will have things like:
> > >
> > > int func(bool do_map)
> > > {
> > >         if (!do_map)
> > >                 return 0;
> > >         ...
> > > }
> > >
> > > I like this way, but you didn't like it in the last version.
> >
> > I see :)
> >
> > So I think it depends on the error handling path; we should choose a
> > way that lets us easily deal with errors.
> >
> > For example, it seems the current approach is better since it doesn't
> > need to change the unmap_release.
>
> No,
>
> The unmap_release path is the same either way.
>
> Thanks.
Ok, so either way is fine with me.
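
For reference, a rough sketch (not the actual patch, just the alternative
being discussed) of what folding the vq->premapped check into
vring_map_one_sg() could look like, assuming the int-returning signature
used earlier in this series and the existing use_dma_api / vring_dma_dev()
helpers:

static int vring_map_one_sg(const struct vring_virtqueue *vq,
                            struct scatterlist *sg,
                            enum dma_data_direction direction,
                            dma_addr_t *addr)
{
        if (vq->premapped) {
                /* The caller has already mapped the buffer; reuse it. */
                *addr = sg_dma_address(sg);
                return 0;
        }

        if (!vq->use_dma_api) {
                /* No DMA API: the device can use the physical address. */
                *addr = (dma_addr_t)sg_phys(sg);
                return 0;
        }

        *addr = dma_map_page(vring_dma_dev(vq),
                             sg_page(sg), sg->offset, sg->length,
                             direction);
        if (dma_mapping_error(vring_dma_dev(vq), *addr))
                return -ENOMEM;

        return 0;
}

With that shape, virtqueue_add_split() would keep calling
vring_map_one_sg() unconditionally, at the cost of the early-return
pattern mentioned above.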
Thanks
>
>
> >
> > Thanks
> >
> > >
> > > >
> > > > >
> > > > > prev = i;
> > > > > /* Note that we trust indirect descriptor
> > > > > @@ -614,8 +618,12 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> > > > > for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> > > > > dma_addr_t addr;
> > > > >
> > > > > - if (vring_map_one_sg(vq, sg, DMA_FROM_DEVICE, &addr))
> > > > > - goto unmap_release;
> > > > > + if (vq->premapped) {
> > > > > + addr = sg_dma_address(sg);
> > > > > + } else {
> > > > > + if (vring_map_one_sg(vq, sg, DMA_FROM_DEVICE, &addr))
> > > > > + goto unmap_release;
> > > > > + }
> > > > >
> > > > > prev = i;
> > > > > /* Note that we trust indirect descriptor
> > > > > @@ -689,21 +697,23 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> > > > > return 0;
> > > > >
> > > > > unmap_release:
> > > > > - err_idx = i;
> > > > > + if (!vq->premapped) {
> > > >
> > > > Can vq->premapped be true here? The label is named "unmap_release",
> > > > which implies a "map" beforehand, which doesn't seem to be the case
> > > > for premapping.
> > >
> > > I see.
> > >
> > > Rethinking about this, there is a better way.
> > > I will fix it in the next version.
> > >
> > >
> > > Thanks.
> > >
> > >
> > > >
> > > > Thanks
> > > >
> > > >
> > > > > + err_idx = i;
> > > > >
> > > > > - if (indirect)
> > > > > - i = 0;
> > > > > - else
> > > > > - i = head;
> > > > > -
> > > > > - for (n = 0; n < total_sg; n++) {
> > > > > - if (i == err_idx)
> > > > > - break;
> > > > > - if (indirect) {
> > > > > - vring_unmap_one_split_indirect(vq, &desc[i]);
> > > > > - i = virtio16_to_cpu(_vq->vdev, desc[i].next);
> > > > > - } else
> > > > > - i = vring_unmap_one_split(vq, i);
> > > > > + if (indirect)
> > > > > + i = 0;
> > > > > + else
> > > > > + i = head;
> > > > > +
> > > > > + for (n = 0; n < total_sg; n++) {
> > > > > + if (i == err_idx)
> > > > > + break;
> > > > > + if (indirect) {
> > > > > + vring_unmap_one_split_indirect(vq, &desc[i]);
> > > > > + i = virtio16_to_cpu(_vq->vdev, desc[i].next);
> > > > > + } else
> > > > > + i = vring_unmap_one_split(vq, i);
> > > > > + }
> > > > > }
> > > > >
> > > > > if (indirect)
> > > > > --
> > > > > 2.32.0.3.g01195cf9f
> > > > >
> > > >
> > >
> >
>
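
For context, a hypothetical caller-side illustration (not part of this
series; names like dev, buf and len are placeholders) of what a premapped
vq implies for the driver: it maps the buffer itself and stores the DMA
address in the scatterlist, which virtqueue_add_split() then reads back
via sg_dma_address():

        struct scatterlist sg;
        dma_addr_t addr;
        int err;

        /* Driver does its own mapping up front. */
        addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, addr))
                return -ENOMEM;

        /* Fill the sg as usual, plus the premapped DMA address. */
        sg_init_one(&sg, buf, len);
        sg_dma_address(&sg) = addr;

        err = virtqueue_add_outbuf(vq, &sg, 1, buf, GFP_ATOMIC);
        if (err) {
                dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);
                return err;
        }

The point of the patch above is only that the core picks up that
pre-filled address instead of mapping the sg again; how the vq gets
flagged as premapped is handled elsewhere in the series.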