Message-ID: <CAGF4SLipgZ9e9qNWcJqvECDWuVbBMaAYPvXTqxbaUDeh7O-SkA@mail.gmail.com>
Date: Tue, 6 Nov 2018 14:47:30 -0500
From: Vitaly Mayatskih <v.mayatskih@...il.com>
To: stefanha@...il.com
Cc: "Michael S . Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] Add vhost_blk driver
On Tue, Nov 6, 2018 at 11:03 AM Stefan Hajnoczi <stefanha@...il.com> wrote:
> Did you look at vhost-user-blk? It does things slightly differently:
> more of the virtio-blk device model is handled by the vhost-user device
> (e.g. config space). That might be necessary to implement
> virtio_blk_config.writeback properly.
Yes, vhost-user-blk was used as a template.
> > +enum {
> > + VHOST_BLK_VQ_MAX = 16,
> > + VHOST_BLK_VQ_MAX_REQS = 128,
> > +};
>
> These limits seem arbitrary and probably too low.
These limits fit the cache and TLB, since the data structures are
statically allocated. I saw worse performance with a bigger max-reqs
value. I'll make it configurable.
> > + if ((type == VIRTIO_BLK_T_IN) || (type == VIRTIO_BLK_T_OUT)) {
> > + bool write = (type == VIRTIO_BLK_T_OUT);
> > + int nr_seg = (write ? req->out_num : req->in_num) - 1;
> > + unsigned long sector = le64_to_cpu(req->hdr.sector);
>
> Using little-endian instead of the virtio types means that only VIRTIO
> 1.0 modern devices are supported (older devices may not be
> little-endian!). In that case you'd need to put the VIRTIO_1 feature
> bit into the features mask.
Yeah, I'm making first baby steps in virtio ;) Thanks!
> > + switch (ioctl) {
> > + case VHOST_SET_MEM_TABLE:
> > + vhost_blk_stop(blk);
> > + ret = vhost_blk_pass_ioctl(blk, ioctl, argp);
> > + break;
>
> Why is this necessary? Existing vhost devices pass the ioctl through
> without an explicit case for it.
When vq->private_data is populated, vhost_set_vring_num() returns
-EBUSY if the ioctl is passed through as-is. It could be a bug in
vhost, too, but I don't have enough knowledge to say. To demonstrate,
with the stop call commented out QEMU fails to start the device:
diff --git a/drivers/vhost/blk.c b/drivers/vhost/blk.c
index aefb9a61fa0f..da3eb041a975 100644
--- a/drivers/vhost/blk.c
+++ b/drivers/vhost/blk.c
@@ -438,7 +438,7 @@ static long vhost_blk_ioctl(struct file *f, unsigned int ioctl,
switch (ioctl) {
case VHOST_SET_MEM_TABLE:
- vhost_blk_stop(blk);
+// vhost_blk_stop(blk);
ret = vhost_blk_pass_ioctl(blk, ioctl, argp);
break;
case VHOST_SET_VRING_NUM:
./x86_64-softmmu/qemu-system-x86_64 --enable-kvm -cpu host -smp 4 -m
2G -vnc 192.168.122.104:5 --drive
if=none,id=drive0,format=raw,file=/dev/vde -device
vhost-blk-pci,id=blk0,drive=drive0,num-queues=4
qemu-system-x86_64: vhost_set_vring_num failed: Device or resource busy (16)
qemu-system-x86_64: Error starting vhost: 16
> > + case VHOST_SET_VRING_NUM:
> > + if (copy_from_user(&s, argp, sizeof(s)))
> > + return -EFAULT;
> > + ret = vhost_blk_pass_ioctl(blk, ioctl, argp);
> > + if (!ret)
> > + blk->num_queues = s.index + 1;
>
> Where is this input checked against ARRAY_SIZE(blk->queue)?
It is checked in vhost itself. I capture the parameter only if the
vhost ioctl completes without errors. Perhaps it is worth a comment.
Thanks for the review, this is exactly what I was hoping for!
--
wbr, Vitaly