Message-ID: <4F38D5BC.8010705@redhat.com>
Date: Mon, 13 Feb 2012 10:19:56 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: James Bottomley <James.Bottomley@...senPartnership.com>
CC: Christian Hoff <christian.hoff@...ibm.com>,
BORNTRAE@...ux.vnet.ibm.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-scsi@...r.kernel.org,
mst@...hat.com, rusty@...tcorp.com.au
Subject: Re: [PATCH v5 1/3] virtio-scsi: first version

On 02/12/2012 09:16 PM, James Bottomley wrote:
> Well, no-one's yet answered the question I had about why. virtio-scsi
> seems to be a basic duplication of virtio-blk except that it seems to
> fix some problems virtio-blk has. Namely queue parameter discovery,
> which virtio-blk doesn't seem to do.

The biggest differences between virtio-blk and virtio-scsi are:

1) how the feature set is defined. virtio-blk defines the feature set
of the device through a spec shared between the guest and the host.
The virtio-scsi spec does not define a feature set for the devices,
only for the transport. Introducing new features in the guest does not
need to be done specifically for virt; it can be done in generic code
(sd.c). This results in a large feature set and, at the same time, a
very stable spec.

Right now virtio-blk covers common use cases nicely. However, the
Linux block layer _is_ growing support for new operations: discard is
already there, write same is in the works, extended copy will also
come in due time. Perhaps we'll add them to virtio-blk, perhaps not.
If we do, we will have to modify the spec, the host implementation,
and the guest drivers for each possible guest OS. virtio-scsi will
support them transparently; depending on your configuration, it might
work without touching the host at all.
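
To make this concrete, here is a rough sketch of the request header
that virtio-scsi prepends to every command, following the layout of
the proposed spec (field names and sizes are from the draft and may
still change). Because the CDB is carried opaquely, a new operation
such as WRITE SAME(16) is just one more opcode in the cdb[] array, and
the transport never has to learn about it:

    #include <stdint.h>
    #include <string.h>

    #define VIRTIO_SCSI_CDB_SIZE 32     /* CDB size from the draft */

    /* Header placed in front of every command on the request
     * virtqueue (sketch, based on the proposed spec). */
    struct virtio_scsi_cmd_req {
            uint8_t  lun[8];            /* addressed LUN */
            uint64_t tag;               /* command identifier */
            uint8_t  task_attr;         /* task attribute */
            uint8_t  prio;
            uint8_t  crn;
            uint8_t  cdb[VIRTIO_SCSI_CDB_SIZE]; /* opaque CDB */
    };

    /* WRITE SAME(16) needs no new virtio request type: it is just
     * another opcode inside the opaque CDB. */
    static void fill_write_same16(struct virtio_scsi_cmd_req *req,
                                  uint64_t lba, uint32_t nblocks)
    {
            int i;

            memset(req->cdb, 0, sizeof(req->cdb));
            req->cdb[0] = 0x93;                 /* WRITE SAME(16) */
            for (i = 0; i < 8; i++)             /* LBA, big-endian */
                    req->cdb[2 + i] = (lba >> (56 - 8 * i)) & 0xff;
            for (i = 0; i < 4; i++)             /* number of blocks */
                    req->cdb[10 + i] = (nblocks >> (24 - 8 * i)) & 0xff;
    }

virtio-blk, by contrast, would need a new VIRTIO_BLK_T_* request type,
plus matching host and guest changes, for the same operation.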

2) for disks with SCSI attachment, the native interface is exposed
precisely as it is in the host. I think we had some misunderstanding
WRT queue parameter discovery. My concern with virtio-blk's SG_IO
support is more general than that: SG_IO accesses the host disk, not
the guest disk. They will have the same data, but they are
effectively different disks. For example, they might have different
queue parameters; hence the misunderstanding.
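
You can see this from inside a guest. The sketch below assumes a
virtio-blk disk with SCSI passthrough enabled on the host, and
/dev/vda is just an example path. It asks the guest block layer for
the logical sector size, then asks the same question through SG_IO,
which the host disk answers; on such a setup the two can disagree:

    #include <fcntl.h>
    #include <linux/fs.h>
    #include <scsi/sg.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>

    int main(void)
    {
            unsigned char cdb[10] = { 0x25 };  /* READ CAPACITY(10) */
            unsigned char data[8], sense[32];
            struct sg_io_hdr io;
            int fd, guest_lbs = 0;
            unsigned host_lbs;

            fd = open("/dev/vda", O_RDONLY);   /* example guest disk */
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* What the *guest* block device reports. */
            ioctl(fd, BLKSSZGET, &guest_lbs);

            /* What the *host* disk reports: SG_IO bypasses the
             * guest block layer entirely. */
            memset(&io, 0, sizeof(io));
            io.interface_id = 'S';
            io.cmdp = cdb;
            io.cmd_len = sizeof(cdb);
            io.dxfer_direction = SG_DXFER_FROM_DEV;
            io.dxferp = data;
            io.dxfer_len = sizeof(data);
            io.sbp = sense;
            io.mx_sb_len = sizeof(sense);
            io.timeout = 5000;                 /* milliseconds */
            if (ioctl(fd, SG_IO, &io) < 0) {
                    perror("SG_IO");
                    return 1;
            }

            /* Bytes 4-7 of the reply: block length, big-endian. */
            host_lbs = (data[4] << 24) | (data[5] << 16) |
                       (data[6] << 8) | data[7];
            printf("guest: %d bytes/sector, host: %u bytes/sector\n",
                   guest_lbs, host_lbs);
            return 0;
    }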

People are mostly using the SG_IO interface for sane purposes. For
example, you can ping the storage with INQUIRY commands to detect
problems on the NAS or SAN. For these use cases the difference does
not matter. However, there _are_ worrisome use cases for SG_IO that
people are looking at, for example installing vendor backup tools in
their guests. These tools send vendor-specific commands to the disks.
Nothing particularly insane about that, but we want them to do it
using a saner interface than VIRTIO_BLK_T_SCSI_CMD.
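
For reference, the "ping" case is tiny. A minimal sketch (error
handling trimmed; /dev/sda is only an example path): send a bare
INQUIRY through SG_IO and treat a good reply as proof that the path to
the storage is alive. Vendor-specific commands travel through exactly
the same ioctl, just with a different CDB:

    #include <fcntl.h>
    #include <scsi/sg.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>

    int main(void)
    {
            unsigned char cdb[6] = { 0x12, 0, 0, 0, 96, 0 }; /* INQUIRY */
            unsigned char data[96], sense[32];
            struct sg_io_hdr io;
            int fd = open("/dev/sda", O_RDONLY);  /* example path */

            if (fd < 0)
                    return 1;
            memset(&io, 0, sizeof(io));
            io.interface_id = 'S';
            io.cmdp = cdb;
            io.cmd_len = sizeof(cdb);
            io.dxfer_direction = SG_DXFER_FROM_DEV;
            io.dxferp = data;
            io.dxfer_len = sizeof(data);
            io.sbp = sense;
            io.mx_sb_len = sizeof(sense);
            io.timeout = 5000;                    /* milliseconds */

            if (ioctl(fd, SG_IO, &io) < 0 || io.status != 0) {
                    fprintf(stderr, "storage did not answer\n");
                    return 1;
            }
            /* Bytes 8-15: vendor; bytes 16-31: product. */
            printf("alive: %.8s %.16s\n",
                   (char *)&data[8], (char *)&data[16]);
            return 0;
    }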

On top of this, obviously only virtio-scsi will support devices such
as tapes.

> There may also be a reason to cut the stack lower down. Error
> handling is most often cited for this, but no-one's satisfactorily
> explained why it's better to do error handling in the guest instead of
> the host.

It's not necessarily better. However, error handling in the host may
simply not be there. This is the case, for example, with NFS-based
storage mounted with the "hard" option.

Paolo