Message-ID: <4F3907EB.4030402@redhat.com>
Date: Mon, 13 Feb 2012 14:54:03 +0200
From: Dor Laor <dlaor@...hat.com>
To: "Nicholas A. Bellinger" <nab@...ux-iscsi.org>
CC: Christian Borntraeger <borntraeger@...ibm.com>,
James Bottomley <James.Bottomley@...senpartnership.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Christian Hoff <christian.hoff@...ibm.com>,
borntrae@...ux.vnet.ibm.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-scsi@...r.kernel.org,
mst@...hat.com, rusty@...tcorp.com.au,
Stefan Hajnoczi <stefanha@...il.com>,
target-devel <target-devel@...r.kernel.org>
Subject: Re: [PATCH v5 1/3] virtio-scsi: first version
On 02/13/2012 02:40 PM, Nicholas A. Bellinger wrote:
> Hi Dor, James & Co,
>
> On Mon, 2012-02-13 at 09:57 +0200, Dor Laor wrote:
>> On 02/13/2012 09:05 AM, Christian Borntraeger wrote:
>>> On 12/02/12 21:16, James Bottomley wrote:
>>>> Well, no-one's yet answered the question I had about why.
>>>
>>> Just to give one example from a different angle:
>>> In the big datacenters tape libraries are still very important, and lots
>>> of them have a scsi attachment. virtio-blk certainly is not the right
>>> way to handle those. Furthermore, it seems pretty hard to craft
>>> a virtio-tape, since most of those libraries have vendor-specific library
>>> controls (via sg). We would need to duplicate scsi generic (hint, hint :-)
>>>
>>>> virtio-scsi seems to be a basic duplication of virtio-blk, except that it
>>>> seems to fix some problems virtio-blk has, namely queue parameter discovery,
>>>> which virtio-blk doesn't seem to do. There may also be a reason to cut
>>>> the stack lower down. Error handling is most often cited for this, but
>>>> no-one's satisfactorily explained why it's better to do error handling in
>>>> the guest instead of the host.
>>>>
>>>> Could someone please explain to me why you can't simply fix virtio-blk?
>>>
>>> I don't think that virtio-scsi will replace virtio-blk everywhere. For non-scsi
>>> block devices, image files, or logical volumes, virtio-blk seems to be the right
>>> approach, I think.
>>
>> +1
>>
>> virtio-scsi is superior w.r.t.:
>> - Device support: tapes, cdroms, and other device types
>
> AFAICT, passthrough of any non-TYPE_DISK struct scsi_device is currently
> going to require virtio-scsi in order to work.
>
>> - Handles guest-host mapped multipath
>
> The logic that comes with target_core_fabric_configfs.c and the native
> target control plane gives a host-side (tcm_vhost) fabric driver generic
> explicit/implicit ALUA multipath support by default.
>
> I think there are some interesting possibilities for paravirtualized
> ALUA multipath.. 8-)
>
>> - Supports plenty of virtual disks mapped to the guest without needing a
>>   PCI slot for each virtio-blk device
>
> Ouch, virtio-blk lacks multi-LUN per PCI slot support..?
Only if you use the PCI multi-function option, but that kills standard
hot unplug (rough example below).
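
To make that concrete, here is a rough QEMU command-line sketch of the two
alternative ways to attach the same two disks (illustrative only; exact
device and property names may vary between QEMU versions):

  # virtio-blk: one PCI function per disk; multifunction packs several
  # disks into a single slot, but hotplug then operates on the whole slot
  -drive file=disk0.img,if=none,id=drv0
  -device virtio-blk-pci,drive=drv0,addr=0x05.0,multifunction=on
  -drive file=disk1.img,if=none,id=drv1
  -device virtio-blk-pci,drive=drv1,addr=0x05.1

  # virtio-scsi: one controller in one slot, many LUNs behind it
  -device virtio-scsi-pci,id=scsi0
  -drive file=disk0.img,if=none,id=drv0
  -device scsi-hd,drive=drv0,bus=scsi0.0
  -drive file=disk1.img,if=none,id=drv1
  -device scsi-hd,drive=drv1,bus=scsi0.0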
>
>> - Offloads fancy/new/sophisticated SCSI commands from the guest to the
>>   storage array without needing a QEMU implementation, e.g. XCOPY.
>>
>
> ...
>
>> There are some more goodies, like the ability to support Windows guest
>> clustering without hacky versions of SCSI passthrough over virtio-blk.
>> virtio-blk is also a candidate to move from a request-based to a bio-based
>> implementation, so sticking to it does not buy us too much.
>>
>
> MSFT cluster guests that require SPC-3 PR support can run today with
> tcm_loop LLD SCSI LUNs + SG_IO/BSG + the right megasas QEMU HBA emulation,
> but I do agree this would be better served by virtio-scsi for guests
> that require SPC-3 PR support or passthrough.
>
> --nab
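
For reference, the kind of setup nab describes would look roughly like the
sketch below (illustrative only: /dev/sdX stands for a LUN that tcm_loop
already exports as a local SCSI disk, and the megasas device assumes the
emulated HBA he mentions is available in the QEMU build used):

  # pass the tcm_loop-backed device through an emulated megasas HBA;
  # scsi-block forwards SCSI CDBs (including PR) via SG_IO
  -device megasas,id=hba0
  -drive file=/dev/sdX,if=none,id=lun0,format=raw
  -device scsi-block,drive=lun0,bus=hba0.0

That way SPC-3 PR commands from the Windows guests reach the target core,
which is the part virtio-scsi + tcm_vhost would make cleaner.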