Message-ID: <20180531163603.GC30954@lst.de>
Date: Thu, 31 May 2018 18:36:03 +0200
From: Christoph Hellwig <hch@....de>
To: Sagi Grimberg <sagi@...mberg.me>
Cc: Mike Snitzer <snitzer@...hat.com>, Christoph Hellwig <hch@....de>,
Johannes Thumshirn <jthumshirn@...e.de>,
Keith Busch <keith.busch@...el.com>,
Hannes Reinecke <hare@...e.de>,
Laurence Oberman <loberman@...hat.com>,
Ewan Milne <emilne@...hat.com>,
James Smart <james.smart@...adcom.com>,
Linux Kernel Mailinglist <linux-kernel@...r.kernel.org>,
Linux NVMe Mailinglist <linux-nvme@...ts.infradead.org>,
"Martin K . Petersen" <martin.petersen@...cle.com>,
Martin George <marting@...app.com>,
John Meneghini <John.Meneghini@...app.com>
Subject: Re: [PATCH 0/3] Provide more fine grained control over multipathing

On Thu, May 31, 2018 at 11:37:20AM +0300, Sagi Grimberg wrote:
>> the same host with PCI NVMe could be connected to a FC network that has
>> historically always been managed via dm-multipath... but say that
>> FC-based infrastructure gets updated to use NVMe (to leverage a wider
>> NVMe investment, whatever?) -- but maybe admins would still prefer to
>> use dm-multipath for the NVMe over FC.
>
> You are referring to an array exposing media via nvmf and scsi
> simultaneously? I'm not sure that there is a clean definition of
> how that is supposed to work (ANA/ALUA, reservations, etc.)

It seems like this isn't what Mike wanted, but I actually got some
requests for limited support for that, to do a storage live migration
from a SCSI array to NVMe.  I think it is really sketchy, but doable
if you are careful enough.  It would use dm-multipath, possibly even
on top of nvme multipathing if we have multiple nvme paths.
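
Roughly, as a completely untested sketch (device names and sizes are
made up; /dev/sdb stands in for the old SCSI array LUN and
/dev/nvme0n1 for the native NVMe multipath node exposing the same
data):

    # one dm-multipath table with two priority groups: the SCSI
    # path first, the NVMe path second
    # (2097152 sectors = 1GiB, must match both devices)
    dmsetup create migrate --table \
        "0 2097152 multipath 0 0 2 1 \
         round-robin 0 1 1 /dev/sdb 1 \
         round-robin 0 1 1 /dev/nvme0n1 1"

    # cut over to the NVMe path group, then fail the SCSI path
    dmsetup message migrate 0 "switch_group 2"
    dmsetup message migrate 0 "fail_path /dev/sdb"

dm-multipath would just see the native NVMe multipath node as a
single path, so all the per-path ANA handling stays in nvme.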