Message-ID: <20180605044222.GA29384@lst.de>
Date: Tue, 5 Jun 2018 06:42:23 +0200
From: Christoph Hellwig <hch@....de>
To: Roland Dreier <roland@...estorage.com>
Cc: Sagi Grimberg <sagi@...mberg.me>,
Mike Snitzer <snitzer@...hat.com>,
Christoph Hellwig <hch@....de>,
Johannes Thumshirn <jthumshirn@...e.de>,
Keith Busch <keith.busch@...el.com>,
Hannes Reinecke <hare@...e.de>,
Laurence Oberman <loberman@...hat.com>,
Ewan Milne <emilne@...hat.com>,
James Smart <james.smart@...adcom.com>,
Linux Kernel Mailinglist <linux-kernel@...r.kernel.org>,
Linux NVMe Mailinglist <linux-nvme@...ts.infradead.org>,
"Martin K . Petersen" <martin.petersen@...cle.com>,
Martin George <marting@...app.com>,
John Meneghini <John.Meneghini@...app.com>
Subject: Re: [PATCH 0/3] Provide more fine grained control over multipathing

On Mon, Jun 04, 2018 at 02:58:49PM -0700, Roland Dreier wrote:
> We plan to implement all the fancy NVMe standards like ANA, but it
> seems that there is still a requirement to let the host side choose
> policies about how to use paths (round-robin vs least queue depth for
> example). Even in the modern SCSI world with VPD pages and ALUA,
> there are still knobs that are needed. Maybe NVMe will be different
> and we can find defaults that work in all cases but I have to admit
> I'm skeptical...

The sensible thing to do in NVMe is to use different paths for
different queues. That is, e.g. in the RDMA case, use the HCA closer
to a given CPU by default. We might allow overriding this for
cases where there is a good reason, but what I really don't want is
configurability for configurability's sake.
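
A minimal user-space sketch of that per-queue, NUMA-local default (the
struct fields and select_path() helper are illustrative names, not the
kernel's actual nvme-multipath code):

/*
 * Sketch only: model each I/O queue picking the path whose HCA sits on
 * the queue's NUMA node, falling back to any online path otherwise.
 */
#include <stdio.h>

struct path {
	const char *name;	/* e.g. the transport/HCA behind this path */
	int numa_node;		/* node the HCA is attached to */
	int online;		/* path is usable */
};

static const struct path *select_path(const struct path *paths, int npaths,
				      int queue_node)
{
	const struct path *fallback = NULL;

	for (int i = 0; i < npaths; i++) {
		if (!paths[i].online)
			continue;
		if (paths[i].numa_node == queue_node)
			return &paths[i];	/* NUMA-local path wins by default */
		if (!fallback)
			fallback = &paths[i];	/* remember any usable path */
	}
	return fallback;
}

int main(void)
{
	struct path paths[] = {
		{ "rdma0 (HCA on node 0)", 0, 1 },
		{ "rdma1 (HCA on node 1)", 1, 1 },
	};

	/* Each I/O queue inherits the NUMA node of the CPUs it serves. */
	for (int queue_node = 0; queue_node < 2; queue_node++) {
		const struct path *p = select_path(paths, 2, queue_node);
		printf("queue on node %d -> %s\n", queue_node,
		       p ? p->name : "none");
	}
	return 0;
}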