Date: Wed, 6 Jun 2018 11:51:30 +0200
From: Christoph Hellwig <hch@....de>
To: Roland Dreier <roland@...estorage.com>
Cc: Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
Mike Snitzer <snitzer@...hat.com>,
Johannes Thumshirn <jthumshirn@...e.de>,
Keith Busch <keith.busch@...el.com>,
Hannes Reinecke <hare@...e.de>,
Laurence Oberman <loberman@...hat.com>,
Ewan Milne <emilne@...hat.com>,
James Smart <james.smart@...adcom.com>,
Linux Kernel Mailinglist <linux-kernel@...r.kernel.org>,
Linux NVMe Mailinglist <linux-nvme@...ts.infradead.org>,
"Martin K . Petersen" <martin.petersen@...cle.com>,
Martin George <marting@...app.com>,
John Meneghini <John.Meneghini@...app.com>
Subject: Re: [PATCH 0/3] Provide more fine grained control over multipathing
On Tue, Jun 05, 2018 at 03:57:05PM -0700, Roland Dreier wrote:
> That makes sense but I'm not sure it covers everything. Probably the
> most common way to do NVMe/RDMA will be with a single HCA that has
> multiple ports, so there's no sensible CPU locality. On the other
> hand we want to keep both ports to the fabric busy. Setting different
> paths for different queues makes sense, but there may be
> single-threaded applications that want a different policy.
>
> I'm not saying anything very profound, but we have to find the right
> balance between too many and too few knobs.
Agreed. And the philosophy here is to start with as few knobs
as possible and work from there based on actual use cases.
Single-threaded applications will run into issues with the general
blk-mq philosophy, so to cater for them we'll need to dig deeper
and allow borrowing other CPUs' queues.