Message-ID: <CAL1RGDWvxcz7zAoEa-Ukhmsn=Vuj6O07X666BK+vJOdkA7qwNQ@mail.gmail.com>
Date:   Tue, 5 Jun 2018 15:57:05 -0700
From:   Roland Dreier <roland@...estorage.com>
To:     Christoph Hellwig <hch@....de>
Cc:     Sagi Grimberg <sagi@...mberg.me>,
        Mike Snitzer <snitzer@...hat.com>,
        Johannes Thumshirn <jthumshirn@...e.de>,
        Keith Busch <keith.busch@...el.com>,
        Hannes Reinecke <hare@...e.de>,
        Laurence Oberman <loberman@...hat.com>,
        Ewan Milne <emilne@...hat.com>,
        James Smart <james.smart@...adcom.com>,
        Linux Kernel Mailinglist <linux-kernel@...r.kernel.org>,
        Linux NVMe Mailinglist <linux-nvme@...ts.infradead.org>,
        "Martin K . Petersen" <martin.petersen@...cle.com>,
        Martin George <marting@...app.com>,
        John Meneghini <John.Meneghini@...app.com>
Subject: Re: [PATCH 0/3] Provide more fine grained control over multipathing

> The sensible thing to do in nvme is to use different paths for
> different queues.  That is e.g. in the RDMA case use the HCA closer
> to a given CPU by default.  We might allow overriding this for
> cases where there is a good reason, but what I really don't want is
> configurability for configurability's sake.
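
If I'm reading that right, the idea is roughly the following (a
hypothetical sketch with made-up names, not actual kernel code):
each queue picks the path whose HCA is NUMA-closest to the CPU the
queue is bound to.

#include <stdio.h>

#define NR_PATHS 2

struct path {
        int numa_node;  /* node the HCA sits on */
};

static struct path paths[NR_PATHS] = {
        { .numa_node = 0 },
        { .numa_node = 1 },
};

/* Pretend CPU-to-node mapping, just for the example. */
static int cpu_to_node(int cpu)
{
        return cpu < 4 ? 0 : 1;
}

/* Pick the path local to the CPU servicing this queue. */
static int select_path_local(int cpu)
{
        for (int i = 0; i < NR_PATHS; i++)
                if (paths[i].numa_node == cpu_to_node(cpu))
                        return i;
        return 0;       /* no local path, fall back to the first */
}

int main(void)
{
        printf("cpu 1 -> path %d\n", select_path_local(1));     /* path 0 */
        printf("cpu 6 -> path %d\n", select_path_local(6));     /* path 1 */
        return 0;
}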

That makes sense but I'm not sure it covers everything.  Probably the
most common way to do NVMe/RDMA will be with a single HCA that has
multiple ports, so there's no sensible CPU locality.  On the other
hand we want to keep both ports to the fabric busy.  Setting different
paths for different queues makes sense, but there may be
single-threaded applications that want a different policy.
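
For instance, something like a per-I/O round-robin selector (again a
made-up sketch, not real code) would keep both ports busy even when
all the I/O comes from a single queue:

#include <stdio.h>

#define NR_PATHS 2

/* Hand out paths in strict rotation, one per I/O. */
static int select_path_rr(void)
{
        static unsigned int next;
        return next++ % NR_PATHS;
}

int main(void)
{
        /* A single-threaded submitter: four I/Os land on 0,1,0,1. */
        for (int i = 0; i < 4; i++)
                printf("io %d -> path %d\n", i, select_path_rr());
        return 0;
}

The trade-off is exactly the locality you mention: round-robin gives
up CPU/HCA affinity in exchange for keeping both ports loaded.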

I'm not saying anything very profound, but we have to find the right
balance between too many and too few knobs.

 - R.
