Message-ID: <e2b4b476-7511-8f16-a13b-9e8e443e8dce@kernel.dk>
Date:   Thu, 9 Nov 2017 08:18:49 -0700
From:   Jens Axboe <axboe@...nel.dk>
To:     Christoph Hellwig <hch@....de>
Cc:     David Laight <David.Laight@...LAB.COM>,
        'Sagi Grimberg' <sagi@...mberg.me>,
        Thomas Gleixner <tglx@...utronix.de>,
        Jes Sorensen <jsorensen@...com>,
        Tariq Toukan <tariqt@...lanox.com>,
        Saeed Mahameed <saeedm@....mellanox.co.il>,
        Networking <netdev@...r.kernel.org>,
        Leon Romanovsky <leonro@...lanox.com>,
        Saeed Mahameed <saeedm@...lanox.com>,
        Kernel Team <kernel-team@...com>
Subject: Re: mlx5 broken affinity

On 11/09/2017 03:09 AM, Christoph Hellwig wrote:
> On Wed, Nov 08, 2017 at 09:13:59AM -0700, Jens Axboe wrote:
>> There are numerous valid reasons to be able to set the affinity, for
>> both nics and block drivers. It's great that the kernel has a predefined
>> layout that works well, but users do need the flexibility to
>> reconfigure affinities to suit their needs.
> 
> Managed interrupts are about more than just setting affinities - they
> bind a vector (which means a queue) to a specific CPU or set of CPUs.
> This is extremely useful to deal with hotplug issues and also makes
> life a lot easier in general.

I know why it was done.
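
For reference, the managed setup being discussed is the PCI_IRQ_AFFINITY
allocation path. A minimal sketch of what a driver does to opt in - the
mydrv_* naming and the vector counts are illustrative, not taken from
mlx5 or nvme:

  #include <linux/pci.h>        /* pci_alloc_irq_vectors_affinity() */
  #include <linux/interrupt.h>  /* struct irq_affinity */

  static int mydrv_setup_irqs(struct pci_dev *pdev)
  {
          struct irq_affinity affd = {
                  .pre_vectors = 1,   /* one non-queue (admin) vector */
          };
          int nvecs;

          /*
           * The core spreads the queue vectors across the possible CPUs
           * and marks them managed, pinning each vector to its CPU set.
           */
          nvecs = pci_alloc_irq_vectors_affinity(pdev, 1,
                          num_possible_cpus() + 1,
                          PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &affd);
          if (nvecs < 0)
                  return nvecs;

          /*
           * From here on, writes to /proc/irq/N/smp_affinity for these
           * vectors fail with -EIO - the flexibility being debated.
           */
          return 0;
  }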

>> But that particular case is completely orthogonal to whether or not we
>> should allow the user to reconfigure. The answer to that is clearly YES,
>> and we should ensure that this is possible.
> 
> And why is the answer yes? If the answer is YES it means we need explicit
> boilerplate code to deal with CPU hotplug in every driver not using
> managed interrupts (even if optionally), so it is a non-trivial tradeoff.

The answer is yes because, as with any other kernel-level policy, there
are situations where it isn't the optimal solution. I ran into this myself
with nvme recently, and I started cursing the managed setup for that
very reason. You can even argue this is a regression, since we used
to be able to move things around as we saw fit. If the answer is
boilerplate code to make it feasible, then we need that boilerplate
code.
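
Concretely, that boilerplate is the cpuhp state machine hookup that
non-managed drivers carry today. A rough sketch, with hypothetical
mydrv_* callbacks:

  #include <linux/cpuhotplug.h>

  static int mydrv_cpu_online(unsigned int cpu)
  {
          /* retarget this CPU's queue vector back to it */
          return 0;
  }

  static int mydrv_cpu_offline(unsigned int cpu)
  {
          /* quiesce the queue and move its vector to a surviving CPU */
          return 0;
  }

  static int mydrv_init_hotplug(void)
  {
          /*
           * CPUHP_AP_ONLINE_DYN allocates a dynamic state slot and
           * returns it (>= 0) on success; the callbacks then run on
           * every CPU online/offline transition.
           */
          return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mydrv:online",
                                   mydrv_cpu_online, mydrv_cpu_offline);
  }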

-- 
Jens Axboe
