Message-ID: <20140223172502.GA27075@mtl-eit-vdi-22.mtl.labs.mlnx>
Date: Sun, 23 Feb 2014 19:25:04 +0200
From: Amir Vadai <amirv@...lanox.com>
To: Ben Hutchings <ben@...adent.org.uk>
Cc: David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
yevgenyp@...lanox.com, ogerlitz@...lanox.com, yuvala@...lanox.com
Subject: Re: net/mlx4: Mellanox driver update 01-01-2014
On 23/02/14 17:06 +0000, Ben Hutchings wrote:
> On Sun, 2014-02-23 at 11:01 +0200, Amir Vadai wrote:
> > On 22/02/14 01:33 +0000, Ben Hutchings wrote:
[...]
>
> Right, that does sound pretty good as a default. And I accept that it
> would be reasonable to implement that initially without a tunable beyond
> the total number of IRQs.
>
> I would like to have a central mechanism for this that would allow the
> administrator to set a policy of spreading IRQs across all threads,
> cores, packages, local cores, etc. (The out-of-tree version of sfc has
> such options.) If you add the mechanism and default that you've
> proposed, then someone (maybe me) can get round to the configurable
> policy later.
>
> Ben.
>
What I'm preparing now is just a very simple helper function that
takes the local NUMA node and queue number as its inputs and returns
the recommended cpumask.
A driver that wants to use it will use the returned mask to set the
IRQ affinity_hint.
This function also seems like a convenient place to plug in different
policies later on.
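
To make this concrete, here is a rough sketch of the kind of helper I
have in mind (the function names and exact interface below are only
illustrative, not the actual patch): given the device's NUMA node and
a queue index, it hands out local-node CPUs first and falls back to
the remaining online CPUs, wrapping around once every CPU has been
used.

#include <linux/cpumask.h>
#include <linux/numa.h>
#include <linux/topology.h>

/* Pick the CPU that queue @qid's IRQ should prefer on @node. */
static int queue_preferred_cpu(int node, unsigned int qid)
{
	const struct cpumask *local = (node == NUMA_NO_NODE) ?
		cpu_online_mask : cpumask_of_node(node);
	unsigned int n_local = 0, i;
	int cpu;

	/* Wrap around once every online CPU has been handed out. */
	qid %= num_online_cpus();

	/* First pass: online CPUs on the local NUMA node. */
	for_each_cpu_and(cpu, local, cpu_online_mask) {
		if (n_local++ == qid)
			return cpu;
	}

	/* Second pass: online CPUs on remote nodes. */
	i = n_local;
	for_each_cpu(cpu, cpu_online_mask) {
		if (cpumask_test_cpu(cpu, local))
			continue;
		if (i++ == qid)
			return cpu;
	}

	return cpumask_first(cpu_online_mask);
}

/* Fill @dst (caller-provided) with the recommended mask for @qid. */
static void queue_preferred_cpumask(int node, unsigned int qid,
				    struct cpumask *dst)
{
	cpumask_clear(dst);
	cpumask_set_cpu(queue_preferred_cpu(node, qid), dst);
}

A driver would then call irq_set_affinity_hint(irq, dst) for each
queue's IRQ, keeping dst allocated for as long as the hint is set,
and irqbalance or the administrator can honour or override the hint.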
Amir.