Message-ID: <alpine.DEB.2.20.1711021907420.2824@nanos>
Date: Thu, 2 Nov 2017 19:10:00 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Sagi Grimberg <sagi@...mberg.me>
cc: Jes Sorensen <jsorensen@...com>,
Tariq Toukan <tariqt@...lanox.com>,
Saeed Mahameed <saeedm@....mellanox.co.il>,
Networking <netdev@...r.kernel.org>,
Leon Romanovsky <leonro@...lanox.com>,
Saeed Mahameed <saeedm@...lanox.com>,
Kernel Team <kernel-team@...com>,
Christoph Hellwig <hch@....de>
Subject: Re: mlx5 broken affinity
On Thu, 2 Nov 2017, Sagi Grimberg wrote:
>
> > This wasn't to start a debate about which allocation method is the
> > perfect solution. I am perfectly happy with the new default, the part
> > that is broken is to take away the user's option to reassign the
> > affinity. That is a bug and it needs to be fixed!
>
> Well,
>
> I would really want to wait for Thomas/Christoph to reply, but this
> simple change fixed it for me:
> --
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index 573dc52b0806..eccd06be5e44 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -146,8 +146,7 @@ bool irq_can_set_affinity_usr(unsigned int irq)
> {
> struct irq_desc *desc = irq_to_desc(irq);
>
> - return __irq_can_set_affinity(desc) &&
> - !irqd_affinity_is_managed(&desc->irq_data);
> + return __irq_can_set_affinity(desc);

Which defeats the whole purpose of the managed facility: its point is precisely
_not_ to break the affinities on CPU offline and to bring the interrupt back
to the CPU when it comes online again.

What I can do is to have a separate flag, which only uses the initial
distribution mechanism, but I really want to have Christoph's opinion on
that.

Thanks,

	tglx