Message-ID: <20190820150511.GD11202@localhost.localdomain>
Date: Tue, 20 Aug 2019 09:05:11 -0600
From: Keith Busch <kbusch@...nel.org>
To: John Garry <john.garry@...wei.com>
Cc: Ming Lei <tom.leiming@...il.com>,
"longli@...uxonhyperv.com" <longli@...uxonhyperv.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
"Busch, Keith" <keith.busch@...el.com>, Jens Axboe <axboe@...com>,
Christoph Hellwig <hch@....de>,
Sagi Grimberg <sagi@...mberg.me>,
linux-nvme <linux-nvme@...ts.infradead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Long Li <longli@...rosoft.com>,
Thomas Gleixner <tglx@...utronix.de>,
chenxiang <chenxiang66@...ilicon.com>
Subject: Re: [PATCH 0/3] fix interrupt swamp in NVMe
On Tue, Aug 20, 2019 at 01:59:32AM -0700, John Garry wrote:
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index e8f7f179bf77..cb483a055512 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -966,9 +966,13 @@ irq_thread_check_affinity(struct irq_desc *desc,
> struct irqaction *action)
> * mask pointer. For CPU_MASK_OFFSTACK=n this is optimized out.
> */
> if (cpumask_available(desc->irq_common_data.affinity)) {
> + struct irq_data *irq_data = &desc->irq_data;
> const struct cpumask *m;
>
> - m = irq_data_get_effective_affinity_mask(&desc->irq_data);
> + if (action->flags & IRQF_IRQ_AFFINITY)
> + m = desc->irq_common_data.affinity;
> + else
> + m = irq_data_get_effective_affinity_mask(irq_data);
> cpumask_copy(mask, m);
> } else {
> valid = false;
> --
> 2.17.1
>
> As Ming mentioned in that same thread, we could even make this the
> policy for managed interrupts.
Ack, I really like this option!