Message-ID: <f2971d1c-50f8-bf5a-8b16-8d84a631b0ba@huawei.com>
Date: Wed, 1 Apr 2020 12:33:21 +0100
From: John Garry <john.garry@...wei.com>
To: Marc Zyngier <maz@...nel.org>, <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>
CC: Jason Cooper <jason@...edaemon.net>,
chenxiang <chenxiang66@...ilicon.com>,
Robin Murphy <robin.murphy@....com>,
"luojiaxing@...wei.com" <luojiaxing@...wei.com>,
Ming Lei <ming.lei@...hat.com>,
Zhou Wang <wangzhou1@...ilicon.com>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <will@...nel.org>
Subject: Re: [PATCH v3 0/2] irqchip/gic-v3-its: Balance LPI affinity across
CPUs

Hi Marc,

> But I would also like to report some other unexpected behaviour for
> managed interrupts in this series - I'll reply directly to the specific
> patch for that.
>
So I made this change:

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 9199fb53c75c..ebbfc8d44d35 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -1539,6 +1539,8 @@ static int its_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
 	if (irqd_is_forwarded_to_vcpu(d))
 		return -EINVAL;
 
+	its_dec_lpi_count(d, its_dev->event_map.col_map[id]);
+
 	if (!force)
 		cpu = its_select_cpu(d, mask_val);
 	else
@@ -1549,14 +1551,14 @@ static int its_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
 	/* don't set the affinity when the target cpu is same as current one */
 	if (cpu != its_dev->event_map.col_map[id]) {
-		its_inc_lpi_count(d, cpu);
-		its_dec_lpi_count(d, its_dev->event_map.col_map[id]);
 		target_col = &its_dev->its->collections[cpu];
 		its_send_movi(its_dev, target_col, id);
 		its_dev->event_map.col_map[id] = cpu;
 		irq_data_update_effective_affinity(d, cpumask_of(cpu));
 	}
+	its_inc_lpi_count(d, cpu);
+
 	return IRQ_SET_MASK_OK_DONE;
 }
Results look ok:

		nvme.use_threaded_interrupts=1	=0*
Before		950K IOPs			1000K IOPs
After		1100K IOPs			1150K IOPs
* As mentioned before, use_threaded_interrupts=0 is quite unstable and
causes lockups. JFYI, there was an attempt to fix this:
https://lore.kernel.org/linux-nvme/20191209175622.1964-1-kbusch@kernel.org/
Thanks,
John