Message-ID: <CAAhSdy3FCdzLV-nH03T=PBxB2tdZXhRrugcC2NcoA=22qpv+Lw@mail.gmail.com>
Date: Sun, 26 Apr 2020 18:16:52 +0530
From: Anup Patel <anup@...infault.org>
To: Zong Li <zong.li@...ive.com>
Cc: Palmer Dabbelt <palmer@...belt.com>,
Paul Walmsley <paul.walmsley@...ive.com>,
"linux-kernel@...r.kernel.org List" <linux-kernel@...r.kernel.org>,
linux-riscv <linux-riscv@...ts.infradead.org>,
David Abdurachmanov <david.abdurachmanov@...ive.com>
Subject: Re: [PATCH] irqchip/sifive-plic: allow many cores to handle IRQs
On Sun, Apr 26, 2020 at 4:37 PM Zong Li <zong.li@...ive.com> wrote:
>
> Currently, the driver forces the IRQs to be handled by only one core.
> This patch provides a way to let other cores handle IRQs if needed,
> so users can decide how many cores they want by default via a boot
> argument.
>
> Use the 'irqaffinity' boot argument to determine the affinity. If there
> is no irqaffinity in the dts or kernel configuration, the default irq
> affinity is used, so all harts would try to claim the IRQ.
>
> For example, add irqaffinity=0 in the chosen node to set the irq
> affinity to hart 0. More than one hart can also handle IRQs, e.g. set
> irqaffinity=0,3,4.
>
> You can change the IRQ affinity from user space using procfs. For
> example, you can make CPU0 and CPU2 serve the IRQ together (the value
> is a hex CPU bitmask, so bits 0 and 2 give 0x5) with the following
> command:
>
> echo 5 > /proc/irq/<x>/smp_affinity
>
> Signed-off-by: Zong Li <zong.li@...ive.com>
> ---
> drivers/irqchip/irq-sifive-plic.c | 21 +++++++--------------
> 1 file changed, 7 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> index d0a71febdadc..bc1440d54185 100644
> --- a/drivers/irqchip/irq-sifive-plic.c
> +++ b/drivers/irqchip/irq-sifive-plic.c
> @@ -111,15 +111,12 @@ static inline void plic_irq_toggle(const struct cpumask *mask,
> static void plic_irq_unmask(struct irq_data *d)
> {
> struct cpumask amask;
> - unsigned int cpu;
> struct plic_priv *priv = irq_get_chip_data(d->irq);
>
> cpumask_and(&amask, &priv->lmask, cpu_online_mask);
> - cpu = cpumask_any_and(irq_data_get_affinity_mask(d),
> - &amask);
> - if (WARN_ON_ONCE(cpu >= nr_cpu_ids))
> - return;
> - plic_irq_toggle(cpumask_of(cpu), d, 1);
> + cpumask_and(&amask, &amask, irq_data_get_affinity_mask(d));
> +
> + plic_irq_toggle(&amask, d, 1);
> }
>
> static void plic_irq_mask(struct irq_data *d)
> @@ -133,24 +130,20 @@ static void plic_irq_mask(struct irq_data *d)
> static int plic_set_affinity(struct irq_data *d,
> const struct cpumask *mask_val, bool force)
> {
> - unsigned int cpu;
> struct cpumask amask;
> struct plic_priv *priv = irq_get_chip_data(d->irq);
>
> cpumask_and(&amask, &priv->lmask, mask_val);
>
> if (force)
> - cpu = cpumask_first(&amask);
> + cpumask_copy(&amask, mask_val);
> else
> - cpu = cpumask_any_and(&amask, cpu_online_mask);
> -
> - if (cpu >= nr_cpu_ids)
> - return -EINVAL;
> + cpumask_and(&amask, &amask, cpu_online_mask);
>
> plic_irq_toggle(&priv->lmask, d, 0);
> - plic_irq_toggle(cpumask_of(cpu), d, 1);
> + plic_irq_toggle(&amask, d, 1);
>
> - irq_data_update_effective_affinity(d, cpumask_of(cpu));
> + irq_data_update_effective_affinity(d, &amask);
>
> return IRQ_SET_MASK_OK_DONE;
> }
> --
> 2.26.1
>
I strongly oppose (NACK) this patch for performance reasons.
In the PLIC, if we enable an IRQ X for N CPUs, then when IRQ X occurs:
1) All N CPUs will take the interrupt
2) All N CPUs will try to read the PLIC CLAIM register
3) Only one of the CPUs will see IRQ X via the CLAIM register,
but the other N - 1 CPUs will see no interrupt and return to what
they were doing. In other words, N - 1 CPUs will just waste CPU
time every time IRQ X occurs (see the sketch below).
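To make the cost concrete, here is a toy user-space model of that claim
race (purely illustrative, not the driver code; names such as NR_HARTS,
claim_reg and hart_trap_handler are made up for this sketch): every
"hart" that takes the interrupt reads the claim register, exactly one of
them gets the IRQ ID, and the remaining N - 1 trapped for nothing.

/* Build: cc -std=c11 -pthread claim_race.c */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

#define NR_HARTS 4

/* Pending IRQ ID; reading the PLIC CLAIM register atomically claims it. */
static atomic_uint claim_reg = 42;

static int hart_trap_handler(void *arg)
{
	long hart = (long)arg;
	/* Model of readl(claim): hands out the IRQ ID once, then 0. */
	unsigned int hwirq = atomic_exchange(&claim_reg, 0);

	if (hwirq)
		printf("hart %ld: handling IRQ %u\n", hart, hwirq);
	else
		printf("hart %ld: nothing to claim, wasted trap\n", hart);
	return 0;
}

int main(void)
{
	thrd_t harts[NR_HARTS];

	/* All harts take the interrupt and race for the claim. */
	for (long i = 0; i < NR_HARTS; i++)
		thrd_create(&harts[i], hart_trap_handler, (void *)i);
	for (int i = 0; i < NR_HARTS; i++)
		thrd_join(harts[i], NULL);
	return 0;
}

Running it, exactly one thread reports handling IRQ 42 and the other
three report a wasted trap, which is the N - 1 overhead described above.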
Example 1: one application doing heavy network traffic will
degrade the performance of other applications, because with every
network RX/TX interrupt N-1 CPUs will waste CPU time trying to
claim the network interrupt.
Example 2: one application doing heavy MMC/SD traffic will
degrade the performance of other applications, because with every
SPI read/write interrupt N-1 CPUs will waste CPU time trying to
claim it.
In fact, the current PLIC driver approach of routing each IRQ to a
single CPU is actually a performance optimization. The current
implementation also works fine with the in-kernel load balancer and
a user-space load balancer (e.g. irqbalance).
Regards,
Anup