Message-ID: <CAAhSdy3gZHnSwovxypY5vP438TNPj8h+miqtyBKhEUAdWj=htQ@mail.gmail.com>
Date: Wed, 3 Jul 2024 20:31:31 +0530
From: Anup Patel <anup@...infault.org>
To: Nam Cao <namcao@...utronix.de>
Cc: Thomas Gleixner <tglx@...utronix.de>, Paul Walmsley <paul.walmsley@...ive.com>,
Samuel Holland <samuel.holland@...ive.com>, linux-kernel@...r.kernel.org,
linux-riscv@...ts.infradead.org, b.spranger@...utronix.de,
Christoph Hellwig <hch@....de>, Marc Zyngier <marc.zyngier@....com>
Subject: Re: [PATCH] irqchip/sifive-plic: Fix plic_set_affinity() only enables
1 cpu
On Wed, Jul 3, 2024 at 6:03 PM Nam Cao <namcao@...utronix.de> wrote:
>
> On Wed, Jul 03, 2024 at 05:28:23PM +0530, Anup Patel wrote:
> > On Wed, Jul 3, 2024 at 12:57 PM Nam Cao <namcao@...utronix.de> wrote:
> > >
> > > plic_set_affinity() only enables the interrupt for the first possible
> > > CPU in the mask. The point is to prevent all CPUs from racing to claim
> > > an interrupt that only one of them can win, with the losing CPUs
> > > wasting clock cycles for nothing.
> > >
> > > However, there are two problems with that:
> > > 1. Users cannot enable an interrupt on multiple CPUs (for example, to
> > > minimize interrupt latency).
> >
> > Well, you are assuming that multiple CPUs are always idle or available
> > to process interrupts. In other words, if the system is loaded, running
> > some workload on each CPU, then performance will degrade on multiple
> > CPUs, since all of them will wastefully try to claim the interrupt.
> >
> > In reality, we can't make such assumptions, and it is better to target a
> > particular CPU for processing interrupts (just like various other interrupt
> > controllers do). For balancing interrupt processing load, we have software
> > irq balancers running in user-space (or kernel space) which do a
> > reasonably good job of picking an appropriate CPU for interrupt processing.
>
> Then we should leave the job of distributing interrupts to those tools,
> right? Not all use cases want minimally wasted CPU cycles. For example, a
> particular interrupt may not arrive very often, but when it does, it needs
> to be handled fast; in that case, enabling the interrupt on all CPUs is
> clearly superior.
This is a very specific case which you are trying to optimize, and in the
process you hurt performance in many other cases. There is a lot of
high-speed I/O (network, storage, etc.) where the interrupt rate is high,
so for such I/O your patch will degrade performance across multiple CPUs.
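
For context, each hart's handler claims interrupts roughly as in the
simplified sketch below (condensed from drivers/irqchip/irq-sifive-plic.c;
the real handler also wraps this in chained_irq_enter()/exit(), so treat
names and details as approximate). When an interrupt is enabled on several
harts, all of them take the trap, but only the first read of the CLAIM
register returns the hwirq; the others read 0 and have trapped for nothing:

static void plic_handle_irq(struct irq_desc *desc)
{
	struct plic_handler *handler = this_cpu_ptr(&plic_handlers);
	void __iomem *claim = handler->hart_base + CONTEXT_CLAIM;
	irq_hw_number_t hwirq;

	/*
	 * Reading CLAIM atomically claims the highest-priority pending
	 * interrupt for this hart; a racing hart's read returns 0 and
	 * its loop exits immediately.
	 */
	while ((hwirq = readl(claim))) {
		if (generic_handle_domain_irq(handler->priv->irqdomain, hwirq))
			pr_warn_ratelimited("can't find mapping for hwirq %lu\n",
					    hwirq);
	}
}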
>
> But I am sure there are users who don't use something like irqbalance and
> just let the system use the default behavior, so I see your point about
> not wasting CPU cycles. How about we keep this patch, but also add a
> "default policy" which evenly distributes the interrupts to individual
> CPUs (best effort only)? Something like the untested patch below?
I would suggest dropping this patch; for the sake of distributing
interrupts across CPUs at boot time, we can have the change below.
>
> Best regards,
> Nam
>
> diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> index f30bdb94ceeb..953f375835b0 100644
> --- a/drivers/irqchip/irq-sifive-plic.c
> +++ b/drivers/irqchip/irq-sifive-plic.c
> @@ -312,7 +312,7 @@ static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
> irq_domain_set_info(d, irq, hwirq, &plic_chip, d->host_data,
> handle_fasteoi_irq, NULL, NULL);
> irq_set_noprobe(irq);
> - irq_set_affinity(irq, &priv->lmask);
> + irq_set_affinity(irq, cpumask_of(cpumask_any_distribute(&priv->lmask)));
> return 0;
> }
>
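
For reference, cpumask_any_distribute() (lib/cpumask.c) spreads successive
calls across the mask by remembering the previously returned CPU, so each
newly mapped hwirq gets a different initial CPU while the effective
affinity of any single IRQ remains one CPU, avoiding the multi-hart claim
race discussed above. A hypothetical, self-contained sketch of the same
round-robin idea (pick_cpu_round_robin() is an illustrative name, not a
kernel API):

/*
 * Illustrative round-robin CPU selection; the real
 * cpumask_any_distribute() keeps its cursor in a per-CPU variable
 * instead of a bare static.
 */
static unsigned int pick_cpu_round_robin(const struct cpumask *mask)
{
	static unsigned int prev_cpu;
	unsigned int cpu;

	/* Continue scanning after the last pick, wrapping at the end. */
	cpu = cpumask_next(prev_cpu, mask);
	if (cpu >= nr_cpu_ids)
		cpu = cpumask_first(mask);
	prev_cpu = cpu;
	return cpu;
}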
Regards,
Anup