Message-ID: <CAAhSdy3=Hz7uuT=AEJqmZ7OJZg5Zt9MSg8QiBjEzH5wcsOH1-Q@mail.gmail.com>
Date: Tue, 18 Dec 2018 16:02:09 +0530
From: Anup Patel <anup@...infault.org>
To: Christoph Hellwig <hch@...radead.org>
Cc: Palmer Dabbelt <palmer@...ive.com>,
Albert Ou <aou@...s.berkeley.edu>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Thomas Gleixner <tglx@...utronix.de>,
Jason Cooper <jason@...edaemon.net>,
Marc Zyngier <marc.zyngier@....com>,
Atish Patra <atish.patra@....com>,
linux-riscv@...ts.infradead.org,
"linux-kernel@...r.kernel.org List" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 6/6] irqchip: sifive-plic: Implement irq_set_affinity()
for SMP host
On Tue, Dec 18, 2018 at 12:02 AM Christoph Hellwig <hch@...radead.org> wrote:
>
> On Fri, Nov 30, 2018 at 01:32:07PM +0530, Anup Patel wrote:
> > This patch provides irq_set_affinity() implementation for PLIC driver.
> > It also updates irq_enable() such that PLIC interrupts are only enabled
> > for one of CPUs specified in IRQ affinity mask.
>
> But normally our affinity masks are that - masks of CPUs that can take
> it. It seems a bit odd to then just pick the first one, as this means
> with default all-CPU masks we'll have all interrupts handled by the
> first CPU only.
Yes, the affinity mask is the set of CPUs which can take the IRQ, but
there is also the effective affinity mask, which represents the CPUs
that will actually receive the IRQ.

Some interrupt controllers (unlike the PLIC) support hardware IRQ
balancing. For such interrupt controllers, we inform the hardware about
all CPUs that can take the IRQ, but the interrupt controller will still
deliver the IRQ to only one of them.

There are also quite a few interrupt controllers which only allow an
IRQ to be taken by exactly one CPU. For such interrupt controllers, the
driver has to pick one CPU out of the CPUs which can take the IRQ
(e.g. GICv2, GICv3, etc).
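For example, the relevant part of such a driver's irq_set_affinity()
typically looks roughly like the sketch below. This is just an
illustration using the generic cpumask/irq helpers, with a made-up
function name and the hardware programming left out; it is not actual
GIC code:

    static int example_set_affinity(struct irq_data *d,
                                    const struct cpumask *mask_val,
                                    bool force)
    {
        /* Pick one online CPU out of the CPUs allowed by the mask */
        unsigned int cpu = cpumask_any_and(mask_val, cpu_online_mask);

        if (cpu >= nr_cpu_ids)
            return -EINVAL;

        /* ... program the hardware to route this hwirq to @cpu ... */

        /* Report the single CPU that will actually receive the IRQ */
        irq_data_update_effective_affinity(d, cpumask_of(cpu));

        return IRQ_SET_MASK_OK_DONE;
    }

The effective affinity reported back to the IRQ core is the single CPU
that will actually see the interrupt, even though the affinity mask
itself may contain many CPUs.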
>
>
> > --- a/drivers/irqchip/irq-sifive-plic.c
> > +++ b/drivers/irqchip/irq-sifive-plic.c
> > @@ -106,14 +106,42 @@ static void plic_irq_toggle(const struct cpumask *mask, int hwirq, int enable)
> >
> > static void plic_irq_enable(struct irq_data *d)
> > {
> > - plic_irq_toggle(irq_data_get_affinity_mask(d), d->hwirq, 1);
> > + unsigned int cpu = cpumask_any_and(irq_data_get_affinity_mask(d),
> > + cpu_online_mask);
> > + WARN_ON(cpu >= nr_cpu_ids);
>
> I think this should be WARN_ON_ONCE and actually return instead of then
> proceeding using the invalid cpu index.
Sure, will update.
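So plic_irq_enable() would become something like this (untested sketch
on top of this patch):

    static void plic_irq_enable(struct irq_data *d)
    {
        unsigned int cpu = cpumask_any_and(irq_data_get_affinity_mask(d),
                                           cpu_online_mask);

        /* Bail out instead of using an invalid CPU index */
        if (WARN_ON_ONCE(cpu >= nr_cpu_ids))
            return;

        plic_irq_toggle(cpumask_of(cpu), d->hwirq, 1);
    }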
>
> > +#ifdef CONFIG_SMP
> > +static int plic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
> > + bool force)
> > +{
> > + unsigned int cpu;
> > +
> > + if (!force)
> > + cpu = cpumask_any_and(mask_val, cpu_online_mask);
> > + else
> > + cpu = cpumask_first(mask_val);
>
> maybe swap the two branches around to avoid the inversion of the force
> flag?
Sure, will update.
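i.e. something like this (rest of the function unchanged):

    if (force)
        cpu = cpumask_first(mask_val);
    else
        cpu = cpumask_any_and(mask_val, cpu_online_mask);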
Regards,
Anup