Message-ID: <alpine.DEB.2.11.1511202113010.3931@nanos>
Date: Fri, 20 Nov 2015 21:39:21 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Qais Yousef <qais.yousef@...tec.com>
cc: linux-kernel@...r.kernel.org, jason@...edaemon.net,
marc.zyngier@....com, jiang.liu@...ux.intel.com,
ralf@...ux-mips.org, linux-mips@...ux-mips.org
Subject: Re: [PATCH 10/14] irqchip/mips-gic: Add an IPI hierarchy domain

Qais,

On Fri, 20 Nov 2015, Qais Yousef wrote:
> On 11/16/2015 05:17 PM, Thomas Gleixner wrote:
> > 1) IPI as per_cpu interrupts
> >
> > Single hwirq represented by a single irq descriptor
> >
> > 2) IPI with consecutive mapping space
> >
> > No extra mapping from virq base to target cpu required as its just
> > linear. Everything can be handled via the base virq.
> >
>
> I think I am seeing a major issue with this approach.
>
> Take the case where we reserve an IPI with an ipi_mask that has only cpus 5
> and 6 set. When allocating a per_cpu or consecutive mapping, we will require
> 2 consecutive virqs and hwirqs. But since the cpu numbering does not start
> from 0, we can't use the cpu number as an offset anymore.
>
> So when a user wants to send an IPI to cpu 6 only, the code can't easily tell
> what's the correct offset from base virq or hwirq to use.
Well, you can store the start offset easily and subtract it. It's 0
in most cases.
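A minimal sketch of that offset idea (all names here are hypothetical, for illustration only; this is not the actual MIPS GIC code):

```c
/*
 * Hypothetical sketch: a consecutive IPI virq range whose first cpu is
 * arbitrary. Storing the first cpu of the range keeps the cpu<->virq
 * translation a simple add/subtract; the offset is 0 in the common
 * case where the range starts at cpu 0.
 */
struct ipi_range {
	unsigned int base_virq;   /* first virq of the consecutive range */
	unsigned int first_cpu;   /* cpu corresponding to base_virq */
};

static unsigned int ipi_cpu_to_virq(const struct ipi_range *r, unsigned int cpu)
{
	return r->base_virq + (cpu - r->first_cpu);
}

static unsigned int ipi_virq_to_cpu(const struct ipi_range *r, unsigned int virq)
{
	return r->first_cpu + (virq - r->base_virq);
}
```

So for the cpu 5/6 case above, a range of { base_virq, first_cpu = 5 } maps cpu 6 to base_virq + 1, and the reverse mapping is the same subtraction in the other direction.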
> Same applies when doing the reverse mapping.
>
> In other words, the ipi_mask won't always be linear enough to allow the
> 1:1 mapping that this approach assumes.
>
> It is a solvable problem, but I think we're losing the elegance that prompted
> going in this direction, and I think sticking to struct ipi_mapping
> (with some enhancements to how it's exposed and integrated by/into generic
> code) is a better approach.
The only reason to use the ipi_mapping thing is if we need
non-consecutive masks, i.e. cpus 5 and 9.
I really don't want to make it mandatory, as it does not make any sense
for systems where the IPI is a single per_cpu interrupt. For the
linear consecutive space it just adds memory and cache footprint
for no benefit. Think about machines with 4k and more cpus ....
If you make ipi_mapping able to express the per_cpu, linear and
scattered mappings, then we should be fine. The extra conditional you
need in send_ipi() is not a problem.
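A rough sketch of what an ipi_mapping covering all three cases could look like, with the one extra conditional in the send path (names and layout are illustrative assumptions, not the actual kernel data structure):

```c
/*
 * Hypothetical sketch: one ipi_mapping type that can express all three
 * cases. Only the scattered case needs a per-cpu lookup table; per_cpu
 * and linear stay table-free, so dense systems pay no extra memory or
 * cache footprint.
 */
enum ipi_map_type {
	IPI_MAP_PERCPU,     /* single per_cpu hwirq, one virq for all cpus */
	IPI_MAP_LINEAR,     /* consecutive virq range: base + cpu offset */
	IPI_MAP_SCATTERED,  /* arbitrary cpus, explicit per-cpu table */
};

struct ipi_mapping {
	enum ipi_map_type type;
	unsigned int base_virq;            /* PERCPU and LINEAR */
	unsigned int first_cpu;            /* LINEAR only */
	const unsigned int *cpu_to_virq;   /* SCATTERED only, indexed by cpu */
};

/* The one extra conditional needed in the send path. */
static unsigned int ipi_get_virq(const struct ipi_mapping *m, unsigned int cpu)
{
	switch (m->type) {
	case IPI_MAP_PERCPU:
		return m->base_virq;
	case IPI_MAP_LINEAR:
		return m->base_virq + (cpu - m->first_cpu);
	case IPI_MAP_SCATTERED:
	default:
		return m->cpu_to_virq[cpu];
	}
}
```

The scattered table covers the cpu 5 and 9 case, while the per_cpu and linear variants remain the simple arithmetic forms discussed above.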
Thanks,
tglx