Message-ID: <564EFA74.90606@imgtec.com>
Date: Fri, 20 Nov 2015 10:48:20 +0000
From: Qais Yousef <qais.yousef@...tec.com>
To: Thomas Gleixner <tglx@...utronix.de>
CC: <linux-kernel@...r.kernel.org>, <jason@...edaemon.net>,
<marc.zyngier@....com>, <jiang.liu@...ux.intel.com>,
<ralf@...ux-mips.org>, <linux-mips@...ux-mips.org>
Subject: Re: [PATCH 10/14] irqchip/mips-gic: Add a IPI hierarchy domain
Hi Thomas,
On 11/16/2015 05:17 PM, Thomas Gleixner wrote:
> 1) IPI as per_cpu interrupts
>
> Single hwirq represented by a single irq descriptor
>
> 2) IPI with consecutive mapping space
>
> No extra mapping from virq base to target cpu required as its just
> linear. Everything can be handled via the base virq.
>
I think I am seeing a major issue with this approach.
Take the case where we reserve an IPI whose ipi_mask has only cpus 5 and 6
set. When allocating a per_cpu or consecutive mapping, we will require 2
consecutive virqs and hwirqs. But since the cpu numbering doesn't start
from 0, we can't use the cpu number as an offset anymore.
So when a user wants to send an IPI to cpu 6 only, the code can't easily
tell the correct offset from the base virq or hwirq to use.
The same applies when doing the reverse mapping.
In other words, the ipi_mask won't always necessarily be linear to
facilitate the 1:1 mapping that this approach assumes.
It is a solvable problem, but I think we're losing the elegance that
prompted going in this direction, and I think sticking to struct
ipi_mapping (with some enhancements to how it's exposed and integrated
into generic code) is a better approach.
Thoughts?
I still don't have a working implementation, otherwise I would have sent
my patches, but I thought I'd raise this before spending more time on it
unnecessarily.
Thanks,
Qais
--