Message-ID: <9dcef1c9343041c49a92ec8cd40d6331@exch03.asrmicro.com>
Date: Wed, 17 May 2023 10:45:22 +0000
From: Yan Zheng(严政) <zhengyan@...micro.com>
To: Marc Zyngier <maz@...nel.org>
CC: "tglx@...utronix.de" <tglx@...utronix.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Gao Meitao(高玫涛) <meitaogao@...micro.com>,
Zhou Qiao(周侨) <qiaozhou@...micro.com>,
Zhang Zhizhou(张治洲)
<zhizhouzhang@...micro.com>
Subject: RE: [PATCH] irqchip/gic-v3: workaround for ASR8601 when reading
mpidr
> -----Original Message-----
> From: Marc Zyngier [mailto:maz@...nel.org]
> Sent: Wednesday, May 17, 2023 4:32 PM
> To: Yan Zheng(严政) <zhengyan@...micro.com>
> Cc: tglx@...utronix.de; linux-kernel@...r.kernel.org; Gao Meitao(高玫涛)
> <meitaogao@...micro.com>; Zhou Qiao(周侨) <qiaozhou@...micro.com>;
> Zhang Zhizhou(张治洲) <zhizhouzhang@...micro.com>
> Subject: Re: [PATCH] irqchip/gic-v3: workaround for ASR8601 when reading
> mpidr
>
> On Wed, 17 May 2023 08:55:00 +0100,
> zhengyan <zhengyan@...micro.com> wrote:
> >
> > This patch adds a workaround for ASR8601, which uses an armv8.2 processor
> > with a gic-500. ARMv8.2 uses Multiprocessor Affinity Register to
> > identify the logical address of the core by
> > | cluster | core | thread |.
>
> Not quite. The ARMv8.2 architecture doesn't say *any* of that. It is ARM's
> *implementations* that follow this scheme.
>
Thank you very much for the rapid response.
Yes, as the Arm document https://developer.arm.com/documentation/ka002107/latest
explains, ARMv8.2 implementations use three levels of affinity (ARMv8.0 CPUs
use only two), and it is an implementation issue rather than an architectural one.
> > However, gic-500 only supports topologies with affinity levels less
> > than 2 as
> > | cluster | core|.
> >
> > So it needs this patch to shift the MPIDR values to ensure proper
> > functionality
> >
> > Signed-off-by: zhengyan <zhengyan@...micro.com>
> > ---
> > drivers/irqchip/irq-gic-v3.c | 28 +++++++++++++++++++++++++++-
> > 1 file changed, 27 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> > index 6fcee221f201..435b98a8641e 100644
> > --- a/drivers/irqchip/irq-gic-v3.c
> > +++ b/drivers/irqchip/irq-gic-v3.c
> > @@ -39,6 +39,7 @@
> >
> > #define FLAGS_WORKAROUND_GICR_WAKER_MSM8996 (1ULL << 0)
> > #define FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539 (1ULL << 1)
> > +#define FLAGS_WORKAROUND_MPIDR_ASR8601 (1ULL << 2)
>
> What is ASR8601? Is it a system? Or an erratum number? For issues that are the
> result of a HW integration issue, please provide an official erratum number, and
> update Documentation/arm64/silicon-errata.rst.
>
ASR8601 is our SoC's name, and yes, it is a kind of HW integration issue.
But maybe it is not an erratum, since our HW is designed that way, although
Arm does not recommend it.
I would like to add more comments in the next revision, above the quirk
entry with *desc = "GICv3: ASR 8601 MPIDR shift"*.
Would that be a better way? Or should I add something under Documentation?
> >
> > #define GIC_IRQ_TYPE_PARTITION (GIC_IRQ_TYPE_LPI + 1)
> >
> > @@ -659,6 +660,9 @@ static u64 gic_mpidr_to_affinity(unsigned long mpidr)
> > {
> > u64 aff;
> >
> > + if (gic_data.flags & FLAGS_WORKAROUND_MPIDR_ASR8601)
> > + mpidr >>= 8;
> > +
> > aff = ((u64)MPIDR_AFFINITY_LEVEL(mpidr, 3) << 32 |
> > MPIDR_AFFINITY_LEVEL(mpidr, 2) << 16 |
> > MPIDR_AFFINITY_LEVEL(mpidr, 1) << 8 |
> > @@ -970,6 +974,9 @@ static int __gic_populate_rdist(struct redist_region *region, void __iomem *ptr)
> > * Convert affinity to a 32bit value that can be matched to
> > * GICR_TYPER bits [63:32].
> > */
> > + if (gic_data.flags & FLAGS_WORKAROUND_MPIDR_ASR8601)
> > + mpidr >>= 8;
> > +
> > aff = (MPIDR_AFFINITY_LEVEL(mpidr, 3) << 24 |
> > MPIDR_AFFINITY_LEVEL(mpidr, 2) << 16 |
> > MPIDR_AFFINITY_LEVEL(mpidr, 1) << 8 |
> > @@ -1265,6 +1272,8 @@ static u16 gic_compute_target_list(int *base_cpu, const struct cpumask *mask,
> > unsigned long mpidr = cpu_logical_map(cpu);
> > u16 tlist = 0;
> >
> > + if (gic_data.flags & FLAGS_WORKAROUND_MPIDR_ASR8601)
> > + mpidr >>= 8;
> > while (cpu < nr_cpu_ids) {
> > tlist |= 1 << (mpidr & 0xf);
> >
> > @@ -1274,7 +1283,8 @@ static u16 gic_compute_target_list(int *base_cpu, const struct cpumask *mask,
> > cpu = next_cpu;
> >
> > mpidr = cpu_logical_map(cpu);
> > -
> > + if (gic_data.flags & FLAGS_WORKAROUND_MPIDR_ASR8601)
> > + mpidr >>= 8;
> > if (cluster_id != MPIDR_TO_SGI_CLUSTER_ID(mpidr)) {
> > cpu--;
> > goto out;
> > @@ -1321,6 +1331,8 @@ static void gic_ipi_send_mask(struct irq_data *d, const struct cpumask *mask)
> > u64 cluster_id = MPIDR_TO_SGI_CLUSTER_ID(cpu_logical_map(cpu));
> > u16 tlist;
> >
> > + if (gic_data.flags & FLAGS_WORKAROUND_MPIDR_ASR8601)
> > + cluster_id = MPIDR_TO_SGI_CLUSTER_ID(cpu_logical_map(cpu) >> 8);
>
> You've written the same check 5 times. Maybe you could start by refactoring
> that code so that the hack can be in a single place?
>
Okay, I'll try to refactor it.
> > tlist = gic_compute_target_list(&cpu, mask, cluster_id);
> > gic_send_sgi(cluster_id, tlist, d->hwirq);
> > }
> > @@ -1729,6 +1741,15 @@ static bool gic_enable_quirk_cavium_38539(void *data)
> > return true;
> > }
> >
> > +static bool gic_enable_quirk_asr8601(void *data)
> > +{
> > + struct gic_chip_data *d = data;
> > +
> > + d->flags |= FLAGS_WORKAROUND_MPIDR_ASR8601;
> > +
> > + return true;
> > +}
> > +
> > static bool gic_enable_quirk_hip06_07(void *data)
> > {
> > struct gic_chip_data *d = data;
> > @@ -1823,6 +1844,11 @@ static const struct gic_quirk gic_quirks[] = {
> > .mask = 0xffffffff,
> > .init = gic_enable_quirk_nvidia_t241,
> > },
> > + {
> > + .desc = "GICv3: ASR 8601 MPIDR SHIFT",
>
> s/SHIFT/shift/
>
Okay
> > + .compatible = "asr,asr8601-gic-v3",
>
> So ASR8601 *is* a system... Is it DT only?
>
> Thanks,
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.
Yes, ASR8601 is our SoC, and we want to use a compatible node in the device
tree to control it.
As I mentioned above, an ARMv8.2 CPU (three levels of affinity) might work
well with a GIC-500 in general, but this code is strongly tied to how our HW
is integrated.
Thanks again,
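For reference, a device-tree fragment selecting this quirk might look like the following. Only the "asr,asr8601-gic-v3" compatible string comes from the patch; the node name, addresses, and reg sizes are made up for illustration:

```dts
gic: interrupt-controller@e0000000 {
	/* Illustrative node: addresses and sizes are hypothetical. */
	compatible = "asr,asr8601-gic-v3", "arm,gic-v3";
	#interrupt-cells = <3>;
	interrupt-controller;
	reg = <0xe0000000 0x10000>,	/* GICD */
	      <0xe0100000 0x200000>;	/* GICR */
};
```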