lists.openwall.net | Open Source and information security mailing list archives
Date: Thu, 13 Feb 2020 23:11:51 +0800
From: Zenghui Yu <yuzenghui@...wei.com>
To: Marc Zyngier <maz@...nel.org>
CC: <linux-kernel@...r.kernel.org>, <linux-arm-kernel@...ts.infradead.org>,
	<kvmarm@...ts.cs.columbia.edu>, <tglx@...utronix.de>,
	<jason@...edaemon.net>, <wanghaibin.wang@...wei.com>
Subject: Re: [PATCH v2 3/6] irqchip/gic-v4.1: Ensure L2 vPE table is allocated at RD level

Hi Marc,

On 2020/2/13 22:22, Marc Zyngier wrote:
> Hi Zenghui,
>
> On 2020-02-06 07:57, Zenghui Yu wrote:
>> In GICv4, we will ensure that level2 vPE table memory is allocated
>> for the specified vpe_id on all v4 ITS, in its_alloc_vpe_table().
>> This still works well for the typical GICv4.1 implementation, where
>> the new vPE table is shared between the ITSs and the RDs.
>>
>> To make it explicit, let us introduce allocate_vpe_l2_table() to
>> make sure that the L2 tables are allocated on all v4.1 RDs. We're
>> unlikely to need to allocate memory in it because the vPE table is
>> shared and (the L2 table is) already allocated at ITS level, except
>> for the case where the ITS doesn't share anything (say SVPET == 0,
>> practically unlikely but architecturally allowed).
>>
>> The implementation of allocate_vpe_l2_table() is mostly copied from
>> its_alloc_table_entry().
>>
>> Signed-off-by: Zenghui Yu <yuzenghui@...wei.com>
>> ---
>>  drivers/irqchip/irq-gic-v3-its.c | 80 ++++++++++++++++++++++++++++++++
>>  1 file changed, 80 insertions(+)
>>
>> diff --git a/drivers/irqchip/irq-gic-v3-its.c
>> b/drivers/irqchip/irq-gic-v3-its.c
>> index 0f1fe56ce0af..ae4e7b355b46 100644
>> --- a/drivers/irqchip/irq-gic-v3-its.c
>> +++ b/drivers/irqchip/irq-gic-v3-its.c
>> @@ -2443,6 +2443,72 @@ static u64 inherit_vpe_l1_table_from_rd(cpumask_t **mask)
>>  	return 0;
>>  }
>>
>> +static bool allocate_vpe_l2_table(int cpu, u32 id)
>> +{
>> +	void __iomem *base = gic_data_rdist_cpu(cpu)->rd_base;
>> +	u64 val, gpsz, npg;
>> +	unsigned int psz, esz, idx;
>> +	struct page *page;
>> +	__le64 *table;
>> +
>> +	if (!gic_rdists->has_rvpeid)
>> +		return true;
>> +
>> +	val = gits_read_vpropbaser(base + SZ_128K + GICR_VPROPBASER);
>
> Having rebased the rest of the GICv4.1 series on top of -rc1, I've hit
> a small issue right here. I run an FVP model that only spawns 4 CPUs,
> while the DT has 8 of them. This means that online_cpus = 4, and
> possible_cpus = 8.
>
> So in my case, half of the RDs have base == NULL, and things stop quickly.

Ah, so this may also be why we check '!base' for each possible CPU in
inherit_vpe_l1_table_from_rd(). I didn't think about it at that time.

>
> I plan to queue the following:
>
> diff --git a/drivers/irqchip/irq-gic-v3-its.c
> b/drivers/irqchip/irq-gic-v3-its.c
> index d85dc8dcb0ad..7656b353a95f 100644
> --- a/drivers/irqchip/irq-gic-v3-its.c
> +++ b/drivers/irqchip/irq-gic-v3-its.c
> @@ -2526,6 +2526,10 @@ static bool allocate_vpe_l2_table(int cpu, u32 id)
>  	if (!gic_rdists->has_rvpeid)
>  		return true;
>
> +	/* Skip non-present CPUs */
> +	if (!base)
> +		return true;
> +
>  	val = gicr_read_vpropbaser(base + SZ_128K + GICR_VPROPBASER);
>
>  	esz = FIELD_GET(GICR_VPROPBASER_4_1_ENTRY_SIZE, val) + 1;

Thanks for fixing the second bug for this patch :-)


Zenghui