Message-ID: <20230811110035.GA6993@willie-the-truck>
Date:   Fri, 11 Aug 2023 12:00:35 +0100
From:   Will Deacon <will@...nel.org>
To:     Anshuman Khandual <anshuman.khandual@....com>
Cc:     linux-arm-kernel@...ts.infradead.org, suzuki.poulose@....com,
        yangyicong@...wei.com, Sami Mujawar <sami.mujawar@....com>,
        Catalin Marinas <catalin.marinas@....com>,
        Mark Rutland <mark.rutland@....com>,
        Mike Leach <mike.leach@...aro.org>,
        Leo Yan <leo.yan@...aro.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        James Clark <james.clark@....com>, coresight@...ts.linaro.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH V4 1/4] arm_pmu: acpi: Refactor
 arm_spe_acpi_register_device()

On Fri, Aug 11, 2023 at 03:55:43PM +0530, Anshuman Khandual wrote:
> 
> 
> On 8/11/23 15:42, Will Deacon wrote:
> > On Fri, Aug 11, 2023 at 02:13:42PM +0530, Anshuman Khandual wrote:
> >> On 8/8/23 13:52, Anshuman Khandual wrote:
> >>> +	/*
> >>> +	 * Sanity check all the GICC tables for the same interrupt
> >>> +	 * number. For now, only support homogeneous ACPI machines.
> >>> +	 */
> >>> +	for_each_possible_cpu(cpu) {
> >>> +		struct acpi_madt_generic_interrupt *gicc;
> >>> +
> >>> +		gicc = acpi_cpu_get_madt_gicc(cpu);
> >>> +		if (gicc->header.length < len)
> >>> +			return gsi ? -ENXIO : 0;
> >>> +
> >>> +		this_gsi = parse_gsi(gicc);
> >>> +		if (!this_gsi)
> >>> +			return gsi ? -ENXIO : 0;
> >>> +
> >>> +		this_hetid = find_acpi_cpu_topology_hetero_id(cpu);
> >>> +		if (!gsi) {
> >>> +			hetid = this_hetid;
> >>> +			gsi = this_gsi;
> >>> +		} else if (hetid != this_hetid || gsi != this_gsi) {
> >>> +			pr_warn("ACPI: %s: must be homogeneous\n", pdev->name);
> >>> +			return -ENXIO;
> >>> +		}
> >>> +	}
> >>
> >> As discussed on the previous version's (V3) thread, I will move the
> >> 'this_gsi' check after parse_gsi(), inside the if (!gsi) conditional
> >> block. This will treat a subsequent cpu's parse_gsi() failure as a
> >> mismatch, thus triggering the pr_warn() message.
> >>
> >> diff --git a/drivers/perf/arm_pmu_acpi.c b/drivers/perf/arm_pmu_acpi.c
> >> index 845683ca7c64..6eae772d6298 100644
> >> --- a/drivers/perf/arm_pmu_acpi.c
> >> +++ b/drivers/perf/arm_pmu_acpi.c
> >> @@ -98,11 +98,11 @@ arm_acpi_register_pmu_device(struct platform_device *pdev, u8 len,
> >>                         return gsi ? -ENXIO : 0;
> >>  
> >>                 this_gsi = parse_gsi(gicc);
> >> -               if (!this_gsi)
> >> -                       return gsi ? -ENXIO : 0;
> >> -
> >>                 this_hetid = find_acpi_cpu_topology_hetero_id(cpu);
> >>                 if (!gsi) {
> >> +                       if (!this_gsi)
> >> +                               return 0;
> > 
> > Why do you need this hunk?
> 
> Otherwise a '0' gsi on all cpus would just pass the above homogeneity
> test and end up in acpi_register_gsi(), which would fail, but only after
> emitting the following warning and then returning -ENXIO.
> 
> irq = acpi_register_gsi(NULL, gsi, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_HIGH);
> if (irq < 0) {
> 	pr_warn("ACPI: %s Unable to register interrupt: %d\n", pdev->name, gsi);
> 	return -ENXIO;
> }

Ah gotcha, thanks.

> Is this behaviour better than returning 0 after detecting a '0' gsi on
> the first cpu, which would avoid the above scenario? Although a 0 gsi
> followed by non-zero ones will still end up warning about a mismatch.

Can we move the check _after_ the loop, then? That way, we still detect
mismatches but we'll quietly return 0 if nobody has an interrupt.
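
Something like the below, perhaps (untested, just a sketch of where the
check would move in the context of the quoted hunk; the mixed
zero/non-zero GSI case you mention is left aside here):

	/*
	 * Sanity check all the GICC tables for the same interrupt
	 * number. For now, only support homogeneous ACPI machines.
	 */
	for_each_possible_cpu(cpu) {
		struct acpi_madt_generic_interrupt *gicc;

		gicc = acpi_cpu_get_madt_gicc(cpu);
		if (gicc->header.length < len)
			return gsi ? -ENXIO : 0;

		this_gsi = parse_gsi(gicc);
		this_hetid = find_acpi_cpu_topology_hetero_id(cpu);
		if (!gsi) {
			hetid = this_hetid;
			gsi = this_gsi;
		} else if (hetid != this_hetid || gsi != this_gsi) {
			pr_warn("ACPI: %s: must be homogeneous\n", pdev->name);
			return -ENXIO;
		}
	}

	/* Quietly return success if no CPU advertised an interrupt. */
	if (!gsi)
		return 0;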

Will
