Message-ID: <ZH3PCqYt/UzoiVx3@FVFF77S0Q05N>
Date:   Mon, 5 Jun 2023 13:05:36 +0100
From:   Mark Rutland <mark.rutland@....com>
To:     Anshuman Khandual <anshuman.khandual@....com>
Cc:     linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        will@...nel.org, catalin.marinas@....com,
        Mark Brown <broonie@...nel.org>,
        James Clark <james.clark@....com>,
        Rob Herring <robh@...nel.org>, Marc Zyngier <maz@...nel.org>,
        Suzuki Poulose <suzuki.poulose@....com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        linux-perf-users@...r.kernel.org
Subject: Re: [PATCH V11 05/10] arm64/perf: Add branch stack support in ARMV8
 PMU

On Wed, May 31, 2023 at 09:34:23AM +0530, Anshuman Khandual wrote:
> This enables support for branch stack sampling events in the ARMV8 PMU by
> checking has_branch_stack() on the event inside the 'struct arm_pmu'
> callbacks, although these branch stack helpers armv8pmu_branch_XXXXX() are
> just dummy functions for now. While here, this also defines arm_pmu's
> sched_task() callback as armv8pmu_sched_task(), which resets the branch
> record buffer on a sched_in.

This generally looks good, but I have a few comments below.

[...]

> +static inline bool armv8pmu_branch_valid(struct perf_event *event)
> +{
> +	WARN_ON_ONCE(!has_branch_stack(event));
> +	return false;
> +}

IIUC this is for validating the attr, so could we please name this
armv8pmu_branch_attr_valid()?
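i.e. the same stub, just renamed (purely illustrative):

| static inline bool armv8pmu_branch_attr_valid(struct perf_event *event)
| {
|         WARN_ON_ONCE(!has_branch_stack(event));
|         return false;
| }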

[...]

> +static int branch_records_alloc(struct arm_pmu *armpmu)
> +{
> +	struct pmu_hw_events *events;
> +	int cpu;
> +
> +	for_each_possible_cpu(cpu) {
> +		events = per_cpu_ptr(armpmu->hw_events, cpu);
> +		events->branches = kzalloc(sizeof(struct branch_records), GFP_KERNEL);
> +		if (!events->branches)
> +			return -ENOMEM;
> +	}
> +	return 0;

This leaks memory if any allocation fails, and the next patch replaces this
code entirely.

Please add this code once, in a working state: either use the percpu
allocation trick from the next patch right from the start, or pair this
kzalloc() with a corresponding kfree() in an error path.
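For the latter, something along these lines would do (just a sketch; the
unwind loop relies on kfree(NULL) being a no-op and on the per-cpu hw_events
starting out zeroed):

| static int branch_records_alloc(struct arm_pmu *armpmu)
| {
|         struct pmu_hw_events *events;
|         int cpu;
| 
|         for_each_possible_cpu(cpu) {
|                 events = per_cpu_ptr(armpmu->hw_events, cpu);
|                 events->branches = kzalloc(sizeof(struct branch_records),
|                                            GFP_KERNEL);
|                 if (!events->branches)
|                         goto err;
|         }
|         return 0;
| 
| err:
|         /* Free whatever the earlier iterations managed to allocate. */
|         for_each_possible_cpu(cpu) {
|                 events = per_cpu_ptr(armpmu->hw_events, cpu);
|                 kfree(events->branches);
|                 events->branches = NULL;
|         }
|         return -ENOMEM;
| }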

>  }
>  
>  static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
> @@ -1145,12 +1162,24 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>  	};
>  	int ret;
>  
> +	ret = armv8pmu_private_alloc(cpu_pmu);
> +	if (ret)
> +		return ret;
> +
>  	ret = smp_call_function_any(&cpu_pmu->supported_cpus,
>  				    __armv8pmu_probe_pmu,
>  				    &probe, 1);
>  	if (ret)
>  		return ret;
>  
> +	if (arm_pmu_branch_stack_supported(cpu_pmu)) {
> +		ret = branch_records_alloc(cpu_pmu);
> +		if (ret)
> +			return ret;
> +	} else {
> +		armv8pmu_private_free(cpu_pmu);
> +	}

I see from the next patch that "private" is four ints, so please just add that
to struct arm_pmu under an ifdef CONFIG_ARM64_BRBE. That'll simplify this, and
if we end up needing more space in future we can consider factoring it out.
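Roughly the below is what I have in mind (the field name and type are
illustrative, not a concrete proposal):

| struct arm_pmu {
|         ...
| #ifdef CONFIG_ARM64_BRBE
|         /* BRBE probe data, replacing the separately-allocated "private" state. */
|         u32     brbe_data[4];
| #endif
|         ...
| };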

> +
>  	return probe.present ? 0 : -ENODEV;
>  }

It also seems odd to check probe.present *after* checking
arm_pmu_branch_stack_supported().

With the allocation removed, I think this can be written more clearly as:

| static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
| {
|         struct armv8pmu_probe_info probe = {
|                 .pmu = cpu_pmu,
|                 .present = false,
|         };   
|         int ret; 
| 
|         ret = smp_call_function_any(&cpu_pmu->supported_cpus,
|                                     __armv8pmu_probe_pmu,
|                                     &probe, 1);
|         if (ret)
|                 return ret; 
| 
|         if (!probe.present)
|                 return -ENODEV;
| 
|         if (arm_pmu_branch_stack_supported(cpu_pmu))
|                 ret = branch_records_alloc(cpu_pmu);
|              
|         return ret; 
| }

Thanks,
Mark.
