Message-ID: <3f5f69e4-9324-152c-6581-8855a3dbb221@arm.com>
Date:   Fri, 9 Jun 2023 10:54:33 +0100
From:   Suzuki K Poulose <suzuki.poulose@....com>
To:     Anshuman Khandual <anshuman.khandual@....com>,
        Mark Rutland <mark.rutland@....com>
Cc:     linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        will@...nel.org, catalin.marinas@....com,
        Mark Brown <broonie@...nel.org>,
        James Clark <james.clark@....com>,
        Rob Herring <robh@...nel.org>, Marc Zyngier <maz@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        linux-perf-users@...r.kernel.org
Subject: Re: [PATCH V11 05/10] arm64/perf: Add branch stack support in ARMV8
 PMU

On 09/06/2023 05:00, Anshuman Khandual wrote:
> 
> 
> On 6/8/23 15:43, Suzuki K Poulose wrote:
>> On 06/06/2023 11:34, Anshuman Khandual wrote:
>>>
>>>
>>> On 6/5/23 17:35, Mark Rutland wrote:
>>>> On Wed, May 31, 2023 at 09:34:23AM +0530, Anshuman Khandual wrote:
>>>>> This enables support for branch stack sampling events in the ARMV8 PMU,
>>>>> checking has_branch_stack() on the event inside the 'struct arm_pmu'
>>>>> callbacks, although these branch stack helpers armv8pmu_branch_XXXXX()
>>>>> are just dummy functions for now. While here, this also defines
>>>>> arm_pmu's sched_task() callback as armv8pmu_sched_task(), which resets
>>>>> the branch record buffer on a sched_in.
>>>>
>>>> This generally looks good, but I have a few comments below.
>>>>
>>>> [...]
>>>>
>>>>> +static inline bool armv8pmu_branch_valid(struct perf_event *event)
>>>>> +{
>>>>> +    WARN_ON_ONCE(!has_branch_stack(event));
>>>>> +    return false;
>>>>> +}
>>>>
>>>> IIUC this is for validating the attr, so could we please name this
>>>> armv8pmu_branch_attr_valid() ?
>>>
>>> Sure, will change the name and update the call sites.
>>>
>>>>
>>>> [...]
>>>>
>>>>> +static int branch_records_alloc(struct arm_pmu *armpmu)
>>>>> +{
>>>>> +    struct pmu_hw_events *events;
>>>>> +    int cpu;
>>>>> +
>>>>> +    for_each_possible_cpu(cpu) {
>>
>> Shouldn't this be supported_cpus ? i.e.
>>      for_each_cpu(cpu, &armpmu->supported_cpus) {
>>
>>
>>>>> +        events = per_cpu_ptr(armpmu->hw_events, cpu);
>>>>> +        events->branches = kzalloc(sizeof(struct branch_records), GFP_KERNEL);
>>>>> +        if (!events->branches)
>>>>> +            return -ENOMEM;
>>
>> Don't we need to free the branches already allocated, if we fail here ?
> 
> This gets fixed in the next patch via per-cpu allocation. I will
> move that code block and fold it in here. The updated function will
> look like the following.
> 
> static int branch_records_alloc(struct arm_pmu *armpmu)
> {
>          struct branch_records __percpu *records;
>          int cpu;
> 
>          records = alloc_percpu_gfp(struct branch_records, GFP_KERNEL);
>          if (!records)
>                  return -ENOMEM;
> 
>          /*
>           * FIXME: Memory allocated via 'records' gets handed out
>           * entirely here and never needs to be freed later, so
>           * losing access to the on-stack 'records' is acceptable.
>           * Otherwise this allocation handle would have to be saved
>           * somewhere.
>           */
>          for_each_possible_cpu(cpu) {
>                  struct pmu_hw_events *events_cpu;
>                  struct branch_records *records_cpu;
> 
>                  events_cpu = per_cpu_ptr(armpmu->hw_events, cpu);
>                  records_cpu = per_cpu_ptr(records, cpu);
>                  events_cpu->branches = records_cpu;
>          }
>          return 0;
> }
> 
> Regarding the cpumask argument in for_each_cpu().
> 
> - hw_events is a __percpu pointer in struct arm_pmu
> 
> 	- pmu->hw_events = alloc_percpu_gfp(struct pmu_hw_events, GFP_KERNEL)
> 
> 
> - 'records' above is being allocated via alloc_percpu_gfp()
> 
> 	- records = alloc_percpu_gfp(struct branch_records, GFP_KERNEL)
> 
> If the 'armpmu->supported_cpus' mask gets used instead of the possible
> cpu mask, would not there be some dangling per-cpu branch_records
> areas that remain unassigned ? Assigning all of them back into
> hw_events should be harmless.

That's because you are using alloc_percpu for the records ? With the
currently proposed code, if there are two different arm_pmus on the
system, you would end up wasting 1x per-cpu branch_records ? And if
there are 3, 2x per-cpu gets wasted ?
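
To make the arithmetic concrete, a rough illustration (hypothetical
two-PMU system; the numbers are made up):

	/*
	 * Hypothetical 8-CPU big.LITTLE system with two arm_pmus:
	 *
	 *	big_pmu->supported_cpus    = CPUs 0-3
	 *	little_pmu->supported_cpus = CPUs 4-7
	 *
	 * Each PMU's branch_records_alloc() calls:
	 *
	 *	records = alloc_percpu_gfp(struct branch_records, GFP_KERNEL);
	 *
	 * which reserves one struct branch_records for every possible CPU.
	 * big_pmu then wires only the CPU 0-3 copies into its hw_events,
	 * and little_pmu only the CPU 4-7 copies, so each allocation
	 * strands half of its per-cpu instances: 1x per_cpu worth wasted
	 * with 2 PMUs, 2x with 3, and so on.
	 */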

> 
>>
>>>>> +    }
>>
>>
>> Maybe:
>>      int ret = 0;
>>
>>      for_each_cpu(cpu, &armpmu->supported_cpus) {
>>          events = per_cpu_ptr(armpmu->hw_events, cpu);
>>          events->branches = kzalloc(sizeof(struct branch_records), GFP_KERNEL);
>>
>>          if (!events->branches) {
>>              ret = -ENOMEM;
>>              break;
>>          }
>>      }
>>
>>      if (!ret)
>>          return 0;
>>
>>      for_each_cpu(cpu, &armpmu->supported_cpus) {
>>          events = per_cpu_ptr(armpmu->hw_events, cpu);
>>          if (!events->branches)
>>              break;
>>          kfree(events->branches);
>>      }
>>      return ret;
>>      
>>>>> +    return 0;
>>>>
>>>> This leaks memory if any allocation fails, and the next patch replaces this
>>>> code entirely.
>>>
>>> Okay.
>>>
>>>>
>>>> Please add this once it is in a working state. Either use the percpu
>>>> allocation trick from the next patch from the start, or have this
>>>> kzalloc() with a corresponding kfree() in an error path.
>>>
>>> I will change branch_records_alloc() as suggested in the next patch's
>>> thread and fold those changes into this patch.
>>>
>>>>
>>>>>    }
>>>>>      static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>>>>> @@ -1145,12 +1162,24 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>>>>>        };
>>>>>        int ret;
>>>>>    +    ret = armv8pmu_private_alloc(cpu_pmu);
>>>>> +    if (ret)
>>>>> +        return ret;
>>>>> +
>>>>>        ret = smp_call_function_any(&cpu_pmu->supported_cpus,
>>>>>                        __armv8pmu_probe_pmu,
>>>>>                        &probe, 1);
>>>>>        if (ret)
>>>>>            return ret;
>>>>>    +    if (arm_pmu_branch_stack_supported(cpu_pmu)) {
>>>>> +        ret = branch_records_alloc(cpu_pmu);
>>>>> +        if (ret)
>>>>> +            return ret;
>>>>> +    } else {
>>>>> +        armv8pmu_private_free(cpu_pmu);
>>>>> +    }
>>>>
>>>> I see from the next patch that "private" is four ints, so please just add that
>>>> to struct arm_pmu under an ifdef CONFIG_ARM64_BRBE. That'll simplify this, and
>>>> if we end up needing more space in future we can consider factoring it out.
>>>
>>> struct arm_pmu {
>>>      ........................................
>>>           /* Implementation specific attributes */
>>>           void            *private;
>>> }
>>>
>>> The private pointer here creates an abstraction for a given pmu
>>> implementation to hide its attribute details without making them
>>> known to the core arm pmu layer. Although adding ifdef
>>> CONFIG_ARM64_BRBE solves the problem as mentioned above, it does
>>> break that abstraction. Currently the arm_pmu layer is aware of
>>> 'branch records' but not of BRBE in particular, which the driver
>>> adds later on. I suggest we should not break that abstraction.
>>>
>>> Instead, a global 'static struct brbe_hw_attr' in drivers/perf/arm_brbe.c
>>> can be initialized into arm_pmu->private during armv8pmu_branch_probe(),
>>> which will also solve the allocation/free problem. Similar helpers
>>> armv8pmu_task_ctx_alloc()/free() could also be defined to manage the
>>> task context cache, i.e. arm_pmu->pmu.task_ctx_cache, independently.
>>>
>>> armv8pmu_task_ctx_alloc() can then be called once the pmu probe
>>> confirms arm_pmu->has_branch_stack.
>>>
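
For illustration, that arrangement might look roughly like the below
(untested sketch; the struct layout and the placeholder value are
hypothetical):

	/* drivers/perf/arm_brbe.c - untested sketch, hypothetical layout */
	struct brbe_hw_attr {
		int nr_branch_records;			/* hypothetical field */
	};

	static struct brbe_hw_attr brbe_hw_attr;

	void armv8pmu_branch_probe(struct arm_pmu *armpmu)
	{
		/* Discover the BRBE geometry once and stash it */
		brbe_hw_attr.nr_branch_records = 64;	/* placeholder value */
		armpmu->private = &brbe_hw_attr;	/* static: nothing to free */
	}

Since the attributes are static, there would be no allocation to unwind
on the error paths.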
>>>>
>>>>> +
>>>>>        return probe.present ? 0 : -ENODEV;
>>>>>    }
>>>>
>>>> It also seems odd to check probe.present *after* checking
>>>> arm_pmu_branch_stack_supported().
>>>
>>> I will reorganize as suggested below.
>>>
>>>>
>>>> With the allocation removed I think this can be written more clearly as:
>>>>
>>>> | static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>>>> | {
>>>> |         struct armv8pmu_probe_info probe = {
>>>> |                 .pmu = cpu_pmu,
>>>> |                 .present = false,
>>>> |         };
>>>> |         int ret;
>>>> |
>>>> |         ret = smp_call_function_any(&cpu_pmu->supported_cpus,
>>>> |                                     __armv8pmu_probe_pmu,
>>>> |                                     &probe, 1);
>>>> |         if (ret)
>>>> |                 return ret;
>>>> |
>>>> |         if (!probe.present)
>>>> |                 return -ENODEV;
>>>> |
>>>> |         if (arm_pmu_branch_stack_supported(cpu_pmu))
>>>> |                 ret = branch_records_alloc(cpu_pmu);
>>>> |
>>>> |         return ret;
>>>> | }
>>
>> Could we not simplify this as below and keep the abstraction, since we
>> already have it ?
> 
> No, there is an allocation dependency before the smp call, which runs
> in a context where we cannot allocate.

Ok, I wasn't aware of that. Could we not read whatever we need to know
about the BRBE into armv8pmu_probe_info and process it at the caller
here, and then do the private_alloc etc. as we need ?
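
i.e., something like this (untested sketch; the extra field and its
processing are illustrative only):

	/* Untested sketch: carry the raw BRBE info in the probe struct */
	struct armv8pmu_probe_info {
		struct arm_pmu *pmu;
		bool present;
		u64 brbidr;	/* e.g. raw BRBIDR0_EL1, read on a supported CPU */
	};

	static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
	{
		struct armv8pmu_probe_info probe = {
			.pmu = cpu_pmu,
			.present = false,
		};
		int ret;

		ret = smp_call_function_any(&cpu_pmu->supported_cpus,
					    __armv8pmu_probe_pmu, &probe, 1);
		if (ret)
			return ret;

		if (!probe.present)
			return -ENODEV;

		if (arm_pmu_branch_stack_supported(cpu_pmu)) {
			/* Back in a context where we can allocate */
			ret = armv8pmu_private_alloc(cpu_pmu);	/* consume probe.brbidr */
			if (!ret)
				ret = branch_records_alloc(cpu_pmu);
		}
		return ret;
	}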

Suzuki
