Message-ID: <f142d45d-4164-4883-ac4c-ea5b1c20c1c0@linux.intel.com>
Date: Mon, 21 Apr 2025 10:56:43 -0400
From: "Liang, Kan" <kan.liang@...ux.intel.com>
To: Luo Gengkun <luogengkun@...weicloud.com>, peterz@...radead.org
Cc: acme@...nel.org, namhyung@...nel.org, mark.rutland@....com,
 alexander.shishkin@...ux.intel.com, jolsa@...nel.org, irogers@...gle.com,
 adrian.hunter@...el.com, tglx@...utronix.de, bp@...en8.de,
 dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com,
 linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] perf/x86/intel: Fix lbr event can placed into non lbr
 group



On 2025-04-19 12:50 a.m., Luo Gengkun wrote:
> 
> On 2025/4/19 10:25, Luo Gengkun wrote:
>>
>> On 2025/4/14 22:29, Liang, Kan wrote:
>>>
>>> On 2025-04-12 5:14 a.m., Luo Gengkun wrote:
>>>> The following perf command can trigger a warning on
>>>> intel_pmu_lbr_counters_reorder.
>>>>
>>>>   # perf record -e "{cpu-clock,cycles/call-graph="lbr"/}" -- sleep 1
>>>>
>>>> The reason is that an LBR event can be placed in a non-LBR group, and
>>>> the previous implementation cannot force the leader to be an LBR event
>>>> in this case.
>>> Perf should only force the LBR leader for the branch counters case, so
>>> perf only needs to reset the LBRs for the leader.
>>> I don't think the leader restriction should be applied to other cases.
>>
>> Yes, the commit message should be updated. The code implementation only
>> restricts the leader to be an LBR event.
>>
>>>> And is_branch_counters_group will check if the group_leader supports
>>>> BRANCH_COUNTERS. So if a software event becomes the group_leader,
>>>> whose hw.flags is -1, this check will always pass.
>>> I think the default flags for all events is 0. Can you point me to where
>>> it is changed to -1?
>>>
>>> Thanks,
>>> Kan
>>
>> The hw_perf_event contains a union, and hw.flags is used only for
>> hardware events. Software events use an hrtimer instead. Therefore,
>> when perf_swevent_init_hrtimer is called, it changes the value of
>> hw.flags too.
>>
>>
>> Thanks,
>>
>> Gengkun
> 
> 
> It seems that using a union is dangerous, because different types of
> perf_event can be placed in the same group.

Only the PMU with perf_sw_context can be placed in the same group with
other types.

> Currently, a large amount of code directly accesses the leader's hw,
> which is unsafe.

For x86, the topdown, ACR, and branch counters features touch
leader->hw.flags. The topdown and ACR code already check the leader
before updating the flags, but the branch counters code missed that
check. I think a check is required for branch counters as well, which
should be good enough to address the issue.

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 16f8aea33243..406f58b3b5d4 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4256,6 +4256,12 @@ static int intel_pmu_hw_config(struct perf_event *event)
 		 * group, which requires the extra space to store the counters.
 		 */
 		leader = event->group_leader;
+		/*
+		 * The leader's hw.flags will be used to determine a
+		 * branch counter logging group. Force it to be an X86 event.
+		 */
+		if (!is_x86_event(leader))
+			return -EINVAL;
 		if (branch_sample_call_stack(leader))
 			return -EINVAL;
 		if (branch_sample_counters(leader)) {

> This part of the logic needs to be redesigned to avoid similar
> problems. And I am happy to work on this.
>

OK. Please share your idea.

Thanks,
Kan
> 
> Thanks,
> Gengkun
>>>> To fix this problem, use has_branch_stack to check whether the leader
>>>> is an LBR event.
>>>>
>>>> Fixes: 33744916196b ("perf/x86/intel: Support branch counters logging")
>>>> Signed-off-by: Luo Gengkun <luogengkun@...weicloud.com>
>>>> ---
>>>>   arch/x86/events/intel/core.c | 14 +++++++-------
>>>>   1 file changed, 7 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>>>> index 09d2d66c9f21..c6b394019e54 100644
>>>> --- a/arch/x86/events/intel/core.c
>>>> +++ b/arch/x86/events/intel/core.c
>>>> @@ -4114,6 +4114,13 @@ static int intel_pmu_hw_config(struct perf_event *event)
>>>>               event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
>>>>       }
>>>>   +    /*
>>>> +     * Force the leader to be a LBR event. So LBRs can be reset
>>>> +     * with the leader event. See intel_pmu_lbr_del() for details.
>>>> +     */
>>>> +    if (has_branch_stack(event) && !has_branch_stack(event->group_leader))
>>>> +        return -EINVAL;
>>>> +
>>>>       if (branch_sample_counters(event)) {
>>>>           struct perf_event *leader, *sibling;
>>>>           int num = 0;
>>>> @@ -4157,13 +4164,6 @@ static int intel_pmu_hw_config(struct perf_event *event)
>>>>                 ~(PERF_SAMPLE_BRANCH_PLM_ALL |
>>>>                   PERF_SAMPLE_BRANCH_COUNTERS)))
>>>>               event->hw.flags  &= ~PERF_X86_EVENT_NEEDS_BRANCH_STACK;
>>>> -
>>>> -        /*
>>>> -         * Force the leader to be a LBR event. So LBRs can be reset
>>>> -         * with the leader event. See intel_pmu_lbr_del() for details.
>>>> -         */
>>>> -        if (!intel_pmu_needs_branch_stack(leader))
>>>> -            return -EINVAL;
>>>>       }
>>>>         if (intel_pmu_needs_branch_stack(event)) {
> 

