Date: Fri, 7 Jun 2024 15:13:53 -0400
From: George Kennedy <george.kennedy@...cle.com>
To: Ravi Bangoria <ravi.bangoria@....com>
Cc: harshit.m.mogalapalli@...cle.com, peterz@...radead.org, mingo@...hat.com,
        acme@...nel.org, namhyung@...nel.org, mark.rutland@....com,
        alexander.shishkin@...ux.intel.com, jolsa@...nel.org,
        irogers@...gle.com, adrian.hunter@...el.com, kan.liang@...ux.intel.com,
        tglx@...utronix.de, bp@...en8.de, dave.hansen@...ux.intel.com,
        x86@...nel.org, hpa@...or.com, linux-perf-users@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] perf/x86/amd: check event before enable to avoid GPF

Hi Ravi,

On 6/4/2024 9:40 AM, Ravi Bangoria wrote:
>> On 6/4/2024 9:16 AM, Ravi Bangoria wrote:
>>>>>>> Events can be deleted and the entry can be NULL.
>>>>>> Can you please also explain "how".
>>>>> It looks like x86_pmu_stop() clears the bit in active_mask and sets the events entry to NULL (and does so in the correct order) for the same event index that amd_pmu_enable_all() is trying to enable.
>>>>>>> Check event for NULL in amd_pmu_enable_all() before enable to avoid a GPF.
>>>>>>> This appears to be an AMD-only issue.
>>>>>>>
>>>>>>> Syzkaller reported a GPF in amd_pmu_enable_all.
>>>>>> Can you please provide a bug report link? Also, any reproducer?
>>>>> The Syzkaller reproducer can be found in this link:
>>>>> https://lore.kernel.org/netdev/CAMt6jhyec7-TSFpr3F+_ikjpu39WV3jnCBBGwpzpBrPx55w20g@mail.gmail.com/T/#u
>>>>>>> @@ -760,7 +760,8 @@ static void amd_pmu_enable_all(int added)
>>>>>>>         if (!test_bit(idx, cpuc->active_mask))
>>>>>>>                 continue;
>>>>>>>
>>>>>>> -       amd_pmu_enable_event(cpuc->events[idx]);
>>>>>>> +       if (cpuc->events[idx])
>>>>>>> +               amd_pmu_enable_event(cpuc->events[idx]);
>>>>>> What if cpuc->events[idx] becomes NULL after if (cpuc->events[idx]) but
>>>>>> before amd_pmu_enable_event(cpuc->events[idx])?
>>>>> Good question, but the crash has not reproduced with the proposed fix in hours of testing. It usually reproduces within minutes without the fix.
>>>> Also, a similar fix exists in __intel_pmu_enable_all() in arch/x86/events/intel/core.c, except that it also does a WARN_ON_ONCE().
>>>> See: https://elixir.bootlin.com/linux/v6.10-rc1/source/arch/x86/events/intel/core.c#L2256
>>> There are subtle differences between the Intel and AMD PMU implementations.
>>> __intel_pmu_enable_all() enables all events with a single WRMSR, whereas
>>> amd_pmu_enable_all() loops over each PMC and enables it individually.
>>>
>>> The WARN_ON_ONCE() is important because it will warn about a potential
>>> software bug somewhere else.
>> We could add a similar WARN_ON_ONCE() to the proposed patch.
> Sure, that would help in the future. But for the current splat, can you please
> try to root-cause the underlying race condition?
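
For reference, here is a rough, untested sketch of what the hunk could look
like with the WARN_ON_ONCE() added, mirroring the behaviour you described for
__intel_pmu_enable_all() (same base as the hunk quoted above; only the changed
lines are shown):

        if (!test_bit(idx, cpuc->active_mask))
                continue;

-       amd_pmu_enable_event(cpuc->events[idx]);
+       /* Entry can be cleared concurrently; warn so the bug stays visible. */
+       if (WARN_ON_ONCE(!cpuc->events[idx]))
+               continue;
+
+       amd_pmu_enable_event(cpuc->events[idx]);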

Were you able to reproduce the crash on the AMD machine?
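
On your earlier question about cpuc->events[idx] becoming NULL between the
check and the call: one possible (untested) tweak would be to read the pointer
once into a local, e.g. a new "struct perf_event *event;" declared in
amd_pmu_enable_all(), so that the value that is checked is the value that gets
used. That only narrows the window and does not explain the underlying race,
but for illustration:

        if (!test_bit(idx, cpuc->active_mask))
                continue;

-       amd_pmu_enable_event(cpuc->events[idx]);
+       /* Snapshot the pointer once so the check and the use see the same value. */
+       event = READ_ONCE(cpuc->events[idx]);
+       if (WARN_ON_ONCE(!event))
+               continue;
+
+       amd_pmu_enable_event(event);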

Thanks,
George
> Thanks,
> Ravi

