Date:   Wed, 25 Oct 2023 19:22:55 +0800
From:   "Mi, Dapeng" <dapeng1.mi@...ux.intel.com>
To:     Jim Mattson <jmattson@...gle.com>
Cc:     Sean Christopherson <seanjc@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        Zhenyu Wang <zhenyuw@...ux.intel.com>,
        Zhang Xiong <xiong.y.zhang@...el.com>,
        Mingwei Zhang <mizhang@...gle.com>,
        Like Xu <like.xu.linux@...il.com>,
        Dapeng Mi <dapeng1.mi@...el.com>
Subject: Re: [kvm-unit-tests Patch 2/5] x86: pmu: Change the minimum value of
 llc_misses event to 0


On 10/24/2023 9:03 PM, Jim Mattson wrote:
> On Tue, Oct 24, 2023 at 12:51 AM Dapeng Mi <dapeng1.mi@...ux.intel.com> wrote:
>> As CPU hardware improves, the count of the LLC misses event for the
>> loop() helper can legitimately be 0, as observed on Sapphire Rapids.
>>
>> So lower the minimum of the allowed count range for the LLC misses
>> event to 0 to avoid spurious test failures on Sapphire Rapids.
> I'm not convinced that these tests are really indicative of whether or
> not the PMU is working properly. If 0 is allowed for llc misses, for
> instance, doesn't this sub-test pass even when the PMU is disabled?
>
> Surely, we can do better.


Considering that the test workload is just a simple add loop, it is
reasonable and quite possible that it produces a 0 count for the LLC
misses and branch misses events. Yeah, I agree that a 0 count makes the
results less credible. If we want to avoid these 0 counts, we may have
to make the workload more complicated, e.g. by adding cache-flush
instructions or something similar (I'm not sure whether there are
instructions that can force branch misses). What do you think?
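
For example, a rough sketch of such a workload (purely illustrative;
clflush_loop and flush_buf are placeholder names, not existing
x86/pmu.c symbols, and the iteration count is arbitrary) could be:

static char flush_buf[64];

/* Touch a cache line, then flush it with clflush so that the next
 * access misses all cache levels and should give a non-zero LLC
 * misses count. */
static void clflush_loop(void)
{
	unsigned long i;

	for (i = 0; i < 1000000; i++) {
		flush_buf[0]++;
		asm volatile("clflush %0" : : "m" (flush_buf[0]) : "memory");
		asm volatile("mfence" : : : "memory");
	}
}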


>
>> Signed-off-by: Dapeng Mi <dapeng1.mi@...ux.intel.com>
>> ---
>>   x86/pmu.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/x86/pmu.c b/x86/pmu.c
>> index 0def28695c70..7443fdab5c8a 100644
>> --- a/x86/pmu.c
>> +++ b/x86/pmu.c
>> @@ -35,7 +35,7 @@ struct pmu_event {
>>          {"instructions", 0x00c0, 10*N, 10.2*N},
>>          {"ref cycles", 0x013c, 1*N, 30*N},
>>          {"llc references", 0x4f2e, 1, 2*N},
>> -       {"llc misses", 0x412e, 1, 1*N},
>> +       {"llc misses", 0x412e, 0, 1*N},
>>          {"branches", 0x00c4, 1*N, 1.1*N},
>>          {"branch misses", 0x00c5, 0, 0.1*N},
>>   }, amd_gp_events[] = {
>> --
>> 2.34.1
>>
