Message-ID: <28796dd3-ac4e-4a38-b9e1-f79533b2a798@linux.intel.com>
Date:   Wed, 1 Nov 2023 10:33:45 +0800
From:   "Mi, Dapeng" <dapeng1.mi@...ux.intel.com>
To:     Jim Mattson <jmattson@...gle.com>
Cc:     Sean Christopherson <seanjc@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        Zhenyu Wang <zhenyuw@...ux.intel.com>,
        Zhang Xiong <xiong.y.zhang@...el.com>,
        Mingwei Zhang <mizhang@...gle.com>,
        Like Xu <like.xu.linux@...il.com>,
        Dapeng Mi <dapeng1.mi@...el.com>
Subject: Re: [kvm-unit-tests Patch v2 4/5] x86: pmu: Support validation for
 Intel PMU fixed counter 3


On 11/1/2023 2:47 AM, Jim Mattson wrote:
> On Tue, Oct 31, 2023 at 2:22 AM Dapeng Mi <dapeng1.mi@...ux.intel.com> wrote:
>> Intel CPUs, such as Sapphire Rapids, introduce a new fixed counter
>> (fixed counter 3) to count/sample the topdown.slots event, but the
>> current code doesn't cover this new fixed counter yet.
>>
>> So this patch adds code to validate that this new fixed counter can
>> count the slots event correctly.
> I'm not convinced that this actually validates anything.
>
> Suppose, for example, that KVM used fixed counter 1 when the guest
> asked for fixed counter 3. Wouldn't this test still pass?


Per my understanding, as long as KVM returns a count within a 
reasonable range, we can consider KVM to be working correctly. We don't 
need to get entangled in how KVM actually uses the hardware; that could 
be impossible and is unnecessary anyway.

Yeah, the currently predefined valid count range may be somewhat loose, 
since I want to cover as much hardware as possible and avoid causing 
regressions. Especially after introducing the random jumps and clflush 
instructions, the cycle and slot counts become much harder to predict. 
Maybe we can use a comparatively restricted count range in the initial 
change, and then loosen the restriction if we hit a failure on some 
specific hardware. Do you think that's better? Thanks.


>
>> Signed-off-by: Dapeng Mi <dapeng1.mi@...ux.intel.com>
>> ---
>>   x86/pmu.c | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/x86/pmu.c b/x86/pmu.c
>> index 6bd8f6d53f55..404dc7b62ac2 100644
>> --- a/x86/pmu.c
>> +++ b/x86/pmu.c
>> @@ -47,6 +47,7 @@ struct pmu_event {
>>          {"fixed 1", MSR_CORE_PERF_FIXED_CTR0, 10*N, 10.2*N},
>>          {"fixed 2", MSR_CORE_PERF_FIXED_CTR0 + 1, 1*N, 500*N},
>>          {"fixed 3", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 300*N},
>> +       {"fixed 4", MSR_CORE_PERF_FIXED_CTR0 + 3, 1*N, 5000*N},
>>   };
>>
>>   char *buf;
>> --
>> 2.34.1
>>
