Message-ID: <5C259CBA.4030805@intel.com>
Date:   Fri, 28 Dec 2018 11:47:06 +0800
From:   Wei Wang <wei.w.wang@...el.com>
To:     Andi Kleen <ak@...ux.intel.com>
CC:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        pbonzini@...hat.com, peterz@...radead.org, kan.liang@...el.com,
        mingo@...hat.com, rkrcmar@...hat.com, like.xu@...el.com,
        jannh@...gle.com, arei.gonglei@...wei.com
Subject: Re: [PATCH v4 10/10] KVM/x86/lbr: lazy save the guest lbr stack

On 12/28/2018 04:51 AM, Andi Kleen wrote:
> Thanks. This looks a lot better than the earlier versions.
>
> Some more comments.
>
> On Wed, Dec 26, 2018 at 05:25:38PM +0800, Wei Wang wrote:
>> When the vCPU is scheduled in:
>> - if the lbr feature was used in the last vCPU time slice, set the lbr
>>    stack to be interceptible, so that the host can capture whether the
>>    lbr feature will be used in this time slice;
>> - if the lbr feature wasn't used in the last vCPU time slice, disable
>>    the vCPU support of the guest lbr switching.
> Is the time slice the time from exit to exit?

It's the vCPU thread time slice (e.g. 100ms).
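
For illustration, the sched-in logic described above might look roughly
like the sketch below. All helper and field names here are invented for
the sketch, not taken from the patch:

	static void guest_lbr_sched_in(struct kvm_vcpu *vcpu)
	{
		if (vcpu->arch.lbr_used) {
			/* LBR was used in the last time slice: intercept
			 * the LBR MSRs so the first guest access in this
			 * slice traps, telling the host the feature is
			 * still in use. */
			vcpu->arch.lbr_used = false;
			set_lbr_msrs_intercepted(vcpu, true);
		} else {
			/* Not used in the last slice: drop the guest LBR
			 * switching support until the guest touches an
			 * LBR MSR again. */
			disable_guest_lbr_switch(vcpu);
		}
	}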


>
> This might be rather short in some cases if the workload does a lot of exits
> (which I would expect PMU workloads to do). It would be better to use some
> explicit time check, or at least N exits.

Did you mean further extending the lazy period to span multiple host
thread scheduling time slices?
What would be a good value for "N"?
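
One way to read the "N exits" idea, as a sketch only (the names are
invented, and the right value of N is exactly what's being asked here):

	/* Placeholder only; a good value for "N" is an open question. */
	#define LBR_UNUSED_EXIT_LIMIT	32

	static void guest_lbr_on_vmexit(struct kvm_vcpu *vcpu)
	{
		if (vcpu->arch.lbr_used)
			vcpu->arch.lbr_unused_exits = 0;
		else if (++vcpu->arch.lbr_unused_exits >= LBR_UNUSED_EXIT_LIMIT)
			/* Idle for N consecutive exits: stop switching
			 * the guest LBR stack. */
			disable_guest_lbr_switch(vcpu);
	}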


>> Upon the first access to one of the lbr related MSRs (since the vCPU was
>> scheduled in):
>> - record that the guest has used the lbr;
>> - create a host perf event to help save/restore the guest lbr stack if
>>    the guest uses the user callstack mode lbr stack;
> This is a bit risky. It would be safer (but also more expensive)
> to always save, for any guest LBR use, independent of callstack mode.
>
> Otherwise we might get into a situation where
> a vCPU context switch inside the guest PMI will clear the LBRs
> before they can be read in the PMI, so some LBR samples will be fully
> or partially cleared. This would be user visible.
>
> In theory one could try to detect if the guest is inside a PMI and
> save/restore then, but that would likely be complicated. I would
> save/restore for all cases.

Yes, it is easier to save for all the cases. But I'm curious about the
non-callstack mode: it is just point sampling of functions (somewhat
speculative in nature). Would rarely losing a few records really matter
in that case?
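
If we did save in all cases, the first-access path from the patch
description would simply lose its callstack-mode check; roughly, with
hypothetical names again:

	static int guest_lbr_first_msr_access(struct kvm_vcpu *vcpu)
	{
		/* Record that the guest used the LBR in this time slice. */
		vcpu->arch.lbr_used = true;
		/* Create the host perf event that saves/restores the
		 * guest LBR stack unconditionally, not only for the user
		 * callstack mode, so a PMI inside the guest never sees a
		 * partially cleared stack. */
		return guest_lbr_event_create(vcpu);
	}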


>
>> +static void
>> +__always_inline vmx_set_intercept_for_msr(unsigned long *msr_bitmap, u32 msr,
>> +					  int type, bool value);
> __always_inline should only be used if it's needed for functionality,
> or in a header.

Thanks, will fix it.
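
The fix would presumably just drop the attribute from the forward
declaration, keeping any inlining decision on the definition itself:

	static void vmx_set_intercept_for_msr(unsigned long *msr_bitmap, u32 msr,
					      int type, bool value);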

Best,
Wei
