Message-ID: <3c63438b-2a42-0b81-f002-b937095570e1@linux.intel.com>
Date:   Mon, 28 Jun 2021 10:00:28 +0800
From:   "Liu, Jing2" <jing2.liu@...ux.intel.com>
To:     Dave Hansen <dave.hansen@...el.com>,
        Sean Christopherson <seanjc@...gle.com>
Cc:     pbonzini@...hat.com, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, jing2.liu@...el.com
Subject: Re: [PATCH RFC 2/7] kvm: x86: Introduce XFD MSRs as passthrough to guest



On 6/24/2021 1:50 AM, Dave Hansen wrote:
> On 5/24/21 2:43 PM, Sean Christopherson wrote:
>> On Sun, Feb 07, 2021, Jing Liu wrote:
>>> Passthrough both MSRs to let guest access and write without vmexit.
>> Why?  Except for read-only MSRs, e.g. MSR_CORE_C1_RES, passthrough MSRs are
>> costly to support because KVM must context switch the MSR (which, by the by, is
>> completely missing from the patch).
>>
>> In other words, if these MSRs are full RW passthrough, guests with XFD enabled
>> will need to load the guest value on entry, save the guest value on exit, and
>> load the host value on exit.  That's in the neighborhood of a 40% increase in
>> latency for a single VM-Enter/VM-Exit roundtrip (~1500 cycles => >2000 cycles).
> I'm not taking a position as to whether these _should_ be passthrough or
> not.  But, if they are, I don't think you strictly need to do the
> RDMSR/WRMSR at VM-Exit time.
Hi Dave,

Thanks for reviewing the patches.

At vmexit, clearing XFD (because KVM assumes the guest has requested AMX) can
be deferred to the point where the host does XSAVES, but that requires a new
flag in the common "fpu" structure, or a per-thread flag, that exists solely
for the KVM case, and the flag would then have to be checked in both
1) switch_fpu_prepare() and 2) kernel_fpu_begin(). That is my concern.

Thanks,
Jing
> Just like the "FPU", XFD isn't used in normal kernel code.  This is
> why we can be lazy about FPU state with TIF_NEED_FPU_LOAD.  I _suspect_
> that some XFD manipulation can be at least deferred to the same place
> where the FPU state is manipulated: places like switch_fpu_return() or
> kernel_fpu_begin().
>
> Doing that would at least help the fast VM-Exit/VM-Enter paths that
> really like TIF_NEED_FPU_LOAD today.
>
> I guess the nasty part is that you actually need to stash the old XFD
> MSR value in the vcpu structure and that's not available at
> context-switch time.  So, maybe this would only allow deferring the
> WRMSR.  That's better than nothing I guess.
