Message-ID: <895e41d7-b64c-e398-c4e2-6309c747068d@intel.com>
Date:   Tue, 29 Jun 2021 10:58:05 -0700
From:   Dave Hansen <dave.hansen@...el.com>
To:     "Liu, Jing2" <jing2.liu@...ux.intel.com>,
        Sean Christopherson <seanjc@...gle.com>
Cc:     pbonzini@...hat.com, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, jing2.liu@...el.com
Subject: Re: [PATCH RFC 2/7] kvm: x86: Introduce XFD MSRs as passthrough to
 guest


On 6/27/21 7:00 PM, Liu, Jing2 wrote:
> On 6/24/2021 1:50 AM, Dave Hansen wrote:
>> On 5/24/21 2:43 PM, Sean Christopherson wrote:
>>> On Sun, Feb 07, 2021, Jing Liu wrote:
>>>> Pass through both MSRs to let the guest read and write them without a vmexit.
>>> Why?  Except for read-only MSRs, e.g. MSR_CORE_C1_RES,
>>> passthrough MSRs are costly to support because KVM must context
>>> switch the MSR (which, by the by, is completely missing from the
>>> patch).
>>>
>>> In other words, if these MSRs are full RW passthrough, guests
>>> with XFD enabled will need to load the guest value on entry, save
>>> the guest value on exit, and load the host value on exit.  That's
>>> in the neighborhood of a 40% increase in latency for a single
>>> VM-Enter/VM-Exit roundtrip (~1500 cycles => >2000 cycles).
>> I'm not taking a position as to whether these _should_ be passthrough or
>> not.  But, if they are, I don't think you strictly need to do the
>> RDMSR/WRMSR at VM-Exit time.
> Hi Dave,
> 
> Thanks for reviewing the patches.
> 
> On vmexit, clearing XFD (because KVM thinks the guest has requested AMX)
> can be deferred until the host does XSAVES, but that requires a new flag in
> the common "fpu" structure, or a per-thread macro, that exists solely for
> the KVM case, and the flag would have to be checked in both
> 1) switch_fpu_prepare() and 2) kernel_fpu_begin(). That is my concern.

Why is this a concern?  You're worried about finding a single bit's worth
of space somewhere?
