Message-ID: <86d3c3a5d61649079800a2038370365b@intel.com>
Date: Mon, 13 Dec 2021 08:23:16 +0000
From: "Wang, Wei W" <wei.w.wang@...el.com>
To: Paolo Bonzini <pbonzini@...hat.com>,
"Zhong, Yang" <yang.zhong@...el.com>,
"x86@...nel.org" <x86@...nel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"bp@...en8.de" <bp@...en8.de>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>
CC: "seanjc@...gle.com" <seanjc@...gle.com>,
"Nakajima, Jun" <jun.nakajima@...el.com>,
"Tian, Kevin" <kevin.tian@...el.com>,
"jing2.liu@...ux.intel.com" <jing2.liu@...ux.intel.com>,
"Liu, Jing2" <jing2.liu@...el.com>,
"Zeng, Guang" <guang.zeng@...el.com>
Subject: RE: [PATCH 16/19] kvm: x86: Introduce KVM_{G|S}ET_XSAVE2 ioctl
On Saturday, December 11, 2021 6:13 AM, Paolo Bonzini wrote:
>
> By the way, I think KVM_SET_XSAVE2 is not needed. Instead:
>
> - KVM_CHECK_EXTENSION(KVM_CAP_XSAVE2) should return the size of the
> buffer that is passed to KVM_GET_XSAVE2
>
> - KVM_GET_XSAVE2 should fill in the buffer expecting that its size is
> whatever KVM_CHECK_EXTENSION(KVM_CAP_XSAVE2) passes
>
> - KVM_SET_XSAVE can just expect a buffer that is bigger than 4k if the
> save states recorded in the header point to offsets larger than 4k.
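If I understand the proposal, the userspace flow would be something like the sketch below
(error handling mostly abbreviated; KVM_CAP_XSAVE2 and KVM_GET_XSAVE2 are the definitions this
series would add to linux/kvm.h, and doing the capability check on the VM fd is my assumption):

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* vm_fd and vcpu_fd are assumed to be already-open KVM fds. */
static int save_restore_xsave(int vm_fd, int vcpu_fd)
{
	/* The buffer size comes from the capability check, not from a new struct. */
	int size = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_XSAVE2);
	struct kvm_xsave *buf;

	if (size < (int)sizeof(struct kvm_xsave))
		return -1;

	buf = calloc(1, size);			/* may be larger than 4KB */
	if (!buf)
		return -1;

	ioctl(vcpu_fd, KVM_GET_XSAVE2, buf);	/* fills the whole buffer */
	/* ... save or migrate the state ... */
	ioctl(vcpu_fd, KVM_SET_XSAVE, buf);	/* accepts the larger buffer */

	free(buf);
	return 0;
}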
I think one issue is that KVM_SET_XSAVE works with "struct kvm_xsave" (a hardcoded 4KB buffer)
throughout, including in kvm_vcpu_ioctl_x86_set_xsave, while the state obtained via
KVM_GET_XSAVE2 will be laid out using "struct kvm_xsave2".
Did you mean that we could add a new code path under KVM_SET_XSAVE to make it work with
the new "struct kvm_xsave2"?
e.g.:
(xsave2_enabled below is set when userspace queries KVM_CAP_XSAVE2)
if (kvm->xsave2_enabled) {
	/* new implementation using "struct kvm_xsave2" */
	...
} else {
	/* current implementation using "struct kvm_xsave" */
	...
}
(this seems like a new implementation, which might deserve a new ioctl of its own)
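To make that concrete, a rough sketch of what the branch could look like in
kvm_vcpu_ioctl_x86_set_xsave (the signature change, xsave2_enabled and the kvm_xsave2 layout
are assumptions from this discussion, not code from the series):

static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
					void __user *argp)
{
	if (vcpu->kvm->xsave2_enabled) {
		struct kvm_xsave2 header;

		/* read the header first to learn the payload size ... */
		if (copy_from_user(&header, argp, sizeof(header)))
			return -EFAULT;
		/*
		 * ... then validate header.size and copy that many bytes
		 * of state before loading them into the guest FPU.
		 */
	} else {
		/* existing fixed-size path using "struct kvm_xsave" */
	}
	return 0;
}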
Thanks,
Wei