Message-ID: <20240403183420.GI2444378@ls.amr.corp.intel.com>
Date: Wed, 3 Apr 2024 11:34:20 -0700
From: Isaku Yamahata <isaku.yamahata@...el.com>
To: Chao Gao <chao.gao@...el.com>
Cc: isaku.yamahata@...el.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, isaku.yamahata@...il.com,
Paolo Bonzini <pbonzini@...hat.com>, erdemaktas@...gle.com,
Sean Christopherson <seanjc@...gle.com>,
Sagi Shahar <sagis@...gle.com>, Kai Huang <kai.huang@...el.com>,
chen.bo@...el.com, hang.yuan@...el.com, tina.zhang@...el.com,
Sean Christopherson <sean.j.christopherson@...el.com>,
isaku.yamahata@...ux.intel.com
Subject: Re: [PATCH v19 097/130] KVM: x86: Split core of hypercall emulation
to helper function
On Fri, Mar 29, 2024 at 11:24:55AM +0800,
Chao Gao <chao.gao@...el.com> wrote:
> On Mon, Feb 26, 2024 at 12:26:39AM -0800, isaku.yamahata@...el.com wrote:
> >@@ -10162,18 +10151,49 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> >
> > WARN_ON_ONCE(vcpu->run->hypercall.flags & KVM_EXIT_HYPERCALL_MBZ);
> > vcpu->arch.complete_userspace_io = complete_hypercall_exit;
> >+ /* stat is incremented on completion. */
>
> Perhaps we could use a distinct return value to signal that the request is redirected
> to userspace. This way, more cases can be supported, e.g., accesses to MTRR
> MSRs, requests to service TDs, etc. And then ...
The convention here follows the exit-handler return values that
vcpu_enter_guest() already uses.  Introducing something like
KVM_VCPU_CONTINUE=1 / KVM_VCPU_EXIT_TO_USER=0 would touch many places, so
if we do it at all (I'm not sure it's worthwhile), that cleanup should be
done independently of this series.
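For illustration only, a rough sketch of what such a cleanup could look
like.  The enum and handle_some_exit()/need_userspace_help() are made-up
names, not anything in the tree:

	/*
	 * Today exit handlers and vcpu_enter_guest() use the bare convention:
	 *   1  -> resume the guest
	 *   0  -> exit to user space
	 *   <0 -> error
	 * A tree-wide cleanup with named values might look roughly like this.
	 */
	enum {
		KVM_VCPU_EXIT_TO_USER	= 0,
		KVM_VCPU_CONTINUE	= 1,
	};

	static int handle_some_exit(struct kvm_vcpu *vcpu)
	{
		if (need_userspace_help(vcpu))
			return KVM_VCPU_EXIT_TO_USER;	/* was: return 0 */

		return KVM_VCPU_CONTINUE;		/* was: return 1 */
	}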
> > return 0;
> > }
> > default:
> > ret = -KVM_ENOSYS;
> > break;
> > }
> >+
> > out:
> >+ ++vcpu->stat.hypercalls;
> >+ return ret;
> >+}
> >+EXPORT_SYMBOL_GPL(__kvm_emulate_hypercall);
> >+
> >+int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> >+{
> >+ unsigned long nr, a0, a1, a2, a3, ret;
> >+ int op_64_bit;
> >+ int cpl;
> >+
> >+ if (kvm_xen_hypercall_enabled(vcpu->kvm))
> >+ return kvm_xen_hypercall(vcpu);
> >+
> >+ if (kvm_hv_hypercall_enabled(vcpu))
> >+ return kvm_hv_hypercall(vcpu);
> >+
> >+ nr = kvm_rax_read(vcpu);
> >+ a0 = kvm_rbx_read(vcpu);
> >+ a1 = kvm_rcx_read(vcpu);
> >+ a2 = kvm_rdx_read(vcpu);
> >+ a3 = kvm_rsi_read(vcpu);
> >+ op_64_bit = is_64_bit_hypercall(vcpu);
> >+ cpl = static_call(kvm_x86_get_cpl)(vcpu);
> >+
> >+ ret = __kvm_emulate_hypercall(vcpu, nr, a0, a1, a2, a3, op_64_bit, cpl);
> >+ if (nr == KVM_HC_MAP_GPA_RANGE && !ret)
> >+ /* MAP_GPA tosses the request to the user space. */
>
> no need to check what the request is. Just checking the return value will suffice.
This is needed to avoid updating RAX etc. when the hypercall exits to user
space; KVM_HC_MAP_GPA_RANGE is the only such exception here, and RAX is
updated on completion instead.  The check is a bit awkward, but I couldn't
find a better way.
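To spell out the flow I mean, here is a simplified, commented version of
the caller side above (not the exact patch text):

	ret = __kvm_emulate_hypercall(vcpu, nr, a0, a1, a2, a3, op_64_bit, cpl);

	/*
	 * KVM_HC_MAP_GPA_RANGE is the only hypercall here where ret == 0
	 * means "handed off to user space".  RAX and RIP must not be touched
	 * yet; complete_hypercall_exit() writes RAX and skips the instruction
	 * once user space completes the request.
	 */
	if (nr == KVM_HC_MAP_GPA_RANGE && !ret)
		return 0;

	/* For everything else, complete the hypercall in the kernel. */
	if (!op_64_bit)
		ret = (u32)ret;
	kvm_rax_write(vcpu, ret);

	return kvm_skip_emulated_instruction(vcpu);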
>
> >+ return 0;
> >+
> > if (!op_64_bit)
> > ret = (u32)ret;
> > kvm_rax_write(vcpu, ret);
> >
> >- ++vcpu->stat.hypercalls;
> > return kvm_skip_emulated_instruction(vcpu);
> > }
> > EXPORT_SYMBOL_GPL(kvm_emulate_hypercall);
> >--
> >2.25.1
> >
> >
>
--
Isaku Yamahata <isaku.yamahata@...el.com>