Message-ID: <37dcc0bb-b624-4ea2-976a-51f5bfbd81a6@redhat.com>
Date: Mon, 11 Nov 2019 15:08:16 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: linmiaohe <linmiaohe@...wei.com>, rkrcmar@...hat.com,
sean.j.christopherson@...el.com, vkuznets@...hat.com,
wanpengli@...cent.com, jmattson@...gle.com, joro@...tes.org,
tglx@...utronix.de, mingo@...hat.com, bp@...en8.de, hpa@...or.com
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org, x86@...nel.org
Subject: Re: [PATCH] KVM: X86: avoid unused setup_syscalls_segments call when
SYSCALL check failed
On 09/11/19 09:58, linmiaohe wrote:
> From: Miaohe Lin <linmiaohe@...wei.com>
>
> When the SYSCALL/SYSENTER ability check fails, cs and ss are
> initialized but never used. Delay initializing cs and ss until the
> SYSCALL/SYSENTER ability check has passed.
>
> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
> ---
> arch/x86/kvm/emulate.c | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index 698efb8c3897..952d1a4f4d7e 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -2770,11 +2770,10 @@ static int em_syscall(struct x86_emulate_ctxt *ctxt)
> return emulate_ud(ctxt);
>
> ops->get_msr(ctxt, MSR_EFER, &efer);
> - setup_syscalls_segments(ctxt, &cs, &ss);
> -
> if (!(efer & EFER_SCE))
> return emulate_ud(ctxt);
>
> + setup_syscalls_segments(ctxt, &cs, &ss);
> ops->get_msr(ctxt, MSR_STAR, &msr_data);
> msr_data >>= 32;
> cs_sel = (u16)(msr_data & 0xfffc);
> @@ -2838,12 +2837,11 @@ static int em_sysenter(struct x86_emulate_ctxt *ctxt)
> if (ctxt->mode == X86EMUL_MODE_PROT64)
> return X86EMUL_UNHANDLEABLE;
>
> - setup_syscalls_segments(ctxt, &cs, &ss);
> -
> ops->get_msr(ctxt, MSR_IA32_SYSENTER_CS, &msr_data);
> if ((msr_data & 0xfffc) == 0x0)
> return emulate_gp(ctxt, 0);
>
> + setup_syscalls_segments(ctxt, &cs, &ss);
> ctxt->eflags &= ~(X86_EFLAGS_VM | X86_EFLAGS_IF);
> cs_sel = (u16)msr_data & ~SEGMENT_RPL_MASK;
> ss_sel = cs_sel + 8;
>
Queued, thanks.
Paolo