Message-ID: <ZNwBeN8mGr1sJJ6i@google.com>
Date: Tue, 15 Aug 2023 15:51:36 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Zeng Guang <guang.zeng@...el.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
H Peter Anvin <hpa@...or.com>, kvm@...r.kernel.org,
x86@...nel.org, linux-kernel@...r.kernel.org,
Binbin Wu <binbin.wu@...ux.intel.com>
Subject: Re: [PATCH v2 2/8] KVM: x86: Use a new flag for branch instructions
Branch *targets*, not branch instructions.
On Wed, Jul 19, 2023, Zeng Guang wrote:
> From: Binbin Wu <binbin.wu@...ux.intel.com>
>
> Use the new flag X86EMUL_F_BRANCH instead of X86EMUL_F_FETCH in
> assign_eip(), since strictly speaking resolving a branch target is not
> an instruction fetch.
Eh, I'd just drop this paragraph.  As evidenced by this code existing as-is for
years, we wouldn't introduce X86EMUL_F_BRANCH just because resolving a branch
target isn't strictly an instruction fetch.
> Another reason is to distinguish instruction fetches from execution of
> branch instructions, for features that handle the two differently.
Similar to the shortlog, it's about computing the branch target, not executing a
branch instruction. That distinction matters, e.g. a Jcc that is not taken will
*not* follow the branch target, but the instruction is still *executed*. And there
exist instructions that compute branch targets but aren't what most people would
typically consider branch instructions, e.g. XBEGIN.
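
To illustrate, this is roughly how the emulator handles a short Jcc today
(paraphrasing the case in x86_emulate_insn(), not the verbatim upstream code):

	case 0x70 ... 0x7f: /* jcc (short) */
		/*
		 * The branch target is computed and followed by jmp_rel()
		 * iff the condition is met, but the Jcc is emulated, i.e.
		 * "executed", either way.
		 */
		if (test_cc(ctxt->b, ctxt->eflags))
			rc = jmp_rel(ctxt, ctxt->src.val);
		break;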
> A branch instruction is not a data access instruction, so skip the
> execute-only code segment check, as is done for instruction fetches.
Rather than call out individual use cases, I would simply state that as of this
patch, X86EMUL_F_BRANCH and X86EMUL_F_FETCH are identical as far as KVM is
concerned.  That lets the reader know that (a) there's no intended change in
behavior and (b) the intent is to effectively split all consumption of
X86EMUL_F_FETCH into (X86EMUL_F_FETCH | X86EMUL_F_BRANCH).
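
(For context, the kvm_emulate.h hunk isn't quoted below; I'm assuming it just
adds the new flag alongside the existing emulation flags, presumably something
like:

	/* x86-specific emulation flags */
	#define X86EMUL_F_WRITE		BIT(0)
	#define X86EMUL_F_FETCH		BIT(1)
	#define X86EMUL_F_BRANCH	BIT(2)

with BIT(2) being a guess at the next free bit.)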
> Signed-off-by: Binbin Wu <binbin.wu@...ux.intel.com>
> Signed-off-by: Zeng Guang <guang.zeng@...el.com>
> ---
> arch/x86/kvm/emulate.c | 5 +++--
> arch/x86/kvm/kvm_emulate.h | 1 +
> 2 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index 3ddfbc99fa4f..8e706d19ae45 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -721,7 +721,8 @@ static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
> (flags & X86EMUL_F_WRITE))
> goto bad;
> /* unreadable code segment */
> - if (!(flags & X86EMUL_F_FETCH) && (desc.type & 8) && !(desc.type & 2))
> + if (!(flags & (X86EMUL_F_FETCH | X86EMUL_F_BRANCH))
> + && (desc.type & 8) && !(desc.type & 2))
Put the && on the first line, and align the indentation.
/* unreadable code segment */
if (!(flags & (X86EMUL_F_FETCH | X86EMUL_F_BRANCH)) &&
(desc.type & 8) && !(desc.type & 2))
goto bad;