Message-ID: <aXFOPP3P-HE6YbEZ@google.com>
Date: Wed, 21 Jan 2026 14:07:56 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Kevin Cheng <chengkev@...gle.com>
Cc: pbonzini@...hat.com, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
yosry.ahmed@...ux.dev
Subject: Re: [PATCH 1/3] KVM: SVM: Fix nested NPF injection to set PFERR_GUEST_{PAGE,FINAL}_MASK
On Wed, Jan 21, 2026, Kevin Cheng wrote:
> When KVM emulates an instruction for L2 and encounters a nested page
> fault (e.g., during string I/O emulation), nested_svm_inject_npf_exit()
> injects an NPF to L1. However, the code incorrectly hardcodes
> (1ULL << 32) for exit_info_1's upper bits when the original exit was
> not an NPF. This always sets PFERR_GUEST_FINAL_MASK even when the fault
> occurred on a page table page, preventing L1 from correctly identifying
> the cause of the fault.
>
> Set PFERR_GUEST_PAGE_MASK in the error code when a nested page fault
> occurs during a guest page table walk, and PFERR_GUEST_FINAL_MASK when
> the fault occurs on the final GPA-to-HPA translation.
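
For other reviewers' reference, the bits in question are KVM's synthetic
page fault error code bits (going from memory, these live in
arch/x86/include/asm/kvm_host.h):

	/* Bit 32: fault occurred on the final GPA->HPA translation. */
	#define PFERR_GUEST_FINAL_MASK	BIT_ULL(32)
	/* Bit 33: fault occurred while walking the guest's page tables. */
	#define PFERR_GUEST_PAGE_MASK	BIT_ULL(33)

i.e. the hardcoded (1ULL << 32) is PFERR_GUEST_FINAL_MASK.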
>
> Widen error_code in struct x86_exception from u16 to u64 to accommodate
> the PFERR_GUEST_* bits (bits 32 and 33).
Please do this in a separate patch. Intel CPUs straight up don't support 32-bit
error codes, let alone 64-bit error codes, so this seemingly innocuous change
needs to be accompanied by a lengthy changelog that effectively audits all usage
to "prove" this change is ok.
> Update nested_svm_inject_npf_exit() to use fault->error_code directly
> instead of hardcoding the upper bits. Also add a WARN_ON_ONCE if neither
> PFERR_GUEST_FINAL_MASK nor PFERR_GUEST_PAGE_MASK is set, as this would
> indicate a bug in the page fault handling code.
>
> Signed-off-by: Kevin Cheng <chengkev@...gle.com>
> ---
> arch/x86/kvm/kvm_emulate.h | 2 +-
> arch/x86/kvm/mmu/paging_tmpl.h | 22 ++++++++++------------
> arch/x86/kvm/svm/nested.c | 11 +++++------
> 3 files changed, 16 insertions(+), 19 deletions(-)
>
> diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
> index fb3dab4b5a53e..ff4f9b0a01ff7 100644
> --- a/arch/x86/kvm/kvm_emulate.h
> +++ b/arch/x86/kvm/kvm_emulate.h
> @@ -22,7 +22,7 @@ enum x86_intercept_stage;
> struct x86_exception {
> u8 vector;
> bool error_code_valid;
> - u16 error_code;
> + u64 error_code;
> bool nested_page_fault;
> u64 address; /* cr2 or nested page fault gpa */
> u8 async_page_fault;
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 901cd2bd40b84..923179bfd5c74 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -379,18 +379,12 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
> real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(table_gfn),
> nested_access, &walker->fault);
>
> - /*
> - * FIXME: This can happen if emulation (for of an INS/OUTS
> - * instruction) triggers a nested page fault. The exit
> - * qualification / exit info field will incorrectly have
> - * "guest page access" as the nested page fault's cause,
> - * instead of "guest page structure access". To fix this,
> - * the x86_exception struct should be augmented with enough
> - * information to fix the exit_qualification or exit_info_1
> - * fields.
> - */
> - if (unlikely(real_gpa == INVALID_GPA))
> + if (unlikely(real_gpa == INVALID_GPA)) {
> +#if PTTYPE != PTTYPE_EPT
> + walker->fault.error_code |= PFERR_GUEST_PAGE_MASK;
> +#endif
Why exclude EPT? EPT doesn't use the error code _verbatim_, but EPT shares the
concept/reporting of intermediate vs. final walks.
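
If there's an actual dependency that prevents setting the synthetic bits for
EPT, that needs to be called out in the changelog.  Otherwise, I'd expect
this to Just Work for both, e.g. (completely untested):

	if (unlikely(real_gpa == INVALID_GPA)) {
		/*
		 * Flag the fault as occurring on a guest page table
		 * access regardless of PTTYPE; the vendor code that
		 * builds exit_info_1 / the exit qualification can
		 * translate or ignore the synthetic bits as needed.
		 */
		walker->fault.error_code |= PFERR_GUEST_PAGE_MASK;
		return 0;
	}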
> return 0;
> + }
>
> slot = kvm_vcpu_gfn_to_memslot(vcpu, gpa_to_gfn(real_gpa));
> if (!kvm_is_visible_memslot(slot))
> @@ -446,8 +440,12 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
> #endif
>
> real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(gfn), access, &walker->fault);
> - if (real_gpa == INVALID_GPA)
> + if (real_gpa == INVALID_GPA) {
> +#if PTTYPE != PTTYPE_EPT
> + walker->fault.error_code |= PFERR_GUEST_FINAL_MASK;
> +#endif
Same thing here.
> return 0;
> + }
>
> walker->gfn = real_gpa >> PAGE_SHIFT;
>
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index de90b104a0dd5..f8dfd5c333023 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -40,18 +40,17 @@ static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu,
> struct vmcb *vmcb = svm->vmcb;
>
> if (vmcb->control.exit_code != SVM_EXIT_NPF) {
> - /*
> - * TODO: track the cause of the nested page fault, and
> - * correctly fill in the high bits of exit_info_1.
> - */
> - vmcb->control.exit_code = SVM_EXIT_NPF;
> - vmcb->control.exit_info_1 = (1ULL << 32);
> + vmcb->control.exit_info_1 = fault->error_code;
> vmcb->control.exit_info_2 = fault->address;
> }
>
> + vmcb->control.exit_code = SVM_EXIT_NPF;
> vmcb->control.exit_info_1 &= ~0xffffffffULL;
> vmcb->control.exit_info_1 |= fault->error_code;
So... what happens when exit_info_1 already has PFERR_GUEST_PAGE_MASK, and then
@fault sets PFERR_GUEST_FINAL_MASK? Presumably that can't/shouldn't happen, but
nothing in the changelog explains why such a scenario is impossible, and nothing
in the code hardens KVM against such goofs.
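
At the very least, something like this (untested) to yell if the two causes
ever conflict:

	/*
	 * Assert that the fault isn't reported as occurring on both a
	 * guest page table access and the final translation, e.g. if the
	 * original NPF recorded one cause and @fault now claims the other.
	 */
	WARN_ON_ONCE((vmcb->control.exit_info_1 & PFERR_GUEST_PAGE_MASK) &&
		     (fault->error_code & PFERR_GUEST_FINAL_MASK));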
> + WARN_ON_ONCE(!(vmcb->control.exit_info_1 &
> + (PFERR_GUEST_FINAL_MASK | PFERR_GUEST_PAGE_MASK)));
> +
> nested_svm_vmexit(svm);
> }
>
> --
> 2.52.0.457.g6b5491de43-goog
>