Message-ID: <o7m5gh76crbgzlfvq4lbp6ymuzbgze25qphlhsezl2ox5rfjuv@3xh7gqh5dmlt>
Date: Mon, 20 Oct 2025 17:21:07 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Jim Mattson <jmattson@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, Shuah Khan <shuah@...nel.org>,
Sean Christopherson <seanjc@...gle.com>, Bibo Mao <maobibo@...ngson.cn>,
Huacai Chen <chenhuacai@...nel.org>, Andrew Jones <ajones@...tanamicro.com>,
Claudio Imbrenda <imbrenda@...ux.ibm.com>, "Pratik R. Sampat" <prsampat@....com>,
Kai Huang <kai.huang@...el.com>, Eric Auger <eric.auger@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, linux-kselftest@...r.kernel.org
Subject: Re: [PATCH 2/4] KVM: selftests: Use a loop to walk guest page tables
On Wed, Sep 17, 2025 at 02:48:38PM -0700, Jim Mattson wrote:
> Walk the guest page tables via a loop when searching for a PTE,
> instead of using unique variables for each level of the page tables.
>
> This simplifies the code and makes it easier to support 5-level paging
> in the future.
>
> Signed-off-by: Jim Mattson <jmattson@...gle.com>
> ---
> .../testing/selftests/kvm/lib/x86/processor.c | 21 +++++++------------
> 1 file changed, 8 insertions(+), 13 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
> index 0238e674709d..433365c8196d 100644
> --- a/tools/testing/selftests/kvm/lib/x86/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86/processor.c
> @@ -270,7 +270,8 @@ static bool vm_is_target_pte(uint64_t *pte, int *level, int current_level)
> uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
> int *level)
> {
> - uint64_t *pml4e, *pdpe, *pde;
> + uint64_t *pte = &vm->pgd;
> + int current_level;
>
> TEST_ASSERT(!vm->arch.is_pt_protected,
> "Walking page tables of protected guests is impossible");
> @@ -291,19 +292,13 @@ uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
> TEST_ASSERT(vaddr == (((int64_t)vaddr << 16) >> 16),
> "Canonical check failed. The virtual address is invalid.");
>
> - pml4e = virt_get_pte(vm, &vm->pgd, vaddr, PG_LEVEL_512G);
> - if (vm_is_target_pte(pml4e, level, PG_LEVEL_512G))
> - return pml4e;
> -
> - pdpe = virt_get_pte(vm, pml4e, vaddr, PG_LEVEL_1G);
> - if (vm_is_target_pte(pdpe, level, PG_LEVEL_1G))
> - return pdpe;
> -
> - pde = virt_get_pte(vm, pdpe, vaddr, PG_LEVEL_2M);
> - if (vm_is_target_pte(pde, level, PG_LEVEL_2M))
> - return pde;
> + for (current_level = vm->pgtable_levels; current_level > 0; current_level--) {
This should be current_level >= PG_LEVEL_4K. It's functionally
equivalent, but makes it obvious that the loop bottoms out at the 4K
level.
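
Something like this (untested, just to show the shape I have in mind;
identifiers taken straight from your patch):

	for (current_level = vm->pgtable_levels;
	     current_level >= PG_LEVEL_4K; current_level--) {
		pte = virt_get_pte(vm, pte, vaddr, current_level);
		if (vm_is_target_pte(pte, level, current_level))
			return pte;
	}
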
> + pte = virt_get_pte(vm, pte, vaddr, current_level);
> + if (vm_is_target_pte(pte, level, current_level))
It looks like vm_is_target_pte() was written with the assumption that
it operates on an upper-level PTE, but I think it works correctly on a
4K PTE as well, so the final loop iteration should be fine.
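
If I'm reading vm_is_target_pte() correctly (quoting from memory, so
this may not be exactly what's in the tree), it's roughly:

	static bool vm_is_target_pte(uint64_t *pte, int *level, int current_level)
	{
		/* Huge mapping: record the level the walk stopped at. */
		if (*pte & PTE_LARGE_MASK) {
			TEST_ASSERT(*level == PG_LEVEL_NONE || *level == current_level,
				    "Unexpected hugepage at level %d", current_level);
			*level = current_level;
		}

		return *level == current_level;
	}

On the last iteration the large-page check shouldn't fire for a normal
4K PTE, and the *level == current_level comparison does the right thing
whether the caller asked for PG_LEVEL_4K (return from inside the loop)
or passed PG_LEVEL_NONE (fall through and return pte after the loop).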
> + return pte;
> + }
>
> - return virt_get_pte(vm, pde, vaddr, PG_LEVEL_4K);
> + return pte;
> }
>
> uint64_t *vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr)
> --
> 2.51.0.470.ga7dc726c21-goog
>