Message-ID: <YtdHXjFhxPXCvhf5@google.com>
Date: Wed, 20 Jul 2022 00:07:58 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Lai Jiangshan <jiangshanlai@...il.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Maxim Levitsky <mlevitsk@...hat.com>,
David Matlack <dmatlack@...gle.com>,
Lai Jiangshan <jiangshan.ljs@...group.com>
Subject: Re: [PATCH V3 08/12] KVM: X86/MMU: Allocate mmu->pae_root for PAE paging on-demand
On Tue, Jul 19, 2022, Sean Christopherson wrote:
> On Sat, May 21, 2022, Lai Jiangshan wrote:
> > + /*
> > + * Allocate a page to hold the four PDPTEs for PAE paging when emulating
> > + * 32-bit mode. CR3 is only 32 bits even on x86_64 in this case.
> > + * Therefore we need to allocate the PDP table in the first 4GB of
> > + * memory, which happens to fit the DMA32 zone.
> > + */
> > + page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO | __GFP_DMA32);
>
> Leave off __GFP_ZERO, it's unnecessary in both cases, and actively misleading
> when TDP is disabled. KVM _must_ write the page after making it decrypted. And
> since I can't find any code that actually does initialize "pae_root", I suspect
> this series is buggy.
>
> But if there is a bug, it was introduced earlier in this series, either by
>
> KVM: X86/MMU: Add local shadow pages
>
> or by
>
> KVM: X86/MMU: Activate local shadow pages and remove old logic
>
> depending on whether you want to blame the function that is buggy, or the patch
> that uses the buggy function.
>
> The right place to initialize the root is kvm_mmu_alloc_local_shadow_page().
> KVM sets __GFP_ZERO for mmu_shadow_page_cache, i.e. relies on new sp->spt pages
> to be zeroed prior to "allocating" from the cache.
>
> The PAE root backing page on the other hand is allocated once and then reused
> over and over.
>
> if (role.level == PT32E_ROOT_LEVEL &&
> !WARN_ON_ONCE(!vcpu->arch.mmu->pae_root)) {
> sp->spt = vcpu->arch.mmu->pae_root;
> kvm_mmu_initialize_pae_root(sp->spt): <==== something like this
> } else {
> sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
> }
Ah, I believe this is handled for the non-SME case in mmu_free_local_root_page().
But that won't play nice with the decryption path. And either way, the PDPTEs
should be explicitly initialized/zeroed when the shadow page is "allocated".
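For reference, a standalone sketch of what such an initializer could look like.
The function itself is hypothetical (it doesn't exist in the tree yet), and the
INVALID_PAE_ROOT value and four-entry reset mirror the existing loop this patch
removes:

```c
#include <stdint.h>

typedef uint64_t u64;

/* In KVM this is defined in mmu_internal.h; value assumed here. */
#define INVALID_PAE_ROOT 0

/*
 * Hypothetical initializer: the PAE root backing page is allocated once
 * and reused across roots, so each "allocation" of the shadow page must
 * explicitly reset all four PDPTEs rather than rely on __GFP_ZERO having
 * zeroed the page at allocation time.
 */
static void kvm_mmu_initialize_pae_root(u64 *spt)
{
	int i;

	for (i = 0; i < 4; i++)
		spt[i] = INVALID_PAE_ROOT;
}
```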
> > - for (i = 0; i < 4; ++i)
> > - mmu->pae_root[i] = INVALID_PAE_ROOT;
>
> Please remove this code in a separate patch. I don't care if it is removed before
> or after (I'm pretty sure the existing behavior is paranoia), but I don't want
> multiple potentially-functional changes in this patch.