Message-ID: <e5864cb4-cce8-bd32-04b0-ecb60c058d0b@redhat.com>
Date: Fri, 29 Apr 2022 16:50:02 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Maxim Levitsky <mlevitsk@...hat.com>,
Ben Gardon <bgardon@...gle.com>,
David Matlack <dmatlack@...gle.com>
Subject: Re: [PATCH] KVM: x86/mmu: Do not create SPTEs for GFNs that exceed
host.MAXPHYADDR

On 4/29/22 16:42, Sean Christopherson wrote:
> On Fri, Apr 29, 2022, Paolo Bonzini wrote:
>> On 4/29/22 16:24, Sean Christopherson wrote:
>>> I don't love the divergent memslot behavior, but it's technically correct, so I
>>> can't really argue. Do we want to "officially" document the memslot behavior?
>>>
>>
>> I don't know what you mean by officially document,
>
> Something in kvm/api.rst under KVM_SET_USER_MEMORY_REGION.

I'm not sure the API documentation is the best place, because userspace
does not know whether shadow paging is on (except perhaps indirectly
through other capabilities).
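
For illustration, one such indirect check from userspace would be to
peek at the vendor module parameters.  This is only a rough sketch; the
sysfs paths are assumptions about the usual module layout, not anything
KVM promises:

#include <stdio.h>

/*
 * Illustrative heuristic only (not a KVM ABI): guess whether two-dimensional
 * paging (EPT/NPT) is active by reading the vendor module parameters.
 */
static int tdp_enabled(void)
{
	static const char * const params[] = {
		"/sys/module/kvm_intel/parameters/ept",
		"/sys/module/kvm_amd/parameters/npt",
	};

	for (unsigned int i = 0; i < sizeof(params) / sizeof(params[0]); i++) {
		FILE *f = fopen(params[i], "r");
		int c;

		if (!f)
			continue;
		c = fgetc(f);
		fclose(f);
		if (c != EOF)
			return c == 'Y' || c == '1';
	}
	return -1;	/* module not loaded or parameter missing */
}

int main(void)
{
	int tdp = tdp_enabled();

	printf("TDP (EPT/NPT): %s\n",
	       tdp < 0 ? "unknown" : tdp ? "enabled" : "disabled");
	return 0;
}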

It could even be programmatic, such as returning 52 for
CPUID[0x80000008].  A nested KVM on L1 would then not be able to use the
#PF(RSVD) trick to detect MMIO faults.  That's not a big price to pay,
but I'm not sure it's a good idea in general...
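
For reference, a minimal guest-side sketch of where that value shows up,
purely illustrative and using the compiler's <cpuid.h> helper:

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID.80000008H:EAX[7:0] is the physical address width. */
	if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
		return 1;

	printf("MAXPHYADDR: %u bits\n", eax & 0xff);
	return 0;
}

If that reports 52, there are no reserved physical-address bits left for
L1's KVM to set in its MMIO SPTEs, which is why the #PF(RSVD) trick
above goes away.
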
Paolo
>
>> but at least I have relied on it to test KVM's MAXPHYADDR=52 cases before
>> such hardware existed. :)
>
> Ah, that's a very good reason to support this for shadow paging. Maybe throw
> something about testing in the changelog? Without considering the testing angle,
> it looks like KVM supports max=52 for !TDP just because it can; practically
> speaking, there's unlikely to be a use case for exposing that much memory to a
> guest when using shadow paging.
>