Message-ID: <8e11fd2e-d77b-46cc-94c9-e542003c4080@linux.intel.com>
Date: Wed, 21 May 2025 11:30:42 +0800
From: Binbin Wu <binbin.wu@...ux.intel.com>
To: Yan Zhao <yan.y.zhao@...el.com>
Cc: pbonzini@...hat.com, seanjc@...gle.com, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, x86@...nel.org, rick.p.edgecombe@...el.com,
dave.hansen@...el.com, kirill.shutemov@...el.com, tabba@...gle.com,
ackerleytng@...gle.com, quic_eberman@...cinc.com, michael.roth@....com,
david@...hat.com, vannapurve@...gle.com, vbabka@...e.cz, jroedel@...e.de,
thomas.lendacky@....com, pgonda@...gle.com, zhiquan1.li@...el.com,
fan.du@...el.com, jun.miao@...el.com, ira.weiny@...el.com,
isaku.yamahata@...el.com, xiaoyao.li@...el.com, chao.p.peng@...el.com
Subject: Re: [RFC PATCH 20/21] KVM: x86: Force a prefetch fault's max mapping
level to 4KB for TDX
On 4/24/2025 11:09 AM, Yan Zhao wrote:
> Introduce a "prefetch" parameter to the private_max_mapping_level hook and
> enforce a 4KB max mapping level for prefetch faults on private memory. This
> is a preparation for ignoring huge page splitting in the fault path.
>
> If a prefetch fault results in a 2MB huge leaf in the mirror page table,
> there may not be a vCPU available to accept the corresponding 2MB huge leaf
> in the S-EPT if the TD is not configured to receive #VE for page
> acceptance. Consequently, if a vCPU accepts the page at 4KB level, it will
> trigger an EPT violation to split the 2MB huge leaf generated by the
> prefetch fault.
>
> Since handling the BUSY error from SEAMCALLs for huge page splitting is
> more involved in the fault path, which holds kvm->mmu_lock for reading,
> force the max mapping level of a prefetch fault of private memory to 4KB
> to prevent potential splitting.
>
> Since prefetch faults for private memory are uncommon after the TD's build
> time, enforcing a 4KB mapping level is unlikely to cause any performance
> degradation.
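The clamping described in the quoted change can be sketched as follows. This is a hypothetical simplification, not the actual KVM/TDX code: the function name, its signature, and the level constants are assumptions mirroring KVM's PG_LEVEL_* convention.

```c
/* Assumed x86 page-table level encoding, as in KVM's PG_LEVEL_* values. */
#define PG_LEVEL_4K 1
#define PG_LEVEL_2M 2

/* Hypothetical sketch of the hook with the new "prefetch" parameter:
 * a prefetch fault is clamped to 4KB so that no 2MB S-EPT leaf is
 * installed that a vCPU accepting at 4KB would later have to split. */
int private_max_mapping_level(int prefetch, int supported_level)
{
    if (prefetch)
        return PG_LEVEL_4K;
    return supported_level;
}
```

With this shape, only the prefetch path is affected; normal EPT-violation faults keep whatever level the backing memory supports.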
I am wondering what the use cases for KVM_PRE_FAULT_MEMORY are.
Is there an API usage guide that tells userspace not to use it to pre-fault a
large amount of memory? If not, and userspace does use it to pre-fault a lot of
memory, the claim that this is "unlikely to cause any performance degradation"
might not hold.