Message-ID: <a3d0b8dd-4831-4cf3-839e-ef40bdcea234@linux.intel.com>
Date: Wed, 27 Mar 2024 08:47:54 +0800
From: Binbin Wu <binbin.wu@...ux.intel.com>
To: Isaku Yamahata <isaku.yamahata@...el.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
isaku.yamahata@...il.com, Paolo Bonzini <pbonzini@...hat.com>,
erdemaktas@...gle.com, Sean Christopherson <seanjc@...gle.com>,
Sagi Shahar <sagis@...gle.com>, Kai Huang <kai.huang@...el.com>,
chen.bo@...el.com, hang.yuan@...el.com, tina.zhang@...el.com,
isaku.yamahata@...ux.intel.com
Subject: Re: [PATCH v19 048/130] KVM: Allow page-sized MMU caches to be
initialized with custom 64-bit values
On 3/27/2024 1:34 AM, Isaku Yamahata wrote:
> On Tue, Mar 26, 2024 at 11:53:02PM +0800,
> Binbin Wu <binbin.wu@...ux.intel.com> wrote:
>
>>
>> On 2/26/2024 4:25 PM, isaku.yamahata@...el.com wrote:
>>> From: Sean Christopherson <seanjc@...gle.com>
>>>
>>> Add support to MMU caches for initializing a page with a custom 64-bit
>>> value, e.g. to pre-fill an entire page table with non-zero PTE values.
>>> The functionality will be used by x86 to support Intel's TDX, which needs
>>> to set bit 63 in all non-present PTEs in order to prevent !PRESENT page
>>> faults from getting reflected into the guest (Intel's EPT Violation #VE
>>> architecture made the less than brilliant decision of having the per-PTE
>>> behavior be opt-out instead of opt-in).
>>>
>>> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
>>> Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
>>> ---
>>>   include/linux/kvm_types.h |  1 +
>>>   virt/kvm/kvm_main.c       | 16 ++++++++++++++--
>>>   2 files changed, 15 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
>>> index 9d1f7835d8c1..60c8d5c9eab9 100644
>>> --- a/include/linux/kvm_types.h
>>> +++ b/include/linux/kvm_types.h
>>> @@ -94,6 +94,7 @@ struct gfn_to_pfn_cache {
>>>  struct kvm_mmu_memory_cache {
>>>  	gfp_t gfp_zero;
>>>  	gfp_t gfp_custom;
>>> +	u64 init_value;
>>>  	struct kmem_cache *kmem_cache;
>>>  	int capacity;
>>>  	int nobjs;
>>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>>> index de38f308738e..d399009ef1d7 100644
>>> --- a/virt/kvm/kvm_main.c
>>> +++ b/virt/kvm/kvm_main.c
>>> @@ -401,12 +401,17 @@ static void kvm_flush_shadow_all(struct kvm *kvm)
>>>  static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
>>>  						gfp_t gfp_flags)
>>>  {
>>> +	void *page;
>>> +
>>>  	gfp_flags |= mc->gfp_zero;
>>>  	if (mc->kmem_cache)
>>>  		return kmem_cache_alloc(mc->kmem_cache, gfp_flags);
>>> -	else
>>> -		return (void *)__get_free_page(gfp_flags);
>>> +
>>> +	page = (void *)__get_free_page(gfp_flags);
>>> +	if (page && mc->init_value)
>>> +		memset64(page, mc->init_value, PAGE_SIZE / sizeof(mc->init_value));
>> Do we need a static_assert() to make sure mc->init_value is 64-bit?
> I don't see much value. Is your concern about the sizeof() part?
> If so, we can replace it with 8.
>
> memset64(page, mc->init_value, PAGE_SIZE / 8);
Yes, but it's trivial. So, up to you. :)
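
To make the suggestion concrete, below is roughly the check I had in mind,
kept next to the memset64() call so the element count can't silently go
stale. This is only a sketch using sizeof_field(), not something from the
posted patch:

	/*
	 * Sketch only: build-time check that init_value stays 64-bit, so that
	 * PAGE_SIZE / sizeof(mc->init_value) keeps matching memset64()'s u64
	 * element size.
	 */
	static_assert(sizeof_field(struct kvm_mmu_memory_cache, init_value) ==
		      sizeof(u64));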
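
As a side note on how the new field gets consumed: I'd expect the x86/TDX
side to wire it up roughly as in the sketch below. SHADOW_NONPRESENT_VALUE
is only my guess at the name a later patch will use, so treat this purely
as an illustration:

	/*
	 * Illustrative sketch, not part of this patch: have the shadow-page
	 * cache pre-fill page tables with a "suppress #VE" non-present value
	 * (bit 63 set), and keep zero-initialization when no custom value is
	 * needed.
	 */
	vcpu->arch.mmu_shadow_page_cache.init_value = SHADOW_NONPRESENT_VALUE;
	if (!vcpu->arch.mmu_shadow_page_cache.init_value)
		vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;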