Date: Mon, 13 May 2024 13:38:39 -0700
From: Isaku Yamahata <isaku.yamahata@...el.com>
To: Binbin Wu <binbin.wu@...ux.intel.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org, seanjc@...gle.com, michael.roth@....com,
	isaku.yamahata@...el.com, thomas.lendacky@....com,
	isaku.yamahata@...ux.intel.com
Subject: Re: [PATCH 02/21] KVM: Allow page-sized MMU caches to be initialized
 with custom 64-bit values

On Tue, Mar 26, 2024 at 11:56:35PM +0800,
Binbin Wu <binbin.wu@...ux.intel.com> wrote:

> On 3/5/2024 2:55 PM, Binbin Wu wrote:
> > 
> > 
> > On 2/28/2024 7:20 AM, Paolo Bonzini wrote:
> > > From: Sean Christopherson <seanjc@...gle.com>
> > > 
> > > Add support to MMU caches for initializing a page with a custom 64-bit
> > > value, e.g. to pre-fill an entire page table with non-zero PTE values.
> > > The functionality will be used by x86 to support Intel's TDX, which needs
> > > to set bit 63 in all non-present PTEs in order to prevent !PRESENT page
> > > faults from getting reflected into the guest (Intel's EPT Violation #VE
> > > architecture made the less than brilliant decision of having the per-PTE
> > > behavior be opt-out instead of opt-in).
> > > 
> > > Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> > > Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
> > > Message-Id: <5919f685f109a1b0ebc6bd8fc4536ee94bcc172d.1705965635.git.isaku.yamahata@...el.com>
> > > Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> > > ---
> > >   include/linux/kvm_types.h |  1 +
> > >   virt/kvm/kvm_main.c       | 16 ++++++++++++++--
> > >   2 files changed, 15 insertions(+), 2 deletions(-)
> > 
> > Reviewed-by: Binbin Wu <binbin.wu@...ux.intel.com>
> > 
> > > 
> > > diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> > > index d93f6522b2c3..827ecc0b7e10 100644
> > > --- a/include/linux/kvm_types.h
> > > +++ b/include/linux/kvm_types.h
> > > @@ -86,6 +86,7 @@ struct gfn_to_pfn_cache {
> > >   struct kvm_mmu_memory_cache {
> > >       gfp_t gfp_zero;
> > >       gfp_t gfp_custom;
> > > +    u64 init_value;
> > >       struct kmem_cache *kmem_cache;
> > >       int capacity;
> > >       int nobjs;
> > > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > > index 9c99c9373a3e..c9828feb7a1c 100644
> > > --- a/virt/kvm/kvm_main.c
> > > +++ b/virt/kvm/kvm_main.c
> > > @@ -401,12 +401,17 @@ static void kvm_flush_shadow_all(struct kvm *kvm)
> > >   static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
> > >                              gfp_t gfp_flags)
> > >   {
> > > +    void *page;
> > > +
> > >       gfp_flags |= mc->gfp_zero;
> > > 
> > >       if (mc->kmem_cache)
> > >           return kmem_cache_alloc(mc->kmem_cache, gfp_flags);
> > > -    else
> > > -        return (void *)__get_free_page(gfp_flags);
> > > +
> > > +    page = (void *)__get_free_page(gfp_flags);
> > > +    if (page && mc->init_value)
> > > +        memset64(page, mc->init_value, PAGE_SIZE / sizeof(mc->init_value));
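
(Aside on how this gets consumed: the later x86 patches are expected to set
init_value on the shadow page cache so that every page handed out by the
cache comes pre-filled with the "suppress #VE" value.  A rough, untested
sketch, assuming the bit-63 constant the x86 side introduces ends up being
called SHADOW_NONPRESENT_VALUE:

	/* Sketch of the intended x86 hookup; not part of this patch. */
	vcpu->arch.mmu_shadow_page_cache.init_value = SHADOW_NONPRESENT_VALUE;
	if (!vcpu->arch.mmu_shadow_page_cache.init_value)
		vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;

The gfp_zero part only matters when no custom value is set; with a non-zero
init_value the memset64() above already initializes the whole page.)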
> 
> Do we need a static_assert() to make sure mc->init_value is 64-bit?

That's overkill because an EPT entry is defined as 64-bit and KVM uses u64 for it
uniformly.
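
If we ever did want a compile-time check, a one-liner along these lines
(untested sketch, using the generic sizeof_field() helper) would do, but
given the above I don't think it buys us anything:

	/* Untested sketch; not proposing to actually add this. */
	static_assert(sizeof_field(struct kvm_mmu_memory_cache, init_value) == sizeof(u64));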
-- 
Isaku Yamahata <isaku.yamahata@...el.com>
