Message-ID: <YrYHc4BIAf+pGRhW@google.com>
Date:   Fri, 24 Jun 2022 18:50:27 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     David Matlack <dmatlack@...gle.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] KVM: x86/mmu: Avoid subtle pointer arithmetic in
 kvm_mmu_child_role()

On Fri, Jun 24, 2022, David Matlack wrote:
> On Fri, Jun 24, 2022 at 05:18:06PM +0000, Sean Christopherson wrote:
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -2168,7 +2168,8 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
> >  	return __kvm_mmu_get_shadow_page(vcpu->kvm, vcpu, &caches, gfn, role);
> >  }
> >  
> > -static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, unsigned int access)
> > +static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct,
> > +						  unsigned int access)
> >  {
> >  	struct kvm_mmu_page *parent_sp = sptep_to_sp(sptep);
> >  	union kvm_mmu_page_role role;
> > @@ -2195,13 +2196,19 @@ static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, unsig
> >  	 * uses 2 PAE page tables, each mapping a 2MiB region. For these,
> >  	 * @role.quadrant encodes which half of the region they map.
> >  	 *
> > -	 * Note, the 4 PAE page directories are pre-allocated and the quadrant
> > -	 * assigned in mmu_alloc_root(). So only page tables need to be handled
> > -	 * here.
> > +	 * Concretely, a 4-byte PDE consumes bits 31:22, while an 8-byte PDE
> > +	 * consumes bits 29:21.  To consume bits 31:30, KVM uses 4 shadow
> > +	 * PDPTEs; those 4 PAE page directories are pre-allocated and their
> > +	 * quadrant is assigned in mmu_alloc_root().  A 4-byte PTE consumes
> > +	 * bits 21:12, while an 8-byte PTE consumes bits 20:12.  To consume
> > +	 * bit 21 in the PTE (the child here), KVM propagates that bit to the
> > +	 * quadrant, i.e. sets quadrant to '0' or '1'.  The parent 8-byte PDE
> > +	 * covers bit 21 (see above), thus the quadrant is calculated from the
> > +	 * _least_ significant bit of the PDE index.
> >  	 */
> >  	if (role.has_4_byte_gpte) {
> >  		WARN_ON_ONCE(role.level != PG_LEVEL_4K);
> > -		role.quadrant = (sptep - parent_sp->spt) % 2;
> > +		role.quadrant = ((unsigned long)sptep / sizeof(*sptep)) & 1;
> >  	}
> 
> I find both difficult to read TBH.

No argument there.  My objection to the pointer arithmetic is that it's easy to
misread.

> And "sptep -> sp->spt" is repeated in other places.
> 
> How about using this opportunity to introduce a helper that turns an
> sptep into an index to use here and clean up the other users?
> 
> e.g.
> 
> static inline int spte_index(u64 *sptep)
> {
>         return ((unsigned long)sptep / sizeof(*sptep)) & (SPTE_ENT_PER_PAGE - 1);
> }
> 
> Then kvm_mmu_child_role() becomes:
> 
>         if (role.has_4_byte_gpte) {
>         	WARN_ON_ONCE(role.level != PG_LEVEL_4K);
>         	role.quadrant = spte_index(sptep) & 1;
>         }

Nice!  I like this a lot.  Will do in v2.
