Message-ID: <20200930153824.GA32672@linux.intel.com>
Date:   Wed, 30 Sep 2020 08:38:31 -0700
From:   Sean Christopherson <sean.j.christopherson@...el.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     Ben Gardon <bgardon@...gle.com>, linux-kernel@...r.kernel.org,
        kvm@...r.kernel.org, Cannon Matthews <cannonmatthews@...gle.com>,
        Peter Xu <peterx@...hat.com>, Peter Shier <pshier@...gle.com>,
        Peter Feiner <pfeiner@...gle.com>,
        Junaid Shahid <junaids@...gle.com>,
        Jim Mattson <jmattson@...gle.com>,
        Yulei Zhang <yulei.kernel@...il.com>,
        Wanpeng Li <kernellwp@...il.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Xiao Guangrong <xiaoguangrong.eric@...il.com>
Subject: Re: [PATCH 04/22] kvm: mmu: Allocate and free TDP MMU roots

On Wed, Sep 30, 2020 at 08:26:28AM +0200, Paolo Bonzini wrote:
> On 30/09/20 08:06, Sean Christopherson wrote:
> >> +static struct kvm_mmu_page *get_tdp_mmu_vcpu_root(struct kvm_vcpu *vcpu)
> >> +{
> >> +	struct kvm_mmu_page *root;
> >> +	union kvm_mmu_page_role role;
> >> +
> >> +	role = vcpu->arch.mmu->mmu_role.base;
> >> +	role.level = vcpu->arch.mmu->shadow_root_level;
> >> +	role.direct = true;
> >> +	role.gpte_is_8_bytes = true;
> >> +	role.access = ACC_ALL;
> >> +
> >> +	spin_lock(&vcpu->kvm->mmu_lock);
> >> +
> >> +	/* Search for an already allocated root with the same role. */
> >> +	root = find_tdp_mmu_root_with_role(vcpu->kvm, role);
> >> +	if (root) {
> >> +		get_tdp_mmu_root(vcpu->kvm, root);
> >> +		spin_unlock(&vcpu->kvm->mmu_lock);
> > Rather than manually unlock and return, this can be
> > 
> > 	if (root)
> > 	get_tdp_mmu_root();
> > 
> > 	spin_unlock()
> > 
> > 	if (!root)
> > 		root = alloc_tdp_mmu_root();
> > 
> > 	return root;
> > 
> > You could also add a helper to do the "get" along with the "find".  Not sure
> > if that's worth the code.
> 
> All in all I don't think it's any clearer than Ben's code.  At least in
> his case the "if"s clearly point at the double-checked locking pattern.

Actually, why is this even dropping the lock to do the alloc?  The allocs
come from the vCPU's pre-filled memory caches, whose allocation helpers are
specifically designed to be called while holding the spin lock.
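
(For reference, a minimal sketch of that split using the generic
kvm_mmu_memory_cache helpers; the two function names below are illustrative,
not from the actual patch:

static int tdp_mmu_topup_caches(struct kvm_vcpu *vcpu)
{
        /* May sleep, so must be called before acquiring mmu_lock. */
        return kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
                                          PT64_ROOT_MAX_LEVEL);
}

static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
{
        /* Pops a pre-filled object, never sleeps; safe under mmu_lock. */
        return kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
}

The caller tops up outside the lock; the alloc under the lock can't fail.)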

Also relevant is that, other than this code, the only user of
find_tdp_mmu_root_with_role() is kvm_tdp_mmu_root_hpa_for_role(), and that
helper is itself unused.  I.e. the "find" can be open coded.

Putting those two together yields this, which IMO is much cleaner.

static struct kvm_mmu_page *get_tdp_mmu_vcpu_root(struct kvm_vcpu *vcpu)
{
        union kvm_mmu_page_role role;
        struct kvm *kvm = vcpu->kvm;
        struct kvm_mmu_page *root;

        role = page_role_for_level(vcpu, vcpu->arch.mmu->shadow_root_level);

        spin_lock(&kvm->mmu_lock);

        /* Check for an existing root before allocating a new one. */
        for_each_tdp_mmu_root(kvm, root) {
                if (root->role.word == role.word) {
                        get_tdp_mmu_root(root);
                        spin_unlock(&kvm->mmu_lock);
                        return root;
                }
        }

        root = alloc_tdp_mmu_page(vcpu, 0, vcpu->arch.mmu->shadow_root_level);
        root->root_count = 1;

        list_add(&root->link, &kvm->arch.tdp_mmu_roots);

        spin_unlock(&kvm->mmu_lock);

        return root;
}
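
To be explicit about the refcounting the above assumes, get_tdp_mmu_root()
would just bump root_count under mmu_lock, mirroring the "root_count = 1" on
the allocation path, e.g. (sketch only):

static void get_tdp_mmu_root(struct kvm_mmu_page *root)
{
        /* Callers hold mmu_lock; a found root must already be referenced. */
        WARN_ON(!root->root_count);

        ++root->root_count;
}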
