Message-ID: <6a5b78f8-0fbe-fbec-8313-f7759e2483b0@redhat.com>
Date: Wed, 30 Sep 2020 08:26:28 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>,
Ben Gardon <bgardon@...gle.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Cannon Matthews <cannonmatthews@...gle.com>,
Peter Xu <peterx@...hat.com>, Peter Shier <pshier@...gle.com>,
Peter Feiner <pfeiner@...gle.com>,
Junaid Shahid <junaids@...gle.com>,
Jim Mattson <jmattson@...gle.com>,
Yulei Zhang <yulei.kernel@...il.com>,
Wanpeng Li <kernellwp@...il.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Xiao Guangrong <xiaoguangrong.eric@...il.com>
Subject: Re: [PATCH 04/22] kvm: mmu: Allocate and free TDP MMU roots
On 30/09/20 08:06, Sean Christopherson wrote:
>> +static struct kvm_mmu_page *alloc_tdp_mmu_root(struct kvm_vcpu *vcpu,
>> +					       union kvm_mmu_page_role role)
>> +{
>> +	struct kvm_mmu_page *new_root;
>> +	struct kvm_mmu_page *root;
>> +
>> +	new_root = kvm_mmu_memory_cache_alloc(
>> +			&vcpu->arch.mmu_page_header_cache);
>> +	new_root->spt = kvm_mmu_memory_cache_alloc(
>> +			&vcpu->arch.mmu_shadow_page_cache);
>> +	set_page_private(virt_to_page(new_root->spt), (unsigned long)new_root);
>> +
>> +	new_root->role.word = role.word;
>> +	new_root->root_count = 1;
>> +	new_root->gfn = 0;
>> +	new_root->tdp_mmu_page = true;
>> +
>> +	spin_lock(&vcpu->kvm->mmu_lock);
>> +
>> +	/* Check that no matching root exists before adding this one. */
>> +	root = find_tdp_mmu_root_with_role(vcpu->kvm, role);
>> +	if (root) {
>> +		get_tdp_mmu_root(vcpu->kvm, root);
>> +		spin_unlock(&vcpu->kvm->mmu_lock);
> Hrm, I'm not a big fan of dropping locks in the middle of functions, but the
> alternatives aren't great. :-/ Best I can come up with is
>
>	if (root)
>		get_tdp_mmu_root()
>	else
>		list_add();
>
>	spin_unlock();
>
>	if (root) {
>		free_page()
>		kmem_cache_free()
>	} else {
>		root = new_root;
>	}
>
>	return root;
>
> Not sure that's any better.
>
>> +		free_page((unsigned long)new_root->spt);
>> +		kmem_cache_free(mmu_page_header_cache, new_root);
>> +		return root;
>> +	}
>> +
>> +	list_add(&new_root->link, &vcpu->kvm->arch.tdp_mmu_roots);
>> +	spin_unlock(&vcpu->kvm->mmu_lock);
>> +
>> +	return new_root;
>> +}
>> +
>> +static struct kvm_mmu_page *get_tdp_mmu_vcpu_root(struct kvm_vcpu *vcpu)
>> +{
>> +	struct kvm_mmu_page *root;
>> +	union kvm_mmu_page_role role;
>> +
>> +	role = vcpu->arch.mmu->mmu_role.base;
>> +	role.level = vcpu->arch.mmu->shadow_root_level;
>> +	role.direct = true;
>> +	role.gpte_is_8_bytes = true;
>> +	role.access = ACC_ALL;
>> +
>> +	spin_lock(&vcpu->kvm->mmu_lock);
>> +
>> +	/* Search for an already allocated root with the same role. */
>> +	root = find_tdp_mmu_root_with_role(vcpu->kvm, role);
>> +	if (root) {
>> +		get_tdp_mmu_root(vcpu->kvm, root);
>> +		spin_unlock(&vcpu->kvm->mmu_lock);
> Rather than manually unlock and return, this can be
>
>	if (root)
>		get_tdp_mmu_root();
>
>	spin_unlock()
>
>	if (!root)
>		root = alloc_tdp_mmu_root();
>
>	return root;
>
> You could also add a helper to do the "get" along with the "find". Not sure
> if that's worth the code.
All in all I don't think it's any clearer than Ben's code. At least in
his case the "if"s clearly point at the double-checked locking pattern.
Paolo