Message-ID: <997f9fe3-847b-8216-c629-1ad5fdd2ffae@redhat.com>
Date: Wed, 28 Apr 2021 08:25:20 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Ben Gardon <bgardon@...gle.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Cc: Peter Xu <peterx@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Peter Shier <pshier@...gle.com>,
Junaid Shahid <junaids@...gle.com>,
Jim Mattson <jmattson@...gle.com>,
Yulei Zhang <yulei.kernel@...il.com>,
Wanpeng Li <kernellwp@...il.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Xiao Guangrong <xiaoguangrong.eric@...il.com>
Subject: Re: [PATCH 5/6] KVM: x86/mmu: Protect kvm->memslots with a mutex
On 28/04/21 00:36, Ben Gardon wrote:
> +void kvm_arch_assign_memslots(struct kvm *kvm, int as_id,
> +			      struct kvm_memslots *slots)
> +{
> +	mutex_lock(&kvm->arch.memslot_assignment_lock);
> +	rcu_assign_pointer(kvm->memslots[as_id], slots);
> +	mutex_unlock(&kvm->arch.memslot_assignment_lock);
> +}
Does the assignment also need the lock, or only the rmap allocation? I
would prefer the hook to be something like kvm_arch_setup_new_memslots.
(Also it is useful to have a comment somewhere explaining why the
slots_lock does not work. IIUC there would be a deadlock because you'd
be taking the slots_lock inside an SRCU critical region, while usually
the slots_lock critical section is the one that includes a
synchronize_srcu; I should dig that up and document that ordering in
Documentation/virt/kvm too).
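
The ordering hazard above can be sketched roughly as follows (illustrative
kernel-style pseudocode only, not meant to compile; the writer path is the
usual memslot-update pattern, and the reader path is the hypothetical one
that would take slots_lock inside SRCU):

    /* Writer: holds slots_lock across the SRCU grace period. */
    mutex_lock(&kvm->slots_lock);
    rcu_assign_pointer(kvm->memslots[as_id], new_slots);
    synchronize_srcu(&kvm->srcu);     /* waits for all SRCU readers */
    mutex_unlock(&kvm->slots_lock);

    /* Reader: the problematic path, taking slots_lock inside SRCU. */
    idx = srcu_read_lock(&kvm->srcu);
    mutex_lock(&kvm->slots_lock);     /* blocks on the writer above...  */
    /* ...while the writer's synchronize_srcu() waits for this reader
     * to call srcu_read_unlock() -> deadlock. */
    mutex_unlock(&kvm->slots_lock);
    srcu_read_unlock(&kvm->srcu, idx);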
Paolo