Message-ID: <CAMkAt6q6YLBfo2RceduSXTafckEehawhD4K4hUEuB4ZNqe2kKg@mail.gmail.com>
Date: Tue, 26 Apr 2022 13:06:57 -0600
From: Peter Gonda <pgonda@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: John Sperbeck <jsperbeck@...gle.com>,
kvm list <kvm@...r.kernel.org>,
David Rientjes <rientjes@...gle.com>,
Sean Christopherson <seanjc@...gle.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3] KVM: SEV: Mark nested locking of vcpu->lock

On Thu, Apr 21, 2022 at 9:56 AM Paolo Bonzini <pbonzini@...hat.com> wrote:
>
> On 4/20/22 22:14, Peter Gonda wrote:
> >>>> svm_vm_migrate_from() uses sev_lock_vcpus_for_migration() to lock all
> >>>> source and target vcpu->locks. Mark the nested subclasses to avoid false
> >>>> positives from lockdep.
> >> Nope. Good catch, I didn't realize there was a limit of 8 subclasses:
> > Does anyone have thoughts on how we can resolve this vCPU locking with
> > the 8 subclass max?
>
> The documentation does not have anything. Maybe you can call
> mutex_release manually (and mutex_acquire before unlocking).
>
> Paolo
Hmm, this seems to be working, thanks Paolo. To lock I have been using:
...
	if (mutex_lock_killable_nested(
			&vcpu->mutex, i * SEV_NR_MIGRATION_ROLES + role))
		goto out_unlock;
	mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
...
To unlock:
...
	mutex_acquire(&vcpu->mutex.dep_map, 0, 0, _THIS_IP_);
	mutex_unlock(&vcpu->mutex);
...
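
For context, the full helpers would look roughly like the below. The lock
helper name matches the patch; the signature, the counterpart
sev_unlock_vcpus_for_migration() name, and the error-path unwinding are a
sketch of what I have been testing rather than final code:

	static int sev_lock_vcpus_for_migration(struct kvm *kvm, int role)
	{
		struct kvm_vcpu *vcpu;
		unsigned long i, j;

		kvm_for_each_vcpu(i, vcpu, kvm) {
			if (mutex_lock_killable_nested(
					&vcpu->mutex, i * SEV_NR_MIGRATION_ROLES + role))
				goto out_unlock;
			/*
			 * Tell lockdep the lock was released right away, so
			 * the per-vCPU subclass is only seen at acquisition
			 * time and never participates in held-lock tracking.
			 */
			mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
		}

		return 0;

	out_unlock:
		kvm_for_each_vcpu(j, vcpu, kvm) {
			if (i == j)
				break;
			/* Re-acquire the lockdep annotation before unlocking. */
			mutex_acquire(&vcpu->mutex.dep_map, 0, 0, _THIS_IP_);
			mutex_unlock(&vcpu->mutex);
		}
		return -EINTR;
	}

	static void sev_unlock_vcpus_for_migration(struct kvm *kvm)
	{
		struct kvm_vcpu *vcpu;
		unsigned long i;

		kvm_for_each_vcpu(i, vcpu, kvm) {
			mutex_acquire(&vcpu->mutex.dep_map, 0, 0, _THIS_IP_);
			mutex_unlock(&vcpu->mutex);
		}
	}
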
If I understand correctly, we are fully disabling lockdep checking on
these vcpu mutexes by doing this. If that is the case, should I just
remove all the '_nested' usage, switch to mutex_lock_killable(), and
drop the per-vCPU subclass?