Date:   Wed, 27 Apr 2022 14:18:09 -0600
From:   Peter Gonda <pgonda@...gle.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     John Sperbeck <jsperbeck@...gle.com>,
        kvm list <kvm@...r.kernel.org>,
        David Rientjes <rientjes@...gle.com>,
        Sean Christopherson <seanjc@...gle.com>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3] KVM: SEV: Mark nested locking of vcpu->lock

On Wed, Apr 27, 2022 at 10:04 AM Paolo Bonzini <pbonzini@...hat.com> wrote:
>
> On 4/26/22 21:06, Peter Gonda wrote:
> > On Thu, Apr 21, 2022 at 9:56 AM Paolo Bonzini <pbonzini@...hat.com> wrote:
> >>
> >> On 4/20/22 22:14, Peter Gonda wrote:
> >>>>>> svm_vm_migrate_from() uses sev_lock_vcpus_for_migration() to lock all
> >>>>>> source and target vcpu->locks. Mark the nested subclasses to avoid false
> >>>>>> positives from lockdep.
> >>>> Nope. Good catch, I didn't realize there was a limit of 8 subclasses:
> >>> Does anyone have thoughts on how we can resolve this vCPU locking with
> >>> the 8-subclass max?
> >>
> >> The documentation does not have anything.  Maybe you can call
> >> mutex_release manually (and mutex_acquire before unlocking).
> >>
> >> Paolo
> >
> > Hmm, this seems to be working, thanks Paolo. To lock I have been using:
> >
> > ...
> >                    if (mutex_lock_killable_nested(
> >                                &vcpu->mutex, i * SEV_NR_MIGRATION_ROLES + role))
> >                            goto out_unlock;
> >                    mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
> > ...
> >
> > To unlock:
> > ...
> >                    mutex_acquire(&vcpu->mutex.dep_map, 0, 0, _THIS_IP_);
> >                    mutex_unlock(&vcpu->mutex);
> > ...
> >
> > If I understand correctly, we are effectively disabling lockdep tracking
> > for these vCPU mutexes by doing this. If that is the case, should I just
> > remove all the '_nested' usage, switch to mutex_lock_killable(), and drop
> > the per-vCPU subclass?
>
> Yes, though you could also do:
>
>         bool acquired = false;
>         kvm_for_each_vcpu(...) {
>                 if (mutex_lock_killable_nested(&vcpu->mutex, role))
>                         goto out_unlock;
>                 if (acquired)
>                         mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
>                 acquired = true;
>                 ...
>
> and to unlock:
>
>         bool acquired = true;
>         kvm_for_each_vcpu(...) {
>                 if (!acquired)
>                         mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);
>                 mutex_unlock(&vcpu->mutex);
>                 acquired = false;
>         }
>
> where role is either 0 or SINGLE_DEPTH_NESTING and is passed to
> sev_{,un}lock_vcpus_for_migration.
>
> That coalesces all the mutexes for a VM into a single subclass, essentially.

Ah, that's a great idea to keep lockdep working. I'll try that out, thanks
again Paolo.
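
Roughly, applying that to sev_lock_vcpus_for_migration() /
sev_unlock_vcpus_for_migration(), I'm thinking of something like the sketch
below. It's untested, and the exact signatures (an 'unsigned int role'
parameter on both functions) and the error-path unwinding are just my guess
at how it would slot in:

static int sev_lock_vcpus_for_migration(struct kvm *kvm, unsigned int role)
{
        struct kvm_vcpu *vcpu;
        unsigned long i, j;
        bool acquired = false;

        kvm_for_each_vcpu(i, vcpu, kvm) {
                if (mutex_lock_killable_nested(&vcpu->mutex, role))
                        goto out_unlock;

                /*
                 * Keep the lockdep annotation only on the first vcpu of
                 * this VM and drop it on the rest, so only one subclass
                 * per VM (source or target) is ever tracked at a time.
                 */
                if (acquired)
                        mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
                acquired = true;
        }

        return 0;

out_unlock:
        kvm_for_each_vcpu(j, vcpu, kvm) {
                if (j == i)
                        break;

                /* Rebalance the annotations dropped above before unlocking. */
                if (j)
                        mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);
                mutex_unlock(&vcpu->mutex);
        }
        return -EINTR;
}

static void sev_unlock_vcpus_for_migration(struct kvm *kvm, unsigned int role)
{
        struct kvm_vcpu *vcpu;
        unsigned long i;
        bool first = true;

        kvm_for_each_vcpu(i, vcpu, kvm) {
                /*
                 * The first vcpu still holds its lockdep annotation; every
                 * other vcpu needs mutex_acquire() so that the following
                 * mutex_unlock() stays balanced.
                 */
                if (first)
                        first = false;
                else
                        mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);

                mutex_unlock(&vcpu->mutex);
        }
}

The callers would pass role = 0 for the source VM and
role = SINGLE_DEPTH_NESTING for the target, as you described.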

>
> Paolo
>
