Date:   Fri, 29 Apr 2022 09:35:45 -0600
From:   Peter Gonda <pgonda@...gle.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     John Sperbeck <jsperbeck@...gle.com>,
        kvm list <kvm@...r.kernel.org>,
        David Rientjes <rientjes@...gle.com>,
        Sean Christopherson <seanjc@...gle.com>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3] KVM: SEV: Mark nested locking of vcpu->lock

On Thu, Apr 28, 2022 at 5:59 PM Paolo Bonzini <pbonzini@...hat.com> wrote:
>
> On 4/28/22 23:28, Peter Gonda wrote:
> >
> > So when actually trying this out I noticed that we were releasing the
> > dep_map of the current vcpu in the iteration, but we haven't actually
> > taken that lock yet. So we'd need to maintain a prev_* pointer and
> > release that one instead.
>
> Not entirely true because all vcpu->mutex.dep_maps will be for the same
> lock.  The dep_map is essentially a fancy string, in this case
> "&vcpu->mutex".
>
> See the definition of mutex_init:
>
> #define mutex_init(mutex)                                              \
> do {                                                                   \
>          static struct lock_class_key __key;                            \
>                                                                         \
>          __mutex_init((mutex), #mutex, &__key);                         \
> } while (0)
>
> and the dep_map field is initialized with
>
>          lockdep_init_map_wait(&lock->dep_map, name, key, 0, LD_WAIT_SLEEP);
>
> (i.e. all vcpu->mutexes share the same name and key because they have a
> single mutex_init-ialization site).  Lockdep is as crude in theory as it
> is effective in practice!
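
(So, to restate that with a toy example and make sure I follow -- nothing
KVM specific below, just the mutex_init() expansion you quoted: two mutexes
initialized through the same call site share one static key and therefore
one lockdep class:)

        #include <linux/mutex.h>

        struct foo {
                struct mutex lock;
        };

        static void foo_init(struct foo *f)
        {
                /*
                 * One mutex_init() call site => one static __key, so every
                 * mutex initialized here lands in the same lockdep class,
                 * named "&f->lock".
                 */
                mutex_init(&f->lock);
        }

        static void foo_demo(struct foo *a, struct foo *b)
        {
                /* a->lock and b->lock are distinct mutexes, but to lockdep
                 * they belong to the same class. */
                foo_init(a);
                foo_init(b);
        }
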
>
> >
> >           bool acquired = false;
> >           kvm_for_each_vcpu(...) {
> >                   if (!acquired) {
> >                           if (mutex_lock_killable_nested(&vcpu->mutex, role))
> >                                   goto out_unlock;
> >                           acquired = true;
> >                   } else {
> >                           if (mutex_lock_killable(&vcpu->mutex))
> >                                   goto out_unlock;
>
> This will cause a lockdep splat because it uses subclass 0.  All the
> *_nested functions do is allow you to specify a subclass other than zero.
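
(To check my understanding, sticking with the toy struct foo from above --
the subclass values are purely illustrative:)

        static void lock_pair_splat(struct foo *a, struct foo *b)
        {
                mutex_lock(&a->lock);
                /* Same class, subclass 0 again: lockdep reports
                 * "possible recursive locking detected" here. */
                mutex_lock(&b->lock);
                mutex_unlock(&b->lock);
                mutex_unlock(&a->lock);
        }

        static void lock_pair_ok(struct foo *a, struct foo *b)
        {
                mutex_lock_nested(&a->lock, 0);
                /* A distinct subclass (1, aka SINGLE_DEPTH_NESTING) tells
                 * lockdep the nesting is intentional. */
                mutex_lock_nested(&b->lock, 1);
                mutex_unlock(&b->lock);
                mutex_unlock(&a->lock);
        }
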

OK got it. I now have this to lock:

         kvm_for_each_vcpu(i, vcpu, kvm) {
                  /*
                   * Keep only one lockdep acquisition of the shared
                   * "&vcpu->mutex" class outstanding: drop the annotation
                   * (not the lock itself) of the previously taken mutex.
                   */
                  if (prev_vcpu != NULL) {
                          mutex_release(&prev_vcpu->mutex.dep_map, _THIS_IP_);
                          prev_vcpu = NULL;
                  }

                  if (mutex_lock_killable_nested(&vcpu->mutex, role))
                          goto out_unlock;
                  prev_vcpu = vcpu;
          }

But I've noticed the unlocking happens in the wrong order, since we are
using kvm_for_each_vcpu(). I think we'd need a kvm_for_each_vcpu_rev() or
something similar, which may be a bit of work given how the xarray
iteration is implemented:
https://elixir.bootlin.com/linux/latest/source/lib/xarray.c#L1119.
Then I think we could do something like this to unlock (a rough sketch of
an index-based reverse walk follows the snippet):

         bool acquired = true;
          kvm_for_each_vcpu_rev(i, vcpu, kvm) {
                  /*
                   * The last-locked vcpu still holds its lockdep annotation;
                   * every earlier one had it released above, so re-take the
                   * annotation before unlocking.
                   */
                  if (!acquired)
                          mutex_acquire(&vcpu->mutex.dep_map, role, 0,
                                        _THIS_IP_);
                  mutex_unlock(&vcpu->mutex);
                  acquired = false;
          }
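
If a real kvm_for_each_vcpu_rev() ends up being too much churn, maybe an
index-based reverse walk would do as a stopgap. Just a sketch -- it assumes
vcpu indices are dense and reuses i/vcpu/role from above:

         bool acquired = true;

         for (i = atomic_read(&kvm->online_vcpus); i-- > 0; ) {
                  /* Dense indices assumed: every slot below online_vcpus
                   * holds a vcpu, so this walks them highest-index first. */
                  vcpu = kvm_get_vcpu(kvm, i);
                  if (!acquired)
                          /* Re-take the lockdep annotation dropped while
                           * locking, so mutex_unlock() sees a held lock. */
                          mutex_acquire(&vcpu->mutex.dep_map, role, 0,
                                        _THIS_IP_);
                  mutex_unlock(&vcpu->mutex);
                  acquired = false;
          }
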
>
> Paolo
>
> >                   }
> >           }
> >
> > To unlock:
> >
> >           kvm_for_each_vcpu(...) {
> >              mutex_unlock(&vcpu->mutex);
> >           }
> >
> > This way instead of mock-acquiring and releasing the lockdep map we just
> > lock the first vcpu with mutex_lock_killable_nested(). I think this
> > maintains the property you suggested of "coalesces all the mutexes for
> > a vm in a single subclass".  Thoughts?
>
