Message-Id: <20220405174637.2074319-1-pgonda@google.com>
Date: Tue, 5 Apr 2022 10:46:37 -0700
From: Peter Gonda <pgonda@...gle.com>
To: kvm@...r.kernel.org
Cc: Peter Gonda <pgonda@...gle.com>,
John Sperbeck <jsperbeck@...gle.com>,
David Rientjes <rientjes@...gle.com>,
Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
linux-kernel@...r.kernel.org
Subject: [PATCH v2] KVM: SEV: Mark nested locking of vcpu->mutex
svm_vm_migrate_from() uses sev_lock_vcpus_for_migration() to lock all
source and target vcpu->mutexes. Mark the acquisitions with nested
subclasses to avoid false positives from lockdep.
Warning example:
============================================
WARNING: possible recursive locking detected
5.17.0-dbg-DEV #15 Tainted: G O
--------------------------------------------
sev_migrate_tes/18859 is trying to acquire lock:
ffff8d672d484238 (&vcpu->mutex){+.+.}-{3:3}, at: sev_lock_vcpus_for_migration+0x7e/0x150
but task is already holding lock:
ffff8d67703f81f8 (&vcpu->mutex){+.+.}-{3:3}, at: sev_lock_vcpus_for_migration+0x7e/0x150
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&vcpu->mutex);
lock(&vcpu->mutex);
*** DEADLOCK ***
May be due to missing lock nesting notation
3 locks held by sev_migrate_tes/18859:
#0: ffff9302f91323b8 (&kvm->lock){+.+.}-{3:3}, at: sev_vm_move_enc_context_from+0x96/0x740
#1: ffff9302f906a3b8 (&kvm->lock/1){+.+.}-{3:3}, at: sev_vm_move_enc_context_from+0xae/0x740
#2: ffff8d67703f81f8 (&vcpu->mutex){+.+.}-{3:3}, at: sev_lock_vcpus_for_migration+0x7e/0x150
Fixes: b56639318bb2b ("KVM: SEV: Add support for SEV intra host migration")
Reported-by: John Sperbeck <jsperbeck@...gle.com>
Suggested-by: David Rientjes <rientjes@...gle.com>
Suggested-by: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>
Cc: kvm@...r.kernel.org
Cc: linux-kernel@...r.kernel.org
Signed-off-by: Peter Gonda <pgonda@...gle.com>
---
Tested by running sev_migrate_tests with lockdep enabled. Before this
patch, lockdep warns from sev_lock_vcpus_for_migration(); with it
applied, no warnings are emitted.
---
arch/x86/kvm/svm/sev.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 75fa6dd268f0..673e1ee2cfc9 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1591,14 +1591,21 @@ static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
atomic_set_release(&src_sev->migration_in_progress, 0);
}
+#define SEV_MIGRATION_SOURCE 0
+#define SEV_MIGRATION_TARGET 1
-static int sev_lock_vcpus_for_migration(struct kvm *kvm)
+/*
+ * To avoid lockdep warnings, callers should pass either SEV_MIGRATION_SOURCE
+ * or SEV_MIGRATION_TARGET as the @vm argument. This allows subclassing of all
+ * vCPU mutex locks.
+ */
+static int sev_lock_vcpus_for_migration(struct kvm *kvm, int vm)
{
struct kvm_vcpu *vcpu;
unsigned long i, j;
kvm_for_each_vcpu(i, vcpu, kvm) {
- if (mutex_lock_killable(&vcpu->mutex))
+ if (mutex_lock_killable_nested(&vcpu->mutex, i * 2 + vm))
goto out_unlock;
}
@@ -1745,10 +1752,10 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
charged = true;
}
- ret = sev_lock_vcpus_for_migration(kvm);
+ ret = sev_lock_vcpus_for_migration(kvm, SEV_MIGRATION_SOURCE);
if (ret)
goto out_dst_cgroup;
- ret = sev_lock_vcpus_for_migration(source_kvm);
+ ret = sev_lock_vcpus_for_migration(source_kvm, SEV_MIGRATION_TARGET);
if (ret)
goto out_dst_vcpu;
--
2.35.1.1094.g7c7d902a7c-goog