Message-ID: <Yqexcdad6oghl8sv@dev-arch.thelio-3990X>
Date: Mon, 13 Jun 2022 14:51:45 -0700
From: Nathan Chancellor <nathan@...nel.org>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Nick Desaulniers <ndesaulniers@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, Tom Rix <trix@...hat.com>,
kvm@...r.kernel.org, llvm@...ts.linux.dev,
linux-kernel@...r.kernel.org, Peter Gonda <pgonda@...gle.com>
Subject: Re: [PATCH] KVM: SVM: Hide SEV migration lockdep goo behind
CONFIG_DEBUG_LOCK_ALLOC
On Mon, Jun 13, 2022 at 09:42:37PM +0000, Sean Christopherson wrote:
> Wrap the manipulation of @role and the manual mutex_{release,acquire}()
> invocations in CONFIG_PROVE_LOCKING=y to squash a clang-15 warning. When
> building with -Wunused-but-set-parameter and CONFIG_DEBUG_LOCK_ALLOC=n,
> clang-15 seees there's no usage of @role in mutex_lock_killable_nested()
> and yells. PROVE_LOCKING selects DEBUG_LOCK_ALLOC, and the only reason
> KVM manipulates @role is to make PROVE_LOCKING happy.
>
> To avoid true ugliness, use "i" and "j" to detect the first pass in the
> loops; the "idx" field that's used by kvm_for_each_vcpu() is guaranteed
> to be '0' on the first pass as it's simply the first entry in the vCPUs
> XArray, which is fully KVM controlled. kvm_for_each_vcpu() passes '0'
> for xa_for_each_range()'s "start", and xa_for_each_range() will not enter
> the loop if there's no entry at '0'.
>
> Fixes: 0c2c7c069285 ("KVM: SEV: Mark nested locking of vcpu->lock")
> Reported-by: kernel test robot <lkp@...el.com>
> Cc: Peter Gonda <pgonda@...gle.com>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
>
> Compile tested only, still haven't figured out why SEV is busted on our
> test systems with upstream kernels. I also haven't verified this squashes
> the clang-15 warning, so a thumbs up on that end would be helpful.
I can confirm that with the config the kernel test robot provided,
the warning disappears with this patch. If it is useful:
Tested-by: Nathan Chancellor <nathan@...nel.org> # build
> arch/x86/kvm/svm/sev.c | 17 +++++++----------
> 1 file changed, 7 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 51fd985cf21d..309bcdb2f929 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1606,38 +1606,35 @@ static int sev_lock_vcpus_for_migration(struct kvm *kvm,
> {
> struct kvm_vcpu *vcpu;
> unsigned long i, j;
> - bool first = true;
>
> kvm_for_each_vcpu(i, vcpu, kvm) {
> if (mutex_lock_killable_nested(&vcpu->mutex, role))
> goto out_unlock;
>
> - if (first) {
> +#ifdef CONFIG_PROVE_LOCKING
> + if (!i)
> /*
> * Reset the role to one that avoids colliding with
> * the role used for the first vcpu mutex.
> */
> role = SEV_NR_MIGRATION_ROLES;
> - first = false;
> - } else {
> + else
> mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
> - }
> +#endif
> }
>
> return 0;
>
> out_unlock:
>
> - first = true;
> kvm_for_each_vcpu(j, vcpu, kvm) {
> if (i == j)
> break;
>
> - if (first)
> - first = false;
> - else
> +#ifdef CONFIG_PROVE_LOCKING
> + if (j)
> mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);
> -
> +#endif
>
> mutex_unlock(&vcpu->mutex);
> }
>
> base-commit: 8baacf67c76c560fed954ac972b63e6e59a6fba0
> --
> 2.36.1.476.g0c4daa206d-goog
>
>