Message-ID: <CAPm50aJPpzWz1nnBu6vhcac2kwKq29h-oq7jZtKz23XJ46LW0g@mail.gmail.com>
Date:   Mon, 31 Oct 2022 15:20:07 +0800
From:   Hao Peng <flyingpenghao@...il.com>
To:     pbonzini@...hat.com
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        Sean Christopherson <seanjc@...gle.com>
Subject: [RESEND PATCH v2] kvm: x86: Keep the lock order consistent

From: Peng Hao <flyingpeng@...cent.com>

Acquire SRCU before taking the gpc spinlock in wait_pending_event() so as
to be consistent with all other functions that acquire both locks.  It's
not illegal to acquire SRCU inside a spinlock, nor is there deadlock
potential, but in general it's preferable to order locks from least
restrictive to most restrictive, e.g. if wait_pending_event() needed to
sleep for whatever reason, it could do so while holding SRCU, but would
need to drop the spinlock.

Thanks to Sean Christopherson for the comment.
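
For reference, a minimal sketch (not part of the patch) of the ordering
convention the change enforces: SRCU read side first, irq-saving rwlock
second, released in reverse order. The function and parameter names below
are made up for illustration and are not KVM code.

	/*
	 * Illustrative only: outer lock is the SRCU read side (least
	 * restrictive, read section may sleep), inner lock is the
	 * irq-saving rwlock (most restrictive, may not sleep).
	 */
	#include <linux/srcu.h>
	#include <linux/spinlock.h>

	static void example_read_path(struct srcu_struct *srcu, rwlock_t *lock)
	{
		unsigned long flags;
		int idx;

		idx = srcu_read_lock(srcu);		/* take SRCU first ... */
		read_lock_irqsave(lock, flags);		/* ... then the rwlock */

		/* read-side work that must not sleep goes here */

		read_unlock_irqrestore(lock, flags);	/* release in reverse order */
		srcu_read_unlock(srcu, idx);		/* sleeping would be fine here */
	}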

Signed-off-by: Peng Hao <flyingpeng@...cent.com>
---
 arch/x86/kvm/xen.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 2dae413bd62a..766e8a4ca3ea 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -964,8 +964,8 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports,
        bool ret = true;
        int idx, i;

-       read_lock_irqsave(&gpc->lock, flags);
        idx = srcu_read_lock(&kvm->srcu);
+       read_lock_irqsave(&gpc->lock, flags);
        if (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE))
                goto out_rcu;

@@ -986,8 +986,8 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports,
        }

  out_rcu:
-       srcu_read_unlock(&kvm->srcu, idx);
        read_unlock_irqrestore(&gpc->lock, flags);
+       srcu_read_unlock(&kvm->srcu, idx);

        return ret;
 }
--
2.27.0
