Message-ID: <CAPm50a+gcug5XOsg_Z=7R+3j+VUxHMrzyGNbps7-okR625KB_w@mail.gmail.com>
Date:   Fri, 7 Oct 2022 23:56:47 +0800
From:   Hao Peng <flyingpenghao@...il.com>
To:     pbonzini@...hat.com
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH] kvm: x86: Keep the lock order consistent
From: Peng Hao <flyingpeng@...cent.com>
Code in an SRCU read-side critical section may sleep, so srcu_read_lock()
should be taken before the read lock. Other paths, such as
kvm_xen_set_evtchn_fast(), already call srcu_read_lock() before acquiring
the read lock, so make wait_pending_event() take the locks in the same order.
Signed-off-by: Peng Hao <flyingpeng@...cent.com>
---
 arch/x86/kvm/xen.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 280cb5dc7341..fa6e54b13afb 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -965,8 +965,8 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports,
        bool ret = true;
        int idx, i;
-       read_lock_irqsave(&gpc->lock, flags);
        idx = srcu_read_lock(&kvm->srcu);
+       read_lock_irqsave(&gpc->lock, flags);
        if (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE))
                goto out_rcu;
@@ -987,9 +987,8 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports,
        }
  out_rcu:
-       srcu_read_unlock(&kvm->srcu, idx);
        read_unlock_irqrestore(&gpc->lock, flags);
-
+       srcu_read_unlock(&kvm->srcu, idx);
        return ret;
 }
--
2.27.0
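
For context, a minimal sketch of the locking order this patch establishes.
This is illustrative only, not the actual code in arch/x86/kvm/xen.c;
demo_wait_pending_event and the elided body are stand-ins, while the kvm->srcu
and gpc->lock fields and the locking primitives are the real kernel APIs.

    /* Illustrative sketch: acquire SRCU before the rwlock, release in
     * reverse order. The function name and body are placeholders.
     */
    static bool demo_wait_pending_event(struct kvm *kvm,
                                        struct gfn_to_pfn_cache *gpc)
    {
            unsigned long flags;
            bool ret = true;
            int idx;

            /* srcu_read_lock() first: its critical section may sleep. */
            idx = srcu_read_lock(&kvm->srcu);
            /* Then the non-sleepable rwlock with interrupts disabled. */
            read_lock_irqsave(&gpc->lock, flags);

            /* ... check the pfn cache and scan the event ports ... */

            /* Release in the reverse order of acquisition. */
            read_unlock_irqrestore(&gpc->lock, flags);
            srcu_read_unlock(&kvm->srcu, idx);
            return ret;
    }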