Message-ID: <20240522185421.424344-1-mjguzik@gmail.com>
Date: Wed, 22 May 2024 20:54:21 +0200
From: Mateusz Guzik <mjguzik@...il.com>
To: longman@...hat.com
Cc: peterz@...radead.org,
mingo@...hat.com,
will@...nel.org,
boqun.feng@...il.com,
linux-kernel@...r.kernel.org,
Mateusz Guzik <mjguzik@...il.com>
Subject: [PATCH] locking/rwsem: cpu_relax before re-reading owner in rwsem_spin_on_owner
The function starts by establishing whether there is an owner it can
spin waiting on, then immediately re-reads the owner when entering the
loop, adding another lock word access and possibly an avoidable
cacheline bounce. Subsequent iterations don't have this problem.

The sound thing to do is to cpu_relax() first.

Signed-off-by: Mateusz Guzik <mjguzik@...il.com>
---
This is a borderline cosmetic patch which I did not bother benchmarking.
If you don't like it, that's fine with me; I'm not going to fight for it.
Cheers.
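
For reference, the loop shape after the change is roughly the following
(hand-simplified sketch of rwsem_spin_on_owner(), not the verbatim kernel
code; the handoff comment and the barrier() between the owner re-check and
the owner->on_cpu dereference are elided):

	owner = rwsem_owner_flags(sem, &flags);	/* initial read of the lock word */
	state = rwsem_owner_state(owner, flags);
	if (state != OWNER_WRITER)
		return state;

	for (;;) {
		cpu_relax();	/* back off before touching the lock word again */

		new = rwsem_owner_flags(sem, &new_flags);	/* re-read owner */
		if (new != owner || new_flags != flags) {
			state = rwsem_owner_state(new, new_flags);
			break;
		}

		if (!owner_on_cpu(owner) || need_resched()) {
			state = OWNER_NONSPINNABLE;
			break;
		}
	}
	return state;

That is, the first pass through the loop no longer re-reads sem->owner
back to back with the read done just before the loop.
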
kernel/locking/rwsem.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index c6d17aee4209..a6c5bb68920e 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -758,6 +758,8 @@ rwsem_spin_on_owner(struct rw_semaphore *sem)
 		return state;
 
 	for (;;) {
+		cpu_relax();
+
 		/*
 		 * When a waiting writer set the handoff flag, it may spin
 		 * on the owner as well. Once that writer acquires the lock,
@@ -784,8 +786,6 @@ rwsem_spin_on_owner(struct rw_semaphore *sem)
 			state = OWNER_NONSPINNABLE;
 			break;
 		}
-
-		cpu_relax();
 	}
 
 	return state;
--
2.39.2