Message-ID: <e8e933ab37f84ac68ac70f4b1ed8d524@AcuMS.aculab.com>
Date: Mon, 18 Mar 2024 09:47:02 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Guo Hui' <guohui@...ontech.com>, "peterz@...radead.org"
<peterz@...radead.org>, "mingo@...hat.com" <mingo@...hat.com>,
"will@...nel.org" <will@...nel.org>, "longman@...hat.com"
<longman@...hat.com>, "boqun.feng@...il.com" <boqun.feng@...il.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v2] locking/osq_lock: Optimize osq_lock performance using
per-NUMA
From: Guo Hui
> Sent: 18 March 2024 05:50
>
> Changes in version v1:
> The queue is divided according to NUMA nodes,
> but the tail of each NUMA node is still stored
> in the structure optimistic_spin_queue.
The description should be before any 'changes'.
The changes between versions don't go into the commit message.
Does this change affect a real workload, or just some benchmark?
In reality you don't want a lot of threads waiting on a single
lock (of any kind).
So if a real workload is getting a long queue of waiters on
an OSQ lock then the underlying code really needs fixing to
'not do that' (either by changing the way the lock is held
or acquired).
The whole osq lock is actually quite strange.
(I worked out how it all worked a while ago.)
It is an ordered queue of threads waiting for the thread
spinning on a mutex/rwlock to either obtain the mutex or
to give up spinning and sleep.
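For reference, the queue node and queue head look roughly like this
(paraphrasing include/linux/osq_lock.h from memory, so treat the
field comments as approximate):

	struct optimistic_spin_node {
		struct optimistic_spin_node *next, *prev;
		int locked;	/* set once our predecessor hands the OSQ over */
		int cpu;	/* encoded CPU number, used to form the tail */
	};

	struct optimistic_spin_queue {
		atomic_t tail;	/* encoded CPU of the last waiter, or
				 * OSQ_UNLOCKED_VAL if the queue is empty */
	};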
I suspect that the main benefit over spinning on the mutex
itself is the fact that it is ordered.
It also removes the 'herd of wildebeest' all doing a cmpxchg - but
one will win and the others go back to a non-locked poll.
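Roughly, the enqueue side does one xchg on the shared tail and then
spins on its own per-CPU node, so the shared cacheline is only hit
once per waiter. A hand-waved sketch of osq_lock() (simplified from
memory, not the real code - in particular the unqueue path is far
hairier than the label suggests):

	bool osq_lock(struct optimistic_spin_queue *lock)
	{
		struct optimistic_spin_node *node = this_cpu_ptr(&osq_node);
		struct optimistic_spin_node *prev;
		int old;

		node->locked = 0;
		node->next = NULL;
		node->cpu = encode_cpu(smp_processor_id());

		/* the only operation on the shared word */
		old = atomic_xchg(&lock->tail, node->cpu);
		if (old == OSQ_UNLOCKED_VAL)
			return true;	/* queue was empty, we are now the spinner */

		prev = decode_cpu(old);
		node->prev = prev;
		WRITE_ONCE(prev->next, node);

		/* spin on our own node->locked, not on the shared tail */
		while (!READ_ONCE(node->locked)) {
			if (need_resched())
				goto unqueue;
			cpu_relax();
		}
		return true;

	unqueue:
		/* ... unpick node from the list, fix up the tail ... */
		return false;
	}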
Are the gains you are seeing from the osq-lock code itself,
or because the thread that ultimately holds the mutex is running
on the same NUMA node as the previous thread that held the mutex?
One thing I did notice is that if the process holding the mutex
sleeps there is no way to get all the osq spinners to
sleep at once. They each obtain the osq-lock, realise they
need to sleep, and release it in turn.
That is going to take a while with a long queue.
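In code terms, each queued CPU ends up doing something like this in
turn once the owner sleeps (a rough paraphrase of the mutex
optimistic-spin path, from memory; mutex_owner_running() is a
stand-in for the real owner-on-cpu check):

	static bool optimistic_spin_sketch(struct mutex *lock)
	{
		if (!osq_lock(&lock->osq))
			return false;		/* couldn't even join the queue */

		while (mutex_owner_running(lock)) {
			if (__mutex_trylock(lock)) {
				osq_unlock(&lock->osq);
				return true;
			}
			cpu_relax();
		}

		/* owner is asleep: give up, handing the OSQ to the next
		 * CPU, which repeats the same dance before it too gives up */
		osq_unlock(&lock->osq);
		return false;
	}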
I didn't look at the mutex/rwlock code (I'm sure the two
could share a lot more code - a mutex is a rwlock that
only has writers!) but if one thread detects that it
needs to be pre-empted it takes itself out of the osq-lock
and, presumably, sleeps on the mutex.
Unless that stops any other threads being added to the osq-lock
won't it get completely starved?
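(For context, the caller's view, paraphrasing __mutex_lock_common():
if the optimistic spin fails - e.g. osq_lock() bailed out on
need_resched() - the task just falls through to the sleeping slow
path, while other CPUs remain free to join the OSQ and spin:

	if (__mutex_trylock(lock) ||
	    mutex_optimistic_spin(lock, ww_ctx, NULL))
		return 0;	/* acquired while spinning */

	/* slow path: add ourselves to lock->wait_list and schedule() */

so the question is whether the sleeper ever gets a look-in while the
spinners keep feeding the OSQ.)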
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)