Message-ID: <877dl71r8e.ffs@nanos.tec.linutronix.de>
Date: Mon, 12 Apr 2021 17:01:53 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Matthew Wilcox <willy@...radead.org>
Cc: neilb@...e.de, peterz@...radead.org, mingo@...hat.com,
will@...nel.org, longman@...hat.com, boqun.feng@...il.com,
bigeasy@...utronix.de, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 02/17] locking: Add split_lock
On Mon, Apr 12 2021 at 15:45, Matthew Wilcox wrote:
> On Mon, Apr 12, 2021 at 04:29:28PM +0200, Thomas Gleixner wrote:
>> On Fri, Apr 09 2021 at 03:51, Matthew Wilcox wrote:
>> > Bitlocks do not currently participate in lockdep. Conceptually, a
>> > bit_spinlock is a split lock, eg across each bucket in a hash table.
>> > The struct split_lock gives us somewhere to record the lockdep_map.
>>
>> I like the concept, but the name is strange. The only purpose of
>>
>> > +struct split_lock {
>> > +#ifdef CONFIG_DEBUG_LOCK_ALLOC
>> > + struct lockdep_map dep_map;
>> > +#endif
>> > +};
>>
>> is to have a place to stick the lockdep map into. So it's not a lock
>> construct as the name suggests, it's just auxiliary data when lockdep is
>> enabled.
>
> That's the implementation _today_, but conceptually, it's a single lock.
> I was thinking that for non-RT, we could put a qspinlock in there for a
> thread to spin on if the bit is contended. It'd need a bit of ingenuity
> to make sure that a thread unlocking a bitlock guarantees that a thread
> spinning on the qspinlock sees the wakeup, but it should be doable.
Ah, that's what you have in mind.
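Something like the below then, I assume. Just a sketch; the slowpath
member and its handling are made up:

struct split_lock {
	/*
	 * Contended waiters queue up here instead of burning cycles
	 * spinning on the bit itself.
	 */
	struct qspinlock	slowpath;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map	dep_map;
#endif
};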
> Anyway, from the point of view of the user, they should be declaring
> "this is the lock", not "this is the lock tracking structure", right?
>
>> I know you hinted that RT could make use of that data structure and the
>> fact that it's mandatory for the various lock functions, but that's not
>> really feasible if this is related to a hash with a bit spinlock per
>> bucket as the data structure is hash global.
>>
>> Sure, we can do pointer math to find out the bucket index and do
>> something from there, but I'm not sure whether that really helps. Need
>> to stare at the remaining few places where bit spinlocks are an issue on
>> RT.
>
> I obviously don't understand exactly what the RT patchset does. My
> thinking was that you could handle the bit locks like rw sems, and
> sacrifice the scalability of per-bucket-lock for the determinism of
> a single PI lock.
That'd suck for most bit spinlocks where the lock is just protecting
minimal hashlist operations and the preempt-disabled sections are
actually shorter than the overhead of a heavier lock.
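Think about the usual hlist_bl pattern where the lock-held section is
just a list head update:

	hlist_bl_lock(h);	/* bit_spin_lock(0, (unsigned long *)h) */
	hlist_bl_add_head(n, h);
	hlist_bl_unlock(h);

Any heavier lock costs more than the section it protects.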
For situations where the bit spinlock actually was an issue (long
traversals or such), in older kernel versions we just bit the bullet,
bloated the hash data structure with an actual spinlock and added some
wrappers to hide the mess from the actual code. That still preserved the
scalability while making the lock-held section preemptible, which we
obviously cannot have with real bit spinlocks even on RT.
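IOW, something along these lines (struct and function names made up):

struct rt_hash_bucket {
	struct hlist_head	head;
#ifdef CONFIG_PREEMPT_RT
	spinlock_t		lock;	/* Sleeping lock on RT, preemptible */
#endif
};

static inline void rt_hash_bucket_lock(struct rt_hash_bucket *b)
{
#ifdef CONFIG_PREEMPT_RT
	spin_lock(&b->lock);
#else
	/* !RT keeps the zero-overhead bit lock in the head pointer */
	bit_spin_lock(0, (unsigned long *)&b->head.first);
#endif
}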
But your idea of having a qspinlock for the contended case might
actually be something worth exploiting RT-wise - obviously not with a
qspinlock :) - but conceptually.
Need to think more about it.
Thanks,
tglx