Message-ID: <alpine.LFD.2.00.1006241904160.2911@localhost.localdomain>
Date: Thu, 24 Jun 2010 20:15:54 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: npiggin@...e.de
cc: linux-fsdevel@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
John Stultz <johnstul@...ibm.com>,
Frank Mayhar <fmayhar@...gle.com>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [patch 05/52] lglock: introduce special lglock and brlock spin locks
On Thu, 24 Jun 2010, npiggin@...e.de wrote:
> +#define DEFINE_LGLOCK(name) \
> + \
> + DEFINE_PER_CPU(arch_spinlock_t, name##_lock); \

Uuurgh. You want to make that an arch_spinlock? Just to avoid the
preempt_count overflow when you take all the cpu locks nested?
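
To spell the overflow out for the record - a sketch, names invented, of
what the global path looks like when it's done with plain spinlock_t:

	/*
	 * Sketch only. Every nested spin_lock() does
	 * preempt_disable(), i.e. preempt_count++, and the preemption
	 * depth field in preempt_count is only 8 bits wide, so with
	 * more than 255 CPUs this wraps into the softirq bits. The
	 * arch_spinlock_t in the patch dodges that by skipping the
	 * preempt accounting entirely.
	 */
	static DEFINE_PER_CPU(spinlock_t, name_lock);

	static void name_global_lock_naive(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			spin_lock(&per_cpu(name_lock, cpu));
	}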

I'm really not happy about that, it's going to be a complete nightmare
for RT. If you wanted to make this a present for RT, giving the
scalability stuff massive testing, then you failed miserably :)

I know how to fix it, but can't we go for an approach which does not
require massive RT patching again?

struct percpu_lock {
	spinlock_t	lock;
	unsigned	global_state;
};

And let the lock function do:

	spin_lock(&pcp->lock);
	while (pcp->global_state)
		cpu_relax();

So the global lock side can take each single lock, modify the percpu
"global state" and release the lock. On unlock you just need to reset
the global state w/o taking the percpu lock and be done.
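
Something like this, modulo memory ordering details and an outer lock
to serialize concurrent global lockers against each other (sketch,
names invented):

	static DEFINE_PER_CPU(struct percpu_lock, pcp_lock);

	void pcp_global_lock(void)
	{
		int cpu;

		for_each_possible_cpu(cpu) {
			struct percpu_lock *pcp = &per_cpu(pcp_lock, cpu);

			/* Wait out a current local holder, then flag
			 * the global state under the percpu lock */
			spin_lock(&pcp->lock);
			pcp->global_state = 1;
			spin_unlock(&pcp->lock);
		}
	}

	void pcp_global_unlock(void)
	{
		int cpu;

		/* Reset the global state w/o taking the percpu locks;
		 * spinning local lockers observe it and proceed
		 * (cpu_relax() implies a compiler barrier, so
		 * global_state is reread on every iteration) */
		for_each_possible_cpu(cpu)
			per_cpu(pcp_lock, cpu).global_state = 0;
	}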

I doubt that the extra conditional in the lock path is going to be
relevant overhead; compared to the spin_lock itself it's noise.

Thanks,

	tglx