Message-Id: <1203710145.4772.107.camel@sven.thebigcorporation.com>
Date: Fri, 22 Feb 2008 11:55:45 -0800
From: Sven-Thorsten Dietrich <sdietrich@...ell.com>
To: paulmck@...ux.vnet.ibm.com
Cc: "Bill Huey (hui)" <bill.huey@...il.com>, Andi Kleen <ak@...e.de>,
Gregory Haskins <ghaskins@...ell.com>, mingo@...e.hu,
a.p.zijlstra@...llo.nl, tglx@...utronix.de, rostedt@...dmis.org,
linux-rt-users@...r.kernel.org, linux-kernel@...r.kernel.org,
kevin@...man.org, cminyard@...sta.com, dsingleton@...sta.com,
dwalker@...sta.com, npiggin@...e.de, dsaxena@...xity.net,
gregkh@...e.de, pmorreale@...ell.com, mkohari@...ell.com
Subject: Re: [PATCH [RT] 08/14] add a loop counter based timeout mechanism
On Fri, 2008-02-22 at 11:43 -0800, Paul E. McKenney wrote:
> On Fri, Feb 22, 2008 at 11:21:14AM -0800, Bill Huey (hui) wrote:
> > On Fri, Feb 22, 2008 at 11:19 AM, Bill Huey (hui) <bill.huey@...il.com> wrote:
> > > Yeah, I'm not very keen on having a constant there without some
> > > contention instrumentation to see how long the spins are. It would be
> > > better to just let it run until either task->on_cpu is off or the
> > > "current" for the runqueue in question no longer matches the mutex
> > > owner. At that point, you know the thread isn't running, and
> > > spinning on something like that is just a waste of time. It's for
> > > that reason that doing the spin outside of a preempt critical
> > > section isn't really needed
> >
> > Excuse me, I meant to say "...isn't a problem".
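Something along these lines is roughly what that check could look like
(a sketch only -- the field and helper names are illustrative and vary
between trees; "owner" would be the task recorded in the mutex, ->oncpu
is the flag Bill refers to as task->on_cpu, and task_curr() stands in
for "is this task what the runqueue in question is currently running"):

#include <linux/sched.h>

/*
 * Keep spinning only while the lock owner is still executing on its
 * CPU.  Once the owner has been scheduled out, the hold time will
 * include at least one context switch, so the waiter is better off
 * going to sleep.
 */
static inline int owner_still_running(struct task_struct *owner)
{
	/* Owner was preempted or blocked: stop spinning. */
	if (!owner->oncpu)
		return 0;

	/* Owner's runqueue is running somebody else: stop spinning. */
	if (!task_curr(owner))
		return 0;

	return 1;
}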
>
> The fixed-time spins are very useful in cases where the critical section
> is almost always very short but can sometimes be very long. In such
> cases, you would want to spin until either ownership changes or it is
> apparent that the current critical-section instance will be long.
>
> I believe that there are locks in the Linux kernel that have this
> "mostly short but sometimes long" hold-time property.
Regarding this "mostly short but sometimes long" question: on very
large SMP systems, running with some profiling enabled might allow the
system to adapt to varying workloads, and therefore to shifting lock
contention / hold times.
Despite the profiling overhead, overall utilization might end up lower,
but this is TBD.
In high-contention, short-hold-time situations it may even make sense
to have multiple CPUs with multiple waiters spinning, depending on the
hold time versus the time it takes to put a waiter to sleep and wake it
up again.
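As a rough back-of-the-envelope, with purely illustrative numbers (the
real costs would have to be measured per machine):

    spinning pays off roughly while

        expected remaining hold time  <  cost(sleep) + cost(wakeup)

    e.g. if putting a waiter to sleep and waking it again costs on the
    order of two context switches (say 2 x 5 us), then with typical
    hold times around 1 us, even several CPUs spinning concurrently
    waste fewer cycles than a single sleep/wakeup round trip would.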
The wake-up side could also walk ahead on the queue and bring sleeping
waiters back up to spinning, so that they are all ready to go when the
lock flips green for them.
But in simpler cases there should be a plain default timeout, governed
by the context-switch overhead or defined as a derived number of cache
misses, as you suggested.
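For that default, something as simple as the following could serve as a
starting point (the names and the value here are made up for
illustration; in practice the budget would be a tunable and/or
calibrated at boot from a measured context-switch or cache-miss cost
rather than hardcoded):

/*
 * Default spin budget for a contended waiter before it gives up and
 * sleeps, expressed as a number of polling loops.  Meant to be in the
 * same ballpark as one context switch worth of work.
 */
#define DEFAULT_SPIN_LOOPS	1000

static unsigned long rtmutex_spin_loops __read_mostly = DEFAULT_SPIN_LOOPS;

The adaptive_spin() sketch above would then loop on rtmutex_spin_loops
instead of a compile-time constant, and the value could be exported as
a sysctl to support the profiling-driven adaptation mentioned earlier.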
Sven
> Thanx, Paul