Message-ID: <1270818802.6627.13.camel@hermosa.site>
Date: Fri, 09 Apr 2010 07:13:22 -0600
From: "Peter W. Morreale" <pmorreale@...ell.com>
To: Darren Hart <dvhltc@...ibm.com>
Cc: Thomas Gleixner <tglx@...utronix.de>, linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...e.hu>,
Eric Dumazet <eric.dumazet@...il.com>,
Rik van Riel <riel@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Gregory Haskins <ghaskins@...ell.com>,
Sven-Thorsten Dietrich <sdietrich@...ell.com>,
Chris Mason <chris.mason@...cle.com>,
John Cooper <john.cooper@...rd-harmonic.com>,
Chris Wright <chrisw@...s-sol.org>,
Avi Kivity <avi@...hat.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH 4/6] futex: Add FUTEX_LOCK with optional adaptive spinning
On Thu, 2010-04-08 at 22:41 -0700, Darren Hart wrote:
> Peter W. Morreale wrote:
> > On Wed, 2010-04-07 at 20:25 -0700, Darren Hart wrote:
> >> Thomas Gleixner wrote:
> >>> On Wed, 7 Apr 2010, Darren Hart wrote:
> >>>> Thomas Gleixner wrote:
>
> >>>>> if ((curval & FUTEX_TID_MASK) != ownertid) {
> >>>>>         ownertid = curval & FUTEX_TID_MASK;
> >>>>>         owner = update_owner(ownertid);
> >>>>> }
> >>>> Hrm... at this point the owner has changed... so we should break and go
> >>>> to sleep, not update the owner and start spinning again. The
> >>>> futex_spin_on_owner() will detect this and abort, so I'm not seeing the
> >>>> purpose of the above if() block.
> >>> Why ? If the owner has changed and the new owner is running on another
> >>> cpu then why not spin further ?
> >> That's an interesting question, and I'm not sure what the right answer
> >> is. The current approach of the adaptive spinning in the kernel is to
> >> spin until the owner changes or deschedules, then stop and block. The
> >> idea is that if you didn't get the lock before the owner changed, you
> >> aren't going to get it in a very short period of time (you have at least
> >> an entire critical section to wait through plus whatever time you've
> >> already spent spinning). However, blocking just so another task can spin
> >> doesn't really make sense either, and makes the lock less fair than it
> >> could otherwise be.
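(Roughly, that spin policy looks like the sketch below.  owner_running()
and trylock_futex() are made-up stand-ins for the owner->on_cpu check and
the cmpxchg acquisition the real code would do, not the actual helpers
from this patch set.)

    /*
     * Sketch only: spin while the task that owned the futex when we
     * started spinning is still running on a cpu.  An owner change or
     * a deschedule means we stop spinning and block.
     */
    static int adaptive_spin(u32 __user *uaddr, u32 ownertid,
                             struct task_struct *owner)
    {
            u32 curval;

            while (owner_running(owner)) {
                    if (get_user(curval, uaddr))
                            return -EFAULT;

                    /* Lock released: try to take it. */
                    if (!(curval & FUTEX_TID_MASK) && trylock_futex(uaddr))
                            return 1;

                    /* Owner changed: give up spinning and block. */
                    if ((curval & FUTEX_TID_MASK) != ownertid)
                            break;

                    cpu_relax();
            }
            return 0;       /* caller queues itself and sleeps */
    }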
> >
> > Not only less fair, but potentially could cause starvation, no? Perhaps
> > you could see this if you changed your model to allow all contended
> > tasks to spin instead of just one.
>
> Agreed, and V5 (just posted) does just that.
>
> >
> > If a spinning task blocks because of an owner change, and a new task
> > enters and starts spinning directly after the owner change, at what
> > point does the original task get woken up?
>
> At the time of unlock the owner will have to call FUTEX_WAKE. This task
> will wake and attempt to acquire the lock - it will lose races with
> already running contenders. Lock stealing, adaptive spinning, etc. are
> all going to lead to less fair locks in exchange for throughput.
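(For reference, the unlock/wake side in userspace follows the usual futex
pattern, roughly as sketched below.  This uses only the existing FUTEX_WAKE
op; FUTEX_WAITERS_BIT is an illustrative waiters flag in the lock word, not
necessarily the layout the FUTEX_LOCK patches use.)

    #include <stdint.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    #define FUTEX_WAITERS_BIT 0x80000000u

    /*
     * Drop the lock word to 0; if any contender flagged itself as
     * waiting, wake exactly one.  The woken task then has to win the
     * acquisition race against contenders that never slept (spinners,
     * new arrivals), which is the unfairness being discussed here.
     */
    static void futex_unlock(uint32_t *lock)
    {
            uint32_t old = __atomic_exchange_n(lock, 0, __ATOMIC_RELEASE);

            if (old & FUTEX_WAITERS_BIT)
                    syscall(SYS_futex, lock, FUTEX_WAKE, 1, NULL, NULL, 0);
    }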
nod.  My only point was that if only one task can spin, and it sleeps when
the owner is not on a CPU, then you virtually guarantee worst-case
performance under certain conditions (i.e. multiple threads coming in at
various times).
If all contenders spin, and they sleep only when the owner blocks, then
the 'unfairness' is potentially distributed more evenly, via the hardware
(and stealing) while spinning and via the sleep path when the owner is
blocked.
Best,
-PWM
>
> > It's likely that the new
> > spinner will get the resource next, no? Rinse/repeat with another task
> > and the original spinner is starved.
> >
> > (Or am I missing something? My understanding was that unfairness was
> > built-in to this algo... If so, then the above is a possibility, right?)
>
> Yes it is. These locks are typically used in situations where it is more
> important that some work gets completed than _which_ work gets completed.
>
> Thanks,
>
> --
> Darren Hart
> IBM Linux Technology Center
> Real-Time Linux Team
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/