Message-ID: <20090107232354.GM6900@linux.vnet.ibm.com>
Date: Wed, 7 Jan 2009 15:23:54 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Gregory Haskins <ghaskins@...ell.com>
Cc: Andi Kleen <andi@...stfloor.org>, Matthew Wilcox <matthew@....cx>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Steven Rostedt <rostedt@...dmis.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...e.hu>,
Chris Mason <chris.mason@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-btrfs <linux-btrfs@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Nick Piggin <npiggin@...e.de>,
Peter Morreale <pmorreale@...ell.com>,
Sven Dietrich <SDietrich@...ell.com>
Subject: Re: [PATCH -v5][RFC]: mutex: implement adaptive spinning
On Wed, Jan 07, 2009 at 05:28:12PM -0500, Gregory Haskins wrote:
> Andi Kleen wrote:
> >> I appreciate this is sample code, but using __get_user() on
> >> non-userspace pointers messes up architectures which have separate
> >> user/kernel spaces (eg the old 4G/4G split for x86-32). Do we have an
> >> appropriate function for kernel space pointers?
> >>
> >
> > probe_kernel_address().
> >
> > But it's slow.
> >
> > -Andi
> >
> >
>
> Can I ask a simple question in light of all this discussion?
>
> "Is get_task_struct() really that bad?"
>
> I have to admit you guys have somewhat lost me on some of the more
> recent discussion, so it's probably just a case of being naive on my
> part... but this whole thing seems to have become way more complex than
> it needs to be. Let's boil this down to the core requirements: we need
> to know if the owner task is still running somewhere in the system as a
> predicate to whether we should sleep or spin, period. Now the question
> is how to do that.
>
> The get/put task ref is the obvious answer to me (as an aside, we
> looked at task->oncpu rather than the rq->curr stuff, which I believe
> was better), and I am inclined to think that is a perfectly reasonable
> way to do this:
> After all, even if acquiring a reference is somewhat expensive (which I
> don't really think it is on a modern processor), we are already in the
> slowpath as it is and would sleep otherwise.
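> 
> To make that concrete, a minimal sketch of the get/put approach (the
> lock->owner field and the oncpu check are assumed from the adaptive
> patch under discussion; details illustrative, not the actual code):
> 
> 	struct task_struct *owner;
> 
> 	spin_lock(&lock->wait_lock);
> 	owner = lock->owner;
> 	if (owner)
> 		get_task_struct(owner);	/* atomic inc of task->usage */
> 	spin_unlock(&lock->wait_lock);
> 
> 	if (owner) {
> 		/* spin only while the owner is actually running */
> 		while (lock->owner == owner && owner->oncpu) {
> 			if (need_resched())
> 				break;	/* our slice is up: go sleep */
> 			cpu_relax();
> 		}
> 		put_task_struct(owner);	/* atomic dec, free on last ref */
> 	}
> 
> The only extra cost relative to plain spinning is that pair of atomic
> ops on task->usage, which is the expense being debated above.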
>
> Steve proposed a really cool trick with RCU, since we know that the
> task struct cannot be released while the task holds the lock, and the
> pointer cannot go away without waiting for a grace period. It turned
> out to introduce latency side-effects, so it ultimately couldn't be
> used (and this was in no way a knock against RCU or you, Paul... it
> just wasn't the right tool for the job, it turned out).
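> 
> In rough outline, the idea was something like this (a sketch from
> memory, not Steve's actual patch; field names as above):
> 
> 	struct task_struct *owner;
> 
> 	rcu_read_lock();
> 	owner = rcu_dereference(lock->owner);
> 	/* the task struct cannot be freed until we rcu_read_unlock() */
> 	while (owner && lock->owner == owner && owner->oncpu) {
> 		if (need_resched())
> 			break;
> 		cpu_relax();
> 	}
> 	rcu_read_unlock();
> 
> The whole spin sits inside the read-side critical section (with classic
> RCU that means preemption stays off for the duration), which is
> presumably where the latency side-effects crept in.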

Too late...
I already figured out a way to speed up preemptable RCU's read-side
primitives (to about as fast as CONFIG_PREEMPT RCU's read-side primitives)
and also its grace-period latency. And it is making it quite clear that
it won't let go of my brain until I implement it... ;-)

							Thanx, Paul
> Ok, so onto other ideas. What if we simply look at something like a
> scheduling sequence id? If we know (within the wait-lock) that task X
> is the owner and it's on CPU A, then we can simply monitor whether A
> context switches. Having something like rq[A]->seq++ every time we
> schedule() would suffice, and you wouldn't need to hold a task
> reference... just note A = X->cpu from inside the wait-lock. I guess
> the downside there is putting that extra increment in the schedule()
> hotpath even if no one cares, but I would surmise that it should be
> reasonably cheap when no one is pulling the cacheline around other
> than A (i.e. no observers).
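> 
> A sketch of what I mean (the rq seq field, and using a cpu_rq()-style
> accessor from mutex code, are hypothetical here; the scheduler would
> have to export something like them):
> 
> 	/* in schedule(), once per context switch: */
> 	rq->seq++;
> 
> 	/* waiter side; cpu and seq are noted under the wait-lock: */
> 	int cpu = task_cpu(owner);
> 	unsigned long seq = cpu_rq(cpu)->seq;
> 
> 	while (lock->owner == owner) {
> 		if (cpu_rq(cpu)->seq != seq)
> 			break;	/* A switched; owner may be off-CPU now */
> 		if (need_resched())
> 			break;
> 		cpu_relax();
> 	}
> 
> Note that the loop never dereferences the owner pointer, only compares
> it, so no task reference is needed.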
>
> But anyway, my impression from observing the direction this discussion
> has taken is that it is being way, way over-optimized before we even
> know if a) the adaptive stuff helps, and b) the get/put ref hurts.
> Food for thought.
>
> -Greg
>
>
>