Message-ID: <alpine.LFD.2.00.0901060948220.3057@localhost.localdomain>
Date: Tue, 6 Jan 2009 10:02:56 -0800 (PST)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Peter Zijlstra <peterz@...radead.org>
cc: Matthew Wilcox <matthew@....cx>, Andi Kleen <andi@...stfloor.org>,
Chris Mason <chris.mason@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-btrfs <linux-btrfs@...r.kernel.org>,
Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
Steven Rostedt <rostedt@...dmis.org>,
Gregory Haskins <ghaskins@...ell.com>,
Nick Piggin <npiggin@...e.de>
Subject: Re: [PATCH][RFC]: mutex: adaptive spin
Ok, last comment, I promise.
On Tue, 6 Jan 2009, Peter Zijlstra wrote:
> @@ -175,11 +199,19 @@ __mutex_lock_common(struct mutex *lock,
> debug_mutex_free_waiter(&waiter);
> return -EINTR;
> }
> - __set_task_state(task, state);
>
> - /* didnt get the lock, go to sleep: */
> + owner = lock->owner;
> + get_task_struct(owner);
> spin_unlock_mutex(&lock->wait_lock, flags);
> - schedule();
> +
> + if (adaptive_wait(&waiter, owner, state)) {
> + put_task_struct(owner);
> + __set_task_state(task, state);
> + /* didnt get the lock, go to sleep: */
> + schedule();
> + } else
> + put_task_struct(owner);
> +
> spin_lock_mutex(&lock->wait_lock, flags);
So I really dislike the whole get_task_struct/put_task_struct thing. It
seems very annoying. And as far as I can tell, it's there _only_ to
protect "task->rq" and nothing else (ie to make sure that the task
doesn't exit and get freed and the pointer now points to la-la-land).
Wouldn't it be much nicer to just cache the rq pointer (take it while
still holding the spinlock), and then pass it in to adaptive_wait()?
Then, adaptive_wait() can just do
	if (lock->owner != owner)
		return 0;
	if (rq->task != owner)
		return 1;
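(Purely as illustration, not anybody's actual code: a rough sketch of what
such an adaptive_wait() could look like once it gets handed the cached rq.
The signature, the name of the rq's current-task field, and the bare
lock->owner read are all my guesses here, nothing from the patch:)

	static int adaptive_wait(struct mutex *lock, struct task_struct *owner,
				 struct rq *rq)
	{
		for (;;) {
			/* Owner changed: the lock was released, go retry it */
			if (lock->owner != owner)
				return 0;
			/* Owner no longer running on that CPU: go to sleep */
			if (rq->curr != owner)
				return 1;
			cpu_relax();
		}
	}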
Sure - the owner may have rescheduled to another CPU, but if it did that,
then we really might as well sleep. So we really don't need to dereference
that (possibly stale) owner task_struct at all - because we don't care.
All we care about is whether the owner is still busy on that other CPU
that it was on.
Hmm? So it looks to me that we don't really need that annoying "try to
protect the task pointer" crud. We can do the sufficient (and limited)
sanity checking without the task even existing, as long as we originally
load the ->rq pointer at a point where it was stable (ie inside the
spinlock, when we know that the task must be still alive since it owns the
lock).
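(And again just a sketch of what the caller side might then look like,
assuming something along the lines of the scheduler's task_rq() helper to
pick up the owner's runqueue while we still hold the wait_lock - the exact
names are made up for illustration:)

	owner = lock->owner;
	rq = task_rq(owner);	/* stable: owner is alive, it still holds the lock */
	spin_unlock_mutex(&lock->wait_lock, flags);

	if (adaptive_wait(lock, owner, rq)) {
		__set_task_state(task, state);
		/* didn't get the lock, go to sleep: */
		schedule();
	}

	spin_lock_mutex(&lock->wait_lock, flags);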
Linus