Message-ID: <alpine.LFD.2.00.0901151208310.6528@localhost.localdomain>
Date: Thu, 15 Jan 2009 12:13:17 -0800 (PST)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Chris Mason <chris.mason@...cle.com>
cc: Ingo Molnar <mingo@...e.hu>, Matthew Wilcox <matthew@....cx>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Gregory Haskins <ghaskins@...ell.com>,
Andi Kleen <andi@...stfloor.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-btrfs <linux-btrfs@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Nick Piggin <npiggin@...e.de>,
Peter Morreale <pmorreale@...ell.com>,
Sven Dietrich <SDietrich@...ell.com>,
Dmitry Adamushko <dmitry.adamushko@...il.com>,
Johannes Weiner <hannes@...xchg.org>
Subject: Re: [GIT PULL] adaptive spinning mutexes
On Thu, 15 Jan 2009, Chris Mason wrote:
> On Thu, 2009-01-15 at 10:16 -0800, Linus Torvalds wrote:
> >
> > Umm. Except if you wrote the code nicely and used spinlocks, you wouldn't
> > hold the lock over all those unnecessary and complex operations.
>
> While this is true, there are examples of places we should expect
> speedups for this today.
Sure. There are cases where we do have to use sleeping things, because the
code is generic and really can't control what lower levels do, and those
lower levels have to be able to sleep.
So:
> Concurrent file creation/deletion in a single dir will often find things
> hot in cache and not have to block anywhere (mail spools).
The inode->i_mutex thing really does need to use a mutex, and spinning
will help. Of course, it should only help when you really have lots of
concurrent create/delete/readdir in the same directory, and that hopefully
is a very rare load in real life, but hey, it's a valid one.
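The spin-then-sleep idea can be sketched in userspace roughly like this (a hypothetical pthread-based approximation, not the actual kernel patch: the in-kernel version spins only while the lock owner is currently running on another CPU, which userspace can't cheaply check, so this sketch just bounds the optimistic spin with a retry count):

```c
/* Hypothetical userspace sketch of an "adaptive" sleeping lock:
 * busy-wait briefly in case the holder releases soon, then fall
 * back to a real blocking lock. */
#include <pthread.h>

#define SPIN_TRIES 1000

struct adaptive_mutex {
	pthread_mutex_t m;
};

void adaptive_mutex_init(struct adaptive_mutex *am)
{
	pthread_mutex_init(&am->m, NULL);
}

void adaptive_lock(struct adaptive_mutex *am)
{
	/* Optimistic phase: spin, hoping the holder is running
	 * on another CPU and about to unlock. */
	for (int i = 0; i < SPIN_TRIES; i++)
		if (pthread_mutex_trylock(&am->m) == 0)
			return;
	/* Pessimistic phase: block (sleep) like a normal mutex. */
	pthread_mutex_lock(&am->m);
}

void adaptive_unlock(struct adaptive_mutex *am)
{
	pthread_mutex_unlock(&am->m);
}

/* Small demo: two threads hammer a shared counter under the lock. */
static long counter;
static struct adaptive_mutex demo_lock;

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++) {
		adaptive_lock(&demo_lock);
		counter++;
		adaptive_unlock(&demo_lock);
	}
	return NULL;
}

long run_demo(void)
{
	pthread_t t1, t2;

	counter = 0;
	adaptive_mutex_init(&demo_lock);
	pthread_create(&t1, NULL, worker, NULL);
	pthread_create(&t2, NULL, worker, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return counter;		/* 200000 if the lock excludes correctly */
}
```

When both holders stay runnable (the hot-cache create/delete case above), most acquisitions succeed in the trylock loop and never hit the scheduler.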
> Concurrent O_DIRECT aio writes to the same file, where i_mutex is
> dropped early on.
Won't the actual IO costs generally dominate in everything but trivial
benchmarks?
> pipes should see a huge improvement.
Hmm. Pipes may be interesting, but on the other hand, the cases that would
see huge improvements would tend to be the cases where the biggest
performance gain is from running both sides on the same CPU. The only case
where a pipe gets really contended is when both producer and consumer
basically do nothing with the data, so the biggest cost is the copy in
kernel space (read: pure benchmarking, no real load), and then you often
get better performance by scheduling on a single CPU due to cache effects
and no lock bouncing.
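The same-CPU effect is easy to poke at with a toy producer/consumer (a hypothetical sketch for comparison runs, not a serious benchmark: pthread_setaffinity_np is Linux-specific, the pin is best-effort, and the interesting number is the wall-clock difference between the pinned and unpinned runs):

```c
/* Toy pipe producer/consumer that can pin both threads to CPU 0,
 * for comparing single-CPU scheduling against letting the threads
 * spread out.  Hypothetical sketch; error handling kept minimal. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <string.h>
#include <unistd.h>

#define CHUNK  4096
#define CHUNKS 1000

static int fds[2];

static void pin_to_cpu0(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);
	/* Best effort: ignore failure on restricted systems. */
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *producer(void *pinned)
{
	char buf[CHUNK];

	memset(buf, 'x', sizeof(buf));
	if (pinned)
		pin_to_cpu0();
	for (int i = 0; i < CHUNKS; i++)
		write(fds[1], buf, sizeof(buf));
	close(fds[1]);
	return NULL;
}

static void *consumer(void *pinned)
{
	char buf[CHUNK];
	long total = 0;
	ssize_t n;

	if (pinned)
		pin_to_cpu0();
	while ((n = read(fds[0], buf, sizeof(buf))) > 0)
		total += n;
	close(fds[0]);
	return (void *)total;
}

/* Returns the byte count moved through the pipe. */
long run_pipe(int pinned)
{
	pthread_t p, c;
	void *total;

	pipe(fds);
	pthread_create(&p, NULL, producer, pinned ? (void *)1 : NULL);
	pthread_create(&c, NULL, consumer, pinned ? (void *)1 : NULL);
	pthread_join(p, NULL);
	pthread_join(c, &total);
	return (long)total;
}
```

Since neither side does any work with the data, the copy in the kernel dominates, which is exactly the situation where the pinned run tends to win.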
Linus