Message-ID: <20120208164920.GB19392@google.com>
Date: Wed, 8 Feb 2012 08:49:20 -0800
From: Tejun Heo <tj@...nel.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Shaohua Li <shaohua.li@...el.com>, Jens Axboe <axboe@...nel.dk>,
Vivek Goyal <vgoyal@...hat.com>,
lkml <linux-kernel@...r.kernel.org>,
Knut Petersen <Knut_Petersen@...nline.de>, mroos@...ux.ee
Subject: Re: [PATCH] block: strip out locking optimization in
put_io_context()
Hello, Linus.
On Wed, Feb 08, 2012 at 08:34:53AM -0800, Linus Torvalds wrote:
> On Wed, Feb 8, 2012 at 8:29 AM, Tejun Heo <tj@...nel.org> wrote:
> >
> > Can you please try the following one? Thanks a lot!
>
> If you can use it as a rwlock, why can't you do it with RCU?
The original locking scheme used RCU, which turned out to be fragile and
broken in corner cases.  The locking restructuring was aimed at making
things simpler.  While the double locking isn't trivial, it's much
easier to grasp and get right than RCU.  We might have to revive RCU if
the regression can't be tackled otherwise, and it's probably possible to
do it more simply this time.  Let's see.
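
For context, the double-locking dance in question looks roughly like
this (a minimal userspace sketch with pthreads; the struct and function
names are made up for illustration, this is not the actual kernel code):

#include <pthread.h>
#include <sched.h>

/* invented stand-ins for the real structures */
struct io_context   { pthread_mutex_t lock; };
struct request_queue { pthread_mutex_t lock; };

/*
 * The nesting order elsewhere is queue->lock, then ioc->lock.  Here
 * we need them in the reverse order, so the inner lock must be a
 * trylock: on contention, back out completely and start over.
 */
static void double_lock_dance(struct io_context *ioc,
			      struct request_queue *q)
{
	for (;;) {
		pthread_mutex_lock(&ioc->lock);
		if (pthread_mutex_trylock(&q->lock) == 0)
			break;		/* got both, in reverse order */
		pthread_mutex_unlock(&ioc->lock);  /* contended: back out */
		sched_yield();		/* let the other side make progress */
	}
	/* ... exit work under both locks ... */
	pthread_mutex_unlock(&q->lock);
	pthread_mutex_unlock(&ioc->lock);
}

The retry loop is what I mean by "unlock/lock dancing" below: every
trylock failure throws away the outer lock and pays for it again.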
> Usually rwlocks are a bad idea. They tend to be more expensive than
> spinlocks, and the extra parallelism is almost never noticeable
> (except as "more cacheline bounces") for something that is appropriate
> for a non-sleeping lock.
>
> There's a *very* few situations where rwlock is the right thing, but
> it really almost always is a horribly bad idea.
I'm still a bit lost as to where the regression is coming from, and I
*suspect* that queue_lock contention is making the reverse locking
behave much worse than expected, so I mostly wanted to take that out
and see what happens.  rwlock may increase locking overhead per
acquisition, but it avoids the unlock/lock dance.  I'll try to
reproduce the regression in a few days and do a better analysis.
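
To make the trade-off concrete, the rwlock idea is something like the
following (again an invented userspace sketch, not the patch itself):
with a single rwlock guarding the linkage there is only one lock and
hence no ordering problem, so the retry loop above disappears; the
cost is a heavier lock operation on every acquisition.

#include <pthread.h>

/* invented name: one rwlock guarding the linkage instead of two locks */
static pthread_rwlock_t icq_rwlock = PTHREAD_RWLOCK_INITIALIZER;

/* hot path: lookups can proceed concurrently under the read side */
static void lookup_path(void)
{
	pthread_rwlock_rdlock(&icq_rwlock);
	/* ... read-mostly traversal, no unlock/relock loop ... */
	pthread_rwlock_unlock(&icq_rwlock);
}

/* teardown: exclusive writer, so no reverse-order trylock is needed */
static void exit_path(void)
{
	pthread_rwlock_wrlock(&icq_rwlock);
	/* ... unlink and free under exclusive access ... */
	pthread_rwlock_unlock(&icq_rwlock);
}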
Thanks.
--
tejun