Message-ID: <20120211021724.GO19392@google.com>
Date: Fri, 10 Feb 2012 18:17:24 -0800
From: Tejun Heo <tj@...nel.org>
To: Shaohua Li <shli@...nel.org>
Cc: Jens Axboe <axboe@...nel.dk>, Vivek Goyal <vgoyal@...hat.com>,
lkml <linux-kernel@...r.kernel.org>,
Knut Petersen <Knut_Petersen@...nline.de>, mroos@...ux.ee,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH] block: strip out locking optimization in
put_io_context()

Hello,

On Fri, Feb 10, 2012 at 04:48:49PM +0800, Shaohua Li wrote:
> >> Can you please test the following one? It's probably the simplest
> >> version w/o RCU and wq deferring. RCUfying isn't too bad but I'm
> >> still a bit hesitant because RCU coverage needs to be extended to
> >> request_queue via conditional synchronize_rcu() in queue exit path
> >> (can't enforce delayed RCU free on request_queues and unconditional
> >> synchronize_rcu() may cause excessive delay during boot for certain
> >> configurations). It now can be done in the block core layer proper so
> >> it shouldn't be as bad tho. If this too flops, I'll get to that.
> > doesn't work.
> I added a trace in the schedule_work code path of put_io_context, which
> runs very rarely, so it's not lock contention for sure.
> Sounds like the only difference between the good/bad cases is the good
> case runs with rcu_read_lock/rcu_read_unlock. I also checked the slab
> info; the cfq-related slabs don't use much memory, so it's unlikely
> that RCU latency is pinning too much memory.

Yeah, that makes much more sense. This just isn't a hot enough path for
micro locking changes of this sort to matter. I think the problem is
that, after the change, the cfqq's aren't being expired immediately on
task exit. i.e., while moving the cic destruction to the release path, I
accidentally removed the exit notification to cfq. I'll come up with a
fix.
Thank you!
--
tejun