Message-ID: <4F36526B.7070809@kernel.dk>
Date: Sat, 11 Feb 2012 12:35:07 +0100
From: Jens Axboe <axboe@...nel.dk>
To: Tejun Heo <tj@...nel.org>
CC: Shaohua Li <shli@...nel.org>, Vivek Goyal <vgoyal@...hat.com>,
lkml <linux-kernel@...r.kernel.org>,
Knut Petersen <Knut_Petersen@...nline.de>, mroos@...ux.ee,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH] block: strip out locking optimization in put_io_context()
On 2012-02-11 03:17, Tejun Heo wrote:
> Hello,
>
> On Fri, Feb 10, 2012 at 04:48:49PM +0800, Shaohua Li wrote:
>>>> Can you please test the following one? It's probably the simplest
>>>> version w/o RCU and wq deferring. RCUfying isn't too bad, but I'm
>>>> still a bit hesitant because RCU coverage needs to be extended to
>>>> request_queue via a conditional synchronize_rcu() in the queue exit
>>>> path (we can't enforce delayed RCU free on request_queues, and an
>>>> unconditional synchronize_rcu() may cause excessive delay during
>>>> boot for certain configurations). It can now be done in the block
>>>> core layer proper, so it shouldn't be as bad though. If this too
>>>> flops, I'll get to that.
>>> doesn't work.
>> I added a trace in the schedule_work code path of put_io_context,
>> which runs very rarely. So it's not lock contention for sure.
>> It sounds like the only difference between the good/bad cases is that
>> the good case runs with rcu_read_lock/rcu_read_unlock. I also checked
>> the slab info; the cfq-related slabs don't use much memory, so it's
>> unlikely that RCU latency is causing memory to pile up.
>
> Yeah, that makes much more sense. It just isn't a hot enough path for
> this sort of micro locking change to matter. I think the problem is
> that, after the change, the cfqqs aren't being expired immediately on
> task exit. I.e. while moving the cic destruction to the release path,
> I accidentally removed the exit notification to cfq. I'll come up with
> a fix.
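On the RCUfying worry quoted above: a conditional synchronize_rcu() in
the queue exit path could look roughly like the below. Untested sketch,
and QUEUE_FLAG_ICQ_RCU_FREED is a made-up flag name; the idea is just
that only queues which actually had icq's freed under RCU pay for the
grace period, so boot-time probe/release cycles stay cheap.

static void blk_sync_icq_rcu(struct request_queue *q)
{
	/*
	 * Hypothetical flag, set by the icq free path whenever an icq
	 * attached to this queue was handed to RCU. Queues that never
	 * saw an RCU free skip the grace period entirely.
	 */
	if (test_bit(QUEUE_FLAG_ICQ_RCU_FREED, &q->queue_flags))
		synchronize_rcu();
}

That would sit in blk_release_queue() (or wherever the icq's parent
structures get torn down), before the elevator data goes away.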
Was just thinking about that last night; the missing slice expiry on
task exit makes a LOT more sense than the changed locking.
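For the fix itself, I'd expect something along these lines: on task
exit, walk the ioc's icq list and notify the elevator, so cfq can
expire the active queue's slice right away instead of waiting for the
ioc release path. Rough sketch only, with hook names approximate and
the lock ordering glossed over (queue_lock nests outside ioc->lock, so
the real thing would need a trylock-and-retry dance):

static void ioc_exit_icqs(struct io_context *ioc)
{
	struct io_cq *icq;
	struct hlist_node *n;

	spin_lock_irq(&ioc->lock);
	hlist_for_each_entry(icq, n, &ioc->icq_list, ioc_node) {
		struct elevator_type *et = icq->q->elevator->type;

		/* for cfq, the exit hook would expire the active slice */
		if (et->ops.elevator_exit_icq_fn)
			et->ops.elevator_exit_icq_fn(icq);
	}
	spin_unlock_irq(&ioc->lock);
}

Called from exit_io_context() before the final put, so the notification
happens at task exit time rather than at last-reference time.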
I'm pushing what I have off to Linus today, since I'll be gone skiing
next week. I will check email regularly and be able to apply patches
and so forth; just a heads up on availability.
--
Jens Axboe