Message-ID: <20110117142623.GC5624@redhat.com>
Date: Mon, 17 Jan 2011 09:26:23 -0500
From: Vivek Goyal <vgoyal@...hat.com>
To: Shaohua Li <shaohua.li@...el.com>
Cc: lkml <linux-kernel@...r.kernel.org>,
Jens Axboe <jaxboe@...ionio.com>,
"jmoyer@...hat.com" <jmoyer@...hat.com>,
Corrado Zoccolo <czoccolo@...il.com>,
Gui Jianfeng <guijianfeng@...fujitsu.com>
Subject: Re: [PATCH 1/2]block cfq: make queue preempt work for queues from
different workload
On Tue, Jan 11, 2011 at 04:51:56PM +0800, Shaohua Li wrote:
> I got this:
> fio-874 [007] 2157.724514: 8,32 m N cfq874 preempt
> fio-874 [007] 2157.724519: 8,32 m N cfq830 slice expired t=1
> fio-874 [007] 2157.724520: 8,32 m N cfq830 sl_used=1 disp=0 charge=1 iops=0 sect=0
> fio-874 [007] 2157.724521: 8,32 m N cfq830 set_active wl_prio:0 wl_type:0
> fio-874 [007] 2157.724522: 8,32 m N cfq830 Not idling. st->count:1
> cfq830 is an async queue, and it is preempted by a sync queue, cfq874. But since
> we have the cfqg->saved_workload_slice mechanism, the preempt ends up being a nop.
> It looks like our preemption is currently completely broken when the two queues
> are not from the same workload type.
> The patch below fixes it. This may lead to async queue starvation, but that is
> what our old code did before the cgroup support was added.
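For anyone following along, the mechanism in question is the workload restore
that runs when a group is (re)selected. From memory of cfq-iosched.c of this
vintage, so treat it as illustrative rather than a verbatim quote:

	static void cfq_choose_cfqg(struct cfq_data *cfqd)
	{
		struct cfq_group *cfqg = cfq_get_next_cfqg(cfqd);

		cfqd->serving_group = cfqg;

		/* Restore the workload type data */
		if (cfqg->saved_workload_slice) {
			cfqd->workload_expires = jiffies + cfqg->saved_workload_slice;
			cfqd->serving_type = cfqg->saved_workload;
			cfqd->serving_prio = cfqg->saved_serving_prio;
		} else
			cfqd->workload_expires = jiffies - 1;

		choose_service_tree(cfqd, cfqg);
	}

Because the saved workload type and prio are restored here, the preempting sync
queue sits on a different service tree and never becomes active, which is why
the preemption in the trace above is a nop.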
I am not sure how good an idea this is. Effectively, if there is a
preemption across workloads, the workload being preempted can lose its
share completely and be starved. So it is not just about async WRITES:
the sync-noidle workload can starve the sync-idle workload too, as a
metadata request will preempt any regular sync-idle queue, and then the
sync-idle workload can be starved. So this is not exactly going back to
the old CFQ behavior; it goes beyond that.
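To make the sync-noidle case concrete, the preemption decision contains a check
roughly like the following (again from memory, not a verbatim quote of
cfq_should_preempt()):

	/*
	 * In cfq_should_preempt(), once both queues are known to be sync:
	 * let a metadata request jump ahead of a queue doing regular IO.
	 */
	if ((rq->cmd_flags & REQ_META) && !cfqq->meta_pending)
		return true;

So any request marked REQ_META from the sync-noidle tree can preempt the
currently active sync-idle queue, and with your change the group's saved slice
for the sync-idle workload is thrown away at that point.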
On one hand you are going to great lengths to ensure that a preempted
cfqq does not lose its share (patch 2 in the series), and on the other
hand you don't mind all the queues in a workload losing their share due
to preemption.
I think we should at least limit this to the async workload, or solve the
problem by giving an even smaller slice length to the async workload. A rough
sketch of the first option is below.
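Something like this is what I have in mind (untested, hypothetical, and layered
on top of your patch 1, so take it only as an illustration): discard the group's
saved workload state only when the queue being preempted is async, so that a
sync-noidle preemption cannot wipe out the sync-idle workload's remaining share.

	/* in cfq_preempt_queue(), after the cfq_slice_expired(cfqd, 1) call;
	 * old_cfqq is cfqd->active_queue, i.e. the queue being preempted */
	if (cfqq_type(old_cfqq) == ASYNC_WORKLOAD) {
		/* only the async workload gives up its saved share */
		old_cfqq->cfqg->saved_workload_slice = 0;
	}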
Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/