Message-ID: <4AA6AF58.3050501@redhat.com>
Date: Tue, 08 Sep 2009 15:24:08 -0400
From: Rik van Riel <riel@...hat.com>
To: Ryo Tsuruta <ryov@...inux.co.jp>
CC: vgoyal@...hat.com, linux-kernel@...r.kernel.org,
dm-devel@...hat.com, jens.axboe@...cle.com, agk@...hat.com,
akpm@...ux-foundation.org, nauman@...gle.com,
guijianfeng@...fujitsu.com, jmoyer@...hat.com,
balbir@...ux.vnet.ibm.com
Subject: Re: Regarding dm-ioband tests
Ryo Tsuruta wrote:
> Rik van Riel <riel@...hat.com> wrote:
>> Are you saying that dm-ioband is purposely unfair,
>> until a certain load level is reached?
>
> Not unfair; dm-ioband's weight policy is intentionally designed to
> use bandwidth efficiently: it tries to give the spare bandwidth of
> inactive groups to the active groups.
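
To make that mechanism concrete, the redistribution described above
might look roughly like the sketch below. The structure, field names,
and token accounting are illustrative assumptions, not the actual
dm-ioband code.

/*
 * Illustrative-only sketch: the bandwidth share that would have gone
 * to inactive groups is handed to the active groups in proportion to
 * their weights.  Hypothetical names and epoch model.
 */
#include <stddef.h>

struct io_group {
	unsigned int weight;	/* configured weight */
	int active;		/* has pending I/O in this epoch? */
	unsigned int tokens;	/* share handed out for this epoch */
};

static void distribute_tokens(struct io_group *grp, size_t ngroups,
			      unsigned int total_tokens)
{
	unsigned int active_weight = 0;
	size_t i;

	/* Only groups that currently have I/O queued count toward the split. */
	for (i = 0; i < ngroups; i++)
		if (grp[i].active)
			active_weight += grp[i].weight;

	for (i = 0; i < ngroups; i++) {
		if (grp[i].active && active_weight)
			grp[i].tokens = total_tokens * grp[i].weight / active_weight;
		else
			grp[i].tokens = 0;	/* spare share flows to the active groups */
	}
}
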
This sounds good, except that the lack of anticipation means that a
group with just one task doing reads will be considered "inactive"
in-between reads.

This means writes can always get in between two reads, sometimes
multiple writes at a time, really disadvantaging a group that is
doing nothing but disk reads.

This is a problem, because reads are generally more time sensitive
than writes.
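
The kind of anticipation needed here would be something roughly like
CFQ-style idling, sketched below. The field names, the idle window,
and the timing model are illustrative assumptions, not actual
dm-ioband or CFQ code.

/*
 * Hypothetical sketch: after a synchronous read completes, keep the
 * group "active" for a short idle window, so that back-to-back
 * dependent reads are not each treated as a fresh request from an
 * inactive group.
 */
#include <stdbool.h>

#define READ_IDLE_WINDOW_MS	8	/* assumed idle window, tunable */

struct io_group {
	unsigned long last_read_completion;	/* in ms, e.g. derived from jiffies */
	bool has_queued_io;
};

static bool group_is_active(const struct io_group *g, unsigned long now_ms)
{
	if (g->has_queued_io)
		return true;
	/*
	 * Anticipation: a lone reader that just finished a read is about
	 * to issue the next one, so keep treating the group as active
	 * instead of letting queued writes from other groups slip in
	 * between its reads.
	 */
	return now_ms - g->last_read_completion < READ_IDLE_WINDOW_MS;
}

With something along these lines, a lone reader would keep its group's
share across the think-time between dependent reads, instead of ceding
it to queued writers.
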
>>> In the design of dm-ioband we prioritized reducing throughput loss
>>> over reducing duration. Of course, it is possible to make a new
>>> policy which reduces duration.
>> ... while also reducing overall system throughput
>> by design?
>
> I think such a policy would reduce system throughput compared to the
> current implementation, because fine-grained control causes more
> overhead.
Except that the I/O-scheduler-based I/O controller seems to be able
to enforce fairness without reducing throughput.

Dm-ioband would have to address these issues to be a serious
contender, IMHO.
--
All rights reversed.