Date:	Wed, 9 Sep 2009 17:41:26 +0200
From:	Fabio Checconi <fchecconi@...il.com>
To:	Vivek Goyal <vgoyal@...hat.com>
Cc:	Rik van Riel <riel@...hat.com>, Ryo Tsuruta <ryov@...inux.co.jp>,
	linux-kernel@...r.kernel.org, dm-devel@...hat.com,
	jens.axboe@...cle.com, agk@...hat.com, akpm@...ux-foundation.org,
	nauman@...gle.com, guijianfeng@...fujitsu.com, jmoyer@...hat.com,
	balbir@...ux.vnet.ibm.com
Subject: Re: Regarding dm-ioband tests

> From: Vivek Goyal <vgoyal@...hat.com>
> Date: Tue, Sep 08, 2009 10:06:20PM -0400
>
> On Wed, Sep 09, 2009 at 02:09:00AM +0200, Fabio Checconi wrote:
> > Hi,
> > 
> > > From: Rik van Riel <riel@...hat.com>
> > > Date: Tue, Sep 08, 2009 03:24:08PM -0400
> > >
> > > Ryo Tsuruta wrote:
> > > >Rik van Riel <riel@...hat.com> wrote:
> > > 
> > > >>Are you saying that dm-ioband is purposely unfair,
> > > >>until a certain load level is reached?
> > > >
> > > >Not unfair, dm-ioband(weight policy) is intentionally designed to
> > > >use bandwidth efficiently, weight policy tries to give spare bandwidth
> > > >of inactive groups to active groups.
> > > 
> > > This sounds good, except that the lack of anticipation
> > > means that a group with just one task doing reads will
> > > be considered "inactive" in-between reads.
> > > 
> > 
> >   Anticipation helps in achieving fairness, but CFQ currently disables
> > idling for nonrot+NCQ media, to avoid the resulting throughput loss on
> > some SSDs.  Are we really sure that we want to introduce anticipation
> > everywhere, not only to improve throughput on rotational media, but to
> > achieve fairness too?
> 
> That's a good point. Personally I think that the fairness requirements for
> individual queues and for groups are a little different. CFQ in general seems
> to be focusing more on latency and throughput at the cost of fairness.
> 
> With groups, we probably need to put a greater amount of emphasis on group
> fairness. So a group will be a relatively slower entity (with anticipation
> on and more idling), but it will also give you a greater amount of
> isolation. So in practice, one will create groups carefully and they will
> not proliferate like queues. This can mean overall reduced throughput on
> SSD.
> 

Ok, I personally agree with that, but I think it's something that should be documented.


> Having said that, group idling is tunable and one can always reduce it to
> achieve a balance between fairness and throughput depending on one's needs.
> 

This is good; however, tuning will not be an easy task (at least, in my
experience with BFQ it has been a problem): while throughput usually
responds to tuning with predictable tradeoffs, fairness does not: as soon
as a queue/group idles and then times out, the results become almost
random (i.e., they depend on the rate of successful anticipations, which
in the common case is unpredictable)...
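
The fairness-vs-throughput effect of idling can be illustrated with a toy
model (hypothetical illustrative code, not taken from CFQ, BFQ, or
dm-ioband): two equal-weight groups alternate turns on a disk that serves
one request per time unit.  Group A "thinks" between reads; group B is
always backlogged.  The idle_window parameter plays the role of the group
idling tunable discussed above.

```python
def run(think_time, idle_window, total_time=1000):
    """Return (group A's share of served requests, throughput in req/unit).

    Toy model: groups A and B alternate turns.  On A's turn, if A's next
    request has not arrived yet, the scheduler either idles up to
    idle_window time units waiting for it (anticipation) or immediately
    gives the slot to the backlogged group B.
    """
    served = {"A": 0, "B": 0}
    t = 0
    a_next = 0          # arrival time of A's next request
    a_turn = True       # strict alternation between the two groups
    while t < total_time:
        if a_turn and a_next <= t:
            served["A"] += 1            # A has a request queued: serve it
            t += 1
            a_next = t + think_time     # A thinks before its next read
        elif a_turn and a_next - t <= idle_window:
            t = a_next                  # idle, waiting for A (throughput loss)
            served["A"] += 1
            t += 1
            a_next = t + think_time
        else:
            served["B"] += 1            # no anticipation: slot goes to B
            t += 1
        a_turn = not a_turn
    total = served["A"] + served["B"]
    return served["A"] / total, total / t
```

With think_time=3, an idle_window of 2 keeps the served requests split
roughly 50/50 between the groups, at about half the total throughput of
idle_window=0; with idling disabled, A's share drops to about a quarter.
This is the tradeoff described above, and also shows why the outcome
degrades sharply once the think time exceeds the idle window.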