Message-ID: <AANLkTinTiEAFG1F1df380BiDtVFVr=nCsSqhM9__XdQ4@mail.gmail.com>
Date:	Tue, 22 Mar 2011 10:39:36 -0700
From:	Chad Talbott <ctalbott@...gle.com>
To:	Vivek Goyal <vgoyal@...hat.com>
Cc:	jaxboe@...ionio.com, linux-kernel@...r.kernel.org,
	mrubin@...gle.com, teravest@...gle.com
Subject: Re: [PATCH 0/3] cfq-iosched: Fair cross-group preemption

On Tue, Mar 22, 2011 at 8:09 AM, Vivek Goyal <vgoyal@...hat.com> wrote:
> Why not just simply implement RT class groups and always allow an RT
> group to preempt a BE class? Same thing we do for cfq queues. I would
> not worry too much about a runaway application consuming all the
> bandwidth. If that's a concern, we could use the blkio controller to
> limit the IO rate of a latency sensitive application to make sure it
> does not starve BE applications.
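The blkio-controller alternative suggested above could be configured along these lines; this is an illustrative sketch only (the "latency" cgroup name, the 8:16 device, and the 10 MB/s cap are example values, not anything from the patch series):

```shell
# Illustrative sketch: capping a latency-sensitive group with the
# blkio throttling interface. The cgroup name "latency", device 8:16,
# and 10485760 (10 MB/s) are example values.
mkdir -p /sys/fs/cgroup/blkio/latency

# Limit reads and writes on device 8:16 to 10 MB/s for this group.
echo "8:16 10485760" > /sys/fs/cgroup/blkio/latency/blkio.throttle.read_bps_device
echo "8:16 10485760" > /sys/fs/cgroup/blkio/latency/blkio.throttle.write_bps_device

# Move the latency-sensitive task into the group.
echo $APP_PID > /sys/fs/cgroup/blkio/latency/tasks
```

Note that such a cap is absolute: it applies even when the disk is otherwise idle, i.e. it is not work-conserving.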

That is not quite the same semantics.  This limited preemption patch
is still work-conserving.  If the RT task is the only task on the
system issuing IO, it will be able to use all available disk time.

> If RT starving BE is an issue, then it is an issue with plain cfq
> queues also. First we shall have to fix it there.
>
> This definition, that a latency sensitive task gets prioritized only
> while it is consuming its fair share, and that CFQ automatically stops
> prioritizing it once it starts using more than its fair share, sounds
> a little odd to me. If you are looking for predictability, then we have
> lost it. We would have to know very well that a task is not eating more
> than its fair share before we can guarantee any kind of latency to that
> task. And if we know that a task is not hogging the disk, there is
> anyway no risk of it starving other groups/tasks completely.

In a shared environment, we have to be a little bit defensive.  We
hope that a latency sensitive task is well characterized and won't
exceed its share of the disk, and that we haven't over-committed the
disk.  If the app does do more IO than expected, then we'd like it to
bear the burden.  We have a choice of two outcomes: a single job
sometimes failing to achieve low disk latency when it's very busy, or
all jobs on a disk sometimes being very slow when another (unrelated)
job is very busy.  The first is easier to understand and debug.

Chad
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/