Message-ID: <AANLkTikqd3VzLSkJfGoN0s29NzhkqJSYSPEEOS2s0TOn@mail.gmail.com>
Date:	Mon, 19 Jul 2010 13:32:24 -0700
From:	Divyesh Shah <dpshah@...gle.com>
To:	Vivek Goyal <vgoyal@...hat.com>
Cc:	Jeff Moyer <jmoyer@...hat.com>, linux-kernel@...r.kernel.org,
	axboe@...nel.dk, nauman@...gle.com, guijianfeng@...fujitsu.com,
	czoccolo@...il.com
Subject: Re: [PATCH 1/3] cfq-iosched: Improve time slice charging logic

On Mon, Jul 19, 2010 at 11:58 AM, Vivek Goyal <vgoyal@...hat.com> wrote:
> Yes, it is mixed now for the default CFQ case. Wherever we don't have the
> capability to determine the slice_used, we charge IOPS.
>
> For the slice_idle=0 case, we should charge IOPS almost all the time. Though
> if there is a workload where a single cfqq can keep the request queue
> saturated, then the current code will charge in terms of time.
>
> I agree that this is a little confusing. Maybe in the case of slice_idle=0
> we can always charge in terms of IOPS.

I agree with Jeff that this is very confusing. Also, there is no guarantee
that one job won't end up getting charged in IOPs because of this behavior
while other jobs continue getting charged in time for their IOs. Depending
on the speed of the disk, this could be a huge advantage or disadvantage
for the cgroup being charged in IOPs.

It should be black or white, time or IOPs, and also very clearly called
out not just in code comments but in the Documentation too. A rough sketch
of what I mean is below.
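
Just to illustrate the "black or white" point, here is a minimal user-space
sketch (not actual cfq-iosched.c code; names like charge_mode, queue_stats
and charge_for_slice are hypothetical). The charging unit is decided once
from a single mode flag and never mixed per-queue:

/*
 * Hypothetical sketch of a single, unmixed charging policy.
 * With slice_idle=0 the group is always charged in IOPs;
 * otherwise it is always charged in time.
 */
#include <stdbool.h>
#include <stdio.h>

enum charge_mode {
	CHARGE_TIME,	/* charge the group for wall-clock slice used */
	CHARGE_IOPS,	/* charge the group for requests dispatched */
};

struct queue_stats {
	unsigned long slice_used_ms;		/* time the queue held the disk */
	unsigned long requests_dispatched;	/* IOs completed in the slice */
};

/* Decided once, e.g. from slice_idle==0, and applied to every queue. */
static unsigned long charge_for_slice(enum charge_mode mode,
				      const struct queue_stats *st)
{
	return mode == CHARGE_IOPS ? st->requests_dispatched
				   : st->slice_used_ms;
}

int main(void)
{
	struct queue_stats st = { .slice_used_ms = 8,
				  .requests_dispatched = 120 };

	printf("time mode charge: %lu\n", charge_for_slice(CHARGE_TIME, &st));
	printf("iops mode charge: %lu\n", charge_for_slice(CHARGE_IOPS, &st));
	return 0;
}

Whichever unit is picked, the same one should apply to all groups, and the
choice should be spelled out in Documentation/ alongside the code comments.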
