Date:	Wed, 30 Mar 2011 11:54:09 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Lina Lu <lulina_nuaa@...mail.com>
Cc:	linux kernel mailing list <linux-kernel@...r.kernel.org>
Subject: Re: cfq-iosched.c: Use cfqq->nr_sectors to charge the vdisktime

On Wed, Mar 30, 2011 at 11:23:30PM +0800, Lina Lu wrote:
> Hi Vivek,
>       I find the weight policy is more accurate with cfqq->nr_sectors instead
> of cfqq->slice_dispatch.
>       Today I tried modifying cfq_group_served() to use "charge = cfqq->nr_sectors;"
> instead of "charge = cfqq->slice_dispatch;". The test results seem more accurate.
> Why did you choose slice_dispatch here? Would nr_sectors lower the total performance?

Lina,

CFQ fundamentally allocates time slices, so accounting is done in time
rather than in sectors. The other reason is that accounting in terms of
time can be more accurate when a process is seeking all over the disk
and doing little IO. If we accounted in terms of sectors, such a seeky
process would get a much larger share of disk time.
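
To make this concrete, here is a small user-space sketch (not the kernel
code; the slice length and per-slice sector counts are made up for
illustration) that models how two equal-weight groups would split disk
time under time-based vs. sector-based charging of vdisktime:

/*
 * Toy model of CFQ group charging (user space, not kernel code).
 * Two equal-weight groups: a streaming one that moves many sectors per
 * 100ms slice and a seeky one that moves very few. The group with the
 * smallest vdisktime is served next; after each slice its vdisktime is
 * bumped by the chosen charge (time used vs. sectors transferred).
 */
#include <stdio.h>

#define SLICE_MS	100	/* length of one time slice */
#define NR_SLICES	1000	/* total slices to hand out */

struct group {
	const char *name;
	unsigned long sectors_per_slice;	/* workload "speed" */
	unsigned long long vdisktime;
	unsigned long long disk_time_ms;
};

static void simulate(struct group g[2], int charge_sectors)
{
	int i;

	g[0].vdisktime = g[1].vdisktime = 0;
	g[0].disk_time_ms = g[1].disk_time_ms = 0;

	for (i = 0; i < NR_SLICES; i++) {
		/* pick the group with the smallest vdisktime */
		struct group *grp = g[0].vdisktime <= g[1].vdisktime ? &g[0] : &g[1];

		grp->disk_time_ms += SLICE_MS;
		/* charge either the time used or the sectors transferred */
		grp->vdisktime += charge_sectors ? grp->sectors_per_slice : SLICE_MS;
	}

	printf("charge = %s: %s got %llu ms, %s got %llu ms of disk time\n",
	       charge_sectors ? "sectors" : "time",
	       g[0].name, g[0].disk_time_ms, g[1].name, g[1].disk_time_ms);
}

int main(void)
{
	struct group g[2] = {
		{ .name = "streaming", .sectors_per_slice = 1024 },
		{ .name = "seeky",     .sectors_per_slice = 64 },
	};

	simulate(g, 0);		/* time-based charge: ~50/50 split */
	simulate(g, 1);		/* sector-based charge: seeky group dominates */
	return 0;
}

With time-based charging the two groups split disk time roughly evenly;
with sector-based charging the seeky group, which is charged far less per
slice, ends up with most of the disk time.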

>       And in IOPS mode, if I try to apply the weight policy to two IO processes with
> different avgrq-sz, the test results do not exactly match the weight values.

IOPS mode kicks in when slice_idle=0. I suspect that the group does not
drive enough IO to remain on the service tree, so it gets deleted and
loses its share.
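
As a rough sketch of that effect (again not the kernel code; the rejoin
cadence and the rule that a returning group's vdisktime starts at the
current tree minimum are assumptions for illustration), consider two
equal-weight groups where one is always backlogged and the other only
occasionally has a request queued:

/*
 * Toy model of share loss in IOPS mode (user space, not kernel code).
 * The "heavy" group always has IO queued; the "light" group is deleted
 * from the service tree after each dispatch because, with slice_idle=0,
 * CFQ does not wait for its next request.
 */
#include <stdio.h>

#define NR_ROUNDS	10000

int main(void)
{
	unsigned long long vdisk_heavy = 0, vdisk_light = 0;
	unsigned long served_heavy = 0, served_light = 0;
	int light_on_tree = 1;
	int i;

	for (i = 0; i < NR_ROUNDS; i++) {
		if (light_on_tree && vdisk_light <= vdisk_heavy) {
			served_light++;
			vdisk_light++;
			/* nothing left queued: deleted from the service tree */
			light_on_tree = 0;
		} else {
			/* the heavy group is always backlogged */
			served_heavy++;
			vdisk_heavy++;
		}

		/* the light group submits new IO only every 4th round */
		if (!light_on_tree && (i % 4) == 0) {
			light_on_tree = 1;
			/* rejoin at the tree minimum: absence earns no credit */
			if (vdisk_light < vdisk_heavy)
				vdisk_light = vdisk_heavy;
		}
	}

	printf("heavy group served %lu rounds, light group served %lu rounds\n",
	       served_heavy, served_light);
	return 0;
}

The intermittent group is off the service tree most of the time, so it
cannot be picked and ends up with well under half of the dispatch rounds
despite its equal weight.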

Can you run a 20-second blktrace and upload it somewhere?

Thanks
Vivek
