Message-ID: <20100726134329.GB12449@redhat.com>
Date:	Mon, 26 Jul 2010 09:43:29 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Heinz Diehl <htd@...cy-poultry.org>
Cc:	linux-kernel@...r.kernel.org, jaxboe@...ionio.com,
	nauman@...gle.com, dpshah@...gle.com, guijianfeng@...fujitsu.com,
	jmoyer@...hat.com, czoccolo@...il.com,
	Christoph Hellwig <hch@...radead.org>
Subject: Re: [RFC PATCH] cfq-iosched: IOPS mode for group scheduling and new
 group_idle tunable

On Sat, Jul 24, 2010 at 10:06:13AM +0200, Heinz Diehl wrote:
> On 23.07.2010, Vivek Goyal wrote: 
> 
> > Anyway, for fs_mark problem, can you give following patch a try.
> > https://patchwork.kernel.org/patch/113061/
> 
> Ported it to 2.6.35-rc6, and these are my results using the same fs_mark
> call as before:
> 
> slice_idle = 0
> 
> FSUse%        Count         Size    Files/sec     App Overhead
>     28         1000        65536        241.6            39574
>     28         2000        65536        231.1            39939
>     28         3000        65536        230.4            39722
>     28         4000        65536        243.2            39646
>     28         5000        65536        227.0            39892
>     28         6000        65536        224.1            39555
>     28         7000        65536        228.2            39761
>     28         8000        65536        235.3            39766
>     28         9000        65536        237.3            40518
>     28        10000        65536        225.7            39861
>     28        11000        65536        227.2            39441
> 
> 
> slice_idle = 8
> 
> FSUse%        Count         Size    Files/sec     App Overhead
>     28         1000        65536        502.2            30545
>     28         2000        65536        407.6            29406
>     28         3000        65536        381.8            30152
>     28         4000        65536        438.1            30038
>     28         5000        65536        447.5            30477
>     28         6000        65536        422.0            29610
>     28         7000        65536        383.1            30327
>     28         8000        65536        415.3            30102
>     28         9000        65536        397.6            31013
>     28        10000        65536        401.4            29201
>     28        11000        65536        408.8            29720
>     28        12000        65536        391.2            29157
> 
> Huh...there's quite a difference! It's definitely the slice_idle settings
> which affect the results here.
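
For reference, slice_idle is a per-device CFQ tunable under sysfs, so the two
runs above were presumably switched with something along these lines (sda is
only an example device, not necessarily the one used here):

  cat /sys/block/sda/queue/scheduler                   # should show [cfq]
  echo 0 > /sys/block/sda/queue/iosched/slice_idle     # first run
  echo 8 > /sys/block/sda/queue/iosched/slice_idle     # second run (default)

With the group_idle patch applied, a group_idle knob would be expected to show
up in the same iosched directory.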

In this case it is not slice_idle. This patch puts both the fsync writer and
the jbd thread on the same service tree. That way, once the fsync writer is
done there is no idling after it, and the jbd thread gets to dispatch its
requests to the disk almost immediately, hence the improved throughput.

> Besides, this patch gives noticeably bad desktop interactivity on my system.
> 

How do you measure it? IOW, are you also running something else on the
desktop in the background, like a heavy writer, and then measuring how
interactive the desktop feels?
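For example, one rough way to do that would be to keep a heavy writer going
in the background and then time something latency-sensitive, along these
lines (paths and sizes are only examples):

  # heavy background writer
  dd if=/dev/zero of=/tmp/bigfile bs=1M count=10000 &
  # then time a cold application start, or a small O_DIRECT read:
  time dd if=/some/large/file of=/dev/null bs=1M count=100 iflag=direct

and compare how it feels/measures with and without the patch applied.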

> Don't know if this is related, but I'm not quite sure if XFS (which I use
> exclusively) uses the jbd/jbd2 journaling layer at all.

I don't know either. But since this patch makes a difference to your XFS
file system performance, maybe it does.

CCing Christoph, he can tell us.

Thanks
Vivek
