Date:	Thu, 4 Jun 2009 17:20:44 +0200
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Jens Axboe <jens.axboe@...cle.com>
Cc:	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	tytso@....edu, chris.mason@...cle.com, david@...morbit.com,
	hch@...radead.org, akpm@...ux-foundation.org, jack@...e.cz,
	yanmin_zhang@...ux.intel.com, richard@....demon.co.uk,
	damien.wyart@...e.fr
Subject: Re: [PATCH 0/11] Per-bdi writeback flusher threads v9

Hi,


On Thu, May 28, 2009 at 01:46:33PM +0200, Jens Axboe wrote:
> Hi,
> 
> Here's the 9th version of the writeback patches. Changes since v8:
> 
> - Fix a bdi_work on-stack allocation hang. I hope this fixes Ted's
>   issue.
> - Get rid of the explicit wait queues, we can just use wake_up_process()
>   since it's just for that one task.
> - Add separate "sync_supers" thread that makes sure that the dirty
>   super blocks get written. We cannot safely do this from bdi_forker_task(),
>   as that risks deadlocking on ->s_umount. Artem, I implemented this
>   by doing the wake ups from a timer so that it would be easier for you
>   to just deactivate the timer when there are no super blocks.
> 
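(Just to check I'm reading the wake_up_process() / timer approach above
correctly: is the idea roughly the sketch below? The helper and variable
names here are made up, this is only my reading of the idiom, not the
actual patch code.)

#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/timer.h>

static struct task_struct *sync_supers_task;	/* made-up name */

static bool have_dirty_work(void);		/* made-up helpers */
static void process_dirty_work(void);

/* Flusher task: sleeps without an explicit wait queue and relies on a
 * direct wake_up_process() from whoever queues work for it. */
static int flusher_thread_sketch(void *data)
{
	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (!have_dirty_work())
			schedule();
		__set_current_state(TASK_RUNNING);
		process_dirty_work();
	}
	return 0;
}

/* Waker side: only one task ever sleeps here, so no wait queue is needed. */
static void kick_flusher(struct task_struct *flusher_task)
{
	wake_up_process(flusher_task);
}

/* sync_supers: a timer whose handler only wakes the dedicated thread, so
 * it can simply be deactivated when there are no dirty super blocks. */
static void sync_supers_timer_fn(unsigned long data)
{
	wake_up_process(sync_supers_task);
}
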
> For ease of patching, I've put the full diff here:
> 
>   http://kernel.dk/writeback-v9.patch
> 
> and also stored this in a writeback-v9 branch that will not change;
> you can pull that into Linus' tree from here:
> 
>   git://git.kernel.dk/linux-2.6-block.git writeback-v9
> 
>  block/blk-core.c            |    1 +
>  drivers/block/aoe/aoeblk.c  |    1 +
>  drivers/char/mem.c          |    1 +
>  fs/btrfs/disk-io.c          |   24 +-
>  fs/buffer.c                 |    2 +-
>  fs/char_dev.c               |    1 +
>  fs/configfs/inode.c         |    1 +
>  fs/fs-writeback.c           |  804 ++++++++++++++++++++++++++++-------
>  fs/fuse/inode.c             |    1 +
>  fs/hugetlbfs/inode.c        |    1 +
>  fs/nfs/client.c             |    1 +
>  fs/ntfs/super.c             |   33 +--
>  fs/ocfs2/dlm/dlmfs.c        |    1 +
>  fs/ramfs/inode.c            |    1 +
>  fs/super.c                  |    3 -
>  fs/sync.c                   |    2 +-
>  fs/sysfs/inode.c            |    1 +
>  fs/ubifs/super.c            |    1 +
>  include/linux/backing-dev.h |   73 ++++-
>  include/linux/fs.h          |   11 +-
>  include/linux/writeback.h   |   15 +-
>  kernel/cgroup.c             |    1 +
>  mm/Makefile                 |    2 +-
>  mm/backing-dev.c            |  518 ++++++++++++++++++++++-
>  mm/page-writeback.c         |  151 +------
>  mm/pdflush.c                |  269 ------------
>  mm/swap_state.c             |    1 +
>  mm/vmscan.c                 |    2 +-
>  28 files changed, 1286 insertions(+), 637 deletions(-)
> 


I've just tested it on a UP box with a single disk.

I've run two parallel dbench tests on two partitions, once with this
patchset applied and once without.

I used 30 processes for each run, over 600 seconds.

You can see the results in the attached files, and also here:

http://kernel.org/pub/linux/kernel/people/frederic/dbench.pdf
http://kernel.org/pub/linux/kernel/people/frederic/bdi-writeback-hda1.log
http://kernel.org/pub/linux/kernel/people/frederic/bdi-writeback-hda3.log
http://kernel.org/pub/linux/kernel/people/frederic/pdflush-hda1.log
http://kernel.org/pub/linux/kernel/people/frederic/pdflush-hda3.log


As you can see, bdi writeback is faster than pdflush on hda1 and slower
on hda3. But that's not really the point.

What I can observe here is the difference in the standard deviation
of the rate between two parallel writers on the same device (but on
two different partitions, and therefore different superblocks).
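
(For reference, by "standard deviation" I just mean the plain sample
formulation over a series of per-writer throughput samples; a throwaway
helper like the one below, with arbitrary names, is what I have in mind.)

#include <math.h>

/* Plain sample standard deviation of n throughput samples (e.g. MB/s). */
static double rate_stddev(const double *rate, int n)
{
	double mean = 0.0, var = 0.0;
	int i;

	for (i = 0; i < n; i++)
		mean += rate[i];
	mean /= n;

	for (i = 0; i < n; i++)
		var += (rate[i] - mean) * (rate[i] - mean);

	return sqrt(var / n);
}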

With pdflush, the rate is much better balanced between the two writers
than with bdi writeback on a single device.

I'm not sure why. Is there something in these patches that prevents
several bdi flusher threads for the same bdi from being well balanced
against each other?

Frederic.

