Message-Id: <1244164487.2560.146.camel@ymzhang>
Date:	Fri, 05 Jun 2009 09:14:47 +0800
From:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	Jens Axboe <jens.axboe@...cle.com>, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, tytso@....edu,
	chris.mason@...cle.com, david@...morbit.com, hch@...radead.org,
	akpm@...ux-foundation.org, jack@...e.cz, richard@....demon.co.uk,
	damien.wyart@...e.fr
Subject: Re: [PATCH 0/11] Per-bdi writeback flusher threads v9

On Thu, 2009-06-04 at 17:20 +0200, Frederic Weisbecker wrote:
> Hi,
> 
> 
> On Thu, May 28, 2009 at 01:46:33PM +0200, Jens Axboe wrote:
> > Hi,
> > 
> > Here's the 9th version of the writeback patches. Changes since v8:

> I've just tested it on UP with a single disk.
> 
> I've run two parallel dbench tests on two partitions and
> tried it with this patch and without.
I also tested V9 with a multiple-dbench workload, starting multiple
dbench tasks with each task running 4 processes doing I/O on one partition
(file system). Mostly I use JBODs with 7, 11, or 13 disks.

I didn't find any regression between the vanilla and V9 kernels on this workload.
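
For reference, a multi-dbench launcher along those lines could look roughly
like the sketch below. The mount points, disk count, and runtime are
illustrative assumptions, not the exact setup used in this test; it relies
only on dbench's -D (working directory) and -t (time limit) options.

    # Start one dbench instance per JBOD partition (7 mounts assumed here),
    # each with 4 client processes, running for 600 seconds in the background.
    for mnt in /mnt/jbod[1-7]; do
        dbench -D "$mnt" -t 600 4 > "${mnt##*/}.log" 2>&1 &
    done
    wait    # block until every dbench instance has finished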

> 
> I used 30 procs each, for 600 seconds.
> 
> You can see the results in the attachment.
> And also there:
> 
> http://kernel.org/pub/linux/kernel/people/frederic/dbench.pdf
> http://kernel.org/pub/linux/kernel/people/frederic/bdi-writeback-hda1.log
> http://kernel.org/pub/linux/kernel/people/frederic/bdi-writeback-hda3.log
> http://kernel.org/pub/linux/kernel/people/frederic/pdflush-hda1.log
> http://kernel.org/pub/linux/kernel/people/frederic/pdflush-hda3.log
> 
> 
> As you can see, bdi writeback is faster than pdflush on hda1 and slower
> on hda3. But, well that's not the point.
> 
> What I can observe here is the difference in the standard deviation
> of the rate between two parallel writers on the same device (but
> two different partitions, hence superblocks).
> 
> With pdflush, the rate between the two writers is much better
> balanced than with bdi writeback on a single device.
> 
> I'm not sure why. Is there something in these patches that keeps
> several bdi flusher threads for the same bdi from being well
> balanced between them?
> 
> Frederic.
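
The imbalance Frederic describes can be quantified directly from per-writer
throughput samples. As a minimal sketch, assuming each log has first been
reduced to one MB/sec sample per line (the .rates file name below is
hypothetical, not one of the files linked above):

    # Mean and population standard deviation of one writer's throughput
    # samples, one MB/sec value per line.
    awk '{ n++; s += $1; ss += $1 * $1 }
         END { m = s / n;
               printf "mean=%.2f MB/s stddev=%.2f MB/s\n", m, sqrt(ss / n - m * m) }' \
        bdi-writeback-hda1.rates

Comparing the stddev of the two writers under pdflush with the same pair
under bdi writeback would make the balance difference concrete.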

