Message-ID: <20090604201012.GD11363@kernel.dk>
Date:	Thu, 4 Jun 2009 22:10:12 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	tytso@....edu, chris.mason@...cle.com, david@...morbit.com,
	hch@...radead.org, jack@...e.cz, yanmin_zhang@...ux.intel.com,
	richard@....demon.co.uk, damien.wyart@...e.fr
Subject: Re: [PATCH 0/11] Per-bdi writeback flusher threads v9

On Thu, Jun 04 2009, Jens Axboe wrote:
> On Thu, Jun 04 2009, Frederic Weisbecker wrote:
> > On Thu, Jun 04, 2009 at 12:07:26PM -0700, Andrew Morton wrote:
> > > On Thu, 4 Jun 2009 17:20:44 +0200 Frederic Weisbecker <fweisbec@...il.com> wrote:
> > > 
> > > > I've just tested it on UP in a single disk.
> > > 
> > > I must say, I'm stunned at the amount of testing which people are
> > > performing on this patchset.  Normally when someone sends out a
> > > patchset it just sort of lands with a dull thud.
> > > 
> > > I'm not sure what Jens did right to make all this happen, but thanks!
> > 
> > 
> > I don't know how he did either. I was reading these patches and *something*
> > pushed me to my testbox, and then I tested...
> > 
> > Jens, how do you do that?
> 
> Heh, not sure :-)
> 
> But indeed, thanks for the testing. It looks quite interesting. I'm
> guessing it has to do with who ends up doing the balancing; now that
> the flusher threads block, that may change the picture a bit. So it
> may just be that it'll require a few vm tweaks. I'll definitely look
> into it and try to reproduce your results.
> 
> Did you run it a 2nd time on each drive and check if the results were
> (approximately) consistent on the two drives?

Each partition, I mean... What IO scheduler did you use on hda?

The main difference with this test case is that before, we had two super
blocks, each with its own lists of dirty inodes, and pdflush would attack
those. Now we have the inodes from both supers on a single set of lists
on the bdi. So either we have some ordering issue there (which is causing
the unfairness), or something else is going on.
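
(For anyone following along, here is a simplified sketch of that
structural change; the field names follow the kernel structures, but
most members are omitted and details may differ from the actual
patchset:

	#include <linux/list.h>

	/* Before: pdflush walked per-superblock dirty inode lists. */
	struct super_block {
		struct list_head s_dirty;	/* dirty inodes on this fs */
		struct list_head s_io;		/* inodes parked for writeback */
		/* ... */
	};

	/* After: one set of lists per backing device, shared by every
	 * superblock -- here, both partitions -- living on that device. */
	struct bdi_writeback {
		struct list_head b_dirty;
		struct list_head b_io;
	};

	struct backing_dev_info {
		struct bdi_writeback wb;	/* the flusher thread drains this */
		/* ... */
	};

so inodes that used to be segregated by filesystem now share ordering
on one bdi-wide set of lists.)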

So perhaps you can try with noop on hda to see if that changes the
picture?
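
(Switching at runtime should just be, assuming noop is compiled into
your kernel:

	# echo noop > /sys/block/hda/queue/scheduler
	# cat /sys/block/hda/queue/scheduler

where the cat lists the available schedulers with the active one in
brackets.)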

-- 
Jens Axboe
