Message-ID: <20111022074607.GA4720@localhost>
Date: Sat, 22 Oct 2011 15:46:07 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Jan Kara <jack@...e.cz>
Cc: "linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
Dave Chinner <david@...morbit.com>,
Christoph Hellwig <hch@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/7] writeback: avoid touching dirtied_when on blocked inodes

On Sat, Oct 22, 2011 at 01:38:51PM +0800, Wu Fengguang wrote:
> On Sat, Oct 22, 2011 at 11:11:35AM +0800, Wu Fengguang wrote:
> > > > btw, with the I_SYNC case converted, it's actually no longer necessary
> > > > to keep a standalone b_more_io_wait. Still, it may be better to keep
> > > > the list and the above error check, both to catch possible errors and
> > > > for the flexibility of adding policies like "don't retry possibly
> > > > blocked inodes for N seconds as long as there are other inodes to
> > > > work with".
> > > >
> > > > The diff below only intends to show the _possibility_ of removing
> > > > b_more_io_wait:
> > > Good observation. So should we introduce b_more_io_wait in the end? We
> > > could always introduce it later, when the need for a more complicated
> > > policy arises...
> > >
> >
> > I have no problem removing it if you prefer. Anyway, let me test the
> > idea out first (I've just kicked off the tests).
>
> With b_more_io_wait removed, performance drops slightly compared to
> the full more_io_wait patchset.
>
> 3.1.0-rc9-ioless-full-more_io_wait-next-20111014+ 3.1.0-rc9-ioless-full-more_io_wait-x-next-20111014+
> ------------------------ ------------------------
> 45.30 +6.3% 48.14 thresh=1G/ext3-1dd-4k-8p-4096M-1024M:10-X
> 48.23 -2.0% 47.27 thresh=1G/ext4-100dd-4k-8p-4096M-1024M:10-X
> 54.21 -2.6% 52.80 thresh=1G/ext4-10dd-4k-8p-4096M-1024M:10-X
> 56.07 -0.3% 55.91 thresh=1G/ext4-1dd-4k-8p-4096M-1024M:10-X
> 45.12 -5.8% 42.49 thresh=1G/xfs-100dd-4k-8p-4096M-1024M:10-X
> 53.94 -1.2% 53.27 thresh=1G/xfs-10dd-4k-8p-4096M-1024M:10-X
> 55.66 -0.1% 55.63 thresh=1G/xfs-1dd-4k-8p-4096M-1024M:10-X
> 358.53 -0.8% 355.51 TOTAL write_bw
>
> I'll try to reduce the changes and retest.
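As a side note for anyone skimming the thread: the mechanism under
discussion is tiny. When writeback finds an inode blocked (e.g. locked
under I_SYNC), it is parked on a third queue instead of being sent
through redirty_tail(), which would reset dirtied_when. A minimal
sketch, using the requeue_io_wait()/b_more_io_wait names from the
posted patches (illustration only, not the exact diff):

static void requeue_io_wait(struct inode *inode, struct bdi_writeback *wb)
{
	assert_spin_locked(&wb->list_lock);
	/*
	 * Unlike redirty_tail(), keep inode->dirtied_when intact, so
	 * the inode is retried with its original dirty age on the
	 * next b_io refill.
	 */
	list_move(&inode->i_wb_list, &wb->b_more_io_wait);
}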
Unfortunately, the reduced combination (patches 1-4, the I_SYNC change,
and the removal of requeue_more_io_wait) still performs noticeably
worse:
3.1.0-rc9-ioless-full-next-20111014+ 3.1.0-rc9-ioless-full-more_io_wait-x2-next-20111014+
------------------------ ------------------------
49.84 -7.9% 45.91 thresh=1G/ext4-100dd-4k-8p-4096M-1024M:10-X
56.03 -7.2% 52.01 thresh=1G/ext4-10dd-4k-8p-4096M-1024M:10-X
57.42 -1.7% 56.45 thresh=1G/ext4-1dd-4k-8p-4096M-1024M:10-X
45.74 -2.8% 44.48 thresh=1G/xfs-100dd-4k-8p-4096M-1024M:10-X
54.19 -4.8% 51.57 thresh=1G/xfs-10dd-4k-8p-4096M-1024M:10-X
55.93 -2.2% 54.70 thresh=1G/xfs-1dd-4k-8p-4096M-1024M:10-X
319.14 -4.4% 305.12 TOTAL write_bw
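If we do end up keeping the list, the retry policy quoted above could
live entirely in queue_io(). A hypothetical sketch against the 3.1
code; the list_empty() check is not part of any posted patch:

static void queue_io(struct bdi_writeback *wb, unsigned long *older_than_this)
{
	assert_spin_locked(&wb->list_lock);
	list_splice_init(&wb->b_more_io, &wb->b_io);
	move_expired_inodes(&wb->b_dirty, &wb->b_io, older_than_this);
	/*
	 * Hypothetical policy: retry possibly blocked inodes only
	 * when there is nothing else to work on, so that one stuck
	 * inode cannot keep the flusher thread busy-retrying.
	 */
	if (list_empty(&wb->b_io))
		list_splice_init(&wb->b_more_io_wait, &wb->b_io);
}

The "for N seconds" half of the policy would additionally need a
per-inode timestamp of when it was parked, which is exactly the kind of
flexibility a dedicated list buys us.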
Thanks,
Fengguang