Message-ID: <AANLkTinO1JsX_3NDHS=ew5u2C7VcCHT1Y1Yabnz=RQLO@mail.gmail.com>
Date: Thu, 27 Jan 2011 20:14:50 -0800
From: David Rees <drees76@...il.com>
To: Mark Lord <kernel@...savvy.com>
Cc: Dave Chinner <david@...morbit.com>,
Stan Hoeppner <stan@...dwarefreak.com>,
Justin Piszcz <jpiszcz@...idpixels.com>,
Christoph Hellwig <hch@...radead.org>,
Alex Elder <aelder@....com>,
Linux Kernel <linux-kernel@...r.kernel.org>, xfs@....sgi.com
Subject: Re: xfs: very slow after mount, very slow at umount
On Thu, Jan 27, 2011 at 5:22 PM, Mark Lord <kernel@...savvy.com> wrote:
> But I just don't know. My working theory, likely entirely wrong,
> is that if I have N streams active, odds are that each of those
> streams might get assigned to different AGs, given sufficient AGs >= N.
>
> Since the box often has 3-7 recording streams active,
> I'm trying it out with 8 AGs now.
As suggested before - why are you messing with AGs instead of allocsize?
I suspect that with the default configuration, XFS was trying to
maximize throughput by reducing seeks with multiple processes writing
streams.
But now, you're telling XFS that it's OK to write in up to 8 different
locations on the disk without worrying about seek performance.
I think this is likely to result in overall worse performance at the
worst time - under write load.
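For reference, the two knobs being compared look roughly like this
(device and mount point names here are just placeholders):

    # AG count is fixed at mkfs time, e.g.:
    mkfs.xfs -d agcount=8 /dev/sdX1

    # allocsize is only a mount option, e.g.:
    mount -t xfs -o allocsize=256m /dev/sdX1 /mnt/recordings

So allocsize is much cheaper to experiment with - no reformat needed.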
If you are trying to optimize single thread read performance by
minimizing file fragments, why don't you simply figure out at what
point increasing allocsize stops increasing read performance?
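Something like this rough, untested sketch (device, mount point and
sample file are made up) would let you compare a few allocsize values
directly:

    for sz in 64m 128m 256m 512m 1g; do
        umount /mnt/test
        mount -t xfs -o allocsize=$sz /dev/sdX1 /mnt/test
        # re-record (or copy) a sample stream under this allocsize, then:
        sync
        echo 3 > /proc/sys/vm/drop_caches
        dd if=/mnt/test/sample.ts of=/dev/null bs=1M
    done

Once the read throughput stops improving, there's no point going larger.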
I suspect that the defaults do a good job because even if your files
are fragmented into 64MB chunks due to multiple streams writing, those
chunks are very likely to be very close together, so there isn't much
of a seek penalty.
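You can check how the extents are actually laid out with xfs_bmap
(path is just an example):

    xfs_bmap -v /mnt/recordings/some-recording.ts

That shows each extent's size and location, so you can see whether the
fragments really do end up near each other on disk.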
-Dave
--