Date:	Fri, 28 Jan 2011 09:33:02 -0500
From:	Mark Lord <kernel@...savvy.com>
To:	Dave Chinner <david@...morbit.com>
CC:	Stan Hoeppner <stan@...dwarefreak.com>,
	Justin Piszcz <jpiszcz@...idpixels.com>,
	Christoph Hellwig <hch@...radead.org>,
	Alex Elder <aelder@....com>,
	Linux Kernel <linux-kernel@...r.kernel.org>, xfs@....sgi.com
Subject: Re: xfs: very slow after mount, very slow at umount

On 11-01-28 02:31 AM, Dave Chinner wrote:
>
> A simple google search turns up discussions like this:
> 
> http://oss.sgi.com/archives/xfs/2009-01/msg01161.html

"in the long term we still expect fragmentation to degrade the performance of
XFS file systems"
Other than that, no hints there about how changing agcount affects things.


> Configuring XFS filesystems for optimal performance has always been
> a black art because it requires you to understand your storage, your
> application workload(s) and XFS from the ground up.  Most people
> can't even tick one of those boxes, let alone all three....

Well, I've got 2/3 of those down just fine, thanks.
But it's the "XFS" part that is still the "black art" part,
because so little is written about *how* it works
(as opposed to how it is laid out on disk).

Again, that's only a minor complaint -- XFS is way better documented
than the alternatives, and also works way better than the others I've
tried here on this workload.

>>> Why 8 AGs and not the default?
>>
>> How AGs are used is not really explained anywhere I've looked,
>> so I am guessing at what they do and how the system might respond
>> to different values there (that documentation thing again).
> 
> Section 5.1 of this 1996 whitepaper tells you what allocation groups
> are and the general allocation strategy around them:
> 
> http://oss.sgi.com/projects/xfs/papers/xfs_usenix/index.html

Looks a bit dated:  "Allocation groups are typically 0.5 to 4 gigabytes in size."
But it does suggest that "processes running concurrently can allocate space
in the file system concurrently without interfering with each other".

Dunno if that's still true today, but it sounds pretty close to what I was
theorizing about how it might work.
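For anyone following along, the agcount being discussed is visible and settable from userspace. A minimal sketch using the standard xfsprogs tools (/dev/sdX is a placeholder for the real block device, not a recommendation for any particular value):

```shell
# Show the geometry of an existing XFS filesystem, including agcount
# and agsize (accepts a mount point or device, depending on version):
xfs_info /dev/sdX

# Make a new filesystem with 8 allocation groups instead of letting
# mkfs pick the default -- this DESTROYS any existing data:
mkfs.xfs -d agcount=8 /dev/sdX
```

agcount can only be chosen at mkfs time; xfs_growfs adds AGs of the existing size rather than re-balancing them.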

> start to see what I mean about tuning XFS really being a "black art"?

No, I've seen that "black" (aka. undefined, undocumented) part from the start.  :)
Thanks for chipping in here, though -- it's been really useful.

Cheers!
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
