Open Source and information security mailing list archives
 
Date:	Thu, 27 Jan 2011 12:11:54 -0800 (PST)
From:	david@...g.hm
To:	Stan Hoeppner <stan@...dwarefreak.com>
cc:	Mark Lord <kernel@...savvy.com>,
	Justin Piszcz <jpiszcz@...idpixels.com>,
	Christoph Hellwig <hch@...radead.org>,
	Alex Elder <aelder@....com>,
	Linux Kernel <linux-kernel@...r.kernel.org>, xfs@....sgi.com
Subject: Re: xfs: very slow after mount, very slow at umount

On Thu, 27 Jan 2011, Stan Hoeppner wrote:

>> Rather than hundreds or thousands of "tiny" MB sized extents.
>> I wonder what the best mkfs.xfs parameters might be to encourage that?
>
> You need to use the mkfs.xfs defaults for any single drive filesystem, and trust
> the allocator to do the right thing.  XFS uses variable size extents and the
> size is chosen dynamically--you don't have direct or indirect control of the
> extent size chosen for a given file or set of files AFAIK.
>
> As Dave Chinner is fond of pointing out, it's those who don't know enough about
> XFS and choose custom settings that most often get themselves into trouble (as
> you've already done once).  :)
>
> The defaults exist for a reason, and they weren't chosen willy nilly.  The vast
> bulk of XFS' configurability exists for tuning maximum performance on large to
> very large RAID arrays.  There isn't much, if any, additional performance to be
> gained with parameter tweaks on a single drive XFS filesystem.
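One way to see what those defaults actually are before committing: mkfs.xfs has a -N flag that does a dry run, printing the geometry it would use without writing anything. A minimal sketch — /dev/sdb is a hypothetical target, so the commands are echoed rather than executed:

```shell
# Hypothetical single-drive target; the point is: no geometry overrides at all.
dev=/dev/sdb

# Dry run: -N prints the geometry mkfs.xfs would choose, without writing.
echo "mkfs.xfs -N $dev"

# Actual format, pure defaults (no -d/-l/-i tweaks):
echo "mkfs.xfs $dev"
```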

How do I work out how to set things up on multi-disk systems? The 
documentation I've found online is not that helpful, and in some ways 
contradictory.

If there really are good rules for how to do this, it would be very 
helpful if you could just give mkfs.xfs the information about your system 
(e.g. "this partition is on a 16-drive RAID6 array") and have it do the 
right thing.
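For what it's worth, mkfs.xfs does accept exactly that kind of geometry hint today via -d su=/sw= (stripe unit and stripe width). A hedged sketch for the hypothetical 16-drive RAID6 — the 64 KiB chunk size and the /dev/md0 device name are my assumptions, not from the thread:

```shell
# Hypothetical geometry: 16-drive RAID6, 64 KiB chunk, assembled as /dev/md0.
ndisks=16
parity=2              # RAID6 dedicates two disks' worth of space to parity
chunk_kib=64

# su = stripe unit (one chunk); sw = stripe width, counted in *data* disks.
data_disks=$((ndisks - parity))
echo "mkfs.xfs -d su=${chunk_kib}k,sw=${data_disks} /dev/md0"
```

Newer mkfs.xfs releases can often pull this geometry from the md layer automatically, which is close to the "just tell it about your system" behavior being asked for.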

David Lang


> A brief explanation of agcount:  the filesystem is divided into agcount regions
> called allocation groups, or AGs.  The allocator writes to all AGs in parallel
> to increase performance.  With extremely fast storage (SSD, large high RPM RAID)
> this increases throughput as the storage can often sink writes faster than a
> serial writer can push data.  In your case, you have a single slow spindle with
> over 7,000 AGs.  Thus, the allocator is writing to over 7,000 locations on that
> single disk simultaneously, or, at least, it's trying to.  Thus, the poor head
> on that drive is being whipped all over the place without actually getting much
> writing done.  To add insult to injury, this is one of those low-RPM, low
> head-performance "green" drives, correct?
>
> Trust the defaults.  If they give you problems (unlikely) then we can talk. ;)
>
>
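The arithmetic behind those numbers is easy to sketch. Assume, hypothetically, a 1 TiB drive; the default agcount of 4 for a single disk of that size is my assumption from typical mkfs.xfs behavior, not something stated in the thread:

```shell
# Hypothetical 1 TiB single drive, illustrating the quoted agcount figures.
size_gib=1024
agcount_custom=7000
agcount_default=4     # assumed mkfs.xfs default for a single disk this size

# Average allocation-group size in MiB for each case (integer division).
ag_mib_custom=$(( size_gib * 1024 / agcount_custom ))
ag_mib_default=$(( size_gib * 1024 / agcount_default ))
echo "custom:  ${agcount_custom} AGs of ~${ag_mib_custom} MiB each"
echo "default: ${agcount_default} AGs of ${ag_mib_default} MiB each"
```

Seven thousand ~150 MiB regions on one spindle versus four large ones is the difference between constant long seeks and mostly sequential writing.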
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
