Date:	Wed, 23 Apr 2014 09:23:41 +0200
From:	Ivan Pantovic <gyro.ivan@...il.com>
To:	Dave Chinner <david@...morbit.com>
CC:	Speedy Milan <speedy.milan@...il.com>,
	linux-kernel@...r.kernel.org, xfs@....sgi.com
Subject: Re: rm -f * on large files very slow on XFS + MD RAID 6 volume of
 15x 4TB of HDDs (52TB)


> [root@...ve-b ~]# xfs_db -r /dev/md0
> xfs_db> frag
> actual 11157932, ideal 11015175, fragmentation factor 1.28%
> xfs_db>

This is the current level of fragmentation ... is it bad?

Some say anything over 1% is a candidate for defrag? ...

We can leave it as is, wait for the next full backup, and then check
the fragmentation of that file.
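For reference, the fragmentation factor in the xfs_db output above appears to work out to (actual - ideal) / actual as a percentage; a quick sketch recomputing it from those numbers (this is an interpretation of the output, not an xfs_db internal):

```shell
# Recompute the fragmentation factor from the xfs_db "frag" numbers above:
# factor = (actual - ideal) / actual, as a percentage.
actual=11157932
ideal=11015175
awk -v a="$actual" -v i="$ideal" 'BEGIN { printf "%.2f%%\n", (a - i) / a * 100 }'
# -> 1.28%
```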

On 04/23/2014 04:18 AM, Dave Chinner wrote:
> [cc xfs@....sgi.com]
>
> On Mon, Apr 21, 2014 at 10:58:53PM +0200, Speedy Milan wrote:
>> I want to report very slow deletion of 24 50GB files (in total 12 TB),
>> all present in the same folder.
> total = 1.2TB?
>
>> OS is CentOS 6.4, with upgraded kernel 3.13.1.
>>
>> The hardware is a Supermicro server with 15x 4TB WD Se drives in MD
>> RAID 6, totalling 52TB of free space.
>>
>> XFS is formated directly on the RAID volume, without LVM layers.
>>
>> Deletion was done with rm -f * command, and it took upwards of 1 hour
>> to delete the files.
>>
>> File system was filled completely prior to deletion.
> Oh, that's bad. It's likely you fragmented the files into
> millions of extents?
>
>> rm was mostly waiting (D state), probably for kworker threads, and
> No, waiting for IO.
>
>> iostat was showing big HDD utilization numbers and very low throughput
>> so it looked like a random HDD workload was in effect.
> Yup, smells like file fragmentation. Non-fragmented 50GB files
> should be removed in a few milliseconds. But if you've badly
> fragmented the files, there could be 10 million extents in a 50GB
> file. A few milliseconds per extent removal gives you....
>
> Cheers,
>
> Dave.
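To put Dave's back-of-the-envelope reasoning into figures: at 10 million extents and a few milliseconds per extent removal, unlink time runs into hours per file. A quick sketch (the 3 ms per-extent cost is an assumed illustrative value, not a measurement):

```shell
# Estimate unlink time for a badly fragmented file, following the
# reasoning above: total = extents * per-extent removal cost.
extents=10000000        # "10 million extents in a 50GB file"
ms_per_extent=3         # assumed value for "a few milliseconds"
awk -v n="$extents" -v ms="$ms_per_extent" \
    'BEGIN { s = n * ms / 1000; printf "%.0f seconds (~%.1f hours)\n", s, s / 3600 }'
# -> 30000 seconds (~8.3 hours)
```

That is per file, so the observed one hour for all 24 files actually suggests fragmentation well short of the worst case.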

