Message-Id: <1227485956.5145.10.camel@chevrolet>
Date:	Mon, 24 Nov 2008 01:19:16 +0100
From:	Stian Jordet <liste@...det.net>
To:	Justin Piszcz <jpiszcz@...idpixels.com>
Cc:	linux-kernel@...r.kernel.org, xfs@....sgi.com
Subject: Re: Extreme slowness with xfs [WAS: Re: Slowness with new pc]

Sun., 23.11.2008 at 17:25 -0500, Justin Piszcz wrote:
> As the original post stated:
> 
> 1. please post dmesg output
> 2. you may want to include your kernel .config
> 3. xfs_info /dev/mdX or /dev/device may also be useful
> 4. you can also check fragmentation:
>     # xfs_db -c frag -f /dev/md2
>     actual 257492, ideal 242687, fragmentation factor 5.75%
> 5. something sounds very strange; I also run XFS on a lot of systems and
>     have never heard of that before.
> 6. also post your /etc/fstab options
> 7. what distribution are you running?
> 8. are -only- the two Fujitsus (raid0) affected, or are other arrays
>     affected on this HW as well (separate disks etc)?
> 9. you can also compile in support for latency_top & power_top to see
>     if there is any excessive polling going on by any one specific
>     device/function as well

1 & 2: Oh, sorry, I forgot to attach the dmesg and config in the last mail.

3:
root@...vrolet:~# xfs_info /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=32, agsize=11426984 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=365663488, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

4:
root@...vrolet:~# xfs_db -c frag -f /dev/sdb1
actual 380037, ideal 373823, fragmentation factor 1.64%
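
For reference, xfs_db's fragmentation factor looks like it is simply
(actual - ideal) / actual, so the number above can be sanity-checked with a
one-liner (not that I think it matters at this level):

    $ awk 'BEGIN { printf "%.2f%%\n", (380037 - 373823) * 100 / 380037 }'
    1.64%

So fragmentation is next to nothing either way.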

6: The only mount option is relatime, which Ubuntu adds automatically.
Hmm, I hadn't tried mounting without that option before, but I just did,
and it didn't help either.
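
(For completeness, this is roughly how I tested that; /mnt/data is just a
placeholder for wherever the array happens to be mounted:)

    # unmount and remount the xfs filesystem with no extra options,
    # so relatime is not passed in
    # umount /mnt/data
    # mount -t xfs /dev/sdb1 /mnt/data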

7: Ubuntu 8.10 Intrepid. This is a new system, and it has never run
anything other than Intrepid. This affects both the standard kernel and
the vanilla 2.6.27.7 that I have compiled (the attached dmesg and config
are from that kernel). I have also tried both 64-bit and 32-bit (just for
fun).

8: I'll explain my setup a little more. I described the hardware in my
first post, but in short: the two Fujitsu SAS disks are in RAID-0,
with /dev/sda1 as root and /dev/sda2 as home. Earlier they were both
xfs, and dog slow. I have now converted both to ext3, and everything is
normal. In addition, I have four Seagate ST3500320AS 500GB SATA disks in
hardware RAID-5 on the same controller. This 1.5TB array is still xfs,
and it has had, and still has, the same symptoms.

9: I don't know how to do that. But whatever it is, it doesn't happen
with ext3...
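
(My rough understanding of what item 9 would involve, if it turns out to be
needed; the Ubuntu package names below are a guess on my part:)

    # build the kernel with latency tracking enabled (under "Kernel hacking"):
    CONFIG_LATENCYTOP=y

    # then install and run the userspace tools while the slowness is happening:
    # apt-get install latencytop powertop
    # latencytop
    # powertop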

Thanks for looking into this!

Regards,
Stian

View attachment "config-2.6.27.7" of type "text/plain" (45922 bytes)

View attachment "dmesg.txt" of type "text/plain" (51912 bytes)
