Message-ID: <20080821060418.GC5706@disturbed>
Date:	Thu, 21 Aug 2008 16:04:18 +1000
From:	Dave Chinner <david@...morbit.com>
To:	Szabolcs Szakacsits <szaka@...s-3g.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	xfs@....sgi.com
Subject: Re: XFS vs Elevators (was Re: [PATCH RFC] nilfs2: continuous
	snapshotting file system)

On Thu, Aug 21, 2008 at 03:15:08PM +1000, Dave Chinner wrote:
> On Thu, Aug 21, 2008 at 05:46:00AM +0300, Szabolcs Szakacsits wrote:
> > On Thu, 21 Aug 2008, Dave Chinner wrote:
> > Everything is default.
> > 
> >   % rpm -qf =mkfs.xfs
> >   xfsprogs-2.9.8-7.1 
> > 
> > which, according to ftp://oss.sgi.com/projects/xfs/cmd_tars, is the 
> > latest stable mkfs.xfs. Its output is
> > 
> > meta-data=/dev/sda8              isize=256    agcount=4, agsize=1221440 blks
> >          =                       sectsz=512   attr=2
> > data     =                       bsize=4096   blocks=4885760, imaxpct=25
> >          =                       sunit=0      swidth=0 blks
> > naming   =version 2              bsize=4096  
> > log      =internal log           bsize=4096   blocks=2560, version=2
> >          =                       sectsz=512   sunit=0 blks, lazy-count=0
> > realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> Ok, I thought it might be the tiny log, but it didn't improve anything
> here when I increased the log size or the log buffer size.
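
For reference, the knobs in question: the log size is set at mkfs time
and the log buffer size at mount time, along these lines (the device
and sizes here are just examples, adjust to taste):

  % mkfs.xfs -l size=128m /dev/sda8
  % mount -o logbsize=256k /dev/sda8 /mnt/test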

One thing I just found out - my old *laptop* is 4-5x faster than the
10krpm scsi disk behind an old cciss raid controller.  I'm wondering
if the long delays in dispatch are caused by an interaction with CTQ,
but I can't change the depth on the cciss raid controllers.  Are you
using ctq/ncq on your machine?  If so, can you reduce the depth to
something less than 4 and see what difference that makes?
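
FWIW, if the controller exposes it, the depth can be changed on the
fly through sysfs - something like this (assuming the disk is sda):

  % cat /sys/block/sda/device/queue_depth
  % echo 2 > /sys/block/sda/device/queue_depth

That's exactly the knob the cciss controller here doesn't give me.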

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
