Message-ID: <ye8wsmnezq9.fsf@camel05.daimi.au.dk>
Date: 24 Apr 2008 22:59:10 +0200
From: Soeren Sandmann <sandmann@...mi.au.dk>
To: linux-kernel@...r.kernel.org
Subject: stat benchmark
At
http://www.daimi.au.dk/~sandmann/stat-benchmark.c
there is a simple program that will measure the time it takes to stat
every file in the current directory with a cold cache.
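For reference, the core of the benchmark looks roughly like this (a
simplified sketch of the program at the URL above, not the exact code;
the real program also drops the page cache first):

/* readdir() the current directory, then stat() every entry,
 * timing the two phases separately. */
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <sys/stat.h>
#include <sys/time.h>

#define MAX_FILES 65536

static double now (void)
{
    struct timeval tv;
    gettimeofday (&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1000000.0;
}

int main (void)
{
    char *names[MAX_FILES];
    int n_files = 0, i;
    struct dirent *de;
    DIR *dir;
    double t;

    t = now ();
    dir = opendir (".");
    if (!dir)
        return 1;
    while ((de = readdir (dir)) && n_files < MAX_FILES)
        names[n_files++] = strdup (de->d_name);
    closedir (dir);
    printf ("Time to readdir(): %f s\n", now () - t);

    t = now ();
    for (i = 0; i < n_files; i++)
    {
        struct stat st;
        stat (names[i], &st);
    }
    printf ("Time to stat %d files: %f s\n", n_files, now () - t);

    return 0;
}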
This is essentially what applications do in a number of common cases
such as "ls -l", nautilus opening a directory, or an "open file"
dialog being shown.
Unfortunately, performance of that operation kinda sucks. On my system
(ext3), it produces:
c-24-61-65-93:~% sudo ./a.out
Time to readdir(): 0.307671 s
Time to stat 2349 files: 8.203693 s
8 seconds is about 80 times slower than what a user perceives as
"instantly" and slow enough that we really should display a progress
bar if it can't be fixed.
So I am looking for ways to improve this.
Under the theory that disk seeks are killing us, one idea is to add a
'multistat' system call that would allow statting of many files at a
time, which would give the disk scheduler more to work with.
Possibly the same thing would need to be done for the getxattr
information.
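To make the idea concrete, the interface I have in mind would look
something like this (purely hypothetical; the name, layout and error
reporting are all up for discussion):

/* Hypothetical interface, just to illustrate the idea: stat many
 * paths in one call so the I/O scheduler can sort the resulting
 * seeks instead of serving them one at a time. */
struct multistat_entry {
    const char  *path;     /* in:  file to stat                  */
    struct stat  st;       /* out: result                        */
    int          error;    /* out: 0 on success, errno otherwise */
};

/* Would return the number of entries processed, or -1 on failure. */
int multistat (struct multistat_entry *entries, unsigned int count);

An application like nautilus could then hand the kernel all the names
it got back from readdir() in one go.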
Does this sound like a reasonable idea?
Thanks,
Soren