Message-ID: <20121216024520.GH9806@dastard>
Date: Sun, 16 Dec 2012 13:45:20 +1100
From: Dave Chinner <david@...morbit.com>
To: Eric Wong <normalperson@...t.net>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] fadvise: perform WILLNEED readahead in a workqueue
On Sat, Dec 15, 2012 at 12:54:48AM +0000, Eric Wong wrote:
> Applications streaming large files may want to reduce disk spinups and
> I/O latency by performing large amounts of readahead up front.
> Applications also tend to read files soon after opening them, so waiting
> on a slow fadvise may cause unpleasant latency when the application
> starts reading the file.
>
> As a userspace hacker, I'm sometimes tempted to create a background
> thread in my app to run readahead(). However, I believe doing this
> in the kernel will make life easier for other userspace hackers.
>
> Since fadvise makes no guarantees about when (or even if) readahead
> is performed, this change should not hurt existing applications.
>
> "strace -T" timing on an uncached, one gigabyte file:
>
> Before: fadvise64(3, 0, 0, POSIX_FADV_WILLNEED) = 0 <2.484832>
> After: fadvise64(3, 0, 0, POSIX_FADV_WILLNEED) = 0 <0.000061>
You've basically asked fadvise() to readahead the entire file if it
can. That means it is likely to issue enough readahead to fill the
IO queue, and that's where all the latency is coming from. If all
you are trying to do is reduce the latency of the first read, then
only readahead the initial range that you are going to need to read...
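i.e. something like this (a rough, untested sketch - the 16MB
window is an arbitrary number, size it to whatever the app
actually touches first):

	#include <fcntl.h>

	/* arbitrary: prime only the first 16MB, not the whole file */
	#define INITIAL_RA_LEN	(16 * 1024 * 1024)

	static int prime_first_read(int fd)
	{
		/*
		 * Ask for readahead of just the region the app will
		 * read first; the rest of the file gets pulled in by
		 * the normal readahead heuristics as reads proceed.
		 */
		return posix_fadvise(fd, 0, INITIAL_RA_LEN,
				     POSIX_FADV_WILLNEED);
	}
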
Also, pushing readahead off to a workqueue potentially allows
someone to DoS the system, because readahead won't ever get
throttled in the syscall context...
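
FWIW, if all you want is for the advice to be asynchronous, you
can already get that from userspace with the background thread
you mentioned - again a rough, untested sketch, nothing more:

	#define _GNU_SOURCE	/* readahead() */
	#include <fcntl.h>
	#include <pthread.h>
	#include <stdlib.h>

	struct ra_req {
		int	fd;
		off_t	off;
		size_t	len;
	};

	static void *ra_thread(void *arg)
	{
		struct ra_req *req = arg;

		/* the blocking happens here, not in the thread doing the reads */
		readahead(req->fd, req->off, req->len);
		free(req);
		return NULL;
	}

	/* fire and forget: kick off readahead without blocking the caller */
	static int readahead_async(int fd, off_t off, size_t len)
	{
		pthread_t tid;
		struct ra_req *req = malloc(sizeof(*req));

		if (!req)
			return -1;
		req->fd = fd;
		req->off = off;
		req->len = len;
		if (pthread_create(&tid, NULL, ra_thread, req)) {
			free(req);
			return -1;
		}
		return pthread_detach(tid);
	}
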
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com