Message-ID: <1271995995.2855.48.camel@faldara>
Date: Fri, 23 Apr 2010 00:13:15 -0400
From: Phillip Susi <psusi@....rr.com>
To: Jamie Lokier <jamie@...reable.org>
Cc: linux-fsdevel@...r.kernel.org,
Linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: readahead on directories
On Thu, 2010-04-22 at 23:43 +0100, Jamie Lokier wrote:
> No, that is not the reason. pwrite needs the mutex too.
Which mutex and what for?
> Now you are describing using threads in the blocking cases. (Work
> queues, thread pools, same thing.) Earlier you were saying threads
> are the wrong approach.... Good, good :-)
Sure, in some cases, just not ALL of them. If you can't control whether
the call blocks, then you HAVE to use threads. If you can be sure it
won't block most of the time, then most of the time you don't need any
other threads, and when you finally do, you need very few.
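Something like this untested sketch is what I have in mind. The
mincore() residency check and the one-thread-per-miss fallback are just
illustrations I picked to make it concrete, and the check is inherently
racy (a page can be evicted between the check and the read), which only
costs you an occasional inline block:

#include <pthread.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/* Return 1 if [off, off+len) of fd is already in the page cache. */
static int pages_resident(int fd, off_t off, size_t len)
{
    long ps = sysconf(_SC_PAGESIZE);
    off_t aligned = off & ~((off_t)ps - 1);
    size_t span = len + (size_t)(off - aligned);
    size_t npages = (span + (size_t)ps - 1) / (size_t)ps;
    unsigned char *vec;
    int resident = 0;
    size_t i;
    void *map = mmap(NULL, span, PROT_READ, MAP_SHARED, fd, aligned);

    if (map == MAP_FAILED)
        return 0;
    vec = malloc(npages);
    if (vec && mincore(map, span, vec) == 0) {
        resident = 1;
        for (i = 0; i < npages; i++)
            if (!(vec[i] & 1))
                resident = 0;
    }
    free(vec);
    munmap(map, span);
    return resident;
}

struct req { int fd; off_t off; size_t len; };

static void *blocking_read(void *arg)
{
    struct req *r = arg;
    char *buf = malloc(r->len);

    if (buf)
        pread(r->fd, buf, r->len, r->off);  /* may sleep on disk I/O */
    free(buf);
    free(r);
    return NULL;
}

/* Read inline when it won't block; punt to a thread only when it would. */
static void submit_read(int fd, off_t off, size_t len)
{
    struct req *r;
    pthread_t t;

    if (pages_resident(fd, off, len)) {
        char *buf = malloc(len);

        if (buf)
            pread(fd, buf, len, off);       /* fast path, no extra thread */
        free(buf);
        return;
    }
    r = malloc(sizeof(*r));
    if (!r)
        return;
    r->fd = fd; r->off = off; r->len = len;
    if (pthread_create(&t, NULL, blocking_read, r) == 0)
        pthread_detach(t);                  /* rare path, so few threads */
    else
        free(r);
}

A real program would use a small bounded pool instead of a thread per
miss, but the point stands: the common case never leaves the calling
thread.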
> A big problem with it, apart from having to change lots of places in
> all the filesystems, is that the work-queues run with the wrong
> security and I/O context. Network filesystems break permissions, quotas
> break, ionice doesn't work, etc. It's obviously fixable but more
> involved than just putting a read request on a work queue.
Hrm... good point.
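Though to put a shape on "more involved": for the credentials half at
least, each work item would have to carry the submitter's identity and
assume it in the worker. Purely an illustrative sketch (the
ionice/io_context half and all error handling are left out):

#include <linux/cred.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct ra_work {
        struct work_struct work;
        const struct cred *cred;        /* submitter's creds, referenced */
        /* ... file, offset, length ... */
};

static void ra_work_fn(struct work_struct *w)
{
        struct ra_work *rw = container_of(w, struct ra_work, work);
        const struct cred *old = override_creds(rw->cred);

        /* ... do the read; permission checks now see the submitter ... */

        revert_creds(old);
        put_cred(rw->cred);
        kfree(rw);
}

static int queue_ra(struct workqueue_struct *wq)
{
        struct ra_work *rw = kzalloc(sizeof(*rw), GFP_KERNEL);

        if (!rw)
                return -ENOMEM;
        rw->cred = get_current_cred();  /* reference for the worker */
        INIT_WORK(&rw->work, ra_work_fn);
        queue_work(wq, &rw->work);
        return 0;
}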
> Fine-grained locking isn't the same thing as using non-sleepable locks.
Yes, it is not the same thing, but non-sleepable locks can ONLY be used
with fine-grained locking. The two reasons to use a mutex instead of a
spin lock are that you can sleep while holding it, and that it therefore
isn't a problem to hold it for an extended period of time.
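To make the distinction concrete (kernel-style sketch, not from any real
code):

#include <linux/mutex.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(counter_lock);
static DEFINE_MUTEX(table_mutex);
static unsigned long counter;

void bump_counter(void)
{
        spin_lock(&counter_lock);
        counter++;              /* short and cannot sleep: a spinlock
                                   is the right tool */
        spin_unlock(&counter_lock);
}

void rebuild_table(void)
{
        mutex_lock(&table_mutex);
        /*
         * This section may kmalloc(GFP_KERNEL), block on I/O, and take
         * its time -- legal only because a mutex, unlike a spinlock,
         * can be held while sleeping.
         */
        mutex_unlock(&table_mutex);
}

Hold counter_lock across anything that sleeps and you deadlock the CPU
spinning on it; that is exactly why spinlocks only make sense around
small, fine-grained critical sections.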
> So is read(). And then the calling application usually exits, because
> there's nothing else it can do usefully. Same if aio_read() ever returns ENOMEM.
>
> That way lies an application getting ENOMEM often and having to retry
> aio_read in a loop, probably a busy one, which isn't how the interface
> is supposed to work, and is not efficient either.
Simply retrying in a loop would be very stupid, and the programs using
aio are not that stupid, so they would take more appropriate action. For
example, a server might decide it already has enough data in the pipe
and hold off asking for more until its queues drain, or it might drop
that client, freeing up some memory, or it might decide it has some
cache of its own it can free. Something like readahead could decide that
if there isn't enough memory left, it has no business trying to read any
more, and simply exit. Any of these is preferable to blocking until
something else frees up enough memory to continue.
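To make that concrete, a reaction to a transient aio_read() failure
might look like this. shed_cache() and pause_submissions() are
hypothetical application hooks, stubbed here so the sketch stands alone:

#include <aio.h>
#include <errno.h>

static int submissions_paused;

/* Hypothetical hooks, stubbed so this compiles. */
static void shed_cache(void)        { /* drop some clean cached data */ }
static void pause_submissions(void) { submissions_paused = 1; }

/*
 * Returns 0 if queued, -1 on soft failure (backpressure applied),
 * -2 on a hard error the caller must handle (e.g. drop the client).
 */
static int submit_read(struct aiocb *cb)
{
    if (submissions_paused)
        return -1;
    if (aio_read(cb) == 0)
        return 0;
    if (errno == EAGAIN || errno == ENOMEM) {
        /*
         * Out of resources: do NOT busy-retry.  Free something and
         * stop submitting until in-flight requests complete.
         */
        shed_cache();
        pause_submissions();
        return -1;
    }
    return -2;
}

The completion path would clear submissions_paused once enough requests
have drained; the point is that the error feeds a policy decision
instead of a retry loop.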