Message-Id: <200904030335.13439.nickpiggin@yahoo.com.au>
Date: Fri, 3 Apr 2009 03:35:12 +1100
From: Nick Piggin <nickpiggin@...oo.com.au>
To: David Howells <dhowells@...hat.com>
Cc: viro@...iv.linux.org.uk, nfsv4@...ux-nfs.org,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 23/43] CacheFiles: Permit the page lock state to be monitored [ver #46]
On Friday 03 April 2009 03:14:52 David Howells wrote:
> Nick Piggin <nickpiggin@...oo.com.au> wrote:
>
> > I prefer to nack this because it is exporting details of the page
> > locking mechanism. unlock_page is very heavyweight in large part
> > because of the memory barriers and cacheline required to check the
> > waitqueue. I have patches to avoid all that if the page lock is
> > not contended.
> >
> > What's wrong with using wait_on_page_locked, like everyone else does?
>
> When fscache_read_or_alloc_pages() is called from, say, nfs_readpages(), and is
> given a few hundred pages to readahead from the cache, who does the
> wait_on_page_locked() on each of the _backing_ fs's pages?
Presumably whoever consumes the data does the wait, at the point where the data is actually needed.
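[The conventional pattern being alluded to here can be sketched roughly as follows. This is a simplified illustration, not the CacheFiles code; start_async_read() and copy_to_netfs_page() are hypothetical helpers standing in for whatever kicks off backing-store I/O and copies the result.]

```c
/*
 * Sketch: start I/O on all the backing pages without blocking, and only
 * call wait_on_page_locked() when a particular page's contents are
 * actually consumed.  wait_on_page_locked() and PageUptodate() are the
 * standard pagecache primitives; the helpers are hypothetical.
 */
for (i = 0; i < nr_pages; i++)
        start_async_read(backing_pages[i]);     /* hypothetical: queue read */

/* ... later, at the point page i's data is needed: */
wait_on_page_locked(backing_pages[i]);          /* sleeps until I/O unlocks it */
if (!PageUptodate(backing_pages[i]))
        return -EIO;                            /* read failed */
copy_to_netfs_page(backing_pages[i], netfs_pages[i]);  /* hypothetical */
```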
> The way I've arranged things to work is for the backing fs pages to be copied
> to the netfs pages and released in the order they're read from the disk.
>
> There's a small pool of threads that processes the pages. I don't want to have
> to create a thread for each readpages(), and I don't want readpages() to have
> to wait for all the requests it makes.
Or do you actually have numbers showing a problem if you simply read the pages
and then copy them?
If there is a problem, then why doesn't the fscache_read_or_alloc_pages() caller
do the work itself? Then you get exactly as many threads as you have indivisible
units of work, so completing one part of the request before another wouldn't gain
you anything anyway...