Message-Id: <1217870748.24714.79.camel@koto.keithp.com>
Date: Mon, 04 Aug 2008 10:25:48 -0700
From: Keith Packard <keithp@...thp.com>
To: Hugh Dickins <hugh@...itas.com>
Cc: keithp@...thp.com, Nick Piggin <nickpiggin@...oo.com.au>,
Christoph Hellwig <hch@...radead.org>,
Eric Anholt <eric@...olt.net>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] Export shmem_file_setup and shmem_getpage for DRM-GEM
On Mon, 2008-08-04 at 18:09 +0100, Hugh Dickins wrote:
> Whether such usage conforms to VFS API I'm not so sure: as I understand
> it, it's really for internal use by a filesystem
Sure, but presumably it could even be used by a layered file system?
> - if it's going to be
> used beyond that, we ought to add a check that the filesystem it's used
> upon really has a ->readpage method (and I'd rather we add such a check
> than you do it at your end, in case we change the implementation later
> to use something other than a ->readpage method - Nick, you'll be
> nauseated to hear I was looking to see if ->fault with a pseudo-vma
> could do it). But if the layering police are happy with this, I am.
It seems like I should put a kernel-version-dependent check into my code
so that I don't oops if someone tries to use a filesystem that lacks
->readpage.
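For reference, the check I have in mind looks roughly like this (a
sketch only; the helper name is made up, 2.6.26-era struct layouts are
assumed, and the test would have to change if the implementation moves
away from ->readpage as Hugh suggests):

	/* Sketch: refuse filesystems without a ->readpage method,
	 * since read_mapping_page() would oops on them. */
	static int gem_check_readpage(struct address_space *mapping)
	{
		if (!mapping->a_ops || !mapping->a_ops->readpage)
			return -EINVAL;
		return 0;
	}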
> But that route is in
> use and well-tested, and only an inefficiency when swapping, so should
> not cause you any problems.
Yeah, swapping performance isn't my primary concern; I looked through
the read_mapping_page codepath and it looked exactly like my existing
code in the fast path, which is why I was able to delete all of that
from my driver and simply call read_mapping_page.
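The replacement is essentially a one-liner; something like this sketch
(function name hypothetical, assuming the object holds a shmem-backed
struct file, and using the 2.6.26 read_mapping_page signature where the
third argument is filler data):

	/* Sketch: fetch one object page from the page cache, letting
	 * shmem's ->readpage fill it on a miss. */
	static struct page *gem_get_page(struct file *filp, pgoff_t index)
	{
		return read_mapping_page(filp->f_mapping, index, NULL);
	}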
So, when I release the pages from the page cache, I'm currently calling
mark_page_accessed for all pages, and set_page_dirty for pages which may
have been written by the GPU. Are those calls still needed?
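Concretely, the release path I'm describing looks roughly like this
(a sketch with hypothetical names; page_cache_release drops the
reference taken when the page was looked up):

	/* Sketch: tell the VM which pages the GPU touched before
	 * dropping our references to them. */
	static void gem_put_pages(struct page **pages, int count, int dirty)
	{
		int i;

		for (i = 0; i < count; i++) {
			if (dirty)
				set_page_dirty(pages[i]);
			mark_page_accessed(pages[i]);
			page_cache_release(pages[i]);
		}
	}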
--
keith.packard@...el.com