Message-Id: <200808042043.46710.nickpiggin@yahoo.com.au>
Date: Mon, 4 Aug 2008 20:43:46 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Keith Packard <keithp@...thp.com>
Cc: Christoph Hellwig <hch@...radead.org>,
Eric Anholt <eric@...olt.net>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] Export shmem_file_setup and shmem_getpage for DRM-GEM
On Monday 04 August 2008 20:26, Keith Packard wrote:
> On Mon, 2008-08-04 at 19:02 +1000, Nick Piggin wrote:
> > This is how I'd suggested it work as well. I think a little bit
> > more effort should be spent looking at making this work.
>
> What I may be able to do is create a file, then hand it to my driver and
> close the fd. That would avoid any ulimit or low-fd issues.
Sure.
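
(Sketching that flow concretely -- note that DRM_IOCTL_GEM_ADOPT_FD, the
device node and the backing file path below are all made-up names for
illustration, not an existing interface:)

/* Hypothetical userspace sketch: create a file, hand it to the driver,
 * then close the fd so it never counts against ulimits. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/ioctl.h>

/* Made-up request number -- a real driver would define its own. */
#define DRM_IOCTL_GEM_ADOPT_FD	_IOW('d', 0x40, int)

int main(void)
{
	int drm = open("/dev/dri/card0", O_RDWR);
	int backing = open("/tmp/gem-object", O_RDWR | O_CREAT | O_EXCL, 0600);

	if (drm < 0 || backing < 0) {
		perror("open");
		return 1;
	}
	if (ftruncate(backing, 1024 * 1024) < 0)	/* size the object */
		perror("ftruncate");

	/* The driver takes its own reference to the struct file (fget()),
	 * so the fd can be closed immediately afterwards. */
	if (ioctl(drm, DRM_IOCTL_GEM_ADOPT_FD, &backing) < 0)
		perror("ioctl");
	close(backing);
	close(drm);
	return 0;
}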
> > Mapping the file into an address space might be a way to make it
> > work (using get_user_pages to get the struct page). splice might
> > also work. read_mapping_page or similar could also be something to
> > look at. But using shmem_getpage seems wrong because it circumvents
> > the vfs API.
>
> It seems fairly ugly to map the object to user space just to get page
> pointers; the expense of constructing that mapping will be entirely
> wasted most of the time.
True. Mapping would also make it possible for the userspace program to pass
in anonymous pages, but that's probably not a big deal if you're using files
and shmem-based management anyway.
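
(For reference, roughly what the mapping-based approach would look like on
the kernel side; pin_object_pages() is a made-up helper, the caller is
assumed to have mmap()ed the object at 'uaddr', and the get_user_pages()
signature shown is the one in current kernels -- it has changed before and
may again:)

#include <linux/mm.h>
#include <linux/sched.h>

/* Made-up helper: pin the pages backing a user mapping of the object. */
static int pin_object_pages(unsigned long uaddr, int npages,
			    struct page **pages)
{
	int pinned;

	down_read(&current->mm->mmap_sem);
	pinned = get_user_pages(current, current->mm, uaddr, npages,
				1 /* write */, 0 /* force */,
				pages, NULL);
	up_read(&current->mm->mmap_sem);

	/* Returns the number of pages pinned, or a negative errno;
	 * each page must be released with page_cache_release() later. */
	return pinned;
}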
> Would it be imprudent to use pagecache_write_begin/pagecache_write_end
> here? For shmem, that appears to point at functions which will do what I
> need. Of course, it will cause extra page-outs as each page will be
> marked dirty, even if the GPU never writes them.
>
> While shmem offers good semantics for graphics objects, it doesn't seem
> like it is unique in any way, and it seems like it should be possible to
> do this operation on any file system.
pagecache_write_begin/pagecache_write_end should be reasonable, but you
have to be careful of their semantics. For example, you can't rely on
reading anything from the page between the two calls, because the
filesystem may not have brought it up to date.
read_mapping_page might help there.
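
Roughly (a sketch only -- write_one_page_of_object() is a made-up helper,
'pos'+'len' is assumed not to cross a page boundary, and error handling is
abbreviated):

#include <linux/fs.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>
#include <linux/string.h>

/* Fill part of one page of a file through the generic pagecache write
 * API -- nothing shmem-specific here. */
static int write_one_page_of_object(struct file *file, loff_t pos,
				    const void *src, unsigned len)
{
	struct address_space *mapping = file->f_mapping;
	struct page *page;
	void *fsdata;
	char *kaddr;
	int ret;

	ret = pagecache_write_begin(file, mapping, pos, len, 0,
				    &page, &fsdata);
	if (ret)
		return ret;

	/* Between write_begin and write_end the page is not guaranteed
	 * to be up to date, so only write into it -- never read from it. */
	kaddr = kmap_atomic(page, KM_USER0);
	memcpy(kaddr + (pos & (PAGE_CACHE_SIZE - 1)), src, len);
	kunmap_atomic(kaddr, KM_USER0);

	/* Returns the number of bytes committed, or a negative errno. */
	return pagecache_write_end(file, mapping, pos, len, len,
				   page, fsdata);
}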
> > If you genuinely have problems that can't be fit into existing
> > APIs without significant modification, and that is specific just to
> > your app, then we could always look at making special cases for you.
> > But it would be nice if we generically solve problems you have with
> > processes manipulating thousands of files.
>
> There are some unique aspects to this operation which don't really have
> parallels in other environments.
>
> I'm doing memory management for a co-processor which uses the same pages
> as the CPU. So, I need to allocate many pages that are just handed to
> the GPU and never used by the CPU at all. Most rendering buffers are of
> this form -- if you ever need to access them from the CPU, you've done
> something terribly wrong.
>
> Then there are textures which are constructed by the CPU (usually) and
> handed over to the GPU for the entire lifetime of the application. These
> are numerous enough that we need to be able to page them to disk; the
> kernel driver will fault them back in when the GPU needs them again.
>
> On the other hand, there are command and vertex buffers which are
> constructed in user space and passed to the GPU for execution. These
> operate just like any bulk-data transfer, and, in fact, I'm using the
> pwrite API to transmit this data. For these buffers, the entire key is
> to make sure you respect the various caches to keep them from getting
> trashed.
Right, that's your specific implementation, but in some cases the
memory management can map onto, or be implemented with, generic
primitives. Using the pagecache for your memory, for example, should
work nicely. Making it shmem-specific and using internal APIs seems
like a step backwards until you really have a good reason for it.