Date:	Sun, 10 Aug 2008 23:03:17 -0400
From:	Keith Packard <keithp@...thp.com>
To:	Christoph Hellwig <hch@...radead.org>
Cc:	keithp@...thp.com, Eric Anholt <eric@...olt.net>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] Export shmem_file_setup for DRM-GEM

On Sun, 2008-08-10 at 21:23 -0400, Christoph Hellwig wrote:

> shmfs is perfectly happy to have thousands of files, there are workloads
> that have a lot more than that.  Note that in other subthreads there is
> the assumption that all of them are open and in the fd table at the same
> time.  I haven't really seen a good explanation for that - objects on
> shmfs are persistent until deleted, and when you keep your parent
> directory fd open an openat to get at the actual object is cheap.

A typical GPU-intensive application will have a few thousand objects of
GPU-bound data: GPU commands, vertices, textures, rendering buffers,
constants, and other miscellaneous GPU state.

When rendering a typical frame, all of these objects will be referenced
in turn -- each frame is much like the last, so the process of drawing
the frame ends up using almost exactly the same data each time. It's an
animation after all.

Any kind of resource constraint places severe pressure on an eviction
policy -- the obvious 'LRU' turns out to be almost exactly wrong as you
always eject the objects which are next in line for re-use. Obviously,
the right solution is to avoid artificial resource constraints, and
minimize the impact of real constraints.

Limiting the number of objects we can have in-use at a time (by limiting
the number of open FDs) is an artificial resource constraint, forcing
re-use of closed objects to pay an extra openat cost. That may be cheap,
but it's not free.

Plus, we don't want persistent objects here -- when the application
exits, we want the objects to disappear automatically; otherwise you'll
clutter shmem with thousands of spurious files when an application
crashes. So, if we create shmem files, we'll want to unlink them
immediately, leaving only the fd in place to refer to them.

Our alternative is to open the shmem file and pass the fd into our
driver, which holds a reference to the file while the application closes
the fd. I see that as simply a kludge to work around the absence of
anonymous pageable file allocation -- it seems like an abuse of the
current API.

Perhaps we should construct a different API to allocate a pageable
object? Having shmem expose these as 'files' doesn't make a lot of sense
in this context; it just happens to be an easy way to plug them into the
vm system, as is evidenced by ipc/shm using them as well.

-- 
keith.packard@...el.com

