Date:	Fri, 20 Feb 2009 18:33:14 -0800
From:	Eric Anholt <eric@...olt.net>
To:	Nick Piggin <npiggin@...e.de>
Cc:	Peter Zijlstra <peterz@...radead.org>, krh@...planet.net,
	Wang Chen <wangchen@...fujitsu.com>, dri-devel@...ts.sf.net,
	linux-kernel@...r.kernel.org,
	Kristian Høgsberg <krh@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Hugh Dickins <hugh@...itas.com>
Subject: Re: [PATCH] drm: Take mmap_sem up front to avoid lock order
	violations.

On Thu, 2009-02-19 at 13:57 +0100, Nick Piggin wrote:
> On Thu, Feb 19, 2009 at 10:19:05AM +0100, Peter Zijlstra wrote:
> > On Wed, 2009-02-18 at 11:38 -0500, krh@...planet.net wrote:
> > > From: Kristian Høgsberg <krh@...hat.com>
> > > 
> > > A number of GEM operations (and legacy drm ones) want to copy data to
> > > or from userspace while holding the struct_mutex lock.  However, the
> > > fault handler calls us with the mmap_sem held and thus enforces the
> > > opposite locking order.  This patch downs the mmap_sem up front for
> > > those operations that access userspace data under the struct_mutex
> > > lock to ensure the locking order is consistent.
> > > 
> > > Signed-off-by: Kristian Høgsberg <krh@...hat.com>
> > > ---
> > > 
> > > Here's a different and simpler attempt to fix the locking order
> > > problem.  We can just down_read() the mmap_sem pre-emptively up
> > > front, so the locking order is respected.  It's simpler than the
> > > mutex_trylock() game and avoids introducing a new mutex.
> 
> The "simple" way to fix this is to just allocate a temporary buffer
> to copy a snapshot of the data going to/from userspace. Then do the
> real usercopy to/from that buffer outside the locks.
> 
> You don't have any performance-critical bulk copies (i.e. ones that
> will blow the L1 cache), do you?

16kb is the most common size (batchbuffers).  32k is popular on 915
(vertex), and sizes vary between 0 and 128k on 965 (vertex).  The pwrite
path generally represents 10-30% of CPU consumption in CPU-bound apps.

-- 
Eric Anholt
eric@...olt.net                         eric.anholt@...el.com


