Message-Id: <1238201672.4039.396.camel@laptop>
Date: Sat, 28 Mar 2009 01:54:32 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Jesse Barnes <jbarnes@...tuousgeek.org>
Cc: Eric Anholt <eric@...olt.net>, linux-kernel@...r.kernel.org,
dri-devel@...ts.sourceforge.net
Subject: Re: [PATCH 1/6] drm/i915: Fix lock order reversal in GTT pwrite path.
On Thu, 2009-03-26 at 17:43 -0700, Jesse Barnes wrote:
> On Wed, 25 Mar 2009 14:45:05 -0700
> Eric Anholt <eric@...olt.net> wrote:
>
> > Since the pagefault path determines that the lock order we use has to
> > be mmap_sem -> struct_mutex, we can't allow page faults to occur
> > while the struct_mutex is held. To fix this in pwrite, we first try
> > optimistically to see if we can copy from user without faulting. If
> > it fails, fall back to using get_user_pages to pin the user's memory,
> > and map those pages atomically when copying it to the GPU.
> >
> > Signed-off-by: Eric Anholt <eric@...olt.net>
> > ---
> > +	/* Pin the user pages containing the data.  We can't fault while
> > +	 * holding the struct mutex, and all of the pwrite implementations
> > +	 * want to hold it while dereferencing the user data.
> > +	 */
> > +	first_data_page = data_ptr / PAGE_SIZE;
> > +	last_data_page = (data_ptr + args->size - 1) / PAGE_SIZE;
> > +	num_pages = last_data_page - first_data_page + 1;
> > +
> > +	user_pages = kcalloc(num_pages, sizeof(struct page *), GFP_KERNEL);
> > +	if (user_pages == NULL)
> > +		return -ENOMEM;
>
> If kmalloc limits us to a 128k allocation (and maybe less under
> pressure), then we'll be limited to 128k/8 page pointers on 64 bit, or
> 64M per pwrite... Is that ok? Or do we need to handle multiple passes
> here?
While officially supported, a 128k kmalloc is _very_ likely to fail; it
would require an order-5 page allocation to back it, and that is well
outside the comfortable range.
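
With 4k pages, 128k of kmalloc'ed memory means 32 physically contiguous
pages, hence order 5. The usual escape hatch for a big, short-lived
array like this is to fall back to vmalloc() once you grow past a page
or so -- along the lines of this (illustrative, made-up helper):

	static void *malloc_ab(size_t nmemb, size_t size)
	{
		/* Refuse allocations where nmemb * size overflows. */
		if (size != 0 && nmemb > ULONG_MAX / size)
			return NULL;

		if (nmemb * size <= PAGE_SIZE)
			return kmalloc(nmemb * size, GFP_KERNEL);

		/* vmalloc() is backed by order-0 pages only. */
		return vmalloc(nmemb * size);
	}

Or do multiple passes over a fixed-size array, of course.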