Date:	Sat, 28 Mar 2009 15:22:00 +1000
From:	Dave Airlie <airlied@...il.com>
To:	Jesse Barnes <jbarnes@...tuousgeek.org>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Eric Anholt <eric@...olt.net>, linux-kernel@...r.kernel.org,
	dri-devel@...ts.sourceforge.net
Subject: Re: [PATCH 1/6] drm/i915: Fix lock order reversal in GTT pwrite path.

On Sat, Mar 28, 2009 at 12:35 PM, Jesse Barnes <jbarnes@...tuousgeek.org> wrote:
> On Sat, 28 Mar 2009 01:54:32 +0100
> Peter Zijlstra <peterz@...radead.org> wrote:
>
>> On Thu, 2009-03-26 at 17:43 -0700, Jesse Barnes wrote:
>> > On Wed, 25 Mar 2009 14:45:05 -0700
>> > Eric Anholt <eric@...olt.net> wrote:
>> >
>> > > Since the pagefault path determines that the lock order we use
>> > > has to be mmap_sem -> struct_mutex, we can't allow page faults to
>> > > occur while the struct_mutex is held.  To fix this in pwrite, we
>> > > first try optimistically to see if we can copy from user without
>> > > faulting.  If that fails, we fall back to using get_user_pages to
>> > > pin the user's memory and map those pages atomically when copying
>> > > them to the GPU.
>> > >
>> > > Signed-off-by: Eric Anholt <eric@...olt.net>
>> > > ---
>> > > +	/* Pin the user pages containing the data.  We can't fault while
>> > > +	 * holding the struct mutex, and all of the pwrite implementations
>> > > +	 * want to hold it while dereferencing the user data.
>> > > +	 */
>> > > +	first_data_page = data_ptr / PAGE_SIZE;
>> > > +	last_data_page = (data_ptr + args->size - 1) / PAGE_SIZE;
>> > > +	num_pages = last_data_page - first_data_page + 1;
>> > > +
>> > > +	user_pages = kcalloc(num_pages, sizeof(struct page *), GFP_KERNEL);
>> > > +	if (user_pages == NULL)
>> > > +		return -ENOMEM;
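
For reference, a minimal sketch of the fast path the patch description
outlines; the function and variable names here are illustrative, not
the actual i915 code:

	#include <linux/uaccess.h>

	/* Fast path: with page faults disabled this copy cannot sleep,
	 * so it is safe while struct_mutex is held; it simply fails if
	 * the user pages are not resident.
	 */
	static int optimistic_copy(void *dst, const void __user *user_data,
				   unsigned long len)
	{
		unsigned long unwritten;	/* bytes NOT copied */

		pagefault_disable();
		unwritten = __copy_from_user_inatomic(dst, user_data, len);
		pagefault_enable();

		return unwritten ? -EFAULT : 0;
	}

On -EFAULT the caller drops struct_mutex, pins the source pages with
get_user_pages(), retakes the mutex, and copies through atomic mappings,
so no fault can ever occur under the lock.
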
>> >
>> > If kmalloc limits us to a 128k allocation (and maybe less under
>> > pressure), then we'll be limited to 128k/8 page pointers on 64 bit,
>> > or 64M per pwrite...  Is that ok?  Or do we need to handle multiple
>> > passes here?
>>
>> While officially supported, a 128k kmalloc is _very_ likely to fail;
>> it would require an order-5 page allocation to back it, and that is
>> well outside of comfortable.
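
(For reference: 128k spans 32 contiguous 4k pages, i.e. 2^5 pages,
hence an order-5 allocation; and 128k of 8-byte page pointers is 16384
pointers, covering 16384 * 4k = 64M per pwrite, which is where the 64M
figure above comes from.)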
>
> Yeah, my "and maybe less" could have been worded a tad more strongly. ;)
> Do we have stats on which kmalloc buckets have available allocations
> anywhere for machines under various workloads?  I know under heavy
> pressure even 8k allocations can fail, but since this is a GFP_KERNEL
> allocation things should be a *little* better.

You might want to check what TTM did; it had a vmalloc fallback.
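
Roughly, the fallback pattern in question (a hedged sketch with
illustrative names, not TTM's actual code; mainline much later grew
kvmalloc()/kvfree() for exactly this):

	#include <linux/slab.h>
	#include <linux/vmalloc.h>
	#include <linux/mm.h>		/* is_vmalloc_addr() */

	/* Try a physically contiguous kmalloc first; on failure fall
	 * back to vmalloc, which only needs virtually contiguous memory
	 * and so survives the fragmentation that defeats high-order
	 * allocations.
	 */
	static void *pages_array_alloc(size_t size)
	{
		void *p = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);

		if (p == NULL)
			p = vmalloc(size);
		return p;
	}

	static void pages_array_free(void *p)
	{
		if (is_vmalloc_addr(p))
			vfree(p);
		else
			kfree(p);
	}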

Another issue that might need checking in GEM is whether unprivileged
userspace can cause the kernel to do unbounded small kmallocs and keep
them alive, either via object references or anything like that.

Dave.
>
> --
> Jesse Barnes, Intel Open Source Technology Center
