Message-ID: <5232288F.4070904@vmware.com>
Date: Thu, 12 Sep 2013 22:48:15 +0200
From: Thomas Hellstrom <thellstrom@...are.com>
To: Thomas Gleixner <tglx@...utronix.de>
CC: Daniel Vetter <daniel.vetter@...ll.ch>,
Peter Zijlstra <peterz@...radead.org>,
Dave Airlie <airlied@...ux.ie>,
Maarten Lankhorst <maarten.lankhorst@...onical.com>,
intel-gfx <intel-gfx@...ts.freedesktop.org>,
dri-devel <dri-devel@...ts.freedesktop.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [BUG] completely bonkers use of set_need_resched + VM_FAULT_NOPAGE
On 09/12/2013 10:39 PM, Thomas Gleixner wrote:
> On Thu, 12 Sep 2013, Daniel Vetter wrote:
>
>> On Thu, Sep 12, 2013 at 10:20 PM, Thomas Gleixner <tglx@...utronix.de> wrote:
>>>> I think for ttm drivers it's just execbuf being exploitable. But on
>>>> drm/i915 we've
>>>> had the same issue with the pwrite/pread ioctls, so a simple
>>>> glBufferData(glMap) kind of recursion from gl clients blew the kernel
>>>> to pieces ...
>>> And the only answer you folks came up with is set_need_resched() and
>>> yield()? Oh well....
>> The yield was for a different livelock, and that one has also been fixed
>> by now. The fault handler deadlock was fixed in the usual "drop locks and
>> jump into the slowpath" fashion, at least in drm/i915.
> So we can remove that whole yield/set_need_resched() mess completely?
>
> Thanks,
>
> tglx
No.
The while(trylock) is there to address a potential lock-inversion
deadlock. If the trylock fails, the code returns to user-space (with
VM_FAULT_NOPAGE), and user-space retries the fault. This code needs to
stay until we either come up with a way to drop the mmap_sem and sleep
before returning to user-space, or fix a bunch of code to use a
different locking order.
The set_need_resched() can (and should according to Peter) go.
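
To make the shape of the problem concrete, here is a rough sketch of such
a fault handler. The names (drv_bo, drv_bo_fault) are hypothetical and a
plain mutex stands in for the real reservation object; this is not the
actual TTM code, just an illustration of the trylock-and-retry pattern:

#include <linux/mm.h>
#include <linux/mutex.h>

struct drv_bo {
	struct mutex lock;	/* taken outside mmap_sem on other paths */
	/* ... backing storage ... */
};

static int drv_bo_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct drv_bo *bo = vma->vm_private_data;

	/*
	 * The fault handler runs with mmap_sem already held.  Other
	 * paths take bo->lock first and may then take mmap_sem, so
	 * blocking on bo->lock here could deadlock.  Hence only a
	 * trylock; on contention we return without inserting a PTE
	 * and user-space retries the faulting access.
	 *
	 * The set_need_resched() that used to sit before this return
	 * is the part that can go, per the discussion above.
	 */
	if (!mutex_trylock(&bo->lock))
		return VM_FAULT_NOPAGE;

	/* ... validate the object and insert the PTE(s), e.g. with
	 * vm_insert_pfn(), then: */

	mutex_unlock(&bo->lock);
	return VM_FAULT_NOPAGE;
}

The retry is driven from user-space rather than by sleeping under
mmap_sem, which is exactly the limitation described above.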
/Thomas