Message-id: <4D426D8B.3050708@huawei.com>
Date: Fri, 28 Jan 2011 15:17:31 +0800
From: Xiaowei Yang <xiaowei.yang@...wei.com>
To: Nick Piggin <npiggin@...il.com>
Cc: Jan Beulich <JBeulich@...ell.com>, Nick Piggin <npiggin@...nel.dk>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
fanhenglong@...wei.com, Kaushik Barde <kbarde@...wei.com>,
Kenneth Lee <liguozhu@...wei.com>,
linqaingmin <linqiangmin@...wei.com>, wangzhenguo@...wei.com,
Wu Fengguang <fengguang.wu@...el.com>,
"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>,
linux-kernel@...r.kernel.org
Subject: Re: One (possible) x86 get_user_pages bug
On 2011-1-28 5:24, Nick Piggin wrote:
> On Fri, Jan 28, 2011 at 3:07 AM, Jan Beulich<JBeulich@...ell.com> wrote:
>>>>> On 27.01.11 at 14:05, Xiaowei Yang<xiaowei.yang@...wei.com> wrote:
>>> We created a scenario to reproduce the bug:
>>> ----------------------------------------------------------------
>>> // proc1/proc1.2 are 2 threads sharing one page table.
>>> // proc1 is the parent of proc2.
>>>
>>> proc1               proc2        proc1.2
>>> ...                 ...          // in gup_pte_range()
>>> ...                 ...          pte = gup_get_pte()
>>> ...                 ...          page1 = pte_page(pte)  // (1)
>>> do_wp_page(page1)   ...          ...
>>> ...                 exit_map()   ...
>>> ...                 ...          get_page(page1)        // (2)
>>> -----------------------------------------------------------------
>>>
>>> do_wp_page() and exit_map() cause page1 to be released to the free
>>> list before get_page() in proc1.2 is called. The longer the delay
>>> between (1) and (2), the more easily the BUG_ON triggers.
>>
>> Contrary to what I responded initially, I don't think this can happen
>> outside of Xen: do_wp_page() won't reach page_cache_release() while
>> gup_pte_range() is running for the same mm on another CPU,
>> since it can't get past ptep_clear_flush() (which waits for the CPU
>> in get_user_pages_fast() to re-enable interrupts).
>
> Yeah, this cannot happen on native.
>
>
>>> We made an experimental patch to prevent the PTE from being modified
>>> in the middle of gup_pte_range(). The BUG_ON disappears with it applied.
>>>
>>> However, from the comments embedded in gup.c, it seems the lock is
>>> deliberately avoided in the fast path. The question is: if so, how
>>> can the above scenario be avoided?
>>
>> Nick, based on your doing of the initial implementation, would
>> you be able to estimate whether disabling get_user_pages_fast()
>> altogether for Xen would be performing measurably worse than
>> adding the locks (but continuing to avoid acquiring mm->mmap_sem)
>> as suggested by Xiaowei? That's of course only if the latter is correct
>> at all, of which I haven't fully convinced myself yet.
>
> You must have some way to guarantee existence of Linux page
> tables when you walk them in order to resolve a TLB refill.
>
> x86 does this with IPI and hardware fill that is atomic WRT interrupts.
> So fast gup can disable interrupts to walk the page tables. I don't
> think it is fragile; it is absolutely tied to the system ISA (of course
> that can change, but as Peter said, other things would have to change too).
>
> Other architectures use RCU for this, so fast gup uses a lockless-
> pagecache-like protocol for that.
>
> If Xen is not using IPIs for flush, it should use whatever locks or
> synchronization its TLB refill is using.
Thanks everyone! It's very clear now that the problem only occurs on a
Xen PV kernel, which doesn't use IPIs to flush the TLB and therefore
lacks the implicit synchronization. For now we can disable the fast path
as a temporary solution until a better one comes up -- compared to
adding extra locks, the slow path may not be that bad, and
get_user_pages_fast() is actually not called that often in our
environment.

However, this issue raises another concern: could there be other places
inside the Xen PV kernel that have the same synchronization problem?
Thanks,
Xiaowei
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/