Message-ID: <CALYGNiOgSVgq+iaUs-f9MB4o8yOWb7jk6eEA=SMrqJh5K=6+hQ@mail.gmail.com>
Date: Mon, 9 Feb 2015 13:46:18 +0400
From: Konstantin Khlebnikov <koct9i@...il.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: "linux-mm@...ck.org" <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: BUG: stuck on mmap_sem in 3.18.6
On Mon, Feb 9, 2015 at 12:36 PM, Vlastimil Babka <vbabka@...e.cz> wrote:
> On 02/09/2015 08:14 AM, Konstantin Khlebnikov wrote:
>> Python was running under the ptrace-based sandbox "sydbox" used by the
>> Exherbo chroot. Kernel: 3.18.6 plus my patch "mm: prevent endless growth
>> of anon_vma hierarchy" (the patch itself seems stable).
>>
>> [ 4674.087780] INFO: task python:25873 blocked for more than 120 seconds.
>> [ 4674.087793] Tainted: G U 3.18.6-zurg+ #158
>> [ 4674.087797] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>> disables this message.
>> [ 4674.087801] python D ffff88041e2d2000 14176 25873 25630 0x00000102
>> [ 4674.087817] ffff880286247b68 0000000000000086 ffff8803d5fe6b40 0000000000012000
>> [ 4674.087824] ffff880286247fd8 0000000000012000 ffff88040c16eb40 ffff8803d5fe6b40
>> [ 4674.087830] 0000000300000003 ffff8803d5fe6b40 ffff880362888e78 ffff880362888e60
>> [ 4674.087836] Call Trace:
>> [ 4674.087854] [<ffffffff81696be9>] schedule+0x29/0x70
>> [ 4674.087865] [<ffffffff81699815>] rwsem_down_write_failed+0x1d5/0x2f0
>> [ 4674.087873] [<ffffffff812d4c73>] call_rwsem_down_write_failed+0x13/0x20
>> [ 4674.087881] [<ffffffff816990c1>] ? down_write+0x31/0x50
>> [ 4674.087891] [<ffffffff811f3b44>] do_coredump+0x144/0xee0
>> [ 4674.087900] [<ffffffff810b66f7>] ? pick_next_task_fair+0x397/0x450
>> [ 4674.087909] [<ffffffff810026a6>] ? __switch_to+0x1d6/0x5f0
>> [ 4674.087915] [<ffffffff816966e6>] ? __schedule+0x3a6/0x880
>> [ 4674.087924] [<ffffffff81690000>] ? klist_remove+0x40/0xd0
>> [ 4674.087932] [<ffffffff81093988>] get_signal+0x298/0x6b0
>> [ 4674.087940] [<ffffffff81003588>] do_signal+0x28/0xbb0
>> [ 4674.087946] [<ffffffff8109276d>] ? do_send_sig_info+0x5d/0x80
>> [ 4674.087955] [<ffffffff81004179>] do_notify_resume+0x69/0xb0
>> [ 4674.087963] [<ffffffff8169b028>] int_signal+0x12/0x17
>>
>> Maybe this guy did something wrong?
>
> Well, he has do_coredump on the stack, so he did something wrong in userspace?
> But here he's just waiting in down_write. Unless there's some bug in
> do_coredump that takes the lock for read and then for write, without an
> unlock in between?
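> Something like this userspace sketch of the pattern (hypothetical, not
> the actual 3.18 do_coredump code; a pthread rwlock standing in for the
> kernel rwsem):
>
> #include <pthread.h>
> #include <stdio.h>
>
> static pthread_rwlock_t sem = PTHREAD_RWLOCK_INITIALIZER;
>
> int main(void)
> {
>         /* analogue of down_read(&mm->mmap_sem) */
>         pthread_rwlock_rdlock(&sem);
>         printf("read lock held, now taking it for write...\n");
>         /* the suspected bug: down_write() with no up_read() in
>          * between; the write lock waits for all readers to drain,
>          * including ourselves */
>         pthread_rwlock_wrlock(&sem);
>         printf("never reached\n");
>         return 0;
> }
>
> Compile with -pthread; it hangs on the wrlock forever.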
I mean khugepaged. That code looks really messy. Maybe it already holds
mmap_sem for read and tries to take it again:
[ 5153.460186] INFO: task khugepaged:262 blocked for more than 120 seconds.
[ 5153.460198] Tainted: G U 3.18.6-zurg+ #158
[ 5153.460201] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[ 5153.460206] khugepaged D ffff88041e292000 14496 262 2 0x00000000
[ 5153.460220] ffff88040b99bcb0 0000000000000046 ffff88040b994a40 0000000000012000
[ 5153.460227] ffff88040b99bfd8 0000000000012000 ffff88040c16e300 ffff88040b994a40
[ 5153.460233] ffffffff810d5c1b ffff88040b994a40 ffff880362888e60 ffffffffffffffff
[ 5153.460240] Call Trace:
[ 5153.460255] [<ffffffff810d5c1b>] ? lock_timer_base.isra.41+0x2b/0x50
[ 5153.460264] [<ffffffff81696be9>] schedule+0x29/0x70
[ 5153.460272] [<ffffffff81699a05>] rwsem_down_read_failed+0xd5/0x120
[ 5153.460280] [<ffffffff812d4c44>] call_rwsem_down_read_failed+0x14/0x30
[ 5153.460287] [<ffffffff81699084>] ? down_read+0x24/0x30
[ 5153.460297] [<ffffffff81191221>] khugepaged+0x381/0x13f0
[ 5153.460309] [<ffffffff810bb400>] ? abort_exclusive_wait+0xb0/0xb0
[ 5153.460316] [<ffffffff81190ea0>] ? maybe_pmd_mkwrite+0x30/0x30
[ 5153.460325] [<ffffffff810a217b>] kthread+0xdb/0x100
[ 5153.460332] [<ffffffff810a20a0>] ? kthread_create_on_node+0x170/0x170
[ 5153.460340] [<ffffffff8169acfc>] ret_from_fork+0x7c/0xb0
[ 5153.460347] [<ffffffff810a20a0>] ? kthread_create_on_node+0x170/0x170
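
If khugepaged really takes mmap_sem for read twice, the failure mode
would be like the sketch below (a userspace analogue, assuming
writer-preferring queueing as in the kernel rwsem; this is not the
actual khugepaged code):

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t sem;

/* stands in for python taking mmap_sem for write in do_coredump */
static void *writer(void *arg)
{
        (void)arg;
        pthread_rwlock_wrlock(&sem);    /* queues behind the reader */
        pthread_rwlock_unlock(&sem);
        return NULL;
}

int main(void)
{
        pthread_rwlockattr_t attr;
        pthread_t w;

        pthread_rwlockattr_init(&attr);
        /* a queued writer blocks new readers, as with the kernel rwsem */
        pthread_rwlockattr_setkind_np(&attr,
                        PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
        pthread_rwlock_init(&sem, &attr);

        pthread_rwlock_rdlock(&sem);            /* first down_read() */
        pthread_create(&w, NULL, writer, NULL);
        sleep(1);                               /* let the writer queue up */

        /* the suspected second down_read(): queues behind the writer,
         * which in turn waits for our first read lock -> deadlock */
        pthread_rwlock_rdlock(&sem);
        printf("never reached\n");
        return 0;
}

With the writer parked in the middle, the second rdlock never returns,
and everybody behind it is stuck, too.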
>
>> Looks like mmap_sem is locked for read:
>
> So we have the python waiting for write, blocking all new readers (that's how
> read/write locks work, right?), but itself waiting for a prior reader to finish.
> The question is: who is/was that reader? You could search for the mmap_sem or mm
> address in the rest of the processes' stacks, and maybe you'll find him?
>
I haven't found anything suspicious. The kernel was built without any
debugging options, so it's hard to tell who holds mmap_sem; maybe that
task has already exited.