Message-Id: <87va6bwlfg.fsf@linux.ibm.com>
Date: Tue, 09 Oct 2018 14:23:23 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>,
syzbot+1577fbe983d20fe2e88f@...kaller.appspotmail.com
Cc: David Miller <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Alexey Kuznetsov <kuznet@....inr.ac.ru>,
LKML <linux-kernel@...r.kernel.org>,
Network Development <netdev@...r.kernel.org>,
syzkaller-bugs@...glegroups.com,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
Andrew Morton <akpm@...ux-foundation.org>,
kirill.shutemov@...ux.intel.com
Subject: Re: general protection fault in __handle_mm_fault

Willem de Bruijn <willemdebruijn.kernel@...il.com> writes:
> On Mon, Oct 8, 2018 at 12:10 PM Willem de Bruijn
> <willemdebruijn.kernel@...il.com> wrote:
>>
>> On Fri, Oct 5, 2018 at 6:27 PM syzbot
>> <syzbot+1577fbe983d20fe2e88f@...kaller.appspotmail.com> wrote:
>> >
>> > Hello,
>> >
>> > syzbot found the following crash on:
>> >
>> > HEAD commit: 25bcda3e8b9f Add linux-next specific files for 20181004
>> > git tree: linux-next
>> > console output: https://syzkaller.appspot.com/x/log.txt?x=130e3bf1400000
>> > kernel config: https://syzkaller.appspot.com/x/.config?x=603d7f9140c3368a
>> > dashboard link: https://syzkaller.appspot.com/bug?extid=1577fbe983d20fe2e88f
>> > compiler: gcc (GCC) 8.0.1 20180413 (experimental)
>> > syz repro: https://syzkaller.appspot.com/x/repro.syz?x=127e88d6400000
>> > C reproducer: https://syzkaller.appspot.com/x/repro.c?x=13cdb67e400000
>>
>> > RIP: 0010:copy_user_enhanced_fast_string+0xe/0x20
>> > arch/x86/lib/copy_user_64.S:180
>> > Code: 89 d1 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 31 c0 0f 1f 00 c3 0f 1f
>> > 80 00 00 00 00 0f 1f 00 83 fa 40 0f 82 70 ff ff ff 89 d1 <f3> a4 31 c0 0f
>> > 1f 00 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 83
>> > RSP: 0018:ffff8801bbe675b8 EFLAGS: 00010202
>> > RAX: 0000000000000000 RBX: 0000000000007a50 RCX: 0000000000001b40
>> > RDX: 0000000000007a50 RSI: 0000000020077000 RDI: ffff8801ce615f10
>> > RBP: ffff8801bbe675f0 R08: ffffed0039cc2f4a R09: ffffed0039cc2f4a
>> > R10: ffffed0039cc2f49 R11: ffff8801ce617a4f R12: 0000000020078b40
>> > R13: 00000000200710f0 R14: ffff8801ce610000 R15: 00007ffffffff000
>> > _copy_from_iter_full+0x263/0xc20 lib/iov_iter.c:724
>> > copy_from_iter_full include/linux/uio.h:124 [inline]
>> > skb_do_copy_data_nocache include/net/sock.h:1951 [inline]
>> > skb_copy_to_page_nocache include/net/sock.h:1977 [inline]
>> > tcp_sendmsg_locked+0x159e/0x3f90 net/ipv4/tcp.c:1338
>>
>> This started on next-20181004. It still happens as of next-20181008.
>>
>> It does not trigger on next-20181003. It does not occur if
>> CONFIG_DEBUG_KOBJECT is disabled.
>
> Bisected to commit e4d0c281a4c9 ("mm/memory.c: recheck page table
> entry with page table lock held").
>
> Verified to not trigger on next-20181008 after reverting that commit.

Can you check with this patch?

diff --git a/mm/memory.c b/mm/memory.c
index fa8894c70575..15c417e8e31d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3505,14 +3505,17 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
* The VMA was not fully populated on mmap() or missing VM_DONTEXPAND
*/
if (!vma->vm_ops->fault) {
-
/*
- * pmd entries won't be marked none during a R/M/W cycle.
+ * If we find a migration pmd entry or a none pmd entry, which
+ * should never happen, return SIGBUS
*/
- if (unlikely(pmd_none(*vmf->pmd)))
+ if (unlikely(!pmd_present(*vmf->pmd)))
ret = VM_FAULT_SIGBUS;
else {
- vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+ vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm,
+ vmf->pmd,
+ vmf->address,
+ &vmf->ptl);
/*
* Make sure this is not a temporary clearing of pte
* by holding ptl and checking again. A R/M/W update
@@ -3520,12 +3523,12 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
* we don't have concurrent modification by hardware
* followed by an update.
*/
- spin_lock(vmf->ptl);
if (unlikely(pte_none(*vmf->pte)))
ret = VM_FAULT_SIGBUS;
else
ret = VM_FAULT_NOPAGE;
- spin_unlock(vmf->ptl);
+
+ pte_unmap_unlock(vmf->pte, vmf->ptl);
}
} else if (!(vmf->flags & FAULT_FLAG_WRITE))
ret = do_read_fault(vmf);
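
For reference (not part of the patch): the hunk being removed took the pte
lock via pte_lockptr()/spin_lock() but never mapped vmf->pte before
dereferencing it in pte_none(*vmf->pte), which would explain the general
protection fault. Below is a minimal sketch of the pte_offset_map_lock()
pattern the patch switches to; the helper name pte_is_populated() is made
up for illustration and does not exist in the kernel.

#include <linux/mm.h>

/*
 * Illustrative sketch only: examine a pte under its page-table lock.
 * pte_offset_map_lock() maps the pte page and takes the (possibly split)
 * ptlock, so the returned pointer is safe to dereference;
 * pte_unmap_unlock() undoes both.
 */
static bool pte_is_populated(struct mm_struct *mm, pmd_t *pmd,
			     unsigned long address)
{
	spinlock_t *ptl;
	pte_t *pte;
	bool populated;

	pte = pte_offset_map_lock(mm, pmd, address, &ptl);

	/*
	 * Recheck under the lock: a racing R/M/W update may clear the pte
	 * transiently, so an unlocked pte_none() check is not reliable.
	 */
	populated = !pte_none(*pte);

	pte_unmap_unlock(pte, ptl);
	return populated;
}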