Message-ID: <20201002071022.GA5379@linux>
Date: Fri, 2 Oct 2020 12:40:22 +0530
From: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
To: Doug Anderson <dianders@...omium.org>
Cc: David Miller <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
netdev <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Stephen Boyd <sboyd@...nel.org>
Subject: Re: [PATCH] net: qrtr: ns: Protect radix_tree_deref_slot() using rcu
read locks
Hi Doug,
On Thu, Oct 01, 2020 at 03:53:12PM -0700, Doug Anderson wrote:
> Hi,
>
> On Mon, Sep 28, 2020 at 4:15 PM David Miller <davem@...emloft.net> wrote:
> >
> > From: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
> > Date: Sat, 26 Sep 2020 22:26:25 +0530
> >
> > > The rcu read locks are needed to avoid a potential race condition while
> > > dereferencing the radix tree from multiple threads. The issue was identified
> > > by syzbot. Below is the crash report:
> > ...
> > > Fixes: 0c2204a4ad71 ("net: qrtr: Migrate nameservice to kernel from userspace")
> > > Reported-and-tested-by: syzbot+0f84f6eed90503da72fc@...kaller.appspotmail.com
> > > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
> >
> > Applied and queued up for -stable, thank you.
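
For context, the pattern the applied patch follows is roughly the sketch
below. This is a minimal illustration with made-up names ('nodes',
walk_nodes()), not the exact hunks from net/qrtr/ns.c:

#include <linux/radix-tree.h>
#include <linux/rcupdate.h>

/* Dereferencing a radix tree slot is only safe against concurrent
 * updates while the RCU read lock is held, so the whole walk is
 * wrapped in rcu_read_lock()/rcu_read_unlock().
 */
static void walk_nodes(struct radix_tree_root *nodes)
{
        struct radix_tree_iter iter;
        void __rcu **slot;
        void *node;

        rcu_read_lock();
        radix_tree_for_each_slot(slot, nodes, &iter, 0) {
                node = radix_tree_deref_slot(slot);
                if (!node)
                        continue;
                if (radix_tree_deref_retry(node)) {
                        slot = radix_tree_iter_retry(&iter);
                        continue;
                }
                /* ... use node; no sleeping calls allowed here ... */
        }
        rcu_read_unlock();
}
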
>
> The cure is worse than the disease. I tested by cherry-picking this back
> to a 5.4-based kernel and got the crash below. I expect the crash would
> also be present on mainline:
>
Thanks for the report! I intended to fix the issue reported by syzbot but
failed to notice the lock_sock() call in qrtr_sendmsg(). Since lock_sock()
might sleep, qrtr_sendmsg() must not be called while holding the RCU read
lock. I'll submit a patch to fix this issue ASAP; one possible direction is
sketched below.
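
One way out (just a sketch of the idea, not the actual patch;
copy_for_notify() and notify_all() are hypothetical helpers) is to copy
whatever each notification needs while still under the RCU read lock, and
only perform the sends after unlocking:

static void notify_servers(struct radix_tree_root *servers)
{
        LIST_HEAD(pending);
        struct radix_tree_iter iter;
        void __rcu **slot;

        rcu_read_lock();
        radix_tree_for_each_slot(slot, servers, &iter, 0) {
                struct qrtr_server *srv = radix_tree_deref_slot(slot);

                if (!srv)
                        continue;
                if (radix_tree_deref_retry(srv)) {
                        slot = radix_tree_iter_retry(&iter);
                        continue;
                }
                /* copy_for_notify() is hypothetical: duplicate the
                 * fields the send needs (GFP_ATOMIC, since we cannot
                 * sleep here) so the entry may go away after unlock.
                 */
                copy_for_notify(srv, &pending);
        }
        rcu_read_unlock();

        /* Safe to sleep now: kernel_sendmsg() ends up in
         * qrtr_sendmsg(), which takes lock_sock().
         */
        notify_all(&pending);
}

Taking a reference on each entry instead of copying would work too, as long
as nothing sleeps before rcu_read_unlock().
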
Thanks,
Mani
> BUG: sleeping function called from invalid context at net/core/sock.c:3000
> in_atomic(): 0, irqs_disabled(): 0, non_block: 0, pid: 7, name: kworker/u16:0
> 3 locks held by kworker/u16:0/7:
> #0: ffffff81b65a7b28 ((wq_completion)qrtr_ns_handler){+.+.}, at:
> process_one_work+0x1bc/0x614
> #1: ffffff81b6edfd58 ((work_completion)(&qrtr_ns.work)){+.+.}, at:
> process_one_work+0x1e4/0x614
> #2: ffffffd01144c328 (rcu_read_lock){....}, at: rcu_lock_acquire+0x8/0x38
> CPU: 6 PID: 7 Comm: kworker/u16:0 Not tainted 5.4.68 #33
> Hardware name: Google Lazor (rev0) with LTE (DT)
> Workqueue: qrtr_ns_handler qrtr_ns_worker
> Call trace:
> dump_backtrace+0x0/0x158
> show_stack+0x20/0x2c
> dump_stack+0xdc/0x180
> ___might_sleep+0x1c0/0x1d0
> __might_sleep+0x50/0x88
> lock_sock_nested+0x34/0x94
> qrtr_sendmsg+0x7c/0x260
> sock_sendmsg+0x44/0x5c
> kernel_sendmsg+0x50/0x64
> lookup_notify+0xa8/0x118
> qrtr_ns_worker+0x8d8/0x1050
> process_one_work+0x338/0x614
> worker_thread+0x29c/0x46c
> kthread+0x150/0x160
> ret_from_fork+0x10/0x18
>
> I'll give the stack crawl from kgdb too since inlining makes things
> less obvious with the above...
>
> (gdb) bt
> #0 arch_kgdb_breakpoint ()
> at .../arch/arm64/include/asm/kgdb.h:21
> #1 kgdb_breakpoint ()
> at .../kernel/debug/debug_core.c:1183
> #2 0xffffffd010131058 in ___might_sleep (
> file=file@...ry=0xffffffd010efec42 "net/core/sock.c",
> line=line@...ry=3000, preempt_offset=preempt_offset@...ry=0)
> at .../kernel/sched/core.c:7994
> #3 0xffffffd010130ee0 in __might_sleep (
> file=0xffffffd010efec42 "net/core/sock.c", line=3000,
> preempt_offset=0)
> at .../kernel/sched/core.c:7965
> #4 0xffffffd01094d1c8 in lock_sock_nested (
> sk=sk@...ry=0xffffff8147e457c0, subclass=0)
> at .../net/core/sock.c:3000
> #5 0xffffffd010b26028 in lock_sock (sk=0xffffff8147e457c0)
> at .../include/net/sock.h:1536
> #6 qrtr_sendmsg (sock=0xffffff8148c4b240, msg=0xffffff81422afab8,
> len=20)
> at .../net/qrtr/qrtr.c:891
> #7 0xffffffd01093f8f4 in sock_sendmsg_nosec (
> sock=0xffffff8148c4b240, msg=0xffffff81422afab8)
> at .../net/socket.c:638
> #8 sock_sendmsg (sock=sock@...ry=0xffffff8148c4b240,
> msg=msg@...ry=0xffffff81422afab8)
> at .../net/socket.c:658
> #9 0xffffffd01093f95c in kernel_sendmsg (sock=0x1,
> msg=msg@...ry=0xffffff81422afab8, vec=<optimized out>,
> vec@...ry=0xffffff81422afaa8, num=<optimized out>, num@...ry=1,
> size=<optimized out>, size@...ry=20)
> at .../net/socket.c:678
> #10 0xffffffd010b28be0 in service_announce_new (
> dest=dest@...ry=0xffffff81422afc20,
> srv=srv@...ry=0xffffff81370f6380)
> at .../net/qrtr/ns.c:127
> #11 0xffffffd010b279f4 in announce_servers (sq=0xffffff81422afc20)
> at .../net/qrtr/ns.c:207
> #12 ctrl_cmd_hello (sq=0xffffff81422afc20)
> at .../net/qrtr/ns.c:328
> #13 qrtr_ns_worker (work=<optimized out>)
> at .../net/qrtr/ns.c:661
> #14 0xffffffd010119a94 in process_one_work (
> worker=worker@...ry=0xffffff8142267900,
> work=0xffffffd0128ddaf8 <qrtr_ns+48>)
> at .../kernel/workqueue.c:2272
> #15 0xffffffd01011a16c in worker_thread (
> __worker=__worker@...ry=0xffffff8142267900)
> at .../kernel/workqueue.c:2418
> #16 0xffffffd01011fb78 in kthread (_create=0xffffff8142269200)
> at .../kernel/kthread.c:268
> #17 0xffffffd01008645c in ret_from_fork ()
> at .../arch/arm64/kernel/entry.S:1169
>
>
> -Doug