Message-ID: <20171030152200.ayfnewoqkxbuk4zh@hirez.programming.kicks-ass.net>
Date: Mon, 30 Oct 2017 16:22:00 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Byungchul Park <byungchul.park@....com>
Cc: Michal Hocko <mhocko@...nel.org>,
Dmitry Vyukov <dvyukov@...gle.com>,
syzbot
<bot+e7353c7141ff7cbb718e4c888a14fa92de41ebaa@...kaller.appspotmail.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Dan Williams <dan.j.williams@...el.com>,
Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
jglisse@...hat.com, LKML <linux-kernel@...r.kernel.org>,
linux-mm@...ck.org, shli@...com, syzkaller-bugs@...glegroups.com,
Thomas Gleixner <tglx@...utronix.de>,
Vlastimil Babka <vbabka@...e.cz>, ying.huang@...el.com,
kernel-team@....com
Subject: Re: possible deadlock in lru_add_drain_all

On Mon, Oct 30, 2017 at 04:10:09PM +0100, Peter Zijlstra wrote:
> I can indeed confirm it's running old code; cpuhp_state is no more.
>
> However, that splat translates like:
>
>    __cpuhp_setup_state()
> #0   cpus_read_lock()
>      __cpuhp_setup_state_cpuslocked()
> #1     mutex_lock(&cpuhp_state_mutex)
>
>
>
>    __cpuhp_state_add_instance()
> #2   mutex_lock(&cpuhp_state_mutex)
>      cpuhp_issue_call()
>        cpuhp_invoke_ap_callback()
> #3         wait_for_completion()
>
>                                      msr_device_create()
>                                        ...
> #4                                       filename_create()
> #3         complete()
>
So all this you can get in a single callchain when you do something
shiny like:
modprobe msr
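
Roughly, the driver side of that looks like the below -- a hedged
sketch, not the actual msr code; the "demo" names and the major number
are made up, while cpuhp_setup_state()/device_create() are the real
APIs involved:

	#include <linux/module.h>
	#include <linux/cpuhotplug.h>
	#include <linux/device.h>
	#include <linux/err.h>
	#include <linux/kdev_t.h>

	#define DEMO_MAJOR	42	/* made up, illustrative only */

	static struct class *demo_class;
	static enum cpuhp_state demo_hp_state;

	static int demo_cpu_online(unsigned int cpu)
	{
		/* creating the node on devtmpfs ends up in filename_create()
		 * (#4), while the loader below sits in wait_for_completion()
		 * (#3) holding #0 and #1 */
		struct device *dev = device_create(demo_class, NULL,
						   MKDEV(DEMO_MAJOR, cpu),
						   NULL, "demo%u", cpu);
		return PTR_ERR_OR_ZERO(dev);
	}

	static int demo_cpu_offline(unsigned int cpu)
	{
		device_destroy(demo_class, MKDEV(DEMO_MAJOR, cpu));
		return 0;
	}

	static int __init demo_init(void)
	{
		int ret;

		demo_class = class_create(THIS_MODULE, "demo");
		if (IS_ERR(demo_class))
			return PTR_ERR(demo_class);

		/* #0 cpus_read_lock(), #1 cpuhp_state_mutex, then the
		 * online callback is invoked for every online CPU */
		ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "demo:online",
					demo_cpu_online, demo_cpu_offline);
		if (ret < 0) {
			class_destroy(demo_class);
			return ret;
		}
		demo_hp_state = ret;
		return 0;
	}

	static void __exit demo_exit(void)
	{
		cpuhp_remove_state(demo_hp_state);
		class_destroy(demo_class);
	}

	module_init(demo_init);
	module_exit(demo_exit);
	MODULE_LICENSE("GPL");
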
>    do_splice()
> #4   file_start_write()
>      do_splice_from()
>        iter_file_splice_write()
> #5       pipe_lock()
>          vfs_iter_write()
>            ...
> #6           inode_lock()
>
>
This is a splice into a devtmpfs file
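
Something like this, say (a hedged userspace sketch; the target path is
made up -- any regular, non-device file created on the devtmpfs mount
will do, and it wants root):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		int pipefd[2], fd;

		if (pipe(pipefd))
			return 1;

		/* a regular file living on devtmpfs */
		fd = open("/dev/splice-target", O_WRONLY | O_CREAT, 0600);
		if (fd < 0)
			return 1;

		if (write(pipefd[1], "x", 1) != 1)
			return 1;

		/* do_splice() -> file_start_write()          (#4)
		 *   iter_file_splice_write() -> pipe_lock()  (#5)
		 *     vfs_iter_write() -> ... -> inode_lock() (#6) */
		if (splice(pipefd[0], NULL, fd, NULL, 1, 0) < 0)
			return 1;

		return 0;
	}
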
>    sys_fcntl()
>      do_fcntl()
>        shmem_fcntl()
> #5         inode_lock()
#6 (obviously)
>            shmem_wait_for_pins()
>              if (!scan)
>                lru_add_drain_all()
> #0                 cpus_read_lock()
>
This is the right fcntl()
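
That is, something on the order of sealing a memfd (hedged sketch;
F_ADD_SEALS with F_SEAL_WRITE is the path into shmem_wait_for_pins();
the memfd_create() wrapper needs a recent glibc, else a raw syscall()):

	#define _GNU_SOURCE
	#include <sys/mman.h>
	#include <fcntl.h>

	int main(void)
	{
		int fd = memfd_create("seal-me", MFD_ALLOW_SEALING);

		if (fd < 0)
			return 1;

		/* shmem_fcntl() -> inode_lock()                   (#6)
		 *   shmem_wait_for_pins() -> lru_add_drain_all()
		 *     -> cpus_read_lock()                         (#0) */
		if (fcntl(fd, F_ADD_SEALS, F_SEAL_WRITE) < 0)
			return 1;

		return 0;
	}
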
So 3 different callchains, and *splat*..