Message-ID: <20200403093612.mtd7edubsng24uuh@wittgenstein>
Date: Fri, 3 Apr 2020 11:36:12 +0200
From: Christian Brauner <christian.brauner@...ntu.com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: syzbot <syzbot+f675f964019f884dbd0f@...kaller.appspotmail.com>,
adobriyan@...il.com, akpm@...ux-foundation.org,
allison@...utok.net, areber@...hat.com, aubrey.li@...ux.intel.com,
avagin@...il.com, bfields@...ldses.org, christian@...uner.io,
cyphar@...har.com, ebiederm@...ssion.com,
gregkh@...uxfoundation.org, guro@...com, jlayton@...nel.org,
joel@...lfernandes.org, keescook@...omium.org,
linmiaohe@...wei.com, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, mhocko@...e.com, mingo@...nel.org,
peterz@...radead.org, sargun@...gun.me,
syzkaller-bugs@...glegroups.com, tglx@...utronix.de,
viro@...iv.linux.org.uk
Subject: Re: possible deadlock in send_sigurg

On Fri, Apr 03, 2020 at 11:11:35AM +0200, Oleg Nesterov wrote:
> On 04/02, syzbot wrote:
> >
> > lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
> > __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
> > _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
> > spin_lock include/linux/spinlock.h:353 [inline]
> > proc_pid_make_inode+0x1f9/0x3c0 fs/proc/base.c:1880
>
> Yes, spin_lock(wait_pidfd.lock) is not safe...
>
> Eric, at first glance the fix is simple.
>
> Oleg.
>
>
> diff --git a/fs/proc/base.c b/fs/proc/base.c
Um, when did this lock get added to proc/base.c in the first place and
why has it been abused for this?
People just recently complained loudly in the cred_guard_mutex thread
that abusing locks for things they weren't intended for is a bad
idea...
> index 74f948a6b621..9ec8c114aa60 100644
> --- a/fs/proc/base.c
> +++ b/fs/proc/base.c
> @@ -1839,9 +1839,9 @@ void proc_pid_evict_inode(struct proc_inode *ei)
> struct pid *pid = ei->pid;
>
> if (S_ISDIR(ei->vfs_inode.i_mode)) {
> - spin_lock(&pid->wait_pidfd.lock);
> + spin_lock_irq(&pid->wait_pidfd.lock);
> hlist_del_init_rcu(&ei->sibling_inodes);
> - spin_unlock(&pid->wait_pidfd.lock);
> + spin_unlock_irq(&pid->wait_pidfd.lock);
> }
>
> put_pid(pid);
> @@ -1877,9 +1877,9 @@ struct inode *proc_pid_make_inode(struct super_block * sb,
> /* Let the pid remember us for quick removal */
> ei->pid = pid;
> if (S_ISDIR(mode)) {
> - spin_lock(&pid->wait_pidfd.lock);
> + spin_lock_irq(&pid->wait_pidfd.lock);
> hlist_add_head_rcu(&ei->sibling_inodes, &pid->inodes);
> - spin_unlock(&pid->wait_pidfd.lock);
> + spin_unlock_irq(&pid->wait_pidfd.lock);
> }
>
> task_dump_owner(task, 0, &inode->i_uid, &inode->i_gid);
> diff --git a/fs/proc/inode.c b/fs/proc/inode.c
> index 1e730ea1dcd6..6b7ee76e1b36 100644
> --- a/fs/proc/inode.c
> +++ b/fs/proc/inode.c
> @@ -123,9 +123,9 @@ void proc_invalidate_siblings_dcache(struct hlist_head *inodes, spinlock_t *lock
> if (!node)
> break;
> ei = hlist_entry(node, struct proc_inode, sibling_inodes);
> - spin_lock(lock);
> + spin_lock_irq(lock);
> hlist_del_init_rcu(&ei->sibling_inodes);
> - spin_unlock(lock);
> + spin_unlock_irq(lock);
>
> inode = &ei->vfs_inode;
> sb = inode->i_sb;
>
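For reference, a minimal sketch of the locking rule the patch relies on
(hypothetical example_lock/example_add/example_irq_handler names, not
the actual wait_pidfd call chain): when a spinlock can also be taken
from interrupt context, process context must disable interrupts while
holding it, otherwise an interrupt arriving at the wrong moment can try
to take the lock that is already held on the same CPU and spin forever,
which is what lockdep is flagging here.

/*
 * Illustrative sketch only -- hypothetical names, not the real
 * wait_pidfd users.
 */
#include <linux/spinlock.h>
#include <linux/list.h>

static DEFINE_SPINLOCK(example_lock);
static LIST_HEAD(example_list);

/*
 * Process context: with a plain spin_lock() here, an interrupt could
 * fire while example_lock is held and example_irq_handler() would spin
 * on it forever.  Disabling interrupts (spin_lock_irq) closes that
 * window, which is what the hunks above do for wait_pidfd.lock.
 */
void example_add(struct list_head *entry)
{
	spin_lock_irq(&example_lock);
	list_add(entry, &example_list);
	spin_unlock_irq(&example_lock);
}

/*
 * (Soft)irq context: takes the same lock, so holding example_lock with
 * interrupts enabled anywhere else is unsafe.
 */
void example_irq_handler(struct list_head *entry)
{
	unsigned long flags;

	spin_lock_irqsave(&example_lock, flags);
	list_del_init(entry);
	spin_unlock_irqrestore(&example_lock, flags);
}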