Message-Id: <20180829.180516.859582570374575869.davem@davemloft.net>
Date: Wed, 29 Aug 2018 18:05:16 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: xiyou.wangcong@...il.com
Cc: netdev@...r.kernel.org, tipc-discussion@...ts.sourceforge.net,
jon.maloy@...csson.com, ying.xue@...driver.com
Subject: Re: [Patch net] tipc: switch to rhashtable iterator
From: Cong Wang <xiyou.wangcong@...il.com>
Date: Fri, 24 Aug 2018 12:28:06 -0700
> syzbot reported a use-after-free in tipc_group_fill_sock_diag(),
> where tipc_group_fill_sock_diag() still reads tsk->group while
> tipc_group_delete() deletes it in tipc_release().
>
> tipc_nl_sk_walk() tries to lock each sock while walking the hash
> table, to close race conditions with sock changes like this one, by
> acquiring the tsk->sk.sk_lock.slock spinlock. Unfortunately this
> doesn't work at all: all non-BH call paths should take lock_sock()
> instead for the locking to be effective.
>
> tipc_nl_sk_walk() iterates with the raw rht_for_each_entry_rcu(),
> which requires the RCU read lock; this is why lock_sock() cannot be
> taken on this path. This can be resolved by switching to the
> rhashtable iterator API, which allows taking a sleepable lock.
> The iterator API is also friendly to restartable calls like the
> diag dump: the last position is remembered behind the scenes, so
> all we need to do here is save the iterator into cb->args[].
>
> I tested this with a parallel tipc diag dump and thousands of tipc
> socket creations and releases; no crash or memory leak was observed.
>
> Reported-by: syzbot+b9c8f3ab2994b7cd1625@...kaller.appspotmail.com
> Cc: Jon Maloy <jon.maloy@...csson.com>
> Cc: Ying Xue <ying.xue@...driver.com>
> Signed-off-by: Cong Wang <xiyou.wangcong@...il.com>
Applied and queued up for -stable, thanks Cong.
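
For readers following along, here is a minimal sketch (not the applied
patch itself) of the iterator-based walk described in the changelog:
the walk is stopped around each entry so that lock_sock() can be taken.
The skb_handler callback, the use of cb->args[4] to stash the iterator,
and field names such as tsk->sk are assumptions based on the
description above and the tipc code; details may differ from the
actual commit.

#include <linux/rhashtable.h>
#include <net/sock.h>

static int tipc_nl_sk_walk_sketch(struct sk_buff *skb,
				  struct netlink_callback *cb,
				  int (*skb_handler)(struct sk_buff *skb,
						     struct netlink_callback *cb,
						     struct tipc_sock *tsk))
{
	/* Assumed: the iterator was set up with rhashtable_walk_enter()
	 * on the first dump call and stashed in cb->args[4], so a
	 * restarted diag dump resumes where it left off.
	 */
	struct rhashtable_iter *iter = (void *)cb->args[4];
	struct tipc_sock *tsk;
	int err;

	rhashtable_walk_start(iter);
	while ((tsk = rhashtable_walk_next(iter)) != NULL) {
		if (IS_ERR(tsk)) {
			if (PTR_ERR(tsk) == -EAGAIN)
				continue;	/* table resize in progress */
			break;
		}

		sock_hold(&tsk->sk);
		rhashtable_walk_stop(iter);	/* leave the RCU section ... */
		lock_sock(&tsk->sk);		/* ... so sleeping here is fine */
		err = skb_handler(skb, cb, tsk);
		release_sock(&tsk->sk);
		rhashtable_walk_start(iter);
		sock_put(&tsk->sk);
		if (err)
			break;
	}
	rhashtable_walk_stop(iter);

	return skb->len;
}

The key point is that rhashtable_walk_stop()/rhashtable_walk_start()
bracket the per-socket work, dropping the RCU read lock so lock_sock()
(which may sleep) becomes legal, while the iterator itself keeps track
of the current position across both the stop/start cycle and separate
dump invocations.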