Date: Fri, 16 Jan 2015 01:19:08 -0500 (EST)
From: David Miller <davem@...emloft.net>
To: ying.xue@...driver.com
Cc: tgraf@...g.ch, sergei.shtylyov@...entembedded.com, netdev@...r.kernel.org
Subject: Re: [PATCH net-next v5] rhashtable: Fix race in rhashtable_destroy() and use regular work_struct

From: Ying Xue <ying.xue@...driver.com>
Date: Fri, 16 Jan 2015 11:13:09 +0800

> When we put our declared work task on the global workqueue with
> schedule_delayed_work(), its delay parameter is always zero.
> Therefore, we should define a regular work item in the rhashtable
> structure instead of a delayed work.
>
> We also add a check that the resizing functions are non-NULL before
> cancelling the work, to avoid cancelling an uninitialized work item.
>
> Lastly, rhashtable_destroy() held ht->mutex while waiting, via
> cancel_delayed_work(), for previously submitted work items to run to
> completion. cancel_delayed_work() does not return until all work items
> have finished, and a scheduled work item runs rht_deferred_worker(),
> which also needs to acquire the same lock. So with the lock already
> held, a deadlock can occur. Moving the cancel call out of the scope
> covered by the lock avoids the deadlock.
>
> Fixes: 97defe1 ("rhashtable: Per bucket locks & deferred expansion/shrinking")
> Signed-off-by: Ying Xue <ying.xue@...driver.com>
> Cc: Thomas Graf <tgraf@...g.ch>
> Acked-by: Thomas Graf <tgraf@...g.ch>

Applied, thanks.