Message-ID: <20160718235638.GY3078@mtj.duckdns.org>
Date:	Mon, 18 Jul 2016 19:56:38 -0400
From:	Tejun Heo <tj@...nel.org>
To:	Bhaktipriya Shridhar <bhaktipriya96@...il.com>
Cc:	Mauro Carvalho Chehab <mchehab@....samsung.com>,
	Geunyoung Kim <nenggun.kim@...sung.com>,
	Junghak Sung <jh1009.sung@...sung.com>,
	Hans Verkuil <hans.verkuil@...co.com>,
	Inki Dae <inki.dae@...sung.com>, linux-media@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] [media] cx25821: Remove deprecated create_singlethread_workqueue

On Sat, Jul 16, 2016 at 02:43:20PM +0530, Bhaktipriya Shridhar wrote:
> The workqueue "_irq_audio_queues" runs the audio upstream handler.
> It has a single work item (&dev->_audio_work_entry) and hence doesn't
> require ordering. Also, it is not being used on a memory reclaim path.
> Hence, the singlethreaded workqueue has been replaced with the use of
> system_wq.
> 
> System workqueues have been able to handle high levels of concurrency
> for a long time now, so a singlethreaded workqueue is not required just
> to gain an execution context. Unlike a dedicated per-cpu workqueue
> created with create_singlethread_workqueue(), system_wq allows multiple
> work items to overlap executions even on the same CPU; however, a
> per-cpu workqueue doesn't have any CPU locality or global ordering
> guarantee unless the target CPU is explicitly specified, and thus the
> increase in local concurrency shouldn't make any difference.
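
For reference, the conversion being reviewed boils down to the shape
below. This is a minimal sketch, not the actual patch; the name string
passed to create_singlethread_workqueue() is illustrative, and the
field names follow the quoted commit message.

    #include <linux/workqueue.h>

    /* before: dedicated single-threaded workqueue */
    dev->_irq_audio_queues =
            create_singlethread_workqueue("cx25821_audio"); /* illustrative name */
    if (!dev->_irq_audio_queues)
            return -ENOMEM;
    /* ... */
    queue_work(dev->_irq_audio_queues, &dev->_audio_work_entry);

    /* after: no dedicated queue; schedule_work() queues on system_wq */
    schedule_work(&dev->_audio_work_entry);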

The patch seems to be missing an update to the wq destruction path.
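
Concretely, wherever the driver previously tore down the queue,
something along these lines is needed instead; a sketch under the
assumption that the current teardown calls destroy_workqueue() on the
dedicated queue (exact call site not shown here):

    /* before: draining and destroying the dedicated queue */
    destroy_workqueue(dev->_irq_audio_queues);

    /*
     * after: system_wq must never be destroyed; instead, make sure the
     * driver's own work item has finished before freeing anything the
     * handler touches
     */
    cancel_work_sync(&dev->_audio_work_entry);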

Thanks.

-- 
tejun
