Message-ID: <20080712153234.GA603@tv-sign.ru>
Date:	Sat, 12 Jul 2008 19:32:34 +0400
From:	Oleg Nesterov <oleg@...sign.ru>
To:	akpm@...ux-foundation.org
Cc:	linux-kernel@...r.kernel.org, rui.zhang@...el.com,
	harbour@...nx.od.ua, pavel@....cz, rjw@...k.pl
Subject: [PATCH] pm-introduce-new-interfaces-schedule_work_on-and-queue_work_on-cleanup

On 07/11, Andrew Morton wrote:
>
> +queue_work_on(int cpu, struct workqueue_struct *wq, struct work_struct *work)
> +{
> +	int ret = 0;
> +
> +	if (!test_and_set_bit(WORK_STRUCT_PENDING, work_data_bits(work))) {
> +		BUG_ON(!list_empty(&work->entry));
> +		preempt_disable();
> +		__queue_work(wq_per_cpu(wq, cpu), work);
> +		preempt_enable();

The comment above __queue_work() is wrong: we don't need to disable
preemption.

What that comment actually means is: the caller of __queue_work() must
ensure we can't race with CPU_DEAD. But preempt_disable() can't help
for queue_work_on(): the CPU can die even before preempt_disable() is
called.
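
For illustration only (not part of this patch): one way a caller can
satisfy that requirement is to take the cpu-hotplug read lock around
the queueing, so CPU_DEAD can't run in between. my_wq and my_work
below are made-up names:

	get_online_cpus();	/* pin hotplug state, CPU_DEAD can't run */
	if (cpu_online(cpu))	/* the CPU may already be offline */
		queue_work_on(cpu, my_wq, &my_work);
	put_online_cpus();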

Remove preempt_disable() and update the comment.

Signed-off-by: Oleg Nesterov <oleg@...sign.ru>

--- 26-rc2/kernel/workqueue.c~WQ_1_QWON_CLEANUP	2008-07-12 19:04:48.000000000 +0400
+++ 26-rc2/kernel/workqueue.c	2008-07-12 19:11:39.000000000 +0400
@@ -137,7 +137,6 @@ static void insert_work(struct cpu_workq
 	wake_up(&cwq->more_work);
 }
 
-/* Preempt must be disabled. */
 static void __queue_work(struct cpu_workqueue_struct *cwq,
 			 struct work_struct *work)
 {
@@ -180,7 +179,8 @@ EXPORT_SYMBOL_GPL(queue_work);
  *
  * Returns 0 if @work was already on a queue, non-zero otherwise.
  *
- * We queue the work to a specific CPU
+ * We queue the work to a specific CPU, the caller must ensure it
+ * can't go away.
  */
 int
 queue_work_on(int cpu, struct workqueue_struct *wq, struct work_struct *work)
@@ -189,9 +189,7 @@ queue_work_on(int cpu, struct workqueue_
 
 	if (!test_and_set_bit(WORK_STRUCT_PENDING, work_data_bits(work))) {
 		BUG_ON(!list_empty(&work->entry));
-		preempt_disable();
 		__queue_work(wq_per_cpu(wq, cpu), work);
-		preempt_enable();
 		ret = 1;
 	}
 	return ret;

