Message-ID: <4CB2E24E.3020209@kernel.org>
Date:	Mon, 11 Oct 2010 12:09:18 +0200
From:	Tejun Heo <tj@...nel.org>
To:	Milan Broz <mbroz@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
CC:	device-mapper development <dm-devel@...hat.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	just.for.lkml@...glemail.com, hch@...radead.org,
	herbert@...dor.hengli.com.au
Subject: [PATCH wq#for-next] workqueue: fix HIGHPRI handling in keep_working()

The policy function keep_working() didn't check GCWQ_HIGHPRI_PENDING
and could return %false with highpri work pending.  This could lead
to late execution of a highpri work that had been delayed by
@max_active throttling while other works were actively consuming CPU
cycles.

For example, the following could happen.

1. Work W0 which burns CPU cycles.

2. Two works W1 and W2 are queued to a highpri wq w/ @max_active of 1.

3. W1 starts executing and W2 is put on the delayed queue.  W0 and
   W1 are both runnable.

4. W1 finishes, which moves W2 to the pending queue, but
   keep_working() incorrectly returns %false and the worker goes to
   sleep.

5. W0 finishes and W2 starts execution.

With this patch applied, W2 starts execution as soon as W1 finishes.
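
For illustration, a minimal module sketch of how such a setup might
look (the module, workqueue and work item names here are hypothetical
and not taken from the report; the cmwq API with WQ_HIGHPRI and
@max_active of 1 is assumed):

/*
 * Illustrative only -- names and timings are made up.  W0 burns CPU
 * on the shared per-cpu gcwq while W1 and W2 go through a highpri wq
 * with @max_active == 1.
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/delay.h>

static struct workqueue_struct *hp_wq;

static void w0_fn(struct work_struct *work)
{
	mdelay(100);			/* W0: burn CPU cycles */
}

static void w_fn(struct work_struct *work)
{
	mdelay(10);			/* W1/W2: shorter CPU burn */
}

static DECLARE_WORK(w0, w0_fn);
static DECLARE_WORK(w1, w_fn);
static DECLARE_WORK(w2, w_fn);

static int __init hp_repro_init(void)
{
	hp_wq = alloc_workqueue("hp_repro", WQ_HIGHPRI, 1);
	if (!hp_wq)
		return -ENOMEM;

	schedule_work(&w0);		/* W0 on the normal system wq */
	queue_work(hp_wq, &w1);		/* W1 starts executing */
	queue_work(hp_wq, &w2);		/* W2 hits the delayed queue */
	return 0;
}

static void __exit hp_repro_exit(void)
{
	flush_workqueue(hp_wq);
	destroy_workqueue(hp_wq);
}

module_init(hp_repro_init);
module_exit(hp_repro_exit);
MODULE_LICENSE("GPL");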

Signed-off-by: Tejun Heo <tj@...nel.org>
---
This is the workqueue bug I found while trying to debug the dm/raid
hang.  Although the bug may introduce an unexpected delay in
scheduling a highpri work, the delay can only be as long as the
combined length of the CPU cycle burns of the already running works.
Given that HIGHPRI is currently used only by xfs, and given how xfs
uses it, I don't think it's likely to cause an actual issue.  I'll
queue it for #for-next.

Thank you.

 kernel/workqueue.c |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index f77afd9..d355278 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -604,7 +604,9 @@ static bool keep_working(struct global_cwq *gcwq)
 {
 	atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);

-	return !list_empty(&gcwq->worklist) && atomic_read(nr_running) <= 1;
+	return !list_empty(&gcwq->worklist) &&
+		(atomic_read(nr_running) <= 1 ||
+		 gcwq->flags & GCWQ_HIGHPRI_PENDING);
 }

 /* Do we need a new worker?  Called from manager. */
-- 
1.7.1

