Message-ID: <20090413201306.GA16653@redhat.com>
Date: Mon, 13 Apr 2009 22:13:06 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
David Howells <dhowells@...hat.com>,
David Miller <davem@...emloft.net>
Cc: Serge Hallyn <serue@...ibm.com>, Steve Dickson <steved@...hat.com>,
Trond Myklebust <Trond.Myklebust@...app.com>,
Al Viro <viro@...iv.linux.org.uk>,
Daire Byrne <Daire.Byrne@...mestore.com>,
linux-kernel@...r.kernel.org
Subject: [PATCH] slow_work_execute() needs mb() before test_bit(SLOW_WORK_PENDING)
slow_work_execute:

	clear_bit_unlock(SLOW_WORK_EXECUTING, &work->flags);

	if (test_bit(SLOW_WORK_PENDING, &work->flags)) {
clear_bit_unlock() implies release semantics, iow we have a one-way barrier
before the clear: earlier accesses cannot move past it, but the later
test_bit() can be reordered before it. We need full mb() semantics after
clear_bit() and before we test SLOW_WORK_PENDING. Otherwise we can miss
SLOW_WORK_ENQ_DEFERRED if we race with slow_work_enqueue().
Signed-off-by: Oleg Nesterov <oleg@...hat.com>
--- 6.30/kernel/slow-work.c~2_BITS_MB	2009-04-13 19:40:20.000000000 +0200
+++ 6.30/kernel/slow-work.c	2009-04-13 21:19:33.000000000 +0200
@@ -198,7 +198,8 @@ static bool slow_work_execute(void)
 		if (very_slow)
 			atomic_dec(&vslow_work_executing_count);
 
-		clear_bit_unlock(SLOW_WORK_EXECUTING, &work->flags);
+		clear_bit(SLOW_WORK_EXECUTING, &work->flags);
+		smp_mb__after_clear_bit();
 
 		/* if someone tried to enqueue the item whilst we were executing it,
 		 * then it'll be left unenqueued to avoid multiple threads trying to
--