Message-ID: <CAMSQXEGfQE_vcvk9MvP-aJD2jeF7vU+84yx_fEn==v2Jve9w0A@mail.gmail.com>
Date: Wed, 21 Nov 2012 14:03:41 +0100
From: Ivo Sieben <meltedpianoman@...il.com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: linux-kernel@...r.kernel.org, Andi Kleen <andi@...stfloor.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, linux-serial@...r.kernel.org,
Alan Cox <alan@...ux.intel.com>,
Greg KH <gregkh@...uxfoundation.org>
Subject: Re: [REPOST-v2] sched: Prevent wakeup to enter critical section needlessly
Hi,
2012/11/19 Oleg Nesterov <oleg@...hat.com>:
>
> Because on a second thought I suspect this change is wrong.
>
> Just for example, please look at kauditd_thread(). It does
>
> set_current_state(TASK_INTERRUPTIBLE);
>
> add_wait_queue(&kauditd_wait, &wait);
>
> if (!CONDITION) // <-- LOAD
> schedule();
>
> And the last LOAD can leak into the critical section protected by
> wait_queue_head_t->lock, and it can be reordered with list_add()
> inside this critical section. In this case we can race with wake_up()
> unless it takes the same lock.
>
> Oleg.
>
I agree that I should solve my problem locally with the
waitqueue_active() function. I'll abandon this patch and fix it in
tty_ldisc.c instead.
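Roughly the kind of waker-side pattern we have in mind for tty_ldisc.c
(only a sketch with made-up names, not the actual patch; the smp_mb()
is meant to pair with the barrier implied by set_current_state() on the
sleeper side):

#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(example_wait);
static bool example_cond;

static void example_wakeup(void)
{
        example_cond = true;
        /* pair with the set_current_state() barrier on the sleeper side */
        smp_mb();
        /* only take the wait queue lock if someone is actually waiting */
        if (waitqueue_active(&example_wait))
                wake_up_interruptible(&example_wait);
}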
But we are still trying to understand your failure scenario: how can
the LOAD leak into the critical section? As far as we understand,
spin_unlock() also acts as a memory barrier, which should prevent such
a reordering from happening.
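To make the question concrete, this is how we picture the sleeper path
from your example, with add_wait_queue() expanded inline (the
annotations reflect only our current understanding, which may be
exactly where we go wrong):

        unsigned long flags;

        set_current_state(TASK_INTERRUPTIBLE);  /* ->state store + barrier */

        /* add_wait_queue(&kauditd_wait, &wait) roughly does: */
        spin_lock_irqsave(&kauditd_wait.lock, flags);
        __add_wait_queue(&kauditd_wait, &wait); /* the list_add() store */
        spin_unlock_irqrestore(&kauditd_wait.lock, flags);

        /*
         * Our assumption is that the unlock above keeps the load below
         * ordered after the list_add() store. If spin_unlock() is only
         * a one-way (RELEASE) barrier, the load could move up into the
         * critical section and be reordered with the list_add() store.
         * Is that the leak you mean?
         */
        if (!CONDITION)
                schedule();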
Regards,
Ivo