Message-ID: <20191209130005.GB5388@redhat.com>
Date: Mon, 9 Dec 2019 14:00:06 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Miklos Szeredi <miklos@...redi.hu>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Felipe Balbi <balbi@...nel.org>
Subject: Re: [RFC PATCH] sched/wait: Make interruptible exclusive waitqueue
wakeups reliable
On 12/09, Ingo Molnar wrote:
>
> Any consumed exclusive event is handled in finish_wait_exclusive() now:
>
> + } else {
> + /* We got removed from the waitqueue already, wake up the next exclusive waiter (if any): */
> + if (interrupted && waitqueue_active(wq_head))
> + __wake_up_locked_key(wq_head, TASK_NORMAL, NULL);
See my previous email; I don't think we need this...
But if we do this, then __wake_up_locked_key(key => NULL) doesn't look right.
It should use the same "key" that was passed to the __wake_up(key) call which
removed us from the list.
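(To make that concrete -- a purely illustrative sketch, not part of the RFC
patch: the wake function is the only code that actually sees the key, so it
would have to remember it somewhere the waiter can find later, say in a
hypothetical ->wakeup_key field of the wait entry:

	/* sketch only; ->wakeup_key is a hypothetical new field */
	static int autoremove_wake_function_keyed(struct wait_queue_entry *wq_entry,
						  unsigned int mode, int sync, void *key)
	{
		int ret = default_wake_function(wq_entry, mode, sync, key);

		if (ret) {
			/* remember which key removed us from the list */
			wq_entry->wakeup_key = key;
			list_del_init(&wq_entry->entry);
		}
		return ret;
	}

and then the branch quoted above could pass wq_entry->wakeup_key instead of
NULL to __wake_up_locked_key().)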
Currently this doesn't really matter: the only user of prepare_to_wait_event()
that relies on the "keyed" wakeup is ___wait_var_event(), and it doesn't have
"exclusive" waiters. But still.
Hmm. And it seems that init_wait_var_entry() is buggy? Again, currently this
doesn't matter, but don't we need the trivial fix below?
Oleg.
--- x/kernel/sched/wait_bit.c
+++ x/kernel/sched/wait_bit.c
@@ -179,6 +179,7 @@ void init_wait_var_entry(struct wait_bit
 			.bit_nr = -1,
 		},
 		.wq_entry = {
+			.flags   = flags,
 			.private = current,
 			.func    = var_wake_function,
 			.entry   = LIST_HEAD_INIT(wbq_entry->wq_entry.entry),
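(Why it matters: the designated initializer above zero-initializes every
member it doesn't name, so without this line wq_entry.flags is always 0 and
any flags passed to init_wait_var_entry() are silently dropped. A
hypothetical future exclusive user, sketch only:

	struct wait_bit_queue_entry wbq;

	init_wait_var_entry(&wbq, &some_var, WQ_FLAG_EXCLUSIVE);
	/* without the .flags = flags line, wbq.wq_entry.flags stays 0, i.e. not exclusive */

Harmless today, but it would bite the first exclusive wait_var_event() user.)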