Message-ID: <alpine.DEB.2.21.1812101824070.1667@nanos.tec.linutronix.de>
Date: Mon, 10 Dec 2018 18:43:51 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Peter Zijlstra <peterz@...radead.org>
cc: LKML <linux-kernel@...r.kernel.org>,
Stefan Liebler <stli@...ux.ibm.com>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Darren Hart <dvhart@...radead.org>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [patch] futex: Cure exit race
On Mon, 10 Dec 2018, Peter Zijlstra wrote:
> On Mon, Dec 10, 2018 at 04:23:06PM +0100, Thomas Gleixner wrote:
> There is another caller of futex_lock_pi_atomic(),
> futex_proxy_trylock_atomic(), which is part of futex_requeue(), that too
> does a retry loop on -EAGAIN.
>
> And there is another caller of attach_to_pi_owner(): lookup_pi_state(),
> and that too is in futex_requeue() and handles the retry case properly.
>
> Yes, this all looks good.
>
> Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>

Bah. The little devil in the unconscious part of my brain insisted on
thinking further about that EAGAIN loop, despite my attempt to page those
futex horrors out again immediately after sending that patch.
There is another related issue which is even worse than just mildly
confusing user space:
task1(SCHED_OTHER)
 sys_exit()
  do_exit()
   exit_mm()
    task1->flags |= PF_EXITING;

               ---> preemption

               task2(SCHED_FIFO)
                sys_futex(LOCK_PI)
                ....
                 attach_to_pi_owner() {
                   ...
                   if (!(task1->flags & PF_EXITING)) {
                     attach();
                   } else {
                     if (!(task1->flags & PF_EXITPIDONE))
                       return -EAGAIN;

Now assume UP or both tasks pinned on the same CPU. That results in a
livelock because task2 is going to loop forever.
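
For reference, the -EAGAIN side in futex_lock_pi() looks roughly like this
(a simplified sketch, not verbatim kernel code; the exact helpers and their
arguments vary between versions):

  retry:
	ret = futex_lock_pi_atomic(uaddr, hb, &q.key, &q.pi_state,
				   current, 0);
	if (unlikely(ret)) {
		switch (ret) {
		...
		case -EAGAIN:
			/*
			 * The owner is exiting: PF_EXITING is set, but
			 * PF_EXITPIDONE is not yet. Drop the hash bucket
			 * lock and retry until it is.
			 */
			queue_unlock(hb);
			put_futex_key(&q.key);
			cond_resched();
			goto retry;
		}
	}

The cond_resched() does not help here: task2 is SCHED_FIFO, so the
SCHED_OTHER task1 never gets the CPU to make it from PF_EXITING to
PF_EXITPIDONE, and the retry loop never terminates.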
No immediate idea how to cure that one w/o creating a mess.
Thanks,
tglx