Message-ID: <20210727221950.GA51120@zipoli.concurrent-rt.com>
Date: Tue, 27 Jul 2021 18:19:50 -0400
From: Joe Korty <joe.korty@...current-rt.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
    Lee Jones <lee.jones@...aro.org>,
    Steven Rostedt <rostedt@...dmis.org>,
    Thomas Gleixner <tglx@...utronix.de>,
    Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
    LKML <linux-kernel@...r.kernel.org>,
    Ingo Molnar <mingo@...hat.com>,
    Darren Hart <dvhart@...radead.org>,
    Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [BUG] 4.4.262: infinite loop in futex_unlock_pi (EAGAIN loop)

[ Added missing people to the cc: as listed in MAINTAINERS ]

On Thu, Jul 22, 2021 at 04:11:41PM +0200, Greg Kroah-Hartman wrote:
> On Mon, Jul 19, 2021 at 12:24:18PM -0400, Joe Korty wrote:
> > [BUG] 4.4.262: infinite loop in futex_unlock_pi (EAGAIN loop)
> >
> > [ replicator, attached ]
> > [ workaround patch that crudely clears the loop, attached ]
> > [ 4.4.256 does _not_ have this problem, 4.4.262 is known to have it ]
> >
> > When a certain secure-site application is run on 4.4.262, it locks up
> > and is unkillable. Backtraces from crash(8) and sysrq show that the
> > application is looping in the kernel in futex_unlock_pi.
> >
> > Between 4.4.256 and .257, 4.4 got this 4.12 patch backported into it:
> >
> > 73d786b ("futex: Rework inconsistent rt_mutex/futex_q state")
> >
> > This patch has the following comment:
> >
> >   The only problem is that this breaks RT timeliness guarantees. That
> >   is, consider the following scenario:
> >
> >     T1 and T2 are both pinned to CPU0. prio(T2) > prio(T1)
> >
> >     CPU0
> >
> >     T1
> >       lock_pi()
> >         queue_me() <- Waiter is visible
> >
> >     preemption
> >
> >     T2
> >       unlock_pi()
> >         loops with -EAGAIN forever
> >
> >   Which is undesirable for PI primitives. Future patches will rectify
> >   this.
> >
> > This describes the situation exactly. To prove it, we developed a little
> > kernel patch that, on loop detection, puts a message into the kernel log for
> > just the first occurrence, keeps a count of the number of occurrences seen
> > since boot, and tries to break out of the loop via usleep_range(1000,1000).
> > Note that the patch is not really needed for replication. It merely shows,
> > by 'fixing' the problem, that it really is the EAGAIN loop that triggers
> > the lockup.
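> >
> > For illustration only, the shape of that workaround is roughly the
> > following (a sketch, not the attached patch; the counter name, the log
> > message, and the exact placement in futex_unlock_pi()'s -EAGAIN retry
> > path are assumptions):
> >
> >     if (ret == -EAGAIN) {
> >         /* Count occurrences since boot, log only the first one. */
> >         static unsigned long eagain_loops;
> >
> >         if (!eagain_loops++)
> >             printk(KERN_WARNING
> >                    "futex_unlock_pi: -EAGAIN loop detected\n");
> >
> >         /* Drop the locks and sleep so the preempted waiter can run. */
> >         spin_unlock(&hb->lock);
> >         put_futex_key(&key);
> >         usleep_range(1000, 1000);
> >         goto retry;
> >     }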
> >
> > Along with this patch, we submit a replicator. Running this replicator
> > on patched kernels shows that 4.4.256 does not have the problem, while
> > 4.4.267 and the latest 4.4, 4.4.275, do. In addition, 4.9.274 (tested
> > without the patch) does not have the problem.
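> >
> > For reference, a minimal replicator along these lines could look like
> > the sketch below. This is not the attached replicator; the priorities,
> > the CPU number, the sleep lengths, and the use of a PTHREAD_PRIO_INHERIT
> > mutex (which maps to FUTEX_LOCK_PI/FUTEX_UNLOCK_PI) are assumptions.
> > It pins a low-priority waiter and a high-priority owner to CPU0 and has
> > the owner unlock while the waiter may still sit between queue_me() and
> > the rt_mutex enqueue:
> >
> >     /* Build: gcc -O2 -pthread replicator-sketch.c; run as root. */
> >     #define _GNU_SOURCE
> >     #include <pthread.h>
> >     #include <sched.h>
> >     #include <stdio.h>
> >     #include <stdlib.h>
> >     #include <time.h>
> >     #include <unistd.h>
> >
> >     static pthread_mutex_t pi_lock;
> >
> >     /* Pin the calling thread to CPU0 and give it a SCHED_FIFO priority. */
> >     static void pin_cpu0_fifo(int prio)
> >     {
> >         cpu_set_t set;
> >         struct sched_param sp = { .sched_priority = prio };
> >
> >         CPU_ZERO(&set);
> >         CPU_SET(0, &set);
> >         if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) ||
> >             pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp)) {
> >             perror("affinity/priority");
> >             exit(1);
> >         }
> >     }
> >
> >     /* T1: low priority, repeatedly contends for the PI lock. */
> >     static void *waiter(void *arg)
> >     {
> >         pin_cpu0_fifo(10);
> >         for (;;) {
> >             pthread_mutex_lock(&pi_lock);   /* FUTEX_LOCK_PI when contended */
> >             pthread_mutex_unlock(&pi_lock);
> >         }
> >         return NULL;
> >     }
> >
> >     /* T2: high priority, holds the lock briefly, then unlocks. */
> >     static void *owner(void *arg)
> >     {
> >         struct timespec ts = { 0, 0 };
> >         unsigned long i;
> >
> >         pin_cpu0_fifo(20);
> >         for (i = 0; ; i++) {
> >             pthread_mutex_lock(&pi_lock);
> >             /* Sleep so the low-prio waiter can enter futex_lock_pi()
> >              * and reach queue_me(); vary the length to hit the window. */
> >             ts.tv_nsec = 1000 + (i % 10000);
> >             nanosleep(&ts, NULL);
> >             /* If the waiter was preempted inside the window, this call
> >              * loops in futex_unlock_pi() with -EAGAIN and never returns. */
> >             pthread_mutex_unlock(&pi_lock);
> >         }
> >         return NULL;
> >     }
> >
> >     int main(void)
> >     {
> >         pthread_mutexattr_t attr;
> >         pthread_t t1, t2;
> >
> >         pthread_mutexattr_init(&attr);
> >         pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
> >         pthread_mutex_init(&pi_lock, &attr);
> >
> >         pthread_create(&t1, NULL, waiter, NULL);
> >         sleep(1);               /* let the waiter start contending */
> >         pthread_create(&t2, NULL, owner, NULL);
> >         pthread_join(t2, NULL); /* never returns once the bug triggers */
> >         return 0;
> >     }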
> >
> > From this pattern there may be some futex fixup patch that was
> > backported into 4.9 but failed to make it into 4.4.
>
> Odd, I can't seem to find anything that we missed. Can you dig to see
> if there is something that we need to do here so we can resolve this?
>
> thanks,
> greg k-h
Hi Greg,

4.12 has these apparently-original patches:

  73d786b futex: Rework inconsistent rt_mutex/futex_q state
  cfafcd1 futex: Rework futex_lock_pi() to use rt_mutex_*_proxy_lock()

I have verified that the first commit, 73d786b, introduces
the futex_unlock_pi infinite loop bug into 4.12. I have
also verified that the second commit, cfafcd1, fixes the bug.

4.9 has had both futex patches backported into it. I have
verified that 4.9.276 does not suffer from the bug.

4.4 has had the first patch backported, as 394fc49, but
not the second. I have verified that the bug is present in
kernels built at 394fc49 and at v4.4.276, and that it is not
present in a kernel built at 394fc49^.

The missing commit, cfafcd1 in 4.12, is present in 4.9
as 13c98b0. A visual spot-check of 13c98b0, as a patch,
against kernel/futex.c in 4.4.276 did not find any of its
hunks present in 4.4.276's kernel/futex.c.

Regards,
Joe