Message-ID: <ZwY-y4h7LruimB0O@jlelli-thinkpadt14gen4.remote.csb>
Date: Wed, 9 Oct 2024 09:28:59 +0100
From: Juri Lelli <juri.lelli@...hat.com>
To: Waiman Long <llong@...hat.com>
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Darren Hart <dvhart@...radead.org>,
Davidlohr Bueso <dave@...olabs.net>,
André Almeida <andrealmeid@...lia.com>,
LKML <linux-kernel@...r.kernel.org>,
linux-rt-users <linux-rt-users@...r.kernel.org>,
Valentin Schneider <vschneid@...hat.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: Re: Futex hash_bucket lock can break isolation and cause priority
inversion on RT
Hi Waiman,
On 08/10/24 14:30, Waiman Long wrote:
> On 10/8/24 11:22 AM, Juri Lelli wrote:
...
> > Now, of course by making the latency sensitive application tasks use a
> > higher priority than anything on housekeeping CPUs we could avoid the
> > issue, but the fact that an implicit in-kernel link between otherwise
> > unrelated tasks might cause priority inversion is probably not ideal?
> > Thus this email.
> >
> > Does this report make any sense? If it does, has this issue ever been
> > reported and possibly discussed? I guess it’s kind of a corner case, but
> > I wonder if anybody has suggestions already on how to possibly try to
> > tackle it from a kernel perspective.
>
> Just a question: is the low latency application using PI futexes or
> normal wait-wake futexes? We could use a separate set of hash buckets
> for these distinct futex types.
AFAIK it uses normal futexes (or a mix at best). Also, I believe it
relies on libraries, so it's somewhat difficult to tell for certain.
Thanks,
Juri