Message-ID: <20160615072507.GS5981@e106622-lin>
Date: Wed, 15 Jun 2016 08:25:07 +0100
From: Juri Lelli <juri.lelli@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...nel.org, tglx@...utronix.de, rostedt@...dmis.org,
xlpang@...hat.com, linux-kernel@...r.kernel.org,
mathieu.desnoyers@...icios.com, jdesfossez@...icios.com,
bristot@...hat.com
Subject: Re: [RFC][PATCH 8/8] rtmutex: Fix PI chain order integrity
On 14/06/16 21:44, Peter Zijlstra wrote:
> On Tue, Jun 14, 2016 at 06:39:08PM +0100, Juri Lelli wrote:
> > On 07/06/16 21:56, Peter Zijlstra wrote:
> > > rt_mutex_waiter::prio is a copy of task_struct::prio which is updated
> > > during the PI chain walk, such that the PI chain order isn't messed up
> > > by (asynchronous) task state updates.
> > >
> > > Currently rt_mutex_waiter_less() uses task state for deadline tasks;
> > > this is broken, since the task state can, as said above, change
> > > asynchronously, causing the RB tree order to change without actual
> > > tree update -> FAIL.
> > >
> > > Fix this by also copying the deadline into the rt_mutex_waiter state
> > > and updating it along with its prio field.
> > >
> > > Ideally we would also force PI chain updates whenever DL tasks update
> > > their deadline parameter, but for first approximation this is less
> > > broken than it was.
> > >
> >
> > The patch looks OK to me. However, I'm failing to see when we can update
> > dl.deadline of a waiter asynchronously. Since a waiter is blocked, we
> > can't really change its dl.deadline by calling setscheduler on it, as
> > the update would operate on dl.dl_deadline. The new values only start to
> > be used once the task is unblocked. The situation seems different for
> > RT tasks, for which a priority change takes effect immediately.
> >
> > What am I missing? :-)
>
> Ah, I missed the dl_deadline vs deadline thing. Still, with optimistic
> spinning the waiter could hit its throttle/refresh path, right? And then
> that would update deadline.
>
I guess it's not that likely, but yes, it could potentially happen that a
waiter is optimistically spinning, depletes its runtime, gets throttled
and is then replenished while still spinning. Maybe it doesn't really
make sense to continue spinning in that situation, but I guess things
would get really complicated. :-/
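
FWIW, just to make sure we are talking about the same thing, this is
roughly how I read the comparator after the patch (only a sketch, names
assumed from rt_mutex_waiter_less() in kernel/locking/rtmutex.c; the
point being that it only looks at the snapshot held in the waiter,
never at the task's live dl.deadline):

	static inline int
	rt_mutex_waiter_less(struct rt_mutex_waiter *left,
			     struct rt_mutex_waiter *right)
	{
		if (left->prio < right->prio)
			return 1;

		/*
		 * If both waiters have dl_prio(), compare the deadlines
		 * that were copied into the waiters, not
		 * task_struct::dl.deadline, so an asynchronous task state
		 * update cannot reorder the RB tree underneath us.
		 */
		if (dl_prio(left->prio))
			return dl_time_before(left->deadline, right->deadline);

		return 0;
	}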
Anyway, as said, I think this patch is OK. Maybe we want to add a
comment just to record which situation can cause an issue if we don't
do this? The patch changelog would be fine for such a comment as well,
IMHO.
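
Something along these lines maybe (just a sketch of the wording, feel
free to tweak):

	/*
	 * rt_mutex_waiter::prio and ::deadline are copies of the task's
	 * fields, updated during the PI chain walk. Don't use the task
	 * state directly here: an optimistically spinning waiter can hit
	 * its throttle/replenish path and have dl.deadline updated
	 * asynchronously, which would change the RB tree order without a
	 * tree update.
	 */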
Thanks,
- Juri