Message-ID: <20150908073116.GA6565@gmail.com>
Date: Tue, 8 Sep 2015 09:31:16 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Steven Rostedt <rostedt@...dmis.org>, linux-kernel@...r.kernel.org,
	linux-rt-users <linux-rt-users@...r.kernel.org>,
	Carsten Emde <C.Emde@...dl.org>,
	Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
	John Kacur <jkacur@...hat.com>,
	Paul Gortmaker <paul.gortmaker@...driver.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Clark Williams <clark.williams@...il.com>,
	Arnaldo Carvalho de Melo <acme@...hat.com>
Subject: Re: [RFC][PATCH RT 0/3] RT: Fix trylock deadlock without msleep() hack

* Thomas Gleixner <tglx@...utronix.de> wrote:

> 3) sched_yield() makes me shudder
>
>    CPU0                                    CPU1
>
>    taskA
>      lock(x->lock)
>
>    preemption
>      taskC
>                                            taskB
>                                              lock(y->lock);
>                                              x = y->x;
>                                              if (!try_lock(x->lock)) {
>                                                unlock(y->lock);
>                                                boost(taskA);
>                                                sched_yield(); <- returns immediately
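
For context, the pattern under discussion is the dcache-style trylock-retry
loop, roughly like this (a simplified sketch from memory, not verbatim
fs/dcache.c - there the locks are d_lock/i_lock):

    again:
        spin_lock(&y->lock);
        x = y->x;
        if (!spin_trylock(&x->lock)) {
            /* lock ordering elsewhere is x->lock before y->lock */
            spin_unlock(&y->lock);
            cpu_relax();    /* can live-lock on -rt; hence the msleep(1) hack */
            goto again;
        }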

So I'm still struggling with properly parsing the use case.

If y->x might become invalid the moment we drop y->lock, what makes the 'taskA'
use (after we've dropped y->lock) safe? Shouldn't we at least also have a
task_get(taskA)/task_put(taskA) reference count, to make sure the boosted task
stays around?
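
Something along these lines, I mean (a sketch only, in the pseudo-code of the
scenario above; lock_owner() and boost() are placeholders I made up, while
get_task_struct()/put_task_struct() are the usual task refcounting helpers):

        lock(y->lock);
        x = y->x;
        if (!trylock(x->lock)) {
                /* lock_owner() is a placeholder for however we find taskA */
                struct task_struct *owner = lock_owner(&x->lock);

                get_task_struct(owner); /* pin taskA while y->lock still protects 'x' */
                unlock(y->lock);        /* past this point 'x' may be freed */
                boost(owner);           /* placeholder, as in the scenario above */
                sched_yield();
                put_task_struct(owner); /* taskA may now go away */
        }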
And if we are taking reference counts anyway, why not solve it at a higher
level and take a reference on 'x' itself, to make sure it's safe to use? Then
we could do:

        lock(y->lock);
retry:
        x = y->x;
        if (!trylock(x->lock)) {
                get_ref(x->count);
                unlock(y->lock);
                lock(x->lock);
                lock(y->lock);
                if (y->x != x) {        /* Retry if 'x' got dropped meanwhile */
                        unlock(x->lock);
                        put_ref(x->count);      /* might be the last ref, so after unlock */
                        goto retry;
                }
                put_ref(x->count);      /* y->x == x, so y's own reference keeps x alive */
        }

Or so.
Note how much safer this sequence is, and still just as fast in the common case
(which I suppose is the main motivation within dcache.c?).
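
FWIW, in userspace terms the same retry pattern would look something like
this (a self-contained sketch with made-up struct names, pthreads standing in
for the kernel locks, and the put_ref ordering from above):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdlib.h>

    /* Made-up stand-ins for the objects involved; not kernel code. */
    struct xobj {
            pthread_mutex_t lock;
            atomic_int count;       /* keeps *x alive while we sleep on x->lock */
    };

    struct yobj {
            pthread_mutex_t lock;
            struct xobj *x;         /* may be repointed while y->lock is dropped */
    };

    static void x_put(struct xobj *x)
    {
            if (atomic_fetch_sub(&x->count, 1) == 1)
                    free(x);        /* we dropped the last reference */
    }

    /* Returns y->x with both y->lock and x->lock held. */
    static struct xobj *lock_both(struct yobj *y)
    {
            struct xobj *x;

            pthread_mutex_lock(&y->lock);
    retry:
            x = y->x;
            if (pthread_mutex_trylock(&x->lock)) { /* non-zero: contended */
                    atomic_fetch_add(&x->count, 1); /* pin x before dropping y->lock */
                    pthread_mutex_unlock(&y->lock);
                    pthread_mutex_lock(&x->lock);   /* blocking: PI boost on -rt */
                    pthread_mutex_lock(&y->lock);
                    if (y->x != x) {                /* 'x' got dropped meanwhile */
                            pthread_mutex_unlock(&x->lock);
                            x_put(x);               /* may free x, hence after unlock */
                            goto retry;
                    }
                    x_put(x);       /* y->x == x: y's own reference keeps x alive */
            }
            return x;
    }

The pin's only job is to keep x's memory (and thus x->lock itself) valid
across the blocking lock(x->lock); once y->lock is reacquired and y->x still
points at x, y's own reference takes over.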
Thanks,
Ingo