Message-ID: <20150929151713.GO3816@twins.programming.kicks-ass.net>
Date: Tue, 29 Sep 2015 17:17:13 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: mingo@...nel.org, linux-kernel@...r.kernel.org,
torvalds@...ux-foundation.org, fweisbec@...il.com, oleg@...hat.com,
umgwanakikbuti@...il.com, tglx@...utronix.de
Subject: Re: [RFC][PATCH 03/11] sched: Robustify preemption leak checks
On Tue, Sep 29, 2015 at 11:07:34AM -0400, Steven Rostedt wrote:
> On Tue, 29 Sep 2015 11:28:28 +0200
> Peter Zijlstra <peterz@...radead.org> wrote:
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -2960,8 +2960,10 @@ static inline void schedule_debug(struct
> > * schedule() atomically, we ignore that path. Otherwise whine
> > * if we are scheduling when we should not.
> > */
> > - if (unlikely(in_atomic_preempt_off() && prev->state != TASK_DEAD))
> > + if (unlikely(in_atomic_preempt_off() && prev->state != TASK_DEAD)) {
> > __schedule_bug(prev);
> > + preempt_count_set(PREEMPT_DISABLED);
> > + }
>
> Of course, if this was not a preemption leak, but something that called
> schedule within a preempt_disable()/preempt_enable() section, when it
> returns, preemption will be enabled, right?
Indeed. But it ensures that only the task that incorrectly called
schedule() gets screwed, and not everybody else.
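To make that concrete, here is an illustrative sketch (not from the
patch) of the case Steven describes:

	/* Buggy: schedules inside a preempt-disabled section. */
	preempt_disable();
	schedule();	/*
			 * __schedule_bug() whines, then the new
			 * preempt_count_set(PREEMPT_DISABLED) forces the
			 * count back to the expected value, so we return
			 * here with preemption enabled.
			 */
	preempt_enable();	/*
				 * The count is now imbalanced, but only
				 * for this task; the reset repairs the
				 * per-CPU count again at its next
				 * schedule().
				 */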
This is most important on x86, which has a per-CPU preempt_count that
(after this series) is no longer saved/restored across a context
switch. So if you schedule with an invalid preempt_count (anything
other than 2*PREEMPT_DISABLE_OFFSET), the next task inherits the bogus
value and is messed up too.
Enforcing this invariant limits the borkage to just the one task.
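For reference, a minimal sketch of the x86 side (simplified from
arch/x86/include/asm/preempt.h; assuming the percpu preempt_count
implementation):

	/*
	 * The preempt count lives in a per-CPU variable, not in the
	 * task_struct, so after this series a context switch simply
	 * leaves in place whatever value the outgoing task had.
	 */
	DECLARE_PER_CPU(int, __preempt_count);

	static __always_inline int preempt_count(void)
	{
		return raw_cpu_read_4(__preempt_count) & ~PREEMPT_NEED_RESCHED;
	}

Hence the reset in the hunk above: whatever garbage the buggy task
accumulated gets clamped back to a known value before we switch away.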