Date: Wed, 12 Dec 2018 11:26:56 +0100
From: Daniel Vetter <daniel@...ll.ch>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Michal Hocko <mhocko@...nel.org>,
Daniel Vetter <daniel.vetter@...ll.ch>,
Intel Graphics Development <intel-gfx@...ts.freedesktop.org>,
DRI Development <dri-devel@...ts.freedesktop.org>,
LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
David Rientjes <rientjes@...gle.com>,
Christian König <christian.koenig@....com>,
Jérôme Glisse <jglisse@...hat.com>,
Daniel Vetter <daniel.vetter@...el.com>
Subject: Re: [PATCH 2/4] kernel.h: Add non_block_start/end()
On Mon, Dec 10, 2018 at 05:30:09PM +0100, Peter Zijlstra wrote:
> On Mon, Dec 10, 2018 at 05:20:10PM +0100, Michal Hocko wrote:
> > > OK, no real objections to the thing. Just so long we're all on the same
> > > page as to what it does and doesn't do ;-)
> >
> > I am not really sure whether there are other potential users besides
> > this one and whether the check as such is justified.
>
> It's a debug option...
>
> > > I suppose you could extend the check to include schedule_debug() as
> > > well, maybe something like:
> >
> > Do you mean to make the check cheaper?
>
> Nah, so the patch only touched might_sleep(), the below touches
> schedule().
>
> If there were a patch that hits schedule() without going through a
> might_sleep() (rare in practise I think, but entirely possible) then you
> won't get a splat without something like the below on top.
We have a bunch of schedule() calls in i915, e.g. for waiting on multiple
events at the same time (where we want to unblock as soon as any of them
fires). And there's no might_sleep in these cases afaict. Adding the check
in schedule() sounds useful, I'll include your snippet in v2, plus try a
bit harder to explain in the commit message why Michal suggested these.
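For context, such a wait looks roughly like the open-coded loop below (a
simplified sketch in the usual prepare_to_wait/schedule style, not the
actual i915 code; the function and callback names are illustrative). The
point is that schedule() is called directly, so nothing on this path goes
through might_sleep() and only a check in schedule_debug() would catch a
non_block_start()/end() violation:

```c
/*
 * Illustrative sketch only (not actual i915 code): wait on either of two
 * events by registering on both wait queues and calling schedule()
 * directly. No might_sleep() anywhere on this path.
 */
static int wait_for_either(struct wait_queue_head *wq_a,
			   struct wait_queue_head *wq_b,
			   bool (*done)(void))
{
	DEFINE_WAIT(wait_a);
	DEFINE_WAIT(wait_b);
	int ret = 0;

	for (;;) {
		prepare_to_wait(wq_a, &wait_a, TASK_INTERRUPTIBLE);
		prepare_to_wait(wq_b, &wait_b, TASK_INTERRUPTIBLE);
		if (done())
			break;
		if (signal_pending(current)) {
			ret = -ERESTARTSYS;
			break;
		}
		schedule();	/* direct call, never hits might_sleep() */
	}
	finish_wait(wq_a, &wait_a);
	finish_wait(wq_b, &wait_b);
	return ret;
}
```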
Thanks, Daniel
>
> > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > > index f66920173370..b1aaa278f1af 100644
> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -3278,13 +3278,18 @@ static noinline void __schedule_bug(struct task_struct *prev)
> > > /*
> > > * Various schedule()-time debugging checks and statistics:
> > > */
> > > -static inline void schedule_debug(struct task_struct *prev)
> > > +static inline void schedule_debug(struct task_struct *prev, bool preempt)
> > > {
> > > #ifdef CONFIG_SCHED_STACK_END_CHECK
> > > if (task_stack_end_corrupted(prev))
> > > panic("corrupted stack end detected inside scheduler\n");
> > > #endif
> > >
> > > +#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
> > > + if (!preempt && prev->state && prev->non_block_count)
> > > + // splat
> > > +#endif
> > > +
> > > if (unlikely(in_atomic_preempt_off())) {
> > > __schedule_bug(prev);
> > > preempt_count_set(PREEMPT_DISABLED);
> > > @@ -3391,7 +3396,7 @@ static void __sched notrace __schedule(bool preempt)
> > > rq = cpu_rq(cpu);
> > > prev = rq->curr;
> > >
> > > - schedule_debug(prev);
> > > + schedule_debug(prev, preempt);
> > >
> > > if (sched_feat(HRTICK))
> > > hrtick_clear(rq);
> >
> > --
> > Michal Hocko
> > SUSE Labs
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch