Message-ID: <20200702152321.GD4781@hirez.programming.kicks-ass.net>
Date: Thu, 2 Jul 2020 17:23:21 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Phil Auld <pauld@...hat.com>
Cc: Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
vincent.guittot@...aro.org, mgorman@...e.de,
Oleg Nesterov <oleg@...hat.com>, david@...morbit.com
Subject: Re: [RFC][PATCH] sched: Better document ttwu()
On Thu, Jul 02, 2020 at 09:13:19AM -0400, Phil Auld wrote:
> On Thu, Jul 02, 2020 at 02:52:11PM +0200 Peter Zijlstra wrote:
> > + * p->on_cpu <- { 0, 1 }:
> > + *
> > + * is set by prepare_task() and cleared by finish_task() such that it will be
> > + * set before p is scheduled-in and cleared after p is scheduled-out, both
> > + * under rq->lock. Non-zero indicates the task is running on it's CPU.
>
> s/it's/its/
Fixed.
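
To make that lifecycle concrete, it looks roughly like this (a
simplified sketch of the SMP case, not the actual code; the real
helpers live in kernel/sched/core.c and carry more context):

	/* Both run under rq->lock in the context switch path. */
	static inline void prepare_task(struct task_struct *next)
	{
		/* Set before *next is scheduled in. */
		next->on_cpu = 1;
	}

	static inline void finish_task(struct task_struct *prev)
	{
		/*
		 * Cleared after *prev is scheduled out; RELEASE orders
		 * everything *prev did before the clear.
		 */
		smp_store_release(&prev->on_cpu, 0);
	}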
> > @@ -2494,15 +2608,41 @@ static void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
> > * @state: the mask of task states that can be woken
> > * @wake_flags: wake modifier flags (WF_*)
> > *
> > - * If (@state & @p->state) @p->state = TASK_RUNNING.
> > + * Conceptually does:
> > + *
> > + * If (@state & @p->state) @p->state = TASK_RUNNING.
> > *
> > * If the task was not queued/runnable, also place it back on a runqueue.
> > *
> > - * Atomic against schedule() which would dequeue a task, also see
> > - * set_current_state().
> > + * This function:
> > + * - is atomic against schedule() which would dequeue the task;
> > + * - issues a full memory barrier before accessing @p->state.
> > + * See the comment with set_current_state().
>
> I think these two above should not be " - " indented to match the other
> partial sentences below (or all the ones below should be bullets, but I
> think that is messier). But this is just a style quibble :)
Fair enough; I'll go rework that.
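
For the record, the conceptual behaviour being documented can be
sketched as below (illustrative only; the real try_to_wake_up() adds
the on_rq/on_cpu fast paths, the queueing, and the barriers discussed
further down):

	static int ttwu_sketch(struct task_struct *p, unsigned int state)
	{
		unsigned long flags;
		int success = 0;

		raw_spin_lock_irqsave(&p->pi_lock, flags);
		smp_mb__after_spinlock();	/* pairs with set_current_state() */
		if (p->state & state) {
			/*
			 * Conceptually; the real code also places @p
			 * back on a runqueue if it was not
			 * queued/runnable.
			 */
			p->state = TASK_RUNNING;
			success = 1;
		}
		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
		return success;
	}

ttwu_sketch() is a made-up name, of course; it only mirrors the
"Conceptually does:" wording from the comment above.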
> > @@ -3134,8 +3274,12 @@ static inline void prepare_task(struct task_struct *next)
> > /*
> > * Claim the task as running, we do this before switching to it
> > * such that any running task will have this set.
> > + *
> > + * __schedule()'s rq->lock and smp_mb__after_spinlock() order this
> > + * store against prior state change of @next, also see
> > + * try_to_wake_up(), specifically smp_load_acquire(&p->on_cpu).
> > */
> > - next->on_cpu = 1;
> > + WRITE_ONCE(next->on_cpu, 1);
>
> This is more than a documentation change.
It had better be an effective no-change though; as documented we only
ever write 0 or 1, so even if the compiler is evil and tears our write,
it is impossible to get this wrong.
The reason I made it WRITE_ONCE() is because the other write is
smp_store_release() and the two loads are smp_load_acquire(), so a plain
store is 'weird'.
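
Put together, the accesses on ->on_cpu then read consistently
(simplified excerpts, assuming the code as in this patch; one of the
two acquire loads is spelled smp_cond_load_acquire()):

	/* prepare_task(), before *next runs: */
	WRITE_ONCE(next->on_cpu, 1);

	/* finish_task(), after *prev ran; RELEASE against prev's execution: */
	smp_store_release(&prev->on_cpu, 0);

	/* try_to_wake_up(); ACQUIRE pairs with the RELEASE above: */
	smp_load_acquire(&p->on_cpu);
	smp_cond_load_acquire(&p->on_cpu, !VAL);	/* wait for !p->on_cpu */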