Message-ID: <74998900-0f9f-0f17-6561-29f50d23cc91@gmail.com>
Date: Sat, 10 Sep 2016 17:51:02 +0800
From: Cheng Chao <cs.os.kernel@...il.com>
To: peterz@...radead.org
Cc: mingo@...nel.org, oleg@...hat.com, tj@...nel.org,
akpm@...ux-foundation.org, chris@...is-wilson.co.uk,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] stop_machine: Make migration_cpu_stop() do useful
work for CONFIG_PREEMPT_NONE
Hi Peter, I guess you can receive mail from me now; I have switched to a Gmail address.
Oleg has already done a lot of work on this patch, and I am really grateful to him.
Please review this patch, thanks.
On 09/10/2016 04:52 PM, Cheng Chao wrote:
> For CONFIG_PREEMPT_NONE=y, when sched_exec() needs a migration, it calls
> stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg).
>
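
For reference, this is the call site in question; a simplified sketch of
sched_exec() (paraphrased from kernel/sched/core.c, with the pi_lock handling
and error paths trimmed, so please check the tree for the exact code):

  void sched_exec(void)
  {
          struct task_struct *p = current;
          int dest_cpu;

          /* Ask the scheduler where the new image should run. */
          dest_cpu = p->sched_class->select_task_rq(p, task_cpu(p),
                                                    SD_BALANCE_EXEC, 0);

          if (dest_cpu != smp_processor_id() && cpu_active(dest_cpu)) {
                  struct migration_arg arg = { p, dest_cpu };

                  /* The stopper on our current CPU moves us to dest_cpu. */
                  stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
          }
  }
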
> If migration_cpu_stop() cannot migrate the task, why call stop_one_cpu()
> at all? It only puts the task into TASK_UNINTERRUPTIBLE, wakes up the
> stopper thread, runs migration_cpu_stop(), and then has the stopper thread
> wake the task up again.
>
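
The TASK_UNINTERRUPTIBLE state comes from the wait_for_completion() call at
the end of stop_one_cpu(); very roughly (a paraphrase of the completion code,
not the exact source), the caller ends up doing:

  /* inside wait_for_completion(&done.completion), heavily simplified: */
  while (!done.completion.done) {
          __set_current_state(TASK_UNINTERRUPTIBLE); /* caller stops being runnable...     */
          schedule_timeout(MAX_SCHEDULE_TIMEOUT);    /* ...and is dequeued once it sleeps  */
  }

So with CONFIG_PREEMPT_NONE the stopper thread cannot preempt the caller and
only gets to run once the caller has gone to sleep here, i.e. after the caller
has already left the runqueue.
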
> But in fact, all of the above work is almost useless (wasteful), because
> migration_cpu_stop() cannot actually migrate: it requires the task to be
> TASK_ON_RQ_QUEUED before it calls __migrate_task(), and a caller that is
> already sleeping in wait_for_completion() is no longer queued.
>
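
The gate in question, roughly (simplified from migration_cpu_stop() in
kernel/sched/core.c; the locking and the sched_ttwu_pending() call are
omitted here):

  static int migration_cpu_stop(void *data)
  {
          struct migration_arg *arg = data;
          struct task_struct *p = arg->task;
          struct rq *rq = this_rq();

          /* ... take p->pi_lock and rq->lock ... */

          /* Only a task that is still queued on this rq can be moved. */
          if (task_rq(p) == rq && task_on_rq_queued(p))
                  rq = __migrate_task(rq, p, arg->dest_cpu);

          /* ... drop the locks ... */
          return 0;
  }

A caller that has already gone to sleep in wait_for_completion() fails this
check, so __migrate_task() is never reached.
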
> This patch keeps the task TASK_RUNNING (and therefore still queued on the
> runqueue) instead of TASK_UNINTERRUPTIBLE, so migration_cpu_stop() can do
> useful work.
>
> Signed-off-by: Cheng Chao <cs.os.kernel@...il.com>
> ---
> kernel/stop_machine.c | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>
> diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
> index 4a1ca5f..41aea5e 100644
> --- a/kernel/stop_machine.c
> +++ b/kernel/stop_machine.c
> @@ -126,6 +126,17 @@ int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
> cpu_stop_init_done(&done, 1);
> if (!cpu_stop_queue_work(cpu, &work))
> return -ENOENT;
> +
> +#if defined(CONFIG_PREEMPT_NONE)
> + /*
> + * Make the stopper thread run as soon as possible, and if the
> + * caller is TASK_RUNNING, keep the caller TASK_RUNNING.
> + * This is especially useful for callers that are expected to be
> + * TASK_ON_RQ_QUEUED when the stopper runs.
> + * sched_exec() benefits from this improvement.
> + */
> + schedule();
> +#endif
> wait_for_completion(&done.completion);
> return done.ret;
> }
>