Message-ID: <54e0445d-87a5-2eb3-2a7e-7521bc58f095@gmail.com>
Date: Tue, 13 Sep 2016 12:03:05 +0800
From: Cheng Chao <cs.os.kernel@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...nel.org, oleg@...hat.com, tj@...nel.org,
akpm@...ux-foundation.org, chris@...is-wilson.co.uk,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] stop_machine: Make migration_cpu_stop() do useful
 work for CONFIG_PREEMPT_NONE
Peter, thank you.
On 09/12/2016 07:41 PM, Peter Zijlstra wrote:
> On Mon, Sep 12, 2016 at 01:37:27PM +0200, Peter Zijlstra wrote:
>> So what you're saying is that migration_cpu_stop() doesn't work because
>> wait_for_completion() dequeues the task.
>>
>> True I suppose. Not sure I like your solution, nor your implementation
>> of the solution much though.
>>
>> I would much prefer an unconditional cond_resched() there, but also, I
>> think we should do what __migrate_swap_task() does, and set wake_cpu.
>>
>> So something like so..
>>
>> ---
>> kernel/sched/core.c | 8 ++++++--
>> 1 file changed, 6 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index ddd5f48551f1..ade772aa9610 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -1063,8 +1063,12 @@ static int migration_cpu_stop(void *data)
>>           * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
>>           * we're holding p->pi_lock.
>>           */
>> -        if (task_rq(p) == rq && task_on_rq_queued(p))
>> -                rq = __migrate_task(rq, p, arg->dest_cpu);
>> +        if (task_rq(p) == rq) {
>> +                if (task_on_rq_queued(p))
>> +                        rq = __migrate_task(rq, p, arg->dest_cpu);
>> +                else
>> +                        p->wake_cpu = arg->dest_cpu;
>> +        }
>>          raw_spin_unlock(&rq->lock);
>>          raw_spin_unlock(&p->pi_lock);
>>
>
> And this too; too narrow a constraint to git diff made it go away.
>
Yes, setting wake_cpu is better; try_to_wake_up() will then bring the task up
on the destination CPU.
Peter, will you send this as a separate patch?
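For reference, this is basically what __migrate_swap_task() already does for a
task that is not on a runqueue. A rough sketch from memory of
kernel/sched/core.c (so the details may differ a bit):

static void __migrate_swap_task(struct task_struct *p, int cpu)
{
        if (task_on_rq_queued(p)) {
                /* queued: move it between runqueues right away */
                /* (deactivate/set_task_cpu/activate details elided) */
        } else {
                /*
                 * The task is sleeping, so there is nothing to dequeue;
                 * just record the destination so the next try_to_wake_up()
                 * starts CPU selection from @cpu instead of the CPU the
                 * task went to sleep on.
                 */
                p->wake_cpu = cpu;
        }
}

As far as I can see, try_to_wake_up() passes p->wake_cpu to select_task_rq(),
so a sleeping task comes back up on (or near) the requested CPU.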
> ---
> kernel/stop_machine.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
> index ae6f41fb9cba..637798d6b554 100644
> --- a/kernel/stop_machine.c
> +++ b/kernel/stop_machine.c
> @@ -121,6 +121,11 @@ int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
>          cpu_stop_init_done(&done, 1);
>          if (!cpu_stop_queue_work(cpu, &work))
>                  return -ENOENT;
> +        /*
> +         * In case @cpu == smp_processor_id() we can avoid a sleep+wakeup
> +         * by doing a preemption.
> +         */
> +        cond_resched();
>          wait_for_completion(&done.completion);
>          return done.ret;
>  }
>
I agree with using cond_resched(): https://lkml.org/lkml/2016/9/12/1228
For CONFIG_PREEMPT=y and CONFIG_PREEMPT_VOLUNTARY=y the extra cond_resched()
seems unnecessary, because a reschedule already happens on those paths (see
the call chains below and the might_resched() excerpt after them):
1. when CONFIG_PREEMPT=y:
   stop_one_cpu()
     ->cpu_stop_queue_work()
       ->spin_unlock_irqrestore() (preempt_enable() calls __preempt_schedule())

2. when CONFIG_PREEMPT_VOLUNTARY=y:
   stop_one_cpu()
     ->wait_for_completion()
       ->...
         ->might_sleep() (which calls _cond_resched())
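For reference, the difference boils down to how might_resched() is defined.
This is quoted from memory of include/linux/kernel.h (around v4.8), so the
exact form may differ:

#ifdef CONFIG_PREEMPT_VOLUNTARY
extern int _cond_resched(void);
# define might_resched() _cond_resched()
#else
# define might_resched() do { } while (0)
#endif

might_sleep() ends up calling might_resched(), so under CONFIG_PREEMPT_NONE it
gives us no reschedule point at all; the explicit cond_resched() is then the
only thing that lets the stopper thread run before we go to sleep.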
So we really don't need the "#if defined(CONFIG_PREEMPT_NONE)" guard? I also
think the code is cleaner without it, and the logic still holds. For
comparison, the guarded variant would look like this:
diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index 4a1ca5f..87464a2 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -126,6 +126,15 @@ int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
         cpu_stop_init_done(&done, 1);
         if (!cpu_stop_queue_work(cpu, &work))
                 return -ENOENT;
+
+#if defined(CONFIG_PREEMPT_NONE)
+        /*
+         * In case @cpu == smp_processor_id() we can avoid a sleep+wakeup
+         * by doing a preemption.
+         */
+        cond_resched();
+#endif
+
         wait_for_completion(&done.completion);
         return done.ret;
 }