Message-ID: <89a992af-67cd-91b4-8890-a19ccb251fe6@kedacom.com>
Date: Wed, 7 Sep 2016 11:22:28 +0800
From: chengchao <chengchao@...acom.com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: mingo@...nel.org, peterz@...radead.org, tj@...nel.org,
akpm@...ux-foundation.org, chris@...is-wilson.co.uk,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/core: simpler function for sched_exec migration
Oleg, thank you very much.
On 09/06/2016 11:22 PM, Oleg Nesterov wrote:
> On 09/06, chengchao wrote:
>>
>> the key point is for CONFIG_PREEMPT_NONE=y,
>> ...
>> it is too much overhead for one task(fork()+exec()), isn't it?
>
> Yes, yes, I see, this is suboptimal. Not sure we actually do care,
> but yes, perhaps another helper which migrates the current task makes
> sense, I dunno.
For CONFIG_PREEMPT_NONE=y, this patch wants the stopper thread to migrate the current
task successfully instead of doing nothing.
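For context, the caller here is sched_exec(); a simplified paraphrase of kernel/sched/core.c
(pi_lock handling omitted, details may differ between kernel versions) looks like this:

void sched_exec(void)
{
	struct task_struct *p = current;
	int dest_cpu;

	/* pick a less-loaded CPU for the about-to-exec task */
	dest_cpu = p->sched_class->select_task_rq(p, task_cpu(p),
						  SD_BALANCE_EXEC, 0);

	if (dest_cpu != smp_processor_id() && cpu_active(dest_cpu)) {
		struct migration_arg arg = { p, dest_cpu };

		/*
		 * Queue migration_cpu_stop() on our own CPU; the stopper
		 * thread is supposed to preempt us and move us to dest_cpu.
		 * With CONFIG_PREEMPT_NONE=y it may only run after we have
		 * already blocked in wait_for_completion(), in which case
		 * it finds the task off the runqueue and does nothing.
		 */
		stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
	}
}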
>
> But,
>
>>> stop_one_cpu_sync() assumes that cpu == smp_processor_id/task_cpu(current),
>>> and thus the stopper thread should preempt us at least after schedule()
>>> (if CONFIG_PREEMPT_NONE), so we do not need to synchronize.
>>>
>> yes. the stop_one_cpu_sync is not a good name, stop_one_cpu_schedule is better?
>> there is nothing about synchronization.
>
> We need to synchronize with the stopper to ensure it can't touch
> cpu_stop_work on stack after stop_one_cpu_sync() returns, and
yes, you are right.
>
>>> But this is not necessarily true? This task can migrate to another CPU
>>> before cpu_stop_queue_work() ?
>>>
>> before sched_exec() calls stop_one_cpu()/cpu_stop_queue_work(), this
>> task(current) cannot migrate to another cpu,because this task is running
>> on the cpu.
>
> Why? The running task can migrate to another CPU at any moment. Unless it
> runs with preemption disabled or CONFIG_PREEMPT_NONE=y.
yes, this patch focused on CONFIG_PREEMPT_NONE=y from the beginning, so I didn't
pay enough attention to CONFIG_PREEMPT=y and CONFIG_PREEMPT_VOLUNTARY=y.
>
> And this means that cpu_stop_queue_work() can queue the work on another
> CPU != smp_processor_id(), and in this case the kernel can crash because
> the pending cpu_stop_work can be overwritten right after return.
>
> So you need something like
>
> void stop_one_cpu_sync(cpu_stop_fn_t fn, void *arg)
> {
> struct cpu_stop_work work = { .fn = fn, .arg = arg, .done = NULL };
>
> preempt_disable();
> cpu_stop_queue_work(raw_smp_processor_id(), &work);
> preempt_enable_no_resched();
> schedule();
> }
>
> or I am totally confused. Note that it doesn't (and shouldn't) have
> the "int cpu" argument.
>
If preemption happens after preempt_enable_no_resched(), there is still a risk that
stop_one_cpu_sync() returns before the stopper thread is done with the on-stack
cpu_stop_work, as you said previously.
Thus, I modified the patch:
int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
{
	struct cpu_stop_done done;
	struct cpu_stop_work work = { .fn = fn, .arg = arg, .done = &done };

	cpu_stop_init_done(&done, 1);
	if (!cpu_stop_queue_work(cpu, &work))
		return -ENOENT;

#if defined(CONFIG_PREEMPT_NONE)
	/*
	 * Let the stopper thread run as soon as possible,
	 * and keep current in TASK_RUNNING.
	 */
	schedule();
#endif
	wait_for_completion(&done.completion);
	return done.ret;
}
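For reference, the wait_for_completion() is what keeps the on-stack work safe: the
stopper signals the completion only after it has finished running the callback. A
simplified paraphrase of the relevant part of cpu_stopper_thread() in
kernel/stop_machine.c (names and details vary between kernel versions):

	/* one queued cpu_stop_work has been dequeued into 'work' */
	cpu_stop_fn_t fn = work->fn;
	void *arg = work->arg;
	struct cpu_stop_done *done = work->done;
	int ret;

	ret = fn(arg);			/* e.g. migration_cpu_stop() */
	if (done) {
		if (ret)
			done->ret = ret;
		/* only after this may the caller's on-stack work be reused */
		complete(&done->completion);
	}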
This removes the new function stop_one_cpu_sync(). When I posted this patch, I didn't want to
modify stop_one_cpu(), because it has many callers. But now I think stop_one_cpu() is the
right place to change.
Any suggestions? Thanks again.
> Oleg.
>
>