Message-ID: <53DAE512.5030302@cn.fujitsu.com>
Date: Fri, 1 Aug 2014 08:53:38 +0800
From: Lai Jiangshan <laijs@...fujitsu.com>
To: <paulmck@...ux.vnet.ibm.com>
CC: <linux-kernel@...r.kernel.org>, <mingo@...nel.org>,
<dipankar@...ibm.com>, <akpm@...ux-foundation.org>,
<mathieu.desnoyers@...icios.com>, <josh@...htriplett.org>,
<tglx@...utronix.de>, <peterz@...radead.org>,
<rostedt@...dmis.org>, <dhowells@...hat.com>,
<edumazet@...gle.com>, <dvhart@...ux.intel.com>,
<fweisbec@...il.com>, <oleg@...hat.com>, <bobby.prani@...il.com>
Subject: Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
On 08/01/2014 12:09 AM, Paul E. McKenney wrote:
>
>>> + /*
>>> + * There were callbacks, so we need to wait for an
>>> + * RCU-tasks grace period. Start off by scanning
>>> + * the task list for tasks that are not already
>>> + * voluntarily blocked. Mark these tasks and make
>>> + * a list of them in rcu_tasks_holdouts.
>>> + */
>>> + rcu_read_lock();
>>> + for_each_process_thread(g, t) {
>>> + if (t != current && ACCESS_ONCE(t->on_rq) &&
>>> + !is_idle_task(t)) {
>>
>> What happens when the trampoline is on the idle task?
>>
>> I think we need to use schedule_on_each_cpu() to replace one of the
>> synchronize_sched() calls in this function (or some other mechanism
>> that forces a real schedule on *ALL* online CPUs).
>
> Well, that is one of the questions in the 0/10 cover letter. If it turns
> out to be necessary to worry about idle-task trampolines, it should be
> possible to avoid hammering all idle CPUs in the common case. Though maybe
> battery-powered devices won't need RCU-tasks.
>
Trampolines on a NO_HZ idle CPU can run for an arbitrarily long time
(for example, when an SMI happens inside the trampoline). So only a
real schedule on the idle CPU looks reliable to me.
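
Something along these lines is what I have in mind. This is an
untested sketch, not the actual patch; rcu_tasks_kick_all_cpus() and
rcu_tasks_noop() are just illustrative names:

#include <linux/workqueue.h>

/*
 * The work itself does nothing; merely running it in the per-CPU
 * kworker forces a real context switch on that CPU.
 */
static void rcu_tasks_noop(struct work_struct *work)
{
}

static void rcu_tasks_kick_all_cpus(void)
{
	/*
	 * schedule_on_each_cpu() queues the work on every online CPU
	 * and waits for all of it to complete, so even a NO_HZ idle
	 * CPU must schedule into its kworker before this returns.
	 */
	schedule_on_each_cpu(rcu_tasks_noop);
}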