[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <50C75F1C.4000907@linux.vnet.ibm.com>
Date: Tue, 11 Dec 2012 21:58:12 +0530
From: "Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
To: Tejun Heo <tj@...nel.org>
CC: Oleg Nesterov <oleg@...hat.com>, tglx@...utronix.de,
peterz@...radead.org, paulmck@...ux.vnet.ibm.com,
rusty@...tcorp.com.au, mingo@...nel.org, akpm@...ux-foundation.org,
namhyung@...nel.org, vincent.guittot@...aro.org, sbw@....edu,
amit.kucheria@...aro.org, rostedt@...dmis.org, rjw@...k.pl,
wangyun@...ux.vnet.ibm.com, xiaoguangrong@...ux.vnet.ibm.com,
nikunj@...ux.vnet.ibm.com, linux-pm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v3 1/9] CPU hotplug: Provide APIs to prevent CPU offline
from atomic context
On 12/11/2012 07:37 PM, Tejun Heo wrote:
> Hello,
>
> On Tue, Dec 11, 2012 at 07:32:13PM +0530, Srivatsa S. Bhat wrote:
>> On 12/11/2012 07:17 PM, Tejun Heo wrote:
>>> Hello, Srivatsa.
>>>
>>> On Tue, Dec 11, 2012 at 06:43:54PM +0530, Srivatsa S. Bhat wrote:
>>>> This approach (of using synchronize_sched()) also looks good. It is simple,
>>>> yet effective, but unfortunately inefficient at the writer side (because
>>>> he'll have to wait for a full synchronize_sched()).
>>>
>>> While synchornize_sched() is heavier on the writer side than the
>>> originally posted version, it doesn't stall the whole machine and
>>> wouldn't introduce latencies to others. Shouldn't that be enough?
>>>
>>
>> Short answer: Yes. But we can do better, with almost comparable code
>> complexity. So I'm tempted to try that out.
>>
>> Long answer:
>> Even in the synchronize_sched() approach, we still have to identify the
>> readers who need to be converted to use the new get/put_online_cpus_atomic()
>> APIs and convert them. Then, if we can come up with a scheme such that
>> the writer has to wait only for those readers to complete, then why not?
>>
>> If such a scheme ends up becoming too complicated, then I agree, we
>> can use synchronize_sched() itself. (That's what I meant by saying that
>> we'll use this as a fallback).
>>
>> But even in this scheme which uses synchronize_sched(), we are
>> already half-way through (we already use 2 types of sync schemes -
>> counters and rwlocks). Just a little more logic can get rid of the
>> unnecessary full-wait too.. So why not give it a shot?
>
> It's not really about the code complexity but making the reader side
> as light as possible. Please keep in mind that reader side is still
> *way* more hotter than the writer side. Before, the writer side was
> heavy to the extent which causes noticeable disruptions on the whole
> system and I think that's what we're trying to hunt down here. If we
> can shave of memory barriers from reader side by using
> synchornized_sched() on writer side, that is the *better* result, not
> worse.
>
That's a fair point. Hmmm...
Regards,
Srivatsa S. Bhat
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists