Message-ID: <20131120065840.GA10839@weiyang.vnet.ibm.com>
Date: Wed, 20 Nov 2013 14:58:40 +0800
From: Wei Yang <weiyang@...ux.vnet.ibm.com>
To: Tejun Heo <tj@...nel.org>
Cc: Wei Yang <weiyang@...ux.vnet.ibm.com>, cl@...ux-foundation.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] percpu: stop the loop when a cpu belongs to a new group

On Wed, Nov 20, 2013 at 12:51:21AM -0500, Tejun Heo wrote:
>Hello,
>
>On Wed, Nov 20, 2013 at 11:00:56AM +0800, Wei Yang wrote:
>> What do you think about this one?
>>
>> >
>> >From bd70498b9df47b25ff20054e24bb510c5430c0c3 Mon Sep 17 00:00:00 2001
>> >From: Wei Yang <weiyang@...ux.vnet.ibm.com>
>> >Date: Thu, 10 Oct 2013 09:42:14 +0800
>> >Subject: [PATCH] percpu: optimize group assignment when cpu_distance_fn is
>> > NULL
>> >
>> >When cpu_distance_fn is NULL, all CPUs belong to group 0, yet the original
>> >logic still walks every CPU against each of its predecessors. cpu_distance_fn
>> >is always NULL when pcpu_build_alloc_info() is called from
>> >pcpu_page_first_chunk().
>> >
>> >With this patch, the time complexity drops from O(n^2) to O(n) when
>> >cpu_distance_fn is NULL.
>
>The test was put in the inner loop because the nesting was already too
>deep and cpu_distance_fn is unlikely to be NULL on machines where the
>number of CPUs is high enough to matter. If that O(n^2) loop is gonna
>be a problem, it's gonna be a problem on large NUMA machines and we'll
>have to do something about it for cases where cpu_distance_fn exists
>anyway.
Tejun,
Yep, I hope this will not cause problems on a large NUMA machine when
cpu_distance_fn is not NULL.
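
To make the complexity point concrete, here is a small stand-alone toy model
of the grouping walk. It is my own simplified sketch, not the real code in
mm/percpu.c; NR_CPUS, toy_distance() and the group bookkeeping are made up
for illustration, but the shape of the nested loop and the place where the
cpu_distance_fn test sits mirror the discussion above:

	#include <stdio.h>

	#define NR_CPUS		8
	#define LOCAL_DISTANCE	10
	#define REMOTE_DISTANCE	20

	/* made-up distance callback: CPUs in the same half are local */
	static int toy_distance(int a, int b)
	{
		return (a / 4 == b / 4) ? LOCAL_DISTANCE : REMOTE_DISTANCE;
	}

	static void build_groups(int (*cpu_distance_fn)(int, int))
	{
		int group_map[NR_CPUS] = { 0 };
		int nr_groups = 1;
		int cpu, tcpu, group;

		for (cpu = 0; cpu < NR_CPUS; cpu++) {
			group = 0;
	next_group:
			for (tcpu = 0; tcpu < cpu; tcpu++) {
				/*
				 * The NULL test sits in the inner loop; when
				 * cpu_distance_fn is NULL nothing ever bumps
				 * 'group', so every CPU lands in group 0 only
				 * after scanning all of its predecessors.
				 */
				if (group_map[tcpu] == group && cpu_distance_fn &&
				    (cpu_distance_fn(cpu, tcpu) > LOCAL_DISTANCE ||
				     cpu_distance_fn(tcpu, cpu) > LOCAL_DISTANCE)) {
					group++;
					if (group + 1 > nr_groups)
						nr_groups = group + 1;
					goto next_group;
				}
			}
			group_map[cpu] = group;
		}

		for (cpu = 0; cpu < NR_CPUS; cpu++)
			printf("cpu%d -> group %d\n", cpu, group_map[cpu]);
		printf("nr_groups = %d\n\n", nr_groups);
	}

	int main(void)
	{
		build_groups(toy_distance);	/* splits CPUs 0-3 and 4-7 */
		build_groups(NULL);		/* everything stays in group 0 */
		return 0;
	}

With toy_distance() the model splits the 8 CPUs into two groups; with NULL it
leaves everything in group 0, but only after each CPU has scanned all of its
predecessors, which is the O(n^2) walk the patch avoids by assigning every CPU
to group 0 up front.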
>
>The patch is just extremely marginal. Ah well... why not? I'll apply
>it once -rc1 drops.
>
>Thanks.
>
>--
>tejun
--
Richard Yang
Help you, Help me