Message-Id: <EEB2FCE2-3C14-406E-BD0E-FD27D091C492@mit.edu>
Date: Tue, 6 Sep 2011 09:30:50 -0400
From: Theodore Tso <tytso@....EDU>
To: Tejun Heo <tj@...nel.org>
Cc: Anton Blanchard <anton@...ba.org>,
Eric Dumazet <eric.dumazet@...il.com>,
adilger.kernel@...ger.ca, linux-ext4@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] percpu_counter: Put a reasonable upper bound on percpu_counter_batch
On Sep 5, 2011, at 11:48 PM, Tejun Heo wrote:
> On Mon, Aug 29, 2011 at 09:46:09PM +1000, Anton Blanchard wrote:
>>
>> When testing on a 1024 thread ppc64 box I noticed a large amount of
>> CPU time in ext4 code.
>>
>> ext4_has_free_blocks has a fast path to avoid summing every free and
>> dirty block per cpu counter, but only if the global count shows more
>> free blocks than the maximum amount that could be stored in all the
>> per cpu counters.
>>
>> Since percpu_counter_batch scales with num_online_cpus() and the maximum
>> amount in all per cpu counters is percpu_counter_batch * num_online_cpus(),
>> this breakpoint grows at O(n^2).
>>
>> This issue will also hit users of percpu_counter_compare(), which
>> performs a similar check against a single percpu counter.
>>
>> I chose to cap percpu_counter_batch at 1024 as a conservative first
>> step, but we may want to reduce it further based on further benchmarking.
>>
>> Signed-off-by: Anton Blanchard <anton@...ba.org>
>
> Applied to percpu/for-3.2.
Um, this was an ext4 patch and I pointed out it could cause problems. (Specifically, data loss…)
- Ted