Message-ID: <5480BFAA.2020106@ixiacom.com>
Date: Thu, 4 Dec 2014 22:10:18 +0200
From: Leonard Crestez <lcrestez@...acom.com>
To: Tejun Heo <tj@...nel.org>
CC: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
Christoph Lameter <cl@...ux-foundation.org>,
Sorin Dumitru <sdumitru@...acom.com>
Subject: Re: [RFC v2] percpu: Add a separate function to merge free areas
On 12/04/2014 07:57 PM, Tejun Heo wrote:
> Hello,
>
> On Wed, Dec 03, 2014 at 12:33:59AM +0200, Leonard Crestez wrote:
>> It seems that free_percpu performance is very bad when working with small
>> objects. The easiest way to reproduce this is to allocate and then free a large
>> number of percpu int counters in order. Small objects (reference counters and
>> pointers) are common users of alloc_percpu and I think this should be fast.
>> This particular issue can be encountered with a very large number of
>> net_device structs.
>
> Do you actually experience this with an actual workload? The thing is,
> allocation has the same quadratic complexity. If this is actually an
> issue (which can definitely be the case), I'd much prefer implementing
> a properly scalable area allocator over mucking with the current
> implementation.
Yes, we are actually experiencing issues with this. We create lots of virtual
net_devices and routes, which means lots of percpu counters/pointers. In particular,
we are getting worse performance than on older kernels because the net_device
refcnt is now a percpu counter. We could turn that back into a plain integer, but
that would negate an upstream optimization.
We are working on top of Linux 3.10, onto which we have already pulled some
allocation optimizations. At least for simple allocation patterns, pcpu_alloc
does not appear to be unreasonably slow.
Having a "properly scalable" percpu allocator would be quite nice indeed.
Regards,
Leonard