Message-ID: <20130615053912.GB7017@htj.dyndns.org>
Date: Fri, 14 Jun 2013 22:39:12 -0700
From: Tejun Heo <tj@...nel.org>
To: Michal Hocko <mhocko@...e.cz>
Cc: lizefan@...wei.com, containers@...ts.linux-foundation.org,
cgroups@...r.kernel.org, koverstreet@...gle.com,
linux-kernel@...r.kernel.org, cl@...ux-foundation.org,
Mike Snitzer <snitzer@...hat.com>,
Vivek Goyal <vgoyal@...hat.com>,
"Alasdair G. Kergon" <agk@...hat.com>,
Jens Axboe <axboe@...nel.dk>,
Mikulas Patocka <mpatocka@...hat.com>,
Glauber Costa <glommer@...il.com>
Subject: Re: [PATCH 11/11] cgroup: use percpu refcnt for cgroup_subsys_states
On Fri, Jun 14, 2013 at 10:35:22PM -0700, Tejun Heo wrote:
> On Fri, Jun 14, 2013 at 03:31:25PM -0700, Tejun Heo wrote:
> > I'll play with it a bit more on an actual machine and post more
> > results. Test program attached.
>
> So, here are the results from the same test on a dual-socket 2-way
> NUMA opteron 8 core machine.
>
> Running on one CPU.
>
>  copy size      atomic      percpu   diff in pct
>          0   535964443   616756827       +15.07%
>         32   399988186   378678713        -5.33%
>         64   389067476   355073979        -8.74%
>        128   342192631   315615300        -7.77%
>        256   281208005   260598931        -7.33%
>        512   188070912   193225269        +2.74%
>
> Running on all eight cores.
>
>  copy size      atomic       percpu   diff in pct
>          0   121324328   4889425511    +3,930.05%
>         32    96170193   2999613380    +3,019.07%
>         64    98139061   2813894184    +2,767.25%
>        128   112610025   2503229487    +2,122.92%
>        256    96828114   2069865752    +2,037.67%
>        512    95858297   1537726109    +1,504.17%
A bit of an addition: this is of course completely synthetic and
exaggerates the differences both ways, but it's pretty clear that this
is gonna be a win in any kind of workload which generates some amount
of cross-CPU refcnting, which would be the norm anyway.
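
For anyone who doesn't want to dig the attachment out of the earlier
mail, the shape of what's being measured is roughly the following.
This is only an illustrative sketch, not the actual test program: the
thread count, iteration count and copy size are placeholders, and a
real run would time ops/sec and keep the compiler from optimizing the
memcpy away.

/*
 * Hypothetical sketch, not the attached test program.  Each thread
 * copies COPY_SIZE bytes and then bumps a refcount, either a single
 * shared atomic ("atomic" column) or a cacheline-aligned per-thread
 * slot ("percpu" column).
 */
#include <pthread.h>
#include <stdatomic.h>
#include <string.h>

#define NR_THREADS	8		/* assumption: one thread per core */
#define ITERS		(1 << 24)	/* placeholder iteration count */
#define COPY_SIZE	128		/* one of the "copy size" rows */

static atomic_long shared_cnt;		/* all threads hit this one cacheline */

static struct {
	long cnt;
} __attribute__((aligned(64))) percpu_cnt[NR_THREADS];	/* one line per thread */

static void *worker(void *arg)
{
	long id = (long)arg;
	char src[COPY_SIZE] = { 0 }, dst[COPY_SIZE];

	for (long i = 0; i < ITERS; i++) {
		memcpy(dst, src, COPY_SIZE);	/* simulate per-op payload */
#ifdef USE_PERCPU
		percpu_cnt[id].cnt++;		/* stays local to this CPU */
#else
		atomic_fetch_add(&shared_cnt, 1);	/* bounces across CPUs */
#endif
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_THREADS];

	for (long i = 0; i < NR_THREADS; i++)
		pthread_create(&tid[i], NULL, worker, (void *)i);
	for (long i = 0; i < NR_THREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}

Build with something like gcc -O2 -pthread, once with and once
without -DUSE_PERCPU, and compare increment rates.  The shared atomic
bounces a single cacheline between all the cores while the per-thread
slots stay local, which is where the huge multi-core deltas above
come from.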
Thanks.
--
tejun