Message-ID: <CAOm-9arFu63A9YJ6yVtm6_LdtbRKZg1Q3dz8WugdkBBQfoOWYw@mail.gmail.com>
Date: Wed, 25 Jul 2018 13:26:25 +0200
From: Bruce Merry <bmerry@....ac.za>
To: Shakeel Butt <shakeelb@...gle.com>
Cc: Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Greg Thelen <gthelen@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, cgroups@...r.kernel.org,
Linux MM <linux-mm@...ck.org>
Subject: Re: [PATCH] memcg: reduce memcg tree traversals for stats collection
On 25 July 2018 at 00:46, Shakeel Butt <shakeelb@...gle.com> wrote:
> I ran a simple benchmark which reads the root_mem_cgroup's stat file
> 1000 times in the presence of 2500 memcgs on cgroup-v1. The results are:
>
> Without the patch:
> $ time ./read-root-stat-1000-times
>
> real 0m1.663s
> user 0m0.000s
> sys 0m1.660s
>
> With the patch:
> $ time ./read-root-stat-1000-times
>
> real 0m0.468s
> user 0m0.000s
> sys 0m0.467s
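(The benchmark itself isn't shown in the thread; a minimal sketch of what read-root-stat-1000-times presumably does - the helper name and the assumption that it simply re-reads the v1 root memory.stat file are mine:)

```shell
#!/bin/sh
# read_stat_n FILE N: read FILE N times, discarding the contents.
# Hypothetical reconstruction of the benchmark loop; the real
# read-root-stat-1000-times script is not shown in the thread.
read_stat_n() {
    file="$1"
    n="$2"
    i=0
    while [ "$i" -lt "$n" ]; do
        cat "$file" > /dev/null || return 1
        i=$((i + 1))
    done
}

# Against the cgroup-v1 root memory controller this would be run as:
#   time read_stat_n /sys/fs/cgroup/memory/memory.stat 1000
```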
Thanks for cc'ing me. I've tried this patch using my test case and the
results are interesting. With the patch applied, running my script
only generates about 8000 new cgroups, compared to 40,000 before -
presumably because the optimisation has altered the timing.
On the other hand, if I run the script 5 times to generate 40000
zombie cgroups, the time to get stats for the root cgroup (cgroup-v1)
is almost unchanged: around 18ms with the patch versus 20ms before
(and the earlier run had slightly more cgroups, which may account for
part of the difference), nowhere near the almost 4x speedup you're
seeing in your test.
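(For reference, my script isn't shown here either; the mechanism it relies on, roughly - a sketch where the helper name and cgroup naming are illustrative, and the page-cache-charging step that actually produces the zombie is only indicated in a comment, since it needs a real memory-controller mount and root:)

```shell
#!/bin/sh
# make_memcg_and_remove ROOT NAME: create a memcg directory and remove
# it again. On a real cgroup-v1 memory mount, if page cache is charged
# to the group before the rmdir (e.g. by writing a file from a task
# inside it), the kernel-side memcg object outlives the directory - a
# "zombie" that stat collection must still traverse.
make_memcg_and_remove() {
    root="$1"
    name="$2"
    mkdir "$root/$name" || return 1
    # (On /sys/fs/cgroup/memory: move a task in via cgroup.procs and
    # write a file here, so its page cache stays charged to the group.)
    rmdir "$root/$name"
}

# Usage against the v1 memory hierarchy (requires root):
#   make_memcg_and_remove /sys/fs/cgroup/memory zombie-0
```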
Regards
Bruce
--
Bruce Merry
Senior Science Processing Developer
SKA South Africa