Message-ID: <6599ad830809111050h27d22369m8fc4faa47920f784@mail.gmail.com>
Date: Thu, 11 Sep 2008 10:50:44 -0700
From: "Paul Menage" <menage@...gle.com>
To: "Lai Jiangshan" <laijs@...fujitsu.com>,
"Andrew Morton" <akpm@...ux-foundation.org>
Cc: "Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH -mm 2/2] cgroup: fold struct cg_cgroup_link into struct css_set
On Thu, Sep 11, 2008 at 7:23 AM, Lai Jiangshan <laijs@...fujitsu.com> wrote:
>
> One struct cg_cgroup_link per link is very wasteful. This approach
> needs to allocate a struct cg_cgroup_link
> (css_set_count * hierarchy_count) times.
Correct - but in the common case, hierarchy_count==1. So it's 7
pointers (two list_heads and a pointer in the cg_cgroup_link, and a
list_head in the css_set) per css_set. Each additional hierarchy
introduces 5 more pointers per css_set, via a new cg_cgroup_link. (So
overall, 2 + 5*H pointers per css_set, where H is the number of mounted
hierarchies.)
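For reference, a simplified sketch of the current layout (field names
are approximate, and only the size-relevant fields are shown):

    struct list_head { struct list_head *next, *prev; }; /* as in <linux/list.h>: 2 pointers */

    struct cg_cgroup_link {
            struct list_head cgrp_link_list;  /* on the cgroup's list of links  */
            struct list_head cg_link_list;    /* on the css_set's list of links */
            struct css_set *cg;               /* the css_set being linked       */
    };      /* 2 + 2 + 1 = 5 pointers per mounted hierarchy */

    struct css_set {
            /* ... refcount, hash node, task list, subsys[] ... */
            struct list_head cg_links;        /* head of the link list: 2 pointers */
    };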
We're up to 9 cgroup subsystems in -mm right now, so your approach
consumes 18 pointers (9 list heads) per css_set, regardless of how
many hierarchies are mounted.
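With your patch the per-css_set cost is instead fixed at compile time,
roughly along these lines (field and constant names hypothetical, just
to illustrate how I read the approach):

    struct css_set {
            /* ... refcount, hash node, task list, subsys[] ... */
            /* one list_head per compiled-in subsystem, whether or not
             * the corresponding hierarchy is mounted */
            struct list_head cgrp_link_list[CGROUP_SUBSYS_COUNT];  /* 2 * 9 = 18 pointers */
    };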
So I think that the current solution saves memory when there are fewer
than four hierarchies mounted. As the number of cgroup subsystems
increases, the break-even point will increase. I expect that the most
common number of mounted hierarchies will be 0 (but there's only one
css_set in that case, so the overhead is irrelevant) or 1.
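Back-of-the-envelope, for anyone who wants to check the crossover (a
standalone user-space snippet, not kernel code; N is the number of
compiled-in subsystems, H the number of mounted hierarchies):

    #include <stdio.h>

    int main(void)
    {
            int N = 9;                        /* subsystems built into -mm   */
            for (int H = 0; H <= 5; H++) {
                    int current  = 2 + 5 * H; /* list_head in css_set plus one
                                                 cg_cgroup_link per hierarchy */
                    int proposed = 2 * N;     /* one list_head per subsystem,
                                                 independent of H             */
                    printf("H=%d: current=%2d proposed=%d\n",
                           H, current, proposed);
            }
            return 0;
    }

which crosses over between H=3 (17 vs 18 pointers) and H=4 (22 vs 18).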
>
> This patch removes lots of lines of code: it removes struct
> cg_cgroup_link and the corresponding code.
Yes, the code is definitely simpler after your patch. It's pretty much
how it was *before* I introduced the cg_cgroup_link structures to
allow an arbitrary number of hierarchies without bloating the css_set
structure. I'm not convinced that we want to go back to the original
way.
Paul