Message-Id: <20110228184821.f10dba19.kamezawa.hiroyu@jp.fujitsu.com>
Date: Mon, 28 Feb 2011 18:48:21 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Michal Hocko <mhocko@...e.cz>
Cc: Dave Hansen <dave@...ux.vnet.ibm.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] page_cgroup: Reduce allocation overhead for
page_cgroup array for CONFIG_SPARSEMEM v4
On Mon, 28 Feb 2011 10:53:16 +0100
Michal Hocko <mhocko@...e.cz> wrote:
> On Mon 28-02-11 18:23:22, KAMEZAWA Hiroyuki wrote:
> [...]
> > > From 84a9555741b59cb2a0a67b023e4bd0f92c670ca1 Mon Sep 17 00:00:00 2001
> > > From: Michal Hocko <mhocko@...e.cz>
> > > Date: Thu, 24 Feb 2011 11:25:44 +0100
> > > Subject: [PATCH] page_cgroup: Reduce allocation overhead for page_cgroup array for CONFIG_SPARSEMEM
> > >
> > > Currently we are allocating a single page_cgroup array per memory
> > > section (stored in mem_section->base) when CONFIG_SPARSEMEM is selected.
> > > This is a correct but memory-inefficient solution because the allocated
> > > memory (unless we fall back to vmalloc) is not kmalloc friendly:
> > > - 32b - 16384 entries (20B per entry) take 327680B so the
> > >   524288B slab cache is used
> > > - 32b with PAE - 131072 entries take 2621440B so the 4194304B
> > >   slab cache is used
> > > - 64b - 32768 entries (40B per entry) take 1310720B so the
> > >   2097152B slab cache is used
> > >
> > > This is ~37% wasted space per memory section and it adds up over the
> > > whole of memory. On an x86_64 machine it is something like 6MB per 1GB
> > > of RAM (8 x 128MB sections, each wasting roughly 768kB).
> > >
> > > We can reduce the internal fragmentation by using alloc_pages_exact,
> > > which allocates PAGE_SIZE-aligned blocks, so we get down to less than
> > > 4kB of wasted memory per section, which is much better.
> > >
> > > We still need a fallback to vmalloc because we have no guarantee that
> > > we will be able to get a contiguous block of that size (order-10) later
> > > on during hotplug events.
> > >
> > > Signed-off-by: Michal Hocko <mhocko@...e.cz>
> > > CC: Dave Hansen <dave@...ux.vnet.ibm.com>
> > > CC: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
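
(For readers following the numbers above, a minimal sketch of the allocation
pattern the patch describes - not the actual hunk; the helper name, the GFP
flags and the table_size parameter, which stands for
sizeof(struct page_cgroup) * PAGES_PER_SECTION, are only illustrative:)

static void *alloc_page_cgroup(size_t table_size)
{
	void *addr;

	/*
	 * alloc_pages_exact() returns PAGE_SIZE-granular blocks, so at
	 * most the tail of the last page is wasted (<4kB per section).
	 */
	addr = alloc_pages_exact(table_size, GFP_KERNEL | __GFP_NOWARN);
	if (addr)
		return addr;

	/*
	 * A large enough contiguous block may not be available later,
	 * e.g. for sections added by memory hotplug, so fall back to
	 * vmalloc.
	 */
	return vmalloc(table_size);
}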
> >
> > Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
>
> Thanks. I will repost it with Andrew in the CC.
>
> >
> > But... one nitpick, which may be my own fault.
> [...]
> > > +static void free_page_cgroup(void *addr)
> > > +{
> > > + if (is_vmalloc_addr(addr)) {
> > > + vfree(addr);
> > > + } else {
> > > + struct page *page = virt_to_page(addr);
> > > + if (!PageReserved(page)) { /* Is bootmem ? */
> >
> > I think we never see PageReserved if we just use alloc_pages_exact()/vmalloc().
>
> I have checked that and we really do not (unless I am missing some
> subtle side effects). Anyway, I think we should still at least BUG_ON in
> that case.
>
> > Maybe my old patch was not enough and this kind of junk is still left in
> > the original code.
>
> Should I incorporate it into the patch? I think that a separate one
> would be better for readability.
>
> ---
> From e7a897a42b526620eb4afada2d036e1c9ff9e62a Mon Sep 17 00:00:00 2001
> From: Michal Hocko <mhocko@...e.cz>
> Date: Mon, 28 Feb 2011 10:43:12 +0100
> Subject: [PATCH] page_cgroup array is never stored on reserved pages
>
> KAMEZAWA Hiroyuki noted that free_page_cgroup doesn't have to check for
> PageReserved because we never store the array on reserved pages
> (neither alloc_pages_exact nor vmalloc uses such pages).
>
> So we can replace the check by a BUG_ON.
>
> Signed-off-by: Michal Hocko <mhocko@...e.cz>
> CC: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Thank you.
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
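
(And, for reference, a minimal sketch of free_page_cgroup() with the
PageReserved check replaced by a BUG_ON as proposed above - again not the
actual hunk, the details are illustrative; table_size mirrors the allocation
size used in the sketch earlier in this mail:)

static void free_page_cgroup(void *addr)
{
	if (is_vmalloc_addr(addr)) {
		vfree(addr);
	} else {
		struct page *page = virt_to_page(addr);
		size_t table_size =
			sizeof(struct page_cgroup) * PAGES_PER_SECTION;

		/*
		 * Neither alloc_pages_exact() nor vmalloc() ever hands out
		 * bootmem/reserved pages, so hitting one here is a bug.
		 */
		BUG_ON(PageReserved(page));
		free_pages_exact(addr, table_size);
	}
}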
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/