Message-ID: <20101214173817.GH5638@random.random>
Date: Tue, 14 Dec 2010 18:38:17 +0100
From: Andrea Arcangeli <aarcange@...hat.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: Mel Gorman <mel@....ul.ie>, linux-mm@...ck.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org,
Marcelo Tosatti <mtosatti@...hat.com>,
Adam Litke <agl@...ibm.com>, Avi Kivity <avi@...hat.com>,
Hugh Dickins <hugh.dickins@...cali.co.uk>,
Rik van Riel <riel@...hat.com>,
Dave Hansen <dave@...ux.vnet.ibm.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Ingo Molnar <mingo@...e.hu>, Mike Travis <travis@....com>,
Christoph Lameter <cl@...ux-foundation.org>,
Chris Wright <chrisw@...s-sol.org>, bpicco@...hat.com,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Johannes Weiner <hannes@...xchg.org>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
Chris Mason <chris.mason@...cle.com>,
Borislav Petkov <bp@...en8.de>
Subject: Re: [PATCH 36 of 66] memcg compound
Hello Kame,
On Fri, Nov 19, 2010 at 10:10:41AM +0900, KAMEZAWA Hiroyuki wrote:
> If there are requirements of big page > 4GB, unsigned long should be used.
There aren't; even 1G is unthinkable at the moment (besides, it seems
that on some implementations the CPU lacks a real 1G TLB and just
prefetches into the same 2M TLB, so the TLB miss reads one less
cacheline, but in practice 1G doesn't seem to provide any measurable
runtime benefit compared to 2M THP). Things will always be
exponential: the benefit of going from 4k to 2M will always be an
order of magnitude bigger than the benefit of going from 2M to 1G, no
matter how natively the hardware handles 1G pages.
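
To put rough numbers behind that, here is a back-of-the-envelope
userspace sketch of the TLB-reach arithmetic; the 512-entry TLB is an
assumed round figure, not taken from any particular CPU:

#include <stdio.h>

int main(void)
{
	/* Assumed TLB size, only to show the ratios between page sizes */
	const unsigned long long entries = 512;
	const unsigned long long kb = 1024, mb = 1024 * kb, gb = 1024 * mb;

	/* Memory mappable without a TLB miss ("TLB reach") per page size */
	printf("4k reach: %6llu MB\n", entries * 4 * kb / mb);
	printf("2M reach: %6llu MB\n", entries * 2 * mb / mb);
	printf("1G reach: %6llu MB\n", entries * gb / mb);

	/*
	 * 4k -> 2M multiplies the reach by 512 and removes a page table
	 * level from every miss; 2M -> 1G only matters once the working
	 * set outgrows the 2M reach, and saves at most one more cacheline
	 * per page walk, which is the point made above.
	 */
	return 0;
}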
> > >  	int nr_oom_retries = MEM_CGROUP_RECLAIM_RETRIES;
> > >  	struct mem_cgroup *mem = NULL;
> > >  	int ret;
> > > -	int csize = CHARGE_SIZE;
> > > +	int csize = max(CHARGE_SIZE, (unsigned long) page_size);
> > >
>
> unsigned long here.
This is to shut off a warning, because CHARGE_SIZE is calculated as a
multiple of PAGE_SIZE, which is unsigned long. csize was already an
int and unsigned long is not required: like you point out, it's not
worth batching even pages as small as 2G, so there is no need for
unsigned long.
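
For reference, here is a standalone sketch of where the warning comes
from; the PAGE_SIZE and CHARGE_SIZE values below are made up and max()
is a simplified copy of the kernel's type-checking macro, just to show
why the cast is needed while csize itself can stay an int:

#include <stdio.h>

/* Simplified copy of the kernel's type-checking max() */
#define max(x, y) ({				\
	typeof(x) _max1 = (x);			\
	typeof(y) _max2 = (y);			\
	(void) (&_max1 == &_max2);		\
	_max1 > _max2 ? _max1 : _max2; })

#define PAGE_SIZE	4096UL			/* unsigned long, like the kernel's */
#define CHARGE_SIZE	(32 * PAGE_SIZE)	/* made-up multiple of PAGE_SIZE */

int main(void)
{
	int page_size = 2 * 1024 * 1024;	/* 2M THP, fits comfortably in an int */

	/*
	 * Without the cast, &_max1 and &_max2 have different pointer
	 * types (unsigned long * vs int *) and gcc warns "comparison of
	 * distinct pointer types lacks a cast".  The cast silences it;
	 * the result still fits, so csize itself can remain an int.
	 */
	int csize = max(CHARGE_SIZE, (unsigned long) page_size);

	printf("csize = %d\n", csize);
	return 0;
}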
> > > @@ -2491,14 +2503,14 @@ __do_uncharge(struct mem_cgroup *mem, co
> > >  	if (batch->memcg != mem)
> > >  		goto direct_uncharge;
> > >  	/* remember freed charge and uncharge it later */
> > > -	batch->bytes += PAGE_SIZE;
> > > +	batch->bytes += page_size;
>
> Hmm, isn't it simpler to avoid batched-uncharge when page_size > PAGE_SIZE ?
As you wish, I'm changing it like this.

Archs where the pmd is implemented purely in software might actually
be able to use page sizes smaller than 2M that would make sense to
batch, but for now, if you think this is simpler, I'll go for it. We
need simple.
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2503,6 +2503,9 @@ __do_uncharge(struct mem_cgroup *mem, co
 	if (!batch->do_batch || test_thread_flag(TIF_MEMDIE))
 		goto direct_uncharge;
 
+	if (page_size != PAGE_SIZE)
+		goto direct_uncharge;
+
 	/*
 	 * In typical case, batch->memcg == mem. This means we can
 	 * merge a series of uncharges to an uncharge of res_counter.
@@ -2511,9 +2514,9 @@ __do_uncharge(struct mem_cgroup *mem, co
 	if (batch->memcg != mem)
 		goto direct_uncharge;
 	/* remember freed charge and uncharge it later */
-	batch->bytes += page_size;
+	batch->bytes += PAGE_SIZE;
 	if (uncharge_memsw)
-		batch->memsw_bytes += page_size;
+		batch->memsw_bytes += PAGE_SIZE;
 	return;
 direct_uncharge:
 	res_counter_uncharge(&mem->res, page_size);
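
Just to spell out the resulting behaviour, a tiny userspace model of
the decision above (types and constants are made up, only the control
flow mirrors the hunks): huge pages always take the direct path, 4k
pages keep being batched:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL	/* made-up value, stands in for the real one */

struct batch_info {
	bool do_batch;
	unsigned long bytes;	/* coalesced uncharges, flushed later */
};

/* Mirrors only the control flow of __do_uncharge() after the change */
static void model_uncharge(struct batch_info *batch, unsigned long page_size)
{
	if (!batch->do_batch || page_size != PAGE_SIZE) {
		/* direct_uncharge: huge pages are uncharged immediately */
		printf("direct uncharge of %lu bytes\n", page_size);
		return;
	}
	/* 4k pages keep the existing batching behaviour */
	batch->bytes += PAGE_SIZE;
	printf("batched, %lu bytes pending\n", batch->bytes);
}

int main(void)
{
	struct batch_info batch = { .do_batch = true, .bytes = 0 };

	model_uncharge(&batch, PAGE_SIZE);		/* 4k -> batched */
	model_uncharge(&batch, 2UL * 1024 * 1024);	/* 2M THP -> direct */
	return 0;
}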