Message-Id: <20080317042653.d5a85911.pj@sgi.com>
Date: Mon, 17 Mar 2008 04:26:53 -0500
From: Paul Jackson <pj@....com>
To: Andi Kleen <andi@...stfloor.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
nickpiggin@...oo.com.au, "Christoph Lameter" <clameter@....com>,
"Ken Chen" <kenchen@...gle.com>, Adam Litke <agl@...ibm.com>
Subject: Re: [PATCH] [0/18] GB pages hugetlb support

Andi wrote:
> I hacked in also cpuset support. It would be good if
> Paul double checked that.

Well, from what I can see, Ken Chen wrote the code that deals with
constraints on hugetlb allocation. So I'll copy him on this reply,
along with the other two subject matter experts I know of in this area,
Christoph Lameter and Adam Litke.

The following is the only cpuset-related change I saw in this
patchset.  It looks pretty obvious to me ... just changing the code to
adapt to Andi's new 'struct hstate' for holding what had been global
hugetlb state.

@@ -1228,18 +1252,18 @@ static int hugetlb_acct_memory(long delt
 	 * semantics that cpuset has.
 	 */
 	if (delta > 0) {
-		if (gather_surplus_pages(delta) < 0)
+		if (gather_surplus_pages(h, delta) < 0)
 			goto out;
-		if (delta > cpuset_mems_nr(free_huge_pages_node)) {
-			return_unused_surplus_pages(delta);
+		if (delta > cpuset_mems_nr(h->free_huge_pages_node)) {
+			return_unused_surplus_pages(h, delta);
 			goto out;
 		}
 	}
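
For anyone reading along who doesn't live in mm/hugetlb.c, here is a toy
userspace model of what that hunk is doing.  This is not the kernel code --
the names follow the patch, everything else is illustrative -- but it shows
the point: the per-node free counter moves from a global array into
'struct hstate', and the cpuset check just sums that array over the nodes
the task's cpuset allows.

	#include <stdio.h>

	#define MAX_NUMNODES 4

	/* Per-page-size hugetlb state; in the patch this replaces
	 * what had been global counters. */
	struct hstate {
		unsigned long free_huge_pages_node[MAX_NUMNODES];
	};

	/* Stand-in for the current task's cpuset: which nodes it
	 * may allocate on. */
	static int node_allowed[MAX_NUMNODES] = { 1, 1, 0, 0 };

	/* Toy model of cpuset_mems_nr(): sum a per-node counter
	 * array over the nodes the cpuset permits. */
	static unsigned long cpuset_mems_nr(unsigned long *array)
	{
		unsigned long sum = 0;
		int node;

		for (node = 0; node < MAX_NUMNODES; node++)
			if (node_allowed[node])
				sum += array[node];
		return sum;
	}

	/* Toy model of the check in hugetlb_acct_memory(): refuse a
	 * reservation of 'delta' pages if the cpuset cannot supply
	 * that many free huge pages of this hstate's size. */
	static int hugetlb_acct_memory(struct hstate *h, long delta)
	{
		if (delta > 0 &&
		    delta > (long)cpuset_mems_nr(h->free_huge_pages_node))
			return -1;
		return 0;
	}

	int main(void)
	{
		struct hstate h = { .free_huge_pages_node = { 3, 2, 10, 10 } };

		/* Only nodes 0 and 1 are allowed, so 5 pages are visible. */
		printf("reserve 4: %s\n",
		       hugetlb_acct_memory(&h, 4) ? "refused" : "ok");
		printf("reserve 6: %s\n",
		       hugetlb_acct_memory(&h, 6) ? "refused" : "ok");
		return 0;
	}

Run it and the 6-page reservation is refused even though nodes 2 and 3 have
plenty of free huge pages, because the cpuset only allows nodes 0 and 1 --
which is the best-effort semantic the comment above that hunk describes.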

Andi claimed, in one of his replies earlier on this thread, that there
were further interactions between cpusets and later patches in the set,
namely "Add basic support for more than one hstate in hugetlbfs" and,
in part, "Add support to have individual hstates for each hugetlbfs
mount", but I don't yet understand what that interaction is.
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@....com> 1.940.382.4214