Message-ID: <alpine.DEB.1.10.0901281146260.13825@qirst.com>
Date: Wed, 28 Jan 2009 11:48:27 -0500 (EST)
From: Christoph Lameter <cl@...ux-foundation.org>
To: "Luck, Tony" <tony.luck@...el.com>
cc: Rick Jones <rick.jones2@...com>,
David Miller <davem@...emloft.net>,
"tj@...nel.org" <tj@...nel.org>,
"rusty@...tcorp.com.au" <rusty@...tcorp.com.au>,
"mingo@...e.hu" <mingo@...e.hu>,
"herbert@...dor.apana.org.au" <herbert@...dor.apana.org.au>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"hpa@...or.com" <hpa@...or.com>,
"brgerst@...il.com" <brgerst@...il.com>,
"ebiederm@...ssion.com" <ebiederm@...ssion.com>,
"travis@....com" <travis@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"steiner@....com" <steiner@....com>,
"hugh@...itas.com" <hugh@...itas.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"mathieu.desnoyers@...ymtl.ca" <mathieu.desnoyers@...ymtl.ca>,
"linux-ia64@...r.kernel.org" <linux-ia64@...r.kernel.org>
Subject: RE: [PATCH] percpu: add optimized generic percpu accessors
On Tue, 27 Jan 2009, Luck, Tony wrote:
> Managing a larger space could be done ... but at the expense of making
> the Alt-DTLB miss handler do a memory lookup to find the physical address
> of the per-cpu page needed (assuming that we allocate a bunch of random
> physical pages for use as per-cpu space rather than a single contiguous
> block of physical memory).
Could we not resize the area by using a single larger TLB entry?
> When do we know the total amount of per-cpu memory needed?
> 1) CONFIG time?
Would be easiest.
> 2) Boot time?
We could make the TLB entry size configurable with a kernel parameter.
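Something along these lines could parse it at boot (untested sketch; the
parameter name and variable are made up, just to illustrate):

#include <linux/init.h>		/* early_param() */
#include <linux/kernel.h>	/* memparse() */
#include <linux/log2.h>		/* roundup_pow_of_two() */

static unsigned long __initdata percpu_tlb_size = 1UL << 16;	/* assumed 64K default */

static int __init setup_percpu_tlb_size(char *str)
{
	/* memparse() accepts suffixes like 64K or 1M; round up so a
	 * single TLB entry of that size can map the per-cpu area. */
	percpu_tlb_size = roundup_pow_of_two(memparse(str, &str));
	return 0;
}
early_param("percpu_tlb_size", setup_percpu_tlb_size);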
> 3) Arbitrary run time?
We could reserve a larger virtual space and switch TLB entries as needed?
We would need to get larger order pages to do this.
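Roughly like this (illustrative only, not existing code; the helper name is
made up): grab a physically contiguous higher-order block and switch the TLB
entry covering the per-cpu window to point at it.

#include <linux/gfp.h>		/* alloc_pages() */
#include <linux/mm.h>		/* page_address() */

static void *grow_percpu_backing(unsigned int order)
{
	/* physically contiguous 2^order pages to back the larger window */
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);

	return page ? page_address(page) : NULL;
}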
--