Message-Id: <1242436626.27006.8623.camel@localhost.localdomain>
Date: Fri, 15 May 2009 18:17:06 -0700
From: Suresh Siddha <suresh.b.siddha@...el.com>
To: Tejun Heo <tj@...nel.org>
Cc: "JBeulich@...ell.com" <JBeulich@...ell.com>,
"andi@...stfloor.org" <andi@...stfloor.org>,
"mingo@...e.hu" <mingo@...e.hu>,
"linux-kernel-owner@...r.kernel.org"
<linux-kernel-owner@...r.kernel.org>,
"hpa@...or.com" <hpa@...or.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [GIT PATCH] x86,percpu: fix pageattr handling with remap
allocator
On Thu, 2009-05-14 at 05:49 -0700, Tejun Heo wrote:
> Hello,
>
> Upon ack, please pull from the following git tree.
>
> git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git x86-percpu-pageattr
>
> This patchset fixes a subtle bug in pageattr handling when the remap
> percpu first chunk allocator is in use and implements a percpu_alloc
> kernel parameter so that the allocator can be selected from the boot
> prompt.
>
> This problem was spotted by Jan Beulich.
>
> The remap allocator allocates a PMD page per cpu, returns whatever is
> unnecessary to the page allocator and remaps the PMD page into the
> vmalloc area to construct the first percpu chunk.  This is done to
> take advantage of large page mappings.
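
If I am reading the above right, the scheme boils down to roughly the
sketch below.  This is only my paraphrase, not the actual
arch/x86/kernel/setup_percpu.c code: pcpu_remap_sketch and its arguments
are made up, and vmap() is just standing in for the remap step (the real
code installs a single large-page PMD for the alias rather than 4k PTEs):

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/topology.h>
#include <linux/vmalloc.h>
#include <asm/pgtable.h>

/* illustrative only -- not the real remap allocator */
static void *pcpu_remap_sketch(unsigned int cpu, size_t unit_size)
{
	unsigned int order = get_order(PMD_SIZE);
	unsigned int total = PMD_SIZE >> PAGE_SHIFT;
	unsigned int used = DIV_ROUND_UP(unit_size, PAGE_SIZE);
	struct page *block, **pages;
	void *vaddr = NULL;
	unsigned int i;

	pages = kcalloc(used, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	/* one naturally aligned, PMD-sized block on this cpu's node */
	block = alloc_pages_node(cpu_to_node(cpu),
				 GFP_KERNEL | __GFP_ZERO, order);
	if (!block) {
		kfree(pages);
		return NULL;
	}

	/* split the higher-order block so single pages can be freed */
	split_page(block, order);

	for (i = 0; i < total; i++) {
		if (i < used)
			pages[i] = block + i;	/* keep for the percpu unit */
		else
			__free_page(block + i);	/* return the excess */
	}

	/*
	 * Alias the kept pages in the vmalloc area.  The real allocator
	 * installs one large-page PMD here instead of 4k PTEs.
	 */
	vaddr = vmap(pages, used, VM_MAP, PAGE_KERNEL);
	kfree(pages);
	return vaddr;
}
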
Tejun, can you please educate me on why we need to map this first percpu
chunk (which is pre-allocated during boot and is physically contiguous)
into the vmalloc area?  Perhaps the same question applies to the other
dynamically allocated secondary chunks?  (As far as I can see, all the
chunk allocations seem to be physically contiguous and are later mapped
into the vmalloc area.)

Skipping that mapping should simplify things quite a bit (at least for
the first percpu chunk).  I guess I am missing something obvious.
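
To be concrete about what I mean: for a physically contiguous first
chunk I would have naively expected its direct-mapping address to be
usable as-is, along the lines of the purely illustrative, made-up helper
below:

#include <linux/mm.h>

/* illustrative only -- not a real kernel helper */
static void *pcpu_direct_addr_sketch(struct page *first_page)
{
	/*
	 * The chunk is physically contiguous and sits in lowmem, so the
	 * kernel direct mapping already covers it; no vmalloc alias
	 * seems necessary.
	 */
	return page_address(first_page);
}
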
thanks,
suresh