Message-Id: <4A0C46B80200007800000ED4@vpn.id2.novell.com>
Date: Thu, 14 May 2009 15:28:40 +0100
From: "Jan Beulich" <JBeulich@...ell.com>
To: "Tejun Heo" <tj@...nel.org>
Cc: <mingo@...e.hu>, <andi@...stfloor.org>, <tglx@...utronix.de>,
<linux-kernel@...r.kernel.org>,
<linux-kernel-owner@...r.kernel.org>, <hpa@...or.com>
Subject: Re: [GIT PATCH] x86,percpu: fix pageattr handling with remap allocator

>>> Tejun Heo <tj@...nel.org> 14.05.09 14:49 >>>
>The remap allocator allocates a PMD page per CPU, returns whatever is
>unnecessary to the page allocator, and remaps the PMD page into the
>vmalloc area to construct the first percpu chunk. This is done to take
>advantage of large page mappings. However, it creates active aliases
>for the recycled pages. When some user allocates one of the recycled
>pages and tries to change its pageattr, the remapped PMD alias might
>end up with different attributes from the regular page-mapped address,
>which according to Andi can lead to subtle data corruption.
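
For illustration, the hazard boils down to something like the following
minimal sketch (made-up example, not the actual percpu or pageattr code):
a page the remap allocator handed back is still reachable through the
cached large-page alias in the vmalloc area, so changing its attributes
via the linear mapping leaves the two mappings disagreeing.

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <asm/cacheflush.h>

/*
 * Made-up illustration: "page" stands for one of the pages the remap
 * allocator returned to the page allocator while the PMD-sized vmalloc
 * alias covering it is still live and cached.
 */
static int attr_alias_example(void)
{
        struct page *page = alloc_page(GFP_KERNEL); /* may be a recycled page */
        unsigned long addr;

        if (!page)
                return -ENOMEM;
        addr = (unsigned long)page_address(page);

        /*
         * Make the linear mapping uncached (say, for a DMA buffer).
         * Nothing updates the large-page alias in the vmalloc area, so
         * the same physical page is now mapped both cached and
         * uncached, which is the subtle corruption Andi refers to.
         */
        set_memory_uc(addr, 1);

        /* ... use the buffer ... */

        set_memory_wb(addr, 1);
        __free_page(page);
        return 0;
}
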
In order to reduce the amount of work needed during lookup, as well as
the chance of a collision in the first place, wouldn't it be reasonable
to use as much of an allocated 2/4M page as possible rather than
returning whatever is left after a single CPU got its per-CPU memory
chunk from it? I.e. you'd return only those (few) pages that either no
longer fit another CPU's chunk or that are left over after running
through all CPUs.
Or is there some hidden requirement that each CPU's per-CPU area must
start on a PMD boundary?
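
To make that concrete, here is a rough sketch of the arithmetic I mean
(the names and the unit size are invented for the example; this is not
the existing setup_percpu.c code): pack PMD_SIZE / unit_size CPUs into
each large page and only hand back the tail.

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/cpumask.h>

/* Example value only; the real unit size is computed at boot. */
#define EXAMPLE_UNIT_SIZE       (256UL << 10)

static unsigned long pages_handed_back(unsigned long pmd_size)
{
        unsigned int units_per_pmd = pmd_size / EXAMPLE_UNIT_SIZE;
        unsigned int nr_pmds = DIV_ROUND_UP(nr_cpu_ids, units_per_pmd);

        /*
         * Only the tail of the last large page goes back to the page
         * allocator, instead of (pmd_size - unit_size) per CPU as with
         * the current one-PMD-page-per-CPU scheme, so far fewer
         * recycled pages carry a live large-page alias.
         */
        return (nr_pmds * pmd_size - nr_cpu_ids * EXAMPLE_UNIT_SIZE)
                >> PAGE_SHIFT;
}

With the example numbers and 32 CPUs that would be four 2M pages under
PAE instead of 32, with correspondingly fewer recycled pages to track.
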
This would additionally address a potential problem on 32-bit:
currently, for a 32-CPU system you consume half of the vmalloc space
with PAE (on non-PAE you'd even exhaust it, though I think it's
unreasonable to expect a system with 32 CPUs not to need PAE).
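
Putting rough numbers on that (assuming the default 128M vmalloc area
on i386 and one PMD page per CPU as today):

        with PAE (2M PMD pages):  32 CPUs * 2M =  64M, i.e. half of 128M
        without PAE (4M pages):   32 CPUs * 4M = 128M, i.e. all of it
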
Jan