Message-ID: <Pine.LNX.4.64.0901151424360.28153@quilx.com>
Date: Thu, 15 Jan 2009 14:26:26 -0600 (CST)
From: Christoph Lameter <cl@...ux-foundation.org>
To: Rusty Russell <rusty@...tcorp.com.au>
cc: "Eric W. Biederman" <ebiederm@...ssion.com>,
Tejun Heo <tj@...nel.org>, Ingo Molnar <mingo@...e.hu>,
travis@....com,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"H. Peter Anvin" <hpa@...or.com>,
Andrew Morton <akpm@...ux-foundation.org>, steiner@....com,
Hugh Dickins <hugh@...itas.com>
Subject: Re: regarding the x86_64 zero-based percpu patches
On Thu, 15 Jan 2009, Rusty Russell wrote:
> On Tuesday 13 January 2009 14:37:38 Eric W. Biederman wrote:
> > It isn't incompatible with a per-cpu virtual mapping. It allows the
> > possibility of each cpu reusing the same chunk of virtual address
> > space for per cpu memory.
>
> This can be done (IA64 does it today), but it's not generically useful.
> You can use it to frob a few simple values, but it means you can't store
> any pointers, and that just doesn't fly in general kernel code.
Well, if we can have some surety that we are not going to store pointers
to percpu data anywhere, then this would work.
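To make the restriction concrete, here is a minimal userspace sketch (not
kernel code; pcpu_backing, current_cpu and pcpu_resolve are names made up
for illustration). Under a shared per-cpu virtual window the "address" of a
percpu object is the same on every cpu, so a stored pointer silently
resolves to whichever cpu happens to dereference it:

#include <stdio.h>

#define NR_CPUS     2
#define PCPU_WORDS  16

/* One copy of the percpu area per cpu; a shared-window scheme would map
 * each of these at the same virtual address on its own cpu. */
static int pcpu_backing[NR_CPUS][PCPU_WORDS];
static int current_cpu;			/* which cpu is "running" */

/* The only meaningful part of such a percpu "pointer" is the offset; a
 * dereference always resolves against the currently running cpu. */
static int *pcpu_resolve(size_t offset)
{
	return &pcpu_backing[current_cpu][offset];
}

int main(void)
{
	size_t counter = 0;		/* the identical "address" every cpu sees */

	current_cpu = 0;
	*pcpu_resolve(counter) = 42;	/* cpu 0 frobs its own counter: fine */

	current_cpu = 1;		/* the saved "pointer" is later used on cpu 1 */
	printf("cpu 1 reads %d through the same address\n",
	       *pcpu_resolve(counter));
	/* Prints 0: the saved address now names cpu 1's copy, which is why
	 * generic code cannot stash pointers to percpu objects. */
	return 0;
}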
> > I think it would be nice if the percpu area could grow and would not be
> > a fixed size at boot time, I'm not particularly convinced it has to.
>
> I used to be convinced it had to grow, but Christoph showed otherwise.
> Nonetheless, it's an annoying restriction which is going to bite us in
> the ass repeatedly as coders use per_cpu on random sizes.
Not exactly. I implemented a minimal version that had only limited use; I
was fully intending to add the further bloat needed for dynamically
extendable percpu areas in the end. Most of the early cpu_alloc patchsets
already include that code.
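The shape of that extendable variant is roughly the following (a toy
userspace sketch, not the actual cpu_alloc code; pcpu_chunk, cpu_alloc_toy
and per_cpu_ptr_toy are invented names). The area grows simply by appending
another chunk, each carrying one copy per cpu:

#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS    4
#define CHUNK_SIZE 4096

/* Each chunk holds one CHUNK_SIZE copy per cpu plus a bump pointer;
 * alignment handling and freeing are omitted for brevity. */
struct pcpu_chunk {
	struct pcpu_chunk *next;
	size_t used;
	void *base[NR_CPUS];
};

struct pcpu_ref {			/* handle: which chunk, at what offset */
	struct pcpu_chunk *chunk;
	size_t off;
};

static struct pcpu_chunk *chunks;

static struct pcpu_chunk *new_chunk(void)
{
	struct pcpu_chunk *c = calloc(1, sizeof(*c));

	if (!c)
		return NULL;
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		c->base[cpu] = calloc(1, CHUNK_SIZE);
	c->next = chunks;
	chunks = c;
	return c;
}

static struct pcpu_ref cpu_alloc_toy(size_t size)
{
	struct pcpu_chunk *c = chunks;
	struct pcpu_ref r = { NULL, 0 };

	if (!c || c->used + size > CHUNK_SIZE)
		c = new_chunk();	/* this is where the area grows */
	if (!c)
		return r;
	r.chunk = c;
	r.off = c->used;
	c->used += size;
	return r;
}

static void *per_cpu_ptr_toy(struct pcpu_ref r, int cpu)
{
	return (char *)r.chunk->base[cpu] + r.off;
}

int main(void)
{
	struct pcpu_ref counter = cpu_alloc_toy(sizeof(int));

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		*(int *)per_cpu_ptr_toy(counter, cpu) = cpu;
	printf("cpu 2's copy holds %d\n",
	       *(int *)per_cpu_ptr_toy(counter, 2));
	return 0;
}

The point is only that per_cpu_ptr()-style accesses keep working unchanged
while the backing store grows chunk by chunk after boot.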
--