Message-Id: <200902192244.15055.rusty@rustcorp.com.au>
Date: Thu, 19 Feb 2009 22:44:14 +1030
From: Rusty Russell <rusty@...tcorp.com.au>
To: Ingo Molnar <mingo@...e.hu>
Cc: Tejun Heo <tj@...nel.org>, tglx@...utronix.de, x86@...nel.org,
linux-kernel@...r.kernel.org, hpa@...or.com, jeremy@...p.org,
cpw@....com
Subject: Re: [PATCHSET x86/core/percpu] implement dynamic percpu allocator
On Thursday 19 February 2009 21:36:31 Ingo Molnar wrote:
>
> * Rusty Russell <rusty@...tcorp.com.au> wrote:
>
> > On Thursday 19 February 2009 00:13:31 Ingo Molnar wrote:
> > >
> > > * Tejun Heo <tj@...nel.org> wrote:
> > >
> > > > 0001-vmalloc-call-flush_cache_vunmap-from-unmap_kernel.patch
> > > > 0002-module-fix-out-of-range-memory-access.patch
> > >
> > > Hm, these two seem to be .29 material too, agreed?
> > >
> > > Rusty, if the fixes are fine with you i can put those two
> > > commits into tip/core/urgent straight away, the full string of
> > > 10 commits into tip/core/percpu and thus we'd avoid duplicate
> > > (or even conflicting) commits.
> >
> > No, the second one is not .29 material; it's a nice, but
> > theoretical, fix.
>
> Can it never trigger?
Actually, checked again. It's not even necessary AFAICT (tho a comment
would be nice):
	for (i = 0; i < pcpu_num_used; ptr += block_size(pcpu_size[i]), i++) {
		/* Extra for alignment requirement. */
		extra = ALIGN((unsigned long)ptr, align) - (unsigned long)ptr;
		BUG_ON(i == 0 && extra != 0);
		if (pcpu_size[i] < 0 || pcpu_size[i] < extra + size)
			continue;

		/* Transfer extra to previous block. */
		if (pcpu_size[i-1] < 0)
			pcpu_size[i-1] -= extra;
		else
			pcpu_size[i-1] += extra;

pcpu_size[0] is *always* negative: it's marked allocated at initialization
(it's the static per-cpu allocations).
Sorry I didn't examine more closely,
Rusty.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/