Message-ID: <499E1E99.6030508@kernel.org>
Date: Fri, 20 Feb 2009 12:08:09 +0900
From: Tejun Heo <tj@...nel.org>
To: Rusty Russell <rusty@...tcorp.com.au>
CC: Ingo Molnar <mingo@...e.hu>, tglx@...utronix.de, x86@...nel.org,
linux-kernel@...r.kernel.org, hpa@...or.com, jeremy@...p.org,
cpw@....com
Subject: Re: [PATCHSET x86/core/percpu] implement dynamic percpu allocator

Rusty Russell wrote:
>>>> Rusty, if the fixes are fine with you i can put those two
>>>> commits into tip/core/urgent straight away, the full string of
>>>> 10 commits into tip/core/percpu and thus we'd avoid duplicate
>>>> (or even conflicting) commits.
>>> No, the second one is not .29 material; it's a nice, but
>>> theoretical, fix.
>> Can it never trigger?
>
> Actually, checked again. It's not even necessary AFAICT (tho a comment
> would be nice):
>
>         for (i = 0; i < pcpu_num_used; ptr += block_size(pcpu_size[i]), i++) {
>                 /* Extra for alignment requirement. */
>                 extra = ALIGN((unsigned long)ptr, align) - (unsigned long)ptr;
>                 BUG_ON(i == 0 && extra != 0);
>
>                 if (pcpu_size[i] < 0 || pcpu_size[i] < extra + size)
>                         continue;
>
>                 /* Transfer extra to previous block. */
>                 if (pcpu_size[i-1] < 0)
>                         pcpu_size[i-1] -= extra;
>                 else
>                         pcpu_size[i-1] += extra;
>
> pcpu_size[0] is *always* negative: it's marked allocated at initialization
> (it's the static per-cpu allocations).
>
> Sorry I didn't examine more closely,

Ah... okay, right.  I took the code and used it in the chunk area
allocator, where block 0 isn't guaranteed to be occupied, saw the
problem trigger there, and then assumed the modalloc allocator shared
the same problem.  So the fix is unnecessary, but the code could really
use a comment explaining this.
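
For reference, a toy, self-contained sketch of the signed-size block
scheme quoted above (the names, sizes and the omitted block splitting
are made up for illustration; this is not the actual percpu_modalloc
code).  It shows why a slot 0 that is pre-marked allocated keeps i == 0
out of the "transfer extra to previous block" path:

#include <stdio.h>

#define NR_BLOCKS 16
#define ALIGN_UP(x, a)  (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

/*
 * Signed-size block map: negative means allocated, positive means free.
 * Slot 0 is pre-marked allocated for the "static" area, like pcpu_size[0]
 * above, so the walk below never touches a block at index -1.
 */
static long pcpu_size[NR_BLOCKS] = { -8200, 4096 };
static int pcpu_num_used = 2;

static unsigned long block_size(long size)
{
        return (unsigned long)(size < 0 ? -size : size);
}

/* First-fit walk mirroring the quoted loop (tail splitting omitted). */
static long sketch_alloc(unsigned long size, unsigned long align)
{
        unsigned long ptr = 0;          /* offset of block i from the base */
        unsigned long extra;
        int i;

        for (i = 0; i < pcpu_num_used; ptr += block_size(pcpu_size[i]), i++) {
                /* Slack needed to reach the requested alignment. */
                extra = ALIGN_UP(ptr, align) - ptr;

                /*
                 * Allocated blocks and too-small free blocks are skipped.
                 * Because pcpu_size[0] < 0 by construction, i == 0 always
                 * takes this continue, so pcpu_size[i-1] below is safe.
                 */
                if (pcpu_size[i] < 0 || pcpu_size[i] < (long)(extra + size))
                        continue;

                /* Transfer the alignment slack to the previous block. */
                if (pcpu_size[i-1] < 0)
                        pcpu_size[i-1] -= (long)extra;
                else
                        pcpu_size[i-1] += (long)extra;
                pcpu_size[i] -= (long)extra;

                /* Mark the whole block allocated (a real allocator would
                 * split off the unused tail as a new free block). */
                pcpu_size[i] = -pcpu_size[i];
                return (long)(ptr + extra);
        }
        return -1;      /* no block fits */
}

int main(void)
{
        long off = sketch_alloc(100, 64);

        printf("allocated at offset %ld\n", off);
        printf("blocks: %ld %ld\n", pcpu_size[0], pcpu_size[1]);
        return 0;
}

With those made-up numbers it should print an offset of 8256: the 56
bytes of alignment slack migrate into the already-allocated slot 0 and
the new block starts 64-byte aligned, without ever hitting the i == 0
case.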
What should we do about #tj-percpu?  Ingo, do you want me to rebase the
tree sans the second patch?

Thanks.
--
tejun