Message-ID: <20091112110741.GC24684@elte.hu>
Date: Thu, 12 Nov 2009 12:07:41 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Tejun Heo <tj@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Linux Kernel <linux-kernel@...r.kernel.org>,
Yinghai Lu <yhlu.kernel@...il.com>
Subject: Re: [GIT PULL] percpu fixes for 2.6.32-rc6
* Tejun Heo <tj@...nel.org> wrote:
> >         if (reserved && pcpu_reserved_chunk) {
> >
> > into an inline helper function, something like __pcpu_alloc_reserved().
> >
> > It's a rare special case anyway. The helper could return with pcpu_lock
> > always taken, so the above branch would look like this:
> >
> >         if (unlikely(reserved)) {
> >                 off = __pcpu_alloc_reserved(&chunk, size, align, &err);
> >                 if (off < 0)
> >                         goto fail_unlock;
> >                 goto area_found;
> >         }
> >
> > Which is a cleaner flow IMO, and which simplifies pcpu_alloc().
>
> Hmmm... The thing is that the nesting isn't that deep there [...]
Well, the pcpu_alloc() function is 115 lines, which is a bit long. It
does 2-3 things, while a function should try to do one thing.
Putting the reserved allocation into a separate function also makes the
'main' logic path more visible, less obstructed by rare-case details -
see the sketch below.
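Something like this - a rough, uncompiled sketch only, reusing
pcpu_need_to_extend()/pcpu_extend_area_map() from your fix and
pcpu_alloc_area()/contig_hint from the existing code, and hand-waving
the irqsave flags handling (the real thing would have to pass the
caller's flags around or switch to spin_lock_irq()):

/*
 * Rough sketch: serve a reserved allocation from pcpu_reserved_chunk.
 * Called with pcpu_alloc_mutex and pcpu_lock held; returns with
 * pcpu_lock still held in all cases, so the caller's fail_unlock path
 * stays unchanged.  Returns the allocation offset on success, a
 * negative number on failure with *errp set.
 */
static int __pcpu_alloc_reserved(struct pcpu_chunk **chunkp, size_t size,
                                 size_t align, const char **errp)
{
        struct pcpu_chunk *chunk = pcpu_reserved_chunk;
        int new_alloc, off;

        *chunkp = chunk;

        if (!chunk || size > chunk->contig_hint) {
                *errp = "alloc from reserved chunk failed";
                return -ENOSPC;
        }

        /* extend the area map as needed, dropping pcpu_lock meanwhile */
        while ((new_alloc = pcpu_need_to_extend(chunk))) {
                spin_unlock_irq(&pcpu_lock);
                if (pcpu_extend_area_map(chunk, new_alloc) < 0) {
                        *errp = "failed to extend area map of reserved chunk";
                        spin_lock_irq(&pcpu_lock);
                        return -ENOMEM;
                }
                spin_lock_irq(&pcpu_lock);
        }

        off = pcpu_alloc_area(chunk, size, align);
        if (off < 0)
                *errp = "alloc from reserved chunk failed";

        return off;
}

Note how the deepest statement now sits at three tab stops instead of
four, and the error string fits on a single line.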
The indentation i pinpointed is 4 levels deep:
err = "failed to extend area map of "
"reserved chunk";
which is a bit too much IMO - the code starts in the middle of the
screen and there's barely any space left to do anything meaningful.
But there are other line-wrap artifacts further down as well:
                        if (pcpu_extend_area_map(chunk,
                                                 new_alloc) < 0) {
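( Just to illustrate: at one tab stop less the same call already fits
  on a single 80-col line with room to spare:

                if (pcpu_extend_area_map(chunk, new_alloc) < 0) {

  which is the kind of readability win that factoring out deep branches
  buys. )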
But ... there are no hard rules here, and i've seen functions where 4
levels of indentation were just fine. Anyway, i just gave you my
opinion - i'm definitely on the nitpicky side of the code-quality
equilibrium, YMMV.
Ingo