Message-ID: <20080731103002.GE488@elte.hu>
Date: Thu, 31 Jul 2008 12:30:02 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Rusty Russell <rusty@...tcorp.com.au>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
	linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mike Travis <travis@....com>
Subject: Re: [git pull] cpus4096 fixes

* Rusty Russell <rusty@...tcorp.com.au> wrote:

> On Monday 28 July 2008 18:16:39 Ingo Molnar wrote:
> > * Rusty Russell <rusty@...tcorp.com.au> wrote:
> > > Mike: I now think the right long-term answer is Linus' dense cpumap
> > > idea + a convenience allocator for cpumasks. We sweep the kernel for
> > > all on-stack vars and replace them with one or the other. Thoughts?
> >
> > The dense cpumap for constant cpumasks is OK as it's clever, compact and
> > static.
> >
> > All-dynamic allocator for on-stack cpumasks ... is a less obvious
> > choice.
>
> Sorry, I was unclear. "long-term" == "more than 4096 CPUs", since I
> thought that was Mike's aim. If we only want to hack up 4k CPUs and
> stop, then I understand the current approach.
>
> If we want huge cpu numbers, I think cpumask_alloc/free gives the
> clearest code. So our approach is backwards: let's do that *then* put
> ugly hacks in if it's really too slow.
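
For concreteness, here is a rough sketch of the two things being compared
above. The example functions and the exact cpumask_alloc()/cpumask_free()
signatures are assumptions for illustration, not existing APIs. First,
today's on-stack cpumask_t temporary versus the proposed allocator:

/*
 * Sketch only: cpumask_alloc()/cpumask_free() are the interface
 * proposed above; the GFP argument and return convention are assumed.
 */
static int count_online_subset_stack(const cpumask_t *mask)
{
	cpumask_t tmp;	/* NR_CPUS bits on the stack: 512 bytes at NR_CPUS=4096 */

	cpus_and(tmp, *mask, cpu_online_map);
	return cpus_weight(tmp);
}

static int count_online_subset_alloc(const cpumask_t *mask)
{
	cpumask_t *tmp = cpumask_alloc(GFP_KERNEL);	/* assumed signature */
	int ret;

	if (!tmp)
		return -ENOMEM;
	cpus_and(*tmp, *mask, cpu_online_map);
	ret = cpus_weight(*tmp);
	cpumask_free(tmp);
	return ret;
}

And one possible shape of the dense cpumap for the constant single-bit
masks - an assumption about what that idea could look like, not a patch:
BITS_PER_LONG+1 overlapping rows of bits, so that all NR_CPUS
cpumask_of_cpu() values share a few kilobytes of static storage instead
of NR_CPUS full-size masks:

/*
 * Row r (r >= 1) has bit (r-1) set in its word 0; row 0 stays all
 * zeroes so we can safely step backwards into it.  Shifting the
 * returned pointer back by cpu/BITS_PER_LONG words puts the set bit
 * at the right word index, and every other word visible through the
 * returned mask is zero.
 */
#define MASK_DECLARE_1(x)	[x + 1][0] = 1UL << (x)
#define MASK_DECLARE_2(x)	MASK_DECLARE_1(x), MASK_DECLARE_1((x) + 1)
#define MASK_DECLARE_4(x)	MASK_DECLARE_2(x), MASK_DECLARE_2((x) + 2)
#define MASK_DECLARE_8(x)	MASK_DECLARE_4(x), MASK_DECLARE_4((x) + 4)

static const unsigned long
cpu_bit_bitmap[BITS_PER_LONG + 1][BITS_TO_LONGS(NR_CPUS)] = {
	MASK_DECLARE_8(0),  MASK_DECLARE_8(8),
	MASK_DECLARE_8(16), MASK_DECLARE_8(24),
#if BITS_PER_LONG > 32
	MASK_DECLARE_8(32), MASK_DECLARE_8(40),
	MASK_DECLARE_8(48), MASK_DECLARE_8(56),
#endif
};

static const cpumask_t *dense_cpumask_of_cpu(unsigned int cpu)
{
	const unsigned long *p = cpu_bit_bitmap[1 + cpu % BITS_PER_LONG];

	p -= cpu / BITS_PER_LONG;
	return (const cpumask_t *)p;
}

That keeps the constant masks clever, compact and static; the on-stack
temporaries are what cpumask_alloc()/cpumask_free() would replace.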

My only worry with that principle is that the "does it really hurt?"
question is seldom provable on a standalone basis.

Creeping bloat and creeping slowdowns are the hardest to catch. A cycle
here, a byte there, and it mounts up quickly. Coupled with faster but
less deterministic CPUs, it's pretty hard to prove a slowdown even with
very careful profiling. We only catch the truly egregious cases that
manage to shine through the general haze of other changes - and the haze
is thickening every year.

I don't fundamentally disagree with turning cpumasks into standalone
objects on large machines though. I just think that our profiling
methods are simply not good enough at the moment to trace small
slowdowns back to their source commits quickly enough. So the "we won't
do it if it hurts" notion, while I agree with it, does not fulfill its
promise in practice.

[ We might need something like a simulated reference CPU where various
  "reference" performance tests are 100% repeatable and slowdowns are
  thus 100% provable and bisectable. That CPU would simulate a cache and
  would be modern in most other respects - just that the results it
  produces would be fully deterministic in virtual time.

  The problem is that hardware is not fast enough for that kind of
  simulation yet, IMO (the tools exist, but it would not be fun at all
  to work in such a simulated environment in practice - hence kernel
  developers would generally ignore it) - so there will be a few years
  of uncertainty still. ]

Ingo