Message-ID: <47C863CB.404@qualcomm.com>
Date: Fri, 29 Feb 2008 11:58:03 -0800
From: Max Krasnyanskiy <maxk@...lcomm.com>
To: Ingo Molnar <mingo@...e.hu>
CC: Jason Baron <jbaron@...hat.com>,
Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
Rusty Russell <rusty@...tcorp.com.au>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [patch 1/2] add ALL_CPUS option to stop_machine_run()
Ingo Molnar wrote:
> * Max Krasnyanskiy <maxk@...lcomm.com> wrote:
>
>> btw Being an RT guy you do not think that stop machine is evil ? [...]
>
> i'm not "an RT guy", -rt is just one of the many projects i've been
> involved with.
>
> and no, i dont think stop machine is "evil" - it's currently the best
> way to do certain things. If you can solve it better then sure, i'm
> awaiting your patches - but the only patch i saw from you so far was the
> one that turned off stop-machine for isolated cpus - which was
> incredibly broken and ignored the problem altogether.
Ingo, I got it. My patch was a hack. Moving on. Seriously, there is no need to
say it ten thousand times ;-).
You clipped the part where I elaborated on what exactly is evil about stop
machine. I clearly said that, yes, for some things there is just no other way,
but in general we should _try_ to avoid it. Note that I did not say "we must";
I'm saying we should try.
> Right now the answer is: "if you want to do hard RT then avoid doing
> things like loading modules". (which you should avoid while doing
> hard-RT anyway)
That's just not practical. Sure, you can have some kind of stripped-down
machine, but then you lose a lot of flexibility. Again, "should" is the keyword
here. For a lot of workloads, hard-RT has to coexist with a bunch of other things.
Max
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/