Message-ID: <a781481a0707091224s3fb1a2acr6d3ccce091480f61@mail.gmail.com>
Date: Tue, 10 Jul 2007 00:54:26 +0530
From: "Satyam Sharma" <satyam.sharma@...il.com>
To: "Avi Kivity" <avi@...ranet.com>
Cc: "Andi Kleen" <andi@...stfloor.org>, linux-kernel@...r.kernel.org,
KVM <kvm-devel@...ts.sourceforge.net>,
"Andrew Morton" <akpm@...ux-foundation.org>
Subject: Re: [PATCH 17/20] SMP: Implement on_cpu()
Hi,
ISTR participating in a similar discussion some time back, but ...
anyway, I don't like the change in semantics of smp_call_function()
being proposed here *at* *all* ...
On 7/9/07, Avi Kivity <avi@...ranet.com> wrote:
> >> This defines on_cpu() which is similar to smp_call_function_single()
> >> except that it works if cpu happens to be the current cpu. Can also be
> >> seen as a complement to on_each_cpu() (which also doesn't treat the
> >> current cpu specially).
I like the patch as originally proposed here. For the sake of
correctness, it is _compulsory_ to wrap a get_cpu() / put_cpu()
pair around calls to smp_call_function{_single} in any case,
so it makes sense to provide a function that does this wrapping
itself, to reduce the likelihood of bugs and to get rid of the open-coding.
[ In fact I don't like the fact that for the UP case you're simply
executing the function locally without even checking that the
cpu argument passed is indeed == 0. We had discussed this
previously and you did mention that cpu == 0 for !SMP is
assumed to be true, but I don't see what we lose by asserting
that "trivial assumption" either. ]
On 7/9/07, Andi Kleen <andi@...stfloor.org> wrote:
> [...]
> on_each_cpu() was imho always a mistake. It would have been better
> to just fix smp_call_function() directly
I'm not sure what you mean by "fix" here, but if you're proposing
that we change smp_call_function() semantics to _include_ the
current CPU (and just run the given function locally also along
with the others -- and hence get rid of on_each_cpu) then I'm sorry
but I'll have to *violently* disagree with that. Please remember that
the current CPU _must_ be treated specially in a whole *lot* of
usage scenarios ...
Take smp_send_stop() for instance. We need to send a suicide
function:
for (;;)
halt();
to all _other_ CPUs *only* -- it would be *insane* to include ourselves
(the current CPU, the current thread) and execute this suicide function
_locally_ *in the current thread* too.
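Roughly, the pattern looks like this (illustrative only -- the real
arch code does more, and stop_this_cpu() is just a placeholder name
here):

static void stop_this_cpu(void *dummy)
{
	local_irq_disable();
	for (;;)
		halt();
}

void smp_send_stop(void)
{
	/* runs on every online CPU *except* the caller; wait == 0
	 * because the targets never return from the function */
	smp_call_function(stop_this_cpu, NULL, 0, 0);
}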
OTOH, there are plenty of situations where we actually _want_ to
get some function executed on *each* CPU (_including_ the current
local CPU that is executing that thread) -- naturally on_each_cpu()
would make sense for those cases.
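For example (purely illustrative, the helper names are made up), the
classic on_each_cpu() shape is flushing some per-CPU state everywhere:

static void flush_my_percpu_state(void *unused)
{
	/* touch only this CPU's private data */
}

void flush_everywhere(void)
{
	/* runs on every online CPU, the current one included;
	 * on_each_cpu() does the local call plus the cross-CPU IPIs */
	on_each_cpu(flush_my_percpu_state, NULL, 0, 1);
}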
Both have their purposes -- both must co-exist.
On 7/9/07, Andi Kleen <andi@...stfloor.org> wrote:
> > I think it would be better to fix smp_call_function_single to just
> > handle this case transparently. There aren't that many callers yet
> > because it is
> > fairly new.
Take the same example here -- let's say we want to send a
"for (;;) ;" kind of function to a specified CPU. Now let's say
that just before we call smp_call_function_single() for that
target CPU, we're preempted out and then get rescheduled
on the target CPU itself. There, we go on to execute
smp_call_function_single() (as modified by Avi here with your
proposed change in semantics), notice that we've landed
on the target CPU itself, execute the suicidal function
_locally_ *in the current thread*, and ... well, I hope you
get the picture.
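Spelled out (illustrative flow only; die_fn() and kill_cpu() are
placeholder names):

static void die_fn(void *unused)
{
	for (;;)
		;
}

void kill_cpu(int cpu)
{
	/* nothing pins the caller here: we can be preempted and wake
	 * up on 'cpu' just before the call below */
	smp_call_function_single(cpu, die_fn, NULL, 0, 0);
	/* with the proposed semantics that would then run die_fn()
	 * right here, in *this* thread, instead of failing */
}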
So my opinion is to go with the get_cpu() / put_cpu() wrapper
Avi is proposing here and keep smp_call_function{_single}
semantics unchanged. [ Also please remember that for
*correctness*, preemption needs to be disabled by the
_caller_ of the smp_call_function{_single} functions; doing so
inside them is insufficient. ]
Satyam