Message-ID: <20080616162212.27a8c119@linux360.ro>
Date: Mon, 16 Jun 2008 16:22:12 +0300
From: Eduard - Gabriel Munteanu <eduard.munteanu@...ux360.ro>
To: Mathieu Desnoyers <compudj@...stal.dyndns.org>
Cc: Tom Zanussi <tzanussi@...il.com>, penberg@...helsinki.fi,
akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
righi.andrea@...il.com
Subject: Re: [PATCH 2/3] relay: Fix race condition which occurs when reading
across CPUs.
On Mon, 16 Jun 2008 08:22:49 -0400
Mathieu Desnoyers <compudj@...stal.dyndns.org> wrote:
> Hi Eduard,
Hi.
> Two objections against this. First, taking a spinlock _is_ slow in SMP
> because it involves synchronized atomic operations.
In any case, if we're using relay in a hot path we're doing debugging,
so a couple of atomic operations aren't a big deal. Combined with
setting reader affinity, the overhead should be acceptable.
> Second, adding a spinlock to the relay write side is bad since it
> opens the door to deadly embrace between a trap handler and normal
> kernel code both running tracing code.
We disable IRQs. Interrupts that cannot be disabled (AFAIK this
includes SMM, traps, and probably NMIs, though NMIs can be masked at
least on x86) either don't run kernel code (SMM) or go through safe
paths. And that concern applies to any spinlock taken in interrupt
context. The only other option would be to try the lock and let
relay_write() fail occasionally.
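Roughly, that fallback would look like the sketch below. The lock field
is hypothetical (it's what a patch along these lines would add; a
per-buffer lock would be more realistic, a channel-wide one is used
only to keep the sketch short), and __relay_write() is the existing
helper from <linux/relay.h> that leaves the IRQ flags alone:

#include <linux/relay.h>
#include <linux/spinlock.h>

/*
 * Sketch only: non-blocking variant of the locked write path.  The
 * chan->lock field does not exist in mainline; it stands in for the
 * spinlock a patch like this one would introduce.
 */
static inline int relay_write_trylock(struct rchan *chan,
				      const void *data, size_t length)
{
	unsigned long flags;

	/* Never spin: if the lock is contended, drop the event. */
	if (!spin_trylock_irqsave(&chan->lock, flags))
		return 0;

	__relay_write(chan, data, length);
	spin_unlock_irqrestore(&chan->lock, flags);
	return 1;
}

Whether dropped events are acceptable obviously depends on the tracer.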
> Unless really-really needed, which does not seem to be the case, I
> would strongly recommend not merging this patch.
My question was whether affinity setting offers any guarantees. If it
does, then maybe we can do without this patch.
BTW, I'm not saying affinity setting doesn't help. It helps
performance-wise and I'll use it, but it should also guarantee
correctness.
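For reference, this is roughly how a userspace consumer would pin
itself to the CPU whose per-cpu relay file it reads before consuming
it; the debugfs path below is just an example name, not a real channel:

/* Sketch: pin the reader to the CPU whose per-cpu relay file it
 * consumes, then open that file.  Path name is an example only. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <fcntl.h>

static int open_percpu_buf(int cpu)
{
	cpu_set_t mask;
	char path[64];

	CPU_ZERO(&mask);
	CPU_SET(cpu, &mask);
	if (sched_setaffinity(0, sizeof(mask), &mask) < 0) {
		perror("sched_setaffinity");
		return -1;
	}

	snprintf(path, sizeof(path), "/sys/kernel/debug/mychan%d", cpu);
	return open(path, O_RDONLY | O_NONBLOCK);
}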
> Mathieu
Cheers,
Eduard