Message-ID: <alpine.LFD.2.00.1004050751500.8323@i5.linux-foundation.org>
Date: Mon, 5 Apr 2010 07:56:56 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Pavel Machek <pavel@....cz>
cc: Jason Wessel <jason.wessel@...driver.com>,
Will Deacon <will.deacon@....com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
kgdb-bugreport@...ts.sourceforge.net, linux-arm@...r.kernel.org,
Russell King - ARM Linux <linux@....linux.org.uk>
Subject: Re: [PATCH 4/5] kgdb: Use atomic operators which use barriers

On Mon, 5 Apr 2010, Pavel Machek wrote:
>
> And this is valid (but ugly and not optimal) kernel code:
>
> kernel/sched.c- while (task_is_waking(p))
> kernel/sched.c: asm volatile("" ::: "memory");

No. We would consider such code buggy.

That said, you're right that such code would exist. But if it were to
exist and cause lock-ups, at least I would consider it a simple and
outright bug, and that the proper fix would be to just replace the asm
with cpu_relax().

> ...so I don't think inserting smp_mb() into cpu_relax() and udelay()
> and similar can ever fix the problem fully.

See above.

> Run smp_mb() from periodic interrupt?

Doesn't help - it's quite valid to do things like this in irq-disabled
code, although it is hopefully very very rare.

In particular, I suspect the kgdb use _is_ interrupts-disabled, and that
is why the ARM people even noticed (the normal cases would break out of
the loop exactly because an interrupt occurred, and an interrupt is
probably already enough to make the issue go away).

And please do not confuse this with smp_mb() - this is not about the Linux
notion of a memory barrier; this is about whatever per-arch oddity that
makes changes not be noticed (ie caches may be _coherent_, but they are
not "timely").

		Linus