Message-Id: <20090310094810.2ffe1f63.akpm@linux-foundation.org>
Date: Tue, 10 Mar 2009 09:48:10 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Mike Frysinger <vapier.adi@...il.com>
Cc: gyang <graf.yang@...log.com>, Bryan Wu <cooloney@...nel.org>,
alan@...rguk.ukuu.org.uk, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 02/18] Blackfin Serial Driver: use barrier instead of
cpu_relax for Blackfin SMP like patch
On Tue, 10 Mar 2009 06:25:08 -0400 Mike Frysinger <vapier.adi@...il.com> wrote:
> On Tue, Mar 10, 2009 at 06:07, gyang wrote:
> > On Fri, 2009-03-06 at 14:37 -0800, Andrew Morton wrote:
> >> On Fri, 6 Mar 2009 14:42:44 +0800
> >> Bryan Wu <cooloney@...nel.org> wrote:
> >>
> >> > From: Graf Yang <graf.yang@...log.com>
> >> >
> >> > We are making an SMP-like patch for Blackfin in which cpu_relax() is
> >> > replaced by a data cache flush function that counts each flush in a
> >> > per-cpu counter. If this serial function is called too early, the
> >> > per-cpu data area has not been initialized yet and the call crashes.
> >>
> >> That's a bug in blackfin architecture support. The kernel should be
> >> able to call cpu_relax() at any time, surely. It's a very low-level
> >> and simple thing.
> >>
> >> > So we'd like to use barrier() instead of cpu_relax().
> >> >
> >>
> >> barrier() is purely a compiler concept. We might as well just remove
> >> the cpu_relax() altogether.
> >
> > Do you mean remove cpu_relax() and not add barrier() here either?
>
> afaik, early printk all runs before SMP is set up, so having it be a
> 100% busy wait is fine
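(For reference, a rough sketch of the loop being debated, assuming the
UART_GET_LSR()/THRE status macros used by drivers/serial/bfin_5xx.c;
names are illustrative, not the literal driver source:)

	static void bfin_wait_for_xmitr(struct bfin_serial_port *uart)
	{
		/* Original: cpu_relax() is a compiler barrier plus a CPU
		 * hint; on the proposed Blackfin SMP port it would also
		 * flush the data cache and bump a per-cpu counter. */
		while (!(UART_GET_LSR(uart) & THRE))
			cpu_relax();
	}

	static void bfin_wait_for_xmitr_patched(struct bfin_serial_port *uart)
	{
		/* Patched: barrier() only forces the compiler to re-read
		 * the line status register each iteration; it emits no
		 * instructions, so the wait becomes a pure busy spin. */
		while (!(UART_GET_LSR(uart) & THRE))
			barrier();
	}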
No, blackfin is busted; please fix this bug in the blackfin core.
What happens if core kernel code decides to run cpu_relax() prior to
initialising per-cpu data?
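
To make the failure mode concrete, here is a minimal sketch (all names
hypothetical, reconstructed from Graf's description above) of the
replacement cpu_relax() and of one way to make it early-safe in the
architecture code instead of patching each caller:

	/* Described replacement: flush the dcache and count the flush.
	 * The per-cpu access faults if it runs before the per-cpu areas
	 * are set up, which is the reported crash. */
	DEFINE_PER_CPU(unsigned long, bfin_dcache_flush_count);

	static inline void bfin_cpu_relax(void)
	{
		blackfin_dcache_flush_all();	/* hypothetical flush helper */
		__get_cpu_var(bfin_dcache_flush_count)++;
	}

	/* Early-safe variant: skip the bookkeeping until the per-cpu
	 * areas exist, so core code can call cpu_relax() at any time. */
	static inline void bfin_cpu_relax_safe(void)
	{
		blackfin_dcache_flush_all();
		if (bfin_percpu_ready)		/* hypothetical init flag */
			__get_cpu_var(bfin_dcache_flush_count)++;
	}

With a guard like that, the counter only misses flushes that happen
before per-cpu init, and every caller keeps the "callable at any time"
contract that core code assumes.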