Date:	Mon, 05 Mar 2007 17:19:58 +0800
From:	"Wu, Bryan" <bryan.wu@...log.com>
To:	Arnd Bergmann <arnd@...db.de>
Cc:	Aubrey Li <aubreylee@...il.com>, bryan.wu@...log.com,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH -mm 1/5] Blackfin: blackfin architecture patch update

On Mon, 2007-03-05 at 09:47 +0100, Arnd Bergmann wrote:
> On Monday 05 March 2007, Aubrey Li wrote:
> > On 3/4/07, Arnd Bergmann <arnd@...db.de> wrote:
> 
> > > In general, please put EXPORT_SYMBOL lines below the definition
> > > of the symbol itself. This list of exports should only be used
> > > for symbols that come from assembly files.
> > 
> > What is the right way to export symbol coming from c files?
>  
> As I said, below the symbol definition, like
> 
> int global_var;
> EXPORT_SYMBOL(global_var);
> 
> int global_function(void)
> {
> 	return 3;
> }
> EXPORT_SYMBOL(global_function);
> 

Got it, I will follow that rule.
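
Something like this in our C files, then (just a sketch; "bfin_example_get_cclk"
is a made-up name for illustration, not an actual symbol from the patch):

#include <linux/module.h>

/* the EXPORT_SYMBOL() line sits right below the C definition,
 * instead of being collected in a separate ksyms file */
unsigned long bfin_example_get_cclk(void)
{
	return 600000000;	/* core clock in Hz, for example */
}
EXPORT_SYMBOL(bfin_example_get_cclk);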

> > > I'm curious: In your dual-core bf561, don't you actually need to implement
> > > something that maintains atomicity across cores rather than just across
> > > processes?
> > 
> > Yes, bf561 is a dual-core processor, but we are using only one core of
> > bf561 now.
> > IMHO, BF561 architecture was not designed for SMP or NUMA.
> 
> Interesting, so what is the intended use of the other core? Does the
> hardware have any way of supporting concurrency between the cores,
> other than sending interrupts between them?
> 

AFAIK, the two cores in BF561 each have their own L1 memory, which can
be configured as cache independently. There is no hardware support for
cache coherence between the cores, so it is very hard to implement SMP,
right?

Maybe NUMA is a solution, but it is not a wonderful one.

In some application products, BF561 core A runs the Linux kernel plus
applications, while core B is dedicated to complicated video/audio
codec algorithms.

Any Linux multicore solution for the BF561 situation is highly welcome.
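
Just to illustrate that split from the Linux (core A) side, here is a
rough sketch of handing a frame to the bare-metal codec on core B
through a shared uncached SDRAM mailbox. The address, struct layout and
names below are made up for illustration only, not our actual code:

#include <linux/errno.h>
#include <linux/io.h>
#include <linux/types.h>

#define COREB_MAILBOX_PHYS	0x3c00000	/* made-up shared SDRAM region */

/* made-up mailbox layout; the core B firmware polls "pending" */
struct coreb_request {
	u32 buf_phys;	/* physical address of the frame to process */
	u32 buf_len;
	u32 pending;	/* core A sets this, core B clears it when done */
};

static int coreb_send_frame(u32 frame_phys, u32 len)
{
	struct coreb_request __iomem *req =
		ioremap(COREB_MAILBOX_PHYS, sizeof(struct coreb_request));

	if (!req)
		return -ENOMEM;

	writel(frame_phys, &req->buf_phys);
	writel(len, &req->buf_len);
	writel(1, &req->pending);	/* core B's codec loop picks this up */
	iounmap(req);

	return 0;
}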

> > > How does this fit in with the generic SPI code? Does it duplicate stuff
> > > from there, or do you use it?
> > 
> > We use our own. We have DMA which can be used for SPI operations.
> 
> I just looked again at your code. My question was more directed at whether
> you use your own SPI abstraction layer instead of drivers/spi, which
> you fortunately don't. The piece I was missing however is the spi_bfin5xx.c
> driver, which was not part of this patch, though you seem to rely on it.
> Is that already part of the -mm kernel?
> 
> 	Arnd <><

I will send out the SPI patch ASAP. And your signature is very
interesting; it looks like a fish :-)
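
For the SPI part, we do use the generic drivers/spi framework: the
board file registers spi_board_info entries and the spi_bfin5xx.c
controller driver picks them up, roughly like this (the modalias, bus
number, chip select and speed are made-up example values, not taken
from the actual patch):

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/spi/spi.h>

/* example values only; the real board file declares its actual devices */
static struct spi_board_info example_spi_board_info[] __initdata = {
	{
		.modalias	= "example-spi-dev",	/* matched against a protocol driver */
		.max_speed_hz	= 5000000,
		.bus_num	= 0,			/* the bfin5xx SPI controller's bus */
		.chip_select	= 1,
	},
};

static int __init example_board_spi_init(void)
{
	return spi_register_board_info(example_spi_board_info,
				       ARRAY_SIZE(example_spi_board_info));
}
arch_initcall(example_board_spi_init);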

Another question: when is the merge point from -mm into Linus' mainline?
Is it the same as the merge window after 2.6.21 is released?

Thanks again
-Bryan Wu