Date:	Thu, 07 Apr 2011 21:50:40 +0300
From:	George Kashperko <george@...u.edu.ua>
To:	Arend van Spriel <arend@...adcom.com>
Cc:	Arnd Bergmann <arnd@...db.de>, Russell King <rmk@....linux.org.uk>,
	"linux-wireless@...r.kernel.org" <linux-wireless@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"b43-dev@...ts.infradead.org" <b43-dev@...ts.infradead.org>,
	linuxdriverproject <devel@...uxdriverproject.org>,
	"linux-arm-kernel@...ts.infradead.org" 
	<linux-arm-kernel@...ts.infradead.org>,
	Larry Finger <Larry.Finger@...inger.net>
Subject: Re: [RFC][PATCH] bcmai: introduce AI driver


> On Thu, 07 Apr 2011 09:54:46 +0200, Michael Büsch <mb@...sch.de> wrote:
> 
> >> Ahh, so while talking about 4 windows, I guess you counted fixed
> >> windows as well. That would be right, matching my knowledge.
> >>
> >> When asking question about amount of cores we may want to use
> >> simultaneously I didn't think about ChipCommon or PCIe. The real
> >> problem would be to support for example two 802.11 cores and one
> >> ethernet core at the same time. That gives us 3 cores while we have
> >> only 2 sliding windows.
> >
> > Would that really be a problem? Think of it. This combination
> > will only be available on embedded devices. But do we have windows
> > on embedded devices? I guess not. If AXI is similar to SSB, the MMIO
> > of all cores will always be mapped. So accesses can be done
> > without switch or lock.
> 
> Agree. For embedded systems there is no need to switch cores. Each core  
> register space and wrapper register space is mapped. In the brcm80211 we  
> have the concept of fast versus slow host interface. The criteria for fast  
> host interface is based on following expression:
> 
> fast_host_bus = (host_bus_coretype == PCIE_CORE_ID) ||
> 	((host_bus_coretype == PCI_CORE_ID) && (host_bus_corerev >= 13))
> 
> If this is true, chipcommon and pci/pcie registers are accessed without  
> sliding the window using the fixed offsets Rafał mentioned earlier. The  
> BAR0 window size is 16KB.
Well, the main PCI window management concept in your code isn't really
fast switching but rather smart switching. ChipCommon- and PCI-bridge-
specific processing is confined to dedicated routines, each of which
looks like:
irqdisable
switchcore(cc_or_pcie)
... code ...
switchcore(back)
irqenable

Thus the PCI window always points to the function core, and you can
ioread/iowrite without spinlocking windowed accesses.

Yes, you also use "fast" switching for PCI rev. >= 13 and for PCIe,
avoiding irq(ena|disa) in those configurations, but, as I mentioned
earlier, for AXI this is a leftover from the PCI rev. < 13 days, since
both the PCI bridge and ChipCommon are available simultaneously through
fixed windows.

> 
> > I do really think that engineers at broadcom are clever enough
> > to design a hardware that does not require expensive window sliding
> > all the time while operating.
> >
> 
> If a bigger window is clever enough ;-)
> 
> Gr. AvS

Have a nice day,
George


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
