Message-Id: <1302201418.30694.3.camel@dev.znau.edu.ua>
Date:	Thu, 07 Apr 2011 21:36:58 +0300
From:	George Kashperko <george@...u.edu.ua>
To:	Rafał Miłecki <zajec5@...il.com>
Cc:	Arnd Bergmann <arnd@...db.de>, Russell King <rmk@....linux.org.uk>,
	"linux-wireless@...r.kernel.org" <linux-wireless@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"b43-dev@...ts.infradead.org" <b43-dev@...ts.infradead.org>,
	Arend van Spriel <arend@...adcom.com>,
	linuxdriverproject <devel@...uxdriverproject.org>,
	"linux-arm-kernel@...ts.infradead.org" 
	<linux-arm-kernel@...ts.infradead.org>,
	Larry Finger <Larry.Finger@...inger.net>
Subject: Re: [RFC][PATCH] bcmai: introduce AI driver


> On 7 April 2011 at 09:54, Michael Büsch <mb@...sch.de> wrote:
> > On Thu, 2011-04-07 at 02:54 +0200, Rafał Miłecki wrote:
> >> On 7 April 2011 at 02:00, George Kashperko
> >> <george@...u.edu.ua> wrote:
> >> > For a description of PCI functions, take a look at the PCI specs or the
> >> > PCI configuration space description (e.g.
> >> > http://en.wikipedia.org/wiki/PCI_configuration_space)
> >> >
> >> > Sorry for the misleading shorthand: by w11 I mean the bcm80211 core, and
> >> > by two-head I mean an ssb/axi interconnect with two functional cores on
> >> > the same interconnect (like w11+w11; not a lot of these exist, I guess).
> >> > There were also some b43+b44 combinations on a single PCI ssb host, and
> >> > those were implemented as an ssb interconnect on a multifunction PCI host,
> >> > therefore providing a separate access window for each function.
> >> >
> >> > I might have misunderstood something (it's late at night here) when you
> >> > were talking about core switching involving two drivers, which is why I
> >> > remembered those functions. It seems now you were talking about
> >> > chipcommon+b43 access sharing the same window.
> >> >
> >> > As for the core switching requirements for earlier SSB interconnects on
> >> > PCI hosts, where there was no direct chipcommon access, that can be
> >> > accomplished without a spin_lock/mutex for the b43 or b44 cores with a
> >> > proper bus design.
> >> >
> >> > AXI doesn't need spinlocks/mutexes, as both chipcommon and the pci bridge
> >> > are available directly and b43 will be the only core requiring window access.
> >>
> >> Ahh, so when talking about 4 windows, I guess you counted the fixed
> >> windows as well. That would be right, matching my knowledge.
> >>
> >> When asking the question about the number of cores we may want to use
> >> simultaneously, I didn't think about ChipCommon or PCIe. The real
> >> problem would be supporting, for example, two 802.11 cores and one
> >> ethernet core at the same time. That gives us 3 cores while we have
> >> only 2 sliding windows.
> >
> > Would that really be a problem? Think about it. This combination
> > will only be available on embedded devices. But do we have windows
> > on embedded devices? I guess not. If AXI is similar to SSB, the MMIO
> > of all cores will always be mapped, so accesses can be done
> > without a window switch or a lock.
> >
> > I really do think that the engineers at Broadcom are clever enough
> > to design hardware that does not require expensive window sliding
> > all the time while operating.
Yes, they are. As I've already mentioned earlier, ssb/axi interconnects on
multifunction pci bridges provide each function with a separate sliding
window, up to 4 functions on a single pci bridge.
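
For illustration only, here is a minimal sketch of the shared-window case we
keep coming back to (e.g. b43+b44 behind a single PCI function, or any two
cores behind one sliding window). None of the names below come from the bcmai
patch; BUS_BAR0_WIN, struct my_bus and my_core_read32() are made up, and the
window register offset is hypothetical. The point is only that every access
through a shared window has to take a lock around the switch plus the MMIO:

#include <linux/pci.h>
#include <linux/spinlock.h>
#include <linux/io.h>

#define BUS_BAR0_WIN	0x80	/* hypothetical window register in PCI config space */

struct my_bus {
	struct pci_dev *pdev;
	void __iomem *mmio;	/* BAR0 mapping, the sliding window */
	u32 mapped_core;	/* backplane address currently in the window */
	spinlock_t win_lock;	/* serializes window moves between cores */
};

/* Point the shared window at the given core's register space. */
static void my_bus_switch_core(struct my_bus *bus, u32 core_addr)
{
	if (bus->mapped_core != core_addr) {
		pci_write_config_dword(bus->pdev, BUS_BAR0_WIN, core_addr);
		bus->mapped_core = core_addr;
	}
}

/* Shared-window read: switch + MMIO must be atomic w.r.t. other cores. */
static u32 my_core_read32(struct my_bus *bus, u32 core_addr, u32 offset)
{
	unsigned long flags;
	u32 val;

	spin_lock_irqsave(&bus->win_lock, flags);
	my_bus_switch_core(bus, core_addr);
	val = readl(bus->mmio + offset);
	spin_unlock_irqrestore(&bus->win_lock, flags);

	return val;
}

With one window per PCI function (or fixed windows for chipcommon/PCIe and only
b43 behind the sliding one), the switch becomes a no-op after setup and the
lock can be dropped, which is exactly the case described above.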

> 
> I also think so. I was asking about the number of cores (non PCIe, non
> ChipCommon) which have to work simultaneously. I'm not sure if we will
> meet an AI board with 2 cores (non PCIe, non ChipCommon) on a PCIe
> host. I don't think we will see more than 2 cores (non PCIe, non
> ChipCommon) on a PCIe host.
> 
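
Purely as a hedged sketch of the embedded/SoC case Michael describes above
(all names here are hypothetical as well): when every core's register space is
permanently mapped, there is nothing to switch and nothing to lock; each core
just keeps its own fixed mapping.

#include <linux/io.h>
#include <linux/types.h>

struct my_core {
	void __iomem *regs;	/* fixed mapping of this core's register space */
};

/* Direct access: no window register to program, no spinlock/mutex needed. */
static inline u32 my_core_read32_direct(struct my_core *core, u32 offset)
{
	return readl(core->regs + offset);
}

static inline void my_core_write32_direct(struct my_core *core, u32 offset, u32 val)
{
	writel(val, core->regs + offset);
}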

Have a nice day,
George


