Date:	Tue, 20 Mar 2007 13:26:52 -0700
From:	Greg KH <gregkh@...e.de>
To:	Daniel Yeisley <dan.yeisley@...sys.com>
Cc:	linux-kernel@...r.kernel.org, akpm@...l.org
Subject: Re: [PATCH] I/O space boot parameter

On Tue, Mar 20, 2007 at 01:25:38PM -0400, Daniel Yeisley wrote:
> On Tue, 2007-03-20 at 11:00 -0700, Greg KH wrote:
> > On Tue, Mar 20, 2007 at 12:18:24PM -0400, Daniel Yeisley wrote:
> > > It has been mentioned before that large systems with a lot of PCI buses
> > > have issues with the 64k I/O space limit.  The ES7000 has a BIOS option
> > > to either assign I/O space to all adapters, or only to those that need
> > > it.  A list of supported adapters that don't need it is kept in the
> > > BIOS.  When this option is used, the kernel sees the BARs on the
> > > adapters and still tries to assign I/O space (until it runs out).  I've
> > > written a patch to implement a boot parameter that tells the kernel not
> > > to assign I/O space if the BIOS hasn't.  
> > 
> > How prevalent are machines like this?  And why are the BARs on these
> > devices wrong?
> > 
> I don't have any sales numbers, but I can tell you that our current
> systems can have up to 64 PCI buses.  
> 
> I've been working with Emulex cards, and my understanding is that the
> BARs on the devices aren't wrong, but we can't allocate 4k of I/O space
> for each one.  So we maintain a list in the BIOS of devices that don't
> actually need I/O space and then don't assign it.  I've tested on an
> x86_64 system with 20+ adapters and saw all the disks attached without
> any problems.
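
For illustration, a minimal sketch of the kind of boot parameter Dan
describes above.  The parameter name, flag, and hook point are invented
for this sketch and are not taken from his actual patch:

/*
 * Hypothetical sketch: a boot parameter telling the PCI core to leave
 * I/O BARs alone when the firmware has not assigned them, rather than
 * burning 4k of the 64k I/O space on every adapter.
 */
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/pci.h>

static int pci_skip_unassigned_io;	/* set by "pci_no_io_assign" */

static int __init parse_pci_no_io_assign(char *str)
{
	pci_skip_unassigned_io = 1;
	return 0;
}
early_param("pci_no_io_assign", parse_pci_no_io_assign);

/*
 * Hypothetical check for the resource-assignment path: skip I/O BARs
 * that the BIOS deliberately left unassigned (at zero).
 */
static int skip_io_bar(struct pci_dev *dev, int bar)
{
	struct resource *res = &dev->resource[bar];

	return pci_skip_unassigned_io &&
	       (res->flags & IORESOURCE_IO) && res->start == 0;
}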

Ah.  Others are working on providing a fix for this too, but it is being
done in the drivers themselves, not in the pci core.  Look in the
linux-pci mailing list archives for those patches (I don't think they
ever went into mainline for some reason, but I might be wrong...)
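
As a rough sketch of that driver-side approach (the device name and BAR
number here are made up; the real patches are in the linux-pci
archives), the idea is for the driver to claim and map only its memory
BAR and never touch the I/O BAR:

#include <linux/io.h>
#include <linux/pci.h>

#define EXAMPLE_MEM_BAR	1	/* assume the registers live in BAR 1 */

static void __iomem *example_map_regs(struct pci_dev *pdev)
{
	void __iomem *regs;

	if (pci_enable_device(pdev))
		return NULL;

	/* Claim and map only the memory BAR; the I/O BAR is never
	 * requested, so an unassigned one is harmless. */
	if (pci_request_region(pdev, EXAMPLE_MEM_BAR, "example"))
		goto err_disable;

	regs = ioremap(pci_resource_start(pdev, EXAMPLE_MEM_BAR),
		       pci_resource_len(pdev, EXAMPLE_MEM_BAR));
	if (!regs)
		goto err_release;

	return regs;

err_release:
	pci_release_region(pdev, EXAMPLE_MEM_BAR);
err_disable:
	pci_disable_device(pdev);
	return NULL;
}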

I suggest you work with those developers, as they have the same issue
that you are trying to solve here.

thanks,

greg k-h
