Message-ID: <alpine.LFD.2.01.0908251127460.3218@localhost.localdomain>
Date:	Tue, 25 Aug 2009 11:42:10 -0700 (PDT)
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Yinghai Lu <yinghai@...nel.org>
cc:	bugzilla-daemon@...zilla.kernel.org, Ingo Molnar <mingo@...e.hu>,
	Jesse Barnes <jbarnes@...tuousgeek.org>,
	Ricardo Jorge da Fonseca Marques Ferreira 
	<storm@...49152.net>, cebbert@...hat.com,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [Bug 13940] iwlagn and sky2 stopped working, ACPI-related



On Tue, 25 Aug 2009, Yinghai Lu wrote:
> 
> please try the attached patch; it will increase the alignment from 32M to 64M.

Hmm. That may indeed fix the problem, because we have:

 - working-2.6.30.log:

	Allocating PCI resources starting at b8000000 (gap: b6000000:4a000000)

 - not-working-2.6.31.log:

	Allocating PCI resources starting at b6000000 (gap: b6000000:4a000000)

HOWEVER. We also have:

 - working-2.6.31_acpi=off.log:

	Allocating PCI resources starting at b6000000 (gap: b6000000:4a000000)

ie it really does seem to be ACPI-related somehow: starting PCI 
allocations at that b6000000 address works perfectly fine if ACPI is not 
enabled.
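
Side note on why bumping the alignment moves the start back: that b6000000
gap start is already 32MB-aligned, so a 32MB round-up leaves the allocation
right at end-of-RAM, while a 64MB round-up lands on b8000000, which is
exactly where 2.6.30 started. A standalone sketch of that arithmetic (this
is not the kernel's actual e820/PCI gap code, just the round-up Yinghai's
patch presumably does):

#include <stdio.h>
#include <stdint.h>

/* Round addr up to the next multiple of align (align must be a power of two). */
static uint64_t align_up(uint64_t addr, uint64_t align)
{
	return (addr + align - 1) & ~(align - 1);
}

int main(void)
{
	uint64_t gap_start = 0xb6000000ULL;	/* gap start from the logs above */

	/* 0xb6000000 is already a multiple of 32M, so 32M alignment leaves it alone... */
	printf("32M-aligned start: %#llx\n",
	       (unsigned long long)align_up(gap_start, 32ULL << 20));

	/* ...but it is not a multiple of 64M, so 64M alignment pushes it to 0xb8000000 */
	printf("64M-aligned start: %#llx\n",
	       (unsigned long long)align_up(gap_start, 64ULL << 20));
	return 0;
}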

In the not-working version, we end up getting:

[    1.408588] pci 0000:00:1c.4: PCI bridge, secondary bus 0000:07
[    1.408593] pci 0000:00:1c.4:   IO window: 0x2000-0x2fff
[    1.408600] pci 0000:00:1c.4:   MEM window: 0xb6000000-0xb60fffff
[    1.408606] pci 0000:00:1c.4:   PREFETCH window: disabled
[    1.408623] pci 0000:00:1c.5: PCI bridge, secondary bus 0000:08
[    1.408626] pci 0000:00:1c.5:   IO window: disabled
[    1.408633] pci 0000:00:1c.5:   MEM window: 0xb6100000-0xb61fffff
[    1.408639] pci 0000:00:1c.5:   PREFETCH window: disabled


while in the working version we have:

 - ACPI off - looks like a BIOS allocated memory window:

	[    0.290854] pci 0000:00:1c.4: PCI bridge, secondary bus 0000:07
	[    0.290854] pci 0000:00:1c.4:   IO window: 0x3000-0x3fff
	[    0.290854] pci 0000:00:1c.4:   MEM window: 0xf4500000-0xf45fffff
	[    0.290854] pci 0000:00:1c.4:   PREFETCH window: disabled
	[    0.290854] pci 0000:00:1c.5: PCI bridge, secondary bus 0000:08
	[    0.290854] pci 0000:00:1c.5:   IO window: disabled
	[    0.290854] pci 0000:00:1c.5:   MEM window: 0xf4600000-0xf46fffff
	[    0.290854] pci 0000:00:1c.5:   PREFETCH window: disabled

 - ACPI on - we allocated the memory window, but at 0xb8000000+, rather 
   than directly after end-of-RAM:

	[    0.842970] pci 0000:00:1c.4: PCI bridge, secondary bus 0000:07
	[    0.842975] pci 0000:00:1c.4:   IO window: 0x2000-0x2fff
	[    0.842983] pci 0000:00:1c.4:   MEM window: 0xb8000000-0xb80fffff
	[    0.842989] pci 0000:00:1c.4:   PREFETCH window: disabled
	[    0.843012] pci 0000:00:1c.5: PCI bridge, secondary bus 0000:08
	[    0.843016] pci 0000:00:1c.5:   IO window: disabled
	[    0.843023] pci 0000:00:1c.5:   MEM window: 0xb8100000-0xb81fffff
	[    0.843029] pci 0000:00:1c.5:   PREFETCH window: disabled

ie for some reason ACPI caused that bus to be re-allocated, and 
re-allocating it directly after the end of RAM doesn't work.

Crazy.

I wonder what is hiding at that 0xb6000000 address. And while I think that 
in this case rounding up to 64MB will fix it, I worry that our old model 
(of never starting directly after RAM, even if it was aligned) may 
actually have been the safer one.
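
To see what the kernel thinks is sitting around that address, a quick
/proc/iomem dump on the affected box would help. A minimal sketch (just a
generic filter on /proc/iomem; the 0xb0000000-0xbfffffff window is an
arbitrary slice around the interesting address, nothing from the bug
report):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/*
	 * Print the /proc/iomem entries whose start address falls in the
	 * 0xb0000000-0xbfffffff range, i.e. anything claiming space near
	 * the failing 0xb6000000 window.
	 */
	FILE *f = fopen("/proc/iomem", "r");
	char line[256];

	if (!f) {
		perror("/proc/iomem");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		unsigned long long start = strtoull(line, NULL, 16);

		if (start >= 0xb0000000ULL && start <= 0xbfffffffULL)
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}

Simply eyeballing /proc/iomem and the e820 map in dmesg would of course 
show the same thing.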

		Linus
