Message-ID: <m1pr86x9qj.fsf@fess.ebiederm.org>
Date:	Thu, 29 Oct 2009 12:48:04 -0700
From:	ebiederm@...ssion.com (Eric W. Biederman)
To:	Yinghai Lu <yinghai@...nel.org>
Cc:	Kenji Kaneshige <kaneshige.kenji@...fujitsu.com>,
	Jesse Barnes <jbarnes@...tuousgeek.org>,
	"linux-kernel\@vger.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-pci\@vger.kernel.org" <linux-pci@...r.kernel.org>,
	Alex Chiang <achiang@...com>,
	Ivan Kokshaysky <ink@...assic.park.msu.ru>,
	Bjorn Helgaas <bjorn.helgaas@...com>
Subject: Re: [PATCH] pci: pciehp update the slot bridge res to get big range for pcie devices

Yinghai Lu <yinghai@...nel.org> writes:

> Eric W. Biederman wrote:
>> Yinghai Lu <yinghai@...nel.org> writes:
>> 
>>> Eric W. Biederman wrote:
>>>> Yinghai Lu <yinghai@...nel.org> writes:
>>>>> after a closer look at the code, it looks like it will not break your setup.
>>>>>
>>>>> 1. before the patches:
>>>>> a. when the master card is inserted, all bridges in that card will get assigned with min_size
>>>>> b. when new cards are inserted into the slots in the master card, they will get assigned within the bridge size.
>>>>>
>>>>> 2. after the patches (v5):
>>>>> a. at boot, all leaf bridge MMIO gets cleared.
>>>>> b. when the master card is inserted, all bridges in that card will get assigned with min_size, and the master bridge will be the sum of them
>>>>> c. when new cards are inserted into the slots in the master card, they will get assigned within the bridge size.
>>>>>
>>>>> can you check those two patches in your setup to verify this?
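
[The min_size rule described above corresponds roughly to how
pbus_size_mem() in drivers/pci/setup-bus.c sizes a bridge window: the
sum of the children's requests, but never less than min_size.  A
minimal standalone sketch of just that rule (simplified; the real code
also handles alignment and prefetchable vs. non-prefetchable windows):

/*
 * Simplified model of bridge MMIO window sizing: the window gets the
 * sum of the child BAR sizes, rounded up to the 1MB granularity of
 * PCI bridge memory windows, but never less than min_size.
 */
#include <stdio.h>

typedef unsigned long long resource_size_t;

static resource_size_t size_bridge_window(const resource_size_t *child_sizes,
                                          int nr_children,
                                          resource_size_t min_size)
{
        resource_size_t size = 0;
        int i;

        for (i = 0; i < nr_children; i++)
                size += child_sizes[i];

        size = (size + 0xfffff) & ~0xfffffULL;  /* round up to 1MB */

        if (size < min_size)
                size = min_size;
        return size;
}

int main(void)
{
        resource_size_t kids[] = { 0x1000, 0x20000 };  /* two small BARs */

        /* children sum to well under 2MB, so min_size wins */
        printf("window: %#llx\n", size_bridge_window(kids, 2, 0x200000));
        return 0;
}
]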
>>>> I have a much simpler case that this will break, as I tried something similar by accident.
>>> which kernel version?
>>>> An AMD cpu + MCP55, with one pcie port set up as hotplug.
>>>> The system only has 2GB of RAM, so there is plenty of space for pcie devices.
>>> one or two ht chains?
>> 
>> One chain.
>> 
>>> do you still have lspci -tv with it?
>>>
>>>> If the firmware assigns nothing and linux assigns the pci mmio space at boot time:
>>>> Reads from the BAR of the hotplugged device work.
>>>> Writes to the BAR of the hotplugged device cause further writes to go to la-la land.
>>>>
>>>> So I had to have the firmware make the assignment, because only it knows the
>>>> details of the hidden AMD BAR registers for each HyperTransport chain, etc.
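
[A minimal sketch of the read-back test implied by that failure mode,
assuming a memory BAR on the hotplugged device that is safe to scribble
on (the function name is hypothetical, and real hardware rarely offers
a register where blindly flipping bits is harmless):

/*
 * Hypothetical sanity check: write a pattern to the start of a
 * hotplugged device's memory BAR and read it back.  If the MMIO range
 * is not actually routed to the device (e.g. the hidden AMD HT-chain
 * routing registers were never programmed), the write is silently
 * dropped and the read-back will not match.
 */
#include <linux/pci.h>

static bool bar_routing_ok(struct pci_dev *dev, int bar)
{
        void __iomem *base = pci_iomap(dev, bar, 4);
        u32 old, readback;
        bool ok;

        if (!base)
                return false;

        old = ioread32(base);
        iowrite32(old ^ 0xffffffff, base);      /* flip every bit */
        readback = ioread32(base);
        iowrite32(old, base);                   /* restore */

        ok = (readback == (old ^ 0xffffffff));
        pci_iounmap(dev, base);
        return ok;
}
]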
>>> that means the kernel didn't get the peer root bus resources probed properly
>> 
>> How do you do that without having drivers for the peer root bus?
>
> we have amd_bus.c to handle amd k8 systems with two chains, but the one-chain case is skipped.
> (I wonder if we need to re-enable that for one-chain k8 systems)
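
[For context: arch/x86/pci/amd_bus.c learns the routing by reading the
K8 northbridge's MMIO base/limit register pairs (function 1 of device
0x18 on bus 0) and registering the decoded ranges as root bus
resources.  A rough sketch of the decode step for one of the eight
pairs, simplified from the real code, which also checks the
read/write-enable bits and the destination node/link:

#include <linux/pci.h>
#include <linux/ioport.h>

/* Decode MMIO base/limit pair 'idx' (0..7) from K8 NB function 1. */
static void decode_k8_mmio_window(struct pci_dev *nb_f1, int idx,
                                  struct resource *res)
{
        u32 base, limit;

        pci_read_config_dword(nb_f1, 0x80 + idx * 8, &base);
        pci_read_config_dword(nb_f1, 0x84 + idx * 8, &limit);

        /* bits 31:8 of each register hold address bits 39:16 */
        res->start = ((resource_size_t)(base  & 0xffffff00)) << 8;
        res->end   = (((resource_size_t)(limit & 0xffffff00)) << 8) | 0xffff;
        res->flags = IORESOURCE_MEM;
}
]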

I was running a 32-bit kernel, so this didn't kick in.  That might have
helped, at least as far as recognizing that the resources weren't
properly routed.  But if we don't set up the infrastructure so that we
can reprogram those resources, I'm not certain how much good it will do
in general.

> another one, intel_bus.c, is on the way to 2.6.33.
>
> when use_crs is used, the info from pci conf space is not used, just printed out to check whether _CRS is right or not.
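
[To illustrate that cross-check: with use_crs, the windows probed from
chipset registers are only compared against what _CRS reported, so a
firmware bug shows up in the log rather than in the allocations.  A
hedged sketch (helper name hypothetical):

#include <linux/ioport.h>
#include <linux/kernel.h>

/* Warn if a chipset-probed window is not covered by any _CRS window. */
static void check_probed_window(const struct resource *crs_windows,
                                int nr_crs, const struct resource *probed)
{
        int i;

        for (i = 0; i < nr_crs; i++) {
                if (probed->start >= crs_windows[i].start &&
                    probed->end   <= crs_windows[i].end)
                        return;         /* covered; _CRS agrees */
        }
        printk(KERN_WARNING "probed window %pR not in any _CRS window\n",
               probed);
}
]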

If enough space is routed and we get accurate information, I am certain
that is fine.  I am still worried about the change in policy, though.

Only rerouting things when there is a need gives us the best chance of
working everywhere.  Freeing unused resources on hotplug ports before
we plug in a device scares me, because we would be doing something that
should work but, as above, sometimes doesn't when we reallocate those
resources.  If there is simply not enough room, we can do something
different.
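
[The conservative policy argued for here maps naturally onto the
existing pci_claim_resource()/pci_assign_resource() pair: keep the
firmware assignment whenever it claims cleanly against the parent
windows, and pick a new address only for the BARs that actually
conflict.  A minimal sketch (function name hypothetical):

#include <linux/pci.h>

/* Keep firmware-assigned BARs that claim cleanly; reassign the rest. */
static void claim_or_reassign(struct pci_dev *dev)
{
        int i;

        for (i = 0; i < PCI_NUM_RESOURCES; i++) {
                struct resource *r = &dev->resource[i];

                if (!r->flags || r->parent)
                        continue;       /* unused, or already claimed */

                if (pci_claim_resource(dev, i) == 0)
                        continue;       /* firmware value works: keep it */

                /* conflict: only now do we pick a fresh address */
                pci_assign_resource(dev, i);
        }
}
]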

Eric
