Message-ID: <4EDDCD6D.1010907@jp.fujitsu.com>
Date: Tue, 06 Dec 2011 17:08:13 +0900
From: Kenji Kaneshige <kaneshige.kenji@...fujitsu.com>
To: Jesse Barnes <jbarnes@...tuousgeek.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
CC: linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [git pull] PCI fixes
(2011/12/06 4:22), Jesse Barnes wrote:
> On Wed, 23 Nov 2011 15:02:01 -0800
> Linus Torvalds <torvalds@...ux-foundation.org> wrote:
>
>> On Wed, Nov 23, 2011 at 2:44 PM, Jesse Barnes <jbarnes@...tuousgeek.org> wrote:
>>>
>>> Nothing too exciting this time, mostly some minor fixes for things
>>> normal people don't really hit. Happy Thanksgiving.
>>
>> Ugh. This looks bogus:
>>
>>> Kenji Kaneshige (2):
>>> PCI: pciehp: wait 1000 ms before Link Training check
>>
>> Look at that patch more closely. After the patch, the code looks like this:
>>
>>
>> if (ctrl->link_active_reporting)
>> pcie_wait_link_active(ctrl);
>> else
>> msleep(1000);
>>
>> + /*
>> + * Need to wait for 1000 ms after Data Link Layer Link Active
>> + * (DLLLA) bit reads 1b before sending configuration request.
>> + * We need it before checking Link Training (LT) bit becuase
>> + * LT is still set even after DLLLA bit is set on some platform.
>> + */
>> + msleep(1000);
>>
>> and I'm pretty sure you should remove the "else msleep(1000)" there.
>> Doing the 1s wait *twice* seems entirely bogus, even if you are
>> missing link_active_reporting. No?
>>
>> I pulled it since I can't test it, but it really smells fishy to me.
>
> Sure looks like it... Kenji-san, you went back and forth on this one a
> little, can you confirm (and preferably test)?
>
> Thanks,

I'm very sorry for the long delay.

Each 1000 ms wait has a different meaning. But as Linus pointed out, the
first one ("else msleep(1000)") can be removed. I'll work on this.

Though the patch is not clean, I tested it from a regression point of
view, and Yinghai also confirmed that it fixes his problem.

Here is an explanation of each 1000 ms wait, with some history.
>> if (ctrl->link_active_reporting)
>> pcie_wait_link_active(ctrl);
>> else
>> msleep(1000);

This chunk waits for the link to become active. Since PCIe specification
1.1, downstream ports with hot-plug slots need to support Data Link Layer
Link Active reporting, but the older PCIe 1.0a specification doesn't have
it. The "else msleep(1000)" is for PCIe 1.0a: it waits for 1 second
instead of polling the Data Link Layer Link Active bit, because in the
1.0a spec the bit that became Data Link Layer Link Active in PCIe 1.1 and
later is defined as reserved (zero). So, as Linus pointed out, we can
remove the "else msleep(1000)". The reason I added
"if (ctrl->link_active_reporting)" here was that I didn't want to change
the behavior for old (PCIe 1.0a) hot-plug controllers, but I think it was
too much.
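
Roughly, the fixed sequence would then look like this (just a sketch, not
the actual patch; pcie_wait_link_active() is the existing pciehp helper):

	if (ctrl->link_active_reporting)
		pcie_wait_link_active(ctrl);	/* PCIe 1.1+: poll DLLLA until it reads 1b */

	/*
	 * No "else msleep(1000)" any more: PCIe 1.0a ports, which cannot
	 * report DLLLA, are covered by this unconditional 1 second wait.
	 */
	msleep(1000);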
>> + /*
>> + * Need to wait for 1000 ms after Data Link Layer Link Active
>> + * (DLLLA) bit reads 1b before sending configuration request.
>> + * We need it before checking Link Training (LT) bit becuase
>> + * LT is still set even after DLLLA bit is set on some platform.
>> + */
>> + msleep(1000);

In the past, pciehp waited 100 ms after checking the link state; this
100 ms was based on the PCIe spec. But we encountered a problem where
configuration accesses to some HBA cards didn't return proper values. To
fix this, we replaced the 100 ms wait with a 1 second wait, based on the
PCIe spec's statement that "software must allow 1 second after the Data
Link Layer Link Active bit reads 1b before it is permitted to determine
that a hot plugged device which fails to return a Successful Completion for
a Valid Configuration Request is a broken device". Then another problem
was reported by Yinghai Lu: on his platform, the LT bit is still set even
after the DLLLA bit is set. To fix that, the 1 second wait, which used to
come after the LT check, was moved to before it.
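
To illustrate the resulting ordering, here is a minimal sketch (again not
the actual patch; it assumes the pciehp_readw() and ctrl_err() helpers
from the existing pciehp code):

	u16 lnk_status;

	if (ctrl->link_active_reporting)
		pcie_wait_link_active(ctrl);	/* wait for DLLLA to read 1b */

	/* 1 second settle time mandated by the spec, before any config access */
	msleep(1000);

	/* Only now is it safe to check the Link Training (LT) bit */
	pciehp_readw(ctrl, PCI_EXP_LNKSTA, &lnk_status);
	if (lnk_status & PCI_EXP_LNKSTA_LT) {
		ctrl_err(ctrl, "link training has not completed\n");
		return -1;
	}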
Regards,
Kenji Kaneshige