Message-ID: <685b7a6a-d122-b79a-93e7-4227eaa4e4e9@fnarfbargle.com>
Date: Tue, 9 Aug 2022 19:03:14 +0800
From: Brad Campbell <lists2009@...rfbargle.com>
To: Mika Westerberg <mika.westerberg@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org
Subject: Re: Apple Thunderbolt Display chaining
G'day Mika,
On 9/8/22 18:55, Mika Westerberg wrote:
> Hi,
>
> On Tue, Aug 09, 2022 at 06:40:54PM +0800, Brad Campbell wrote:
>> G'day Mika,
>>
>>
>> On 9/8/22 18:23, Mika Westerberg wrote:
>>> Hi,
>>>
>>> On Mon, Aug 08, 2022 at 09:27:24PM +0800, Brad Campbell wrote:
>>>> If I don't authorize the PCIe tunnels and just leave the DP enabled it
>>>> works fine also.
>>>
>>> But you say that it fails on boot when the driver discovers the tunnels,
>>> right? So there is really nothing to authorize (they should be already
>>> "authorized" by the boot firmware).
>>>
>>> If I understand correctly this is how it reproduces (the simplest):
>>>
>>> 1. Connect a single Apple TB1 display to the system
>>> 2. Boot it up
>>> 3. Wait a while and it hangs
>>>
>>> If this is the case, then the driver certainly is not creating any
>>> PCIe tunnels itself unless there is a bug somewhere.
>>>
>>> An additional question, does it reproduce with either TB1 display
>>> connected or just with specific TB1 display?
>>>
>>
>> No, I haven't been clear enough, I'm sorry. I've re-read what I wrote below and
>> I'm still not sure it's clear enough.
>>
>> The firmware never sets anything up.
>>
>> When I cold boot the machine (from power on), the thunderbolt displays and tunnels
>> remain dark until linux initializes the thunderbolt driver the first time.
>>
>> If I compile the thunderbolt driver into the kernel, or let the initramfs load it
>> the displays come up, all PCIe tunnels are established and everything works.
>>
>> When I reboot the machine (reset button or warm boot), the firmware continues to
>> do nothing and all the tunnels remain in place. The machine dies when the thunderbolt
>> driver is loaded for a second time.
>>
>> That might be a reset/warm boot with it compiled in or loaded from initramfs.
>> It may also be me loading it from the command line after booting with it as a
>> module and blacklisted.
>>
>> The problem comes about when the thunderbolt module is loaded while the PCIe tunnels
>> are already established.
>>
>> To reproduce in the easiest manner I compile the thunderbolt driver as a module and
>> blacklist it. This prevents it from auto-loading.
>>
>> I cold boot the machine, let it boot completely, then modprobe thunderbolt and authorize
>> the tunnels. I then warm boot, which lets the kernel detect and init the DP displays
>> and detect/configure all the PCIe devices. The thunderbolt driver is not loaded.
>>
>> The machine comes up, all tunnels are established and all devices work.
>>
>> If I then modprobe the thunderbolt driver, things break.
>>
>> This is the hack in my boot script:
>>
>> # Spark up thunderbolt
>> if [ -z "`grep notb /proc/cmdline`" -a -z "`lsusb | grep '05ac:9227'`" ] ; then
>> modprobe thunderbolt
>> sleep 1
>> echo 1 > /sys/bus/thunderbolt/devices/0-3/authorized
>> echo 1 > /sys/bus/thunderbolt/devices/0-303/authorized
>> reboot
>> fi
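[Editor's note: the gate in the hack above can be sketched as a small shell predicate, which also makes the logic testable without sysfs access. This is a sketch only; the `notb` cmdline flag, the `05ac:9227` USB ID, and the `0-3`/`0-303` device paths are all taken from the script above, and `tb_needs_setup` is a hypothetical helper name.]

```shell
#!/bin/sh
# Sketch: decide whether the thunderbolt driver still needs to be
# loaded and authorized. Returns 0 (true) only when the "notb" opt-out
# is absent from the kernel cmdline AND the Apple display's USB ID
# (05ac:9227, per the script above) is not yet visible, i.e. the
# tunnels are not already up.
tb_needs_setup() {
    cmdline="$1"   # contents of /proc/cmdline
    usb_ids="$2"   # output of lsusb
    case "$cmdline" in *notb*) return 1 ;; esac
    case "$usb_ids" in *05ac:9227*) return 1 ;; esac
    return 0
}

# Usage in a boot script (mirrors the hack above):
# if tb_needs_setup "$(cat /proc/cmdline)" "$(lsusb)"; then
#     modprobe thunderbolt
#     sleep 1
#     echo 1 > /sys/bus/thunderbolt/devices/0-3/authorized
#     echo 1 > /sys/bus/thunderbolt/devices/0-303/authorized
#     reboot
# fi
```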
>
> Thanks for the clarification! How about on the macOS side, does it work (I
> would expect yes)?
>
It did work flawlessly in macOS, but as the GPU turned up its toes I can't really test it anymore.
The Mac EFI did odd things with the Thunderbolt tunnels, and due to the dying GPU I couldn't
warm boot it in Linux anyway. Every reboot had to be a power cycle or it'd hang in the EFI.
Regards,
Brad