Message-ID: <92ee253f-bf6a-481a-acc2-daf26d268395@riscstar.com>
Date: Fri, 17 Oct 2025 11:21:08 -0500
From: Alex Elder <elder@...cstar.com>
To: robh@...nel.org, krzk+dt@...nel.org, conor+dt@...nel.org,
bhelgaas@...gle.com, lpieralisi@...nel.org, kwilczynski@...nel.org,
mani@...nel.org, vkoul@...nel.org, kishon@...nel.org, dlan@...too.org,
guodong@...cstar.com, pjw@...nel.org, palmer@...belt.com,
aou@...s.berkeley.edu, alex@...ti.fr, p.zabel@...gutronix.de,
christian.bruel@...s.st.com, shradha.t@...sung.com,
krishna.chundru@....qualcomm.com, qiang.yu@....qualcomm.com,
namcao@...utronix.de, thippeswamy.havalige@....com, inochiama@...il.com,
devicetree@...r.kernel.org, linux-pci@...r.kernel.org,
linux-phy@...ts.infradead.org, spacemit@...ts.linux.dev,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 0/7] Introduce SpacemiT K1 PCIe phy and host controller

On 10/16/25 11:47 AM, Aurelien Jarno wrote:
> Hi Alex,
>
> On 2025-10-13 10:35, Alex Elder wrote:
>> This series introduces a PHY driver and a PCIe driver to support PCIe
>> on the SpacemiT K1 SoC. The PCIe implementation is derived from a
>> Synopsys DesignWare PCIe IP. The PHY driver supports one combination
>> PCIe/USB PHY as well as two PCIe-only PHYs. The combo PHY port uses
>> one PCIe lane, and the other two ports each have two lanes. All PCIe
>> ports operate at 5 GT/s.
>>
>> The PCIe PHYs must be configured with a value that can only be
>> determined by the combo PHY operating in PCIe mode. To keep that
>> PHY available for USB, the PHY driver performs the calibration
>> step automatically at probe time. Once this step is done, the
>> PHY can be used for either PCIe or USB.
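
[To make the sequencing concrete, here is a rough sketch of how a
one-shot, probe-time calibration could be structured in a PHY driver.
Everything below -- the k1_phy_* names, register offsets, bit
definitions, and compatible string -- is invented for illustration
and is not taken from this series:]

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/module.h>
#include <linux/platform_device.h>

/* Hypothetical register layout, for illustration only */
#define K1_PHY_CAL_CTRL		0x10
#define K1_PHY_CAL_START	BIT(0)
#define K1_PHY_CAL_DONE		BIT(1)
#define K1_PHY_CAL_RESULT	0x14

struct k1_phy {
	void __iomem *base;
	u32 cal_value;		/* captured once at probe */
};

/*
 * Briefly run the combo PHY in PCIe mode to read back the calibration
 * value that the PCIe-only PHYs also need.  Doing this once at probe
 * time leaves the combo PHY free to be handed over to USB afterward.
 */
static int k1_phy_calibrate_once(struct k1_phy *priv)
{
	u32 val;
	int ret;

	writel(K1_PHY_CAL_START, priv->base + K1_PHY_CAL_CTRL);
	ret = readl_poll_timeout(priv->base + K1_PHY_CAL_CTRL, val,
				 val & K1_PHY_CAL_DONE, 10, 10000);
	if (ret)
		return ret;

	priv->cal_value = readl(priv->base + K1_PHY_CAL_RESULT);
	return 0;
}

static int k1_phy_probe(struct platform_device *pdev)
{
	struct k1_phy *priv;

	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	priv->base = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(priv->base))
		return PTR_ERR(priv->base);

	/* One-shot calibration before the PHY can be claimed for USB */
	return k1_phy_calibrate_once(priv);
}

static const struct of_device_id k1_phy_of_match[] = {
	{ .compatible = "spacemit,k1-pcie-phy" },	/* hypothetical */
	{ }
};
MODULE_DEVICE_TABLE(of, k1_phy_of_match);

static struct platform_driver k1_phy_driver = {
	.probe = k1_phy_probe,
	.driver = {
		.name = "k1-pcie-phy",
		.of_match_table = k1_phy_of_match,
	},
};
module_platform_driver(k1_phy_driver);

[The sketch omits the phy_ops registration the real driver would need;
it only illustrates the probe-time ordering described above.]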
>>
>> Version 2 of this series incorporates suggestions made during the
>> review of version 1. Specific highlights are detailed below.
>
> With the issues mentioned in patch 4 fixed, this patchset works fine for
> me. That said, I had to disable ASPM by passing pcie_aspm=off on the
> command line, as it is now enabled by default since 6.18-rc1 [1]. At
> this stage, I am not sure whether it is an issue with my NVMe drive or
> with the controller.

Can you describe the symptoms that required you to pass
"pcie_aspm=off" on the kernel command line?

I see these ASPM-related lines in my boot log (added by the commit
you link to) for both pcie1 and pcie2:

pci 0000:01:00.0: ASPM: DT platform, enabling L0s-up L0s-dw L1 ASPM-L1.1 ASPM-L1.2 PCI-PM-L1.1 PCI-PM-L1.2
pci 0000:01:00.0: ASPM: DT platform, enabling ClockPM
. . .
nvme nvme0: pci function 0000:01:00.0
nvme 0000:01:00.0: enabling device (0000 -> 0002)
nvme nvme0: allocated 64 MiB host memory buffer (16 segments).
nvme nvme0: 8/0/0 default/read/poll queues
nvme0n1: p1

My NVMe drive on pcie1 works correctly:

  https://www.crucial.com/ssd/p3/CT1000P3SSD8

root@...anapif3:~# df /a
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/nvme0n1p1 960302804 32063304 879385040 4% /a
root@...anapif3:~#
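
[For discussion's sake: if the problem turned out to be an ASPM state
that the root port itself cannot support, one avenue would be to stop
advertising that state in the Link Capabilities register, so the ASPM
core never enables it. This is only an untested sketch built on the
generic DWC dbi helpers; the k1_pcie_* name is hypothetical:]

#include <linux/pci.h>

#include "pcie-designware.h"

/*
 * Hypothetical sketch: mask off the ASPM support bits in the root
 * port's Link Capabilities register so the ASPM core will not enable
 * L0s/L1 even when the DT platform default allows it.
 */
static void k1_pcie_clear_aspm_support(struct dw_pcie *pci)
{
	u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
	u32 val;

	dw_pcie_dbi_ro_wr_en(pci);
	val = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP);
	val &= ~PCI_EXP_LNKCAP_ASPMS;	/* clear L0s and L1 support */
	dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, val);
	dw_pcie_dbi_ro_wr_dis(pci);
}

[Short of a driver change, individual ASPM states can also be toggled
at run time through the sysfs files under
/sys/bus/pci/devices/<device>/link/ (l0s_aspm, l1_aspm, clkpm, and
friends, with CONFIG_PCIEASPM enabled), which may be a gentler way
than pcie_aspm=off to narrow down which state misbehaves.]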

I basically want to know if there's something I should do with this
driver to address this. (Mani, can you explain?)

Thank you.

-Alex

> Regards
> Aurelien
>
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=f3ac2ff14834a0aa056ee3ae0e4b8c641c579961
>