Message-ID:
<SEZPR06MB57634C3876BF0DF92174CDFA9051A@SEZPR06MB5763.apcprd06.prod.outlook.com>
Date: Thu, 17 Jul 2025 08:21:33 +0000
From: YH Chung <yh_chung@...eedtech.com>
To: Jeremy Kerr <jk@...econstruct.com.au>, "matt@...econstruct.com.au"
<matt@...econstruct.com.au>, "andrew+netdev@...n.ch" <andrew+netdev@...n.ch>,
"davem@...emloft.net" <davem@...emloft.net>, "edumazet@...gle.com"
<edumazet@...gle.com>, "kuba@...nel.org" <kuba@...nel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "netdev@...r.kernel.org"
<netdev@...r.kernel.org>, BMC-SW <BMC-SW@...eedtech.com>
CC: Khang D Nguyen <khangng@...eremail.onmicrosoft.com>
Subject: RE: [PATCH] net: mctp: Add MCTP PCIe VDM transport driver
Hi Jeremy,
>> From my perspective, the other MCTP transport drivers do make use of
>> abstraction layers that already exist in the kernel tree. For example,
>> mctp-i3c uses i3c_device_do_priv_xfers(), which ultimately invokes
>> operations registered by the underlying I3C driver. This is
>> effectively an abstraction layer handling the hardware-specific
>> details of TX packet transmission.
>>
>> In our case, there is no standard interface, like those for
>> I2C/I3C, that serves PCIe VDM.
>
>But that's not what you're proposing here - your abstraction layer serves one
>type of PCIe VDM messaging (MCTP), for only one PCIe VDM MCTP driver.
>
>If you were proposing adding a *generic* PCIe VDM interface, that is suitable
>for all messaging types (not just MCTP), and all PCIe VDM hardware (not just
>ASPEED's) that would make more sense. But I think that would be a much larger
>task than what you're intending here.
>
Agreed. Our proposed interface is intended only for MCTP, so it is not a generic layer for all PCIe VDM message types.
>Start small. If we have other use-cases for an abstraction layer, we can
>introduce it at that point - where we have real-world design inputs for it.
>
We're planning to split the MCTP controller driver into two separate drivers for the AST2600 and AST2700, removing the AST2600-specific workarounds in the process to improve long-term maintainability. This is part of the reason we want to decouple the binding-protocol logic into its own layer.
Would it be preferable to create a directory such as net/mctp/aspeed/ to host the abstraction layer alongside the hardware-specific drivers?
We're considering this structure to help encapsulate the shared logic and keep the MCTP PCIe VDM-related components organized.
Appreciate any guidance on whether this aligns with the expected upstream organization.
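To make the question concrete, this is roughly the kind of layout we are imagining (file names are placeholders only, not settled):

    net/mctp/aspeed/            (or wherever is preferred upstream)
        mctp-pcie-vdm.c          - shared MCTP-over-PCIe-VDM binding logic
        mctp-aspeed-ast2600.c    - AST2600 hardware driver
        mctp-aspeed-ast2700.c    - AST2700 hardware driver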
>Regardless, we have worked out that there is nothing to actually abstract
>*anyway*.
>
>> > The direct approach would definitely be preferable, if possible.
>> >
>> Got it. Then we'll remove the kernel thread and do TX directly.
>
>Super!
>
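For reference, the shape we have in mind for the no-kthread TX path, purely as a sketch (the aspeed_vdm_hw_* helpers and priv struct are placeholder names, not the actual driver API):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static netdev_tx_t aspeed_mctp_pcie_vdm_start_xmit(struct sk_buff *skb,
                                                       struct net_device *ndev)
    {
            struct aspeed_mctp_pcie_vdm *priv = netdev_priv(ndev);

            if (!aspeed_vdm_hw_tx_ready(priv)) {
                    /* Hardware TX FIFO full: stop the queue and retry later. */
                    netif_stop_queue(ndev);
                    return NETDEV_TX_BUSY;
            }

            /* Push the VDM descriptor and payload to hardware directly,
             * with no intermediate kernel thread.
             */
            aspeed_vdm_hw_write(priv, skb);
            dev_consume_skb_any(skb);
            return NETDEV_TX_OK;
    }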
>> > Excellent question! I suspect we would want a four-byte
>> > representation,
>> > being:
>> >
>> > [0]: routing type (bits 0:2, others reserved)
>> > [1]: segment (or 0 for non-flit mode)
>> > [2]: bus
>> > [3]: device / function
>> >
>> > which assumes there is some value in combining formats between flit-
>> > and non-flit modes. I am happy to adjust if there are better ideas.
>> >
>> This looks good to me, thanks for sharing!
>
>No problem! We'll still want a bit of wider consensus on this, because we
>cannot change it once upstreamed.
>
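Understood. Just to confirm we read the proposed layout the same way, a rough C view of those four bytes (struct and field names are only illustrative, and for byte 3 we assume the usual devfn packing, device in bits 7:3 and function in bits 2:0):

    #include <linux/types.h>

    /* Illustrative only - not a settled layout. */
    struct mctp_pcie_vdm_lladdr {
            u8 routing;     /* routing type in bits 0:2, others reserved */
            u8 segment;     /* PCIe segment, 0 in non-flit mode */
            u8 bus;         /* PCIe bus number */
            u8 devfn;       /* device / function (devfn packing assumed) */
    };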
>Cheers,
>
>
>Jeremy