Message-ID:
 <SEZPR06MB5763125EBCAAA4F0C14C939E9051A@SEZPR06MB5763.apcprd06.prod.outlook.com>
Date: Thu, 17 Jul 2025 07:17:43 +0000
From: YH Chung <yh_chung@...eedtech.com>
To: Jeremy Kerr <jk@...econstruct.com.au>, "matt@...econstruct.com.au"
	<matt@...econstruct.com.au>, "andrew+netdev@...n.ch" <andrew+netdev@...n.ch>,
	"davem@...emloft.net" <davem@...emloft.net>, "edumazet@...gle.com"
	<edumazet@...gle.com>, "kuba@...nel.org" <kuba@...nel.org>,
	"pabeni@...hat.com" <pabeni@...hat.com>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "netdev@...r.kernel.org"
	<netdev@...r.kernel.org>, BMC-SW <BMC-SW@...eedtech.com>
CC: Khang D Nguyen <khangng@...eremail.onmicrosoft.com>
Subject: RE: [PATCH] net: mctp: Add MCTP PCIe VDM transport driver

Hi Jeremy,

>> > Is that one driver (for both 2600 and 2700) or two?
>> >
>> It's written in one file to reuse common functions, but the behavior
>> is slightly different depending on the hardware.
>
>OK, but it's still the one driver (one struct platform_driver handling multiple
>of_device_id matches), and there's only one instance where you're interacting
>with the mctp_pcie_vdm API. So that would be a single user of the proposed
>abstraction.
>> > I'm still not convinced you need an abstraction layer specifically
>> > for VDM transports, especially as you're forcing a specific driver
>> > model with the deferral of TX to a separate thread.
>> >
>> We followed the same implementation pattern as mctp-i2c and mctp-i3c,
>> both of which also abstract the hardware layer via the existing
>> i2c/i3c device interface and use a kernel thread for TX data.
>
>You're combining two concepts there: the use of a workqueue for transmit, and
>the use of a driver abstraction layer between the hardware and the MCTP
>netdev.
>
>Some existing MCTP transport drivers have the deferred TX via workqueue, but
>none have an abstraction layer like you are proposing here.
>
From my perspective, the other MCTP transport drivers do make use of abstraction layers that already exist in the kernel tree. For example, mctp-i3c uses i3c_device_do_priv_xfers(), which ultimately invokes operations registered by the underlying I3C driver. This is effectively an abstraction layer handling the hardware-specific details of TX packet transmission.
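
For illustration, the kind of TX path I have in mind looks roughly like this (simplified sketch, paraphrased rather than copied from the upstream mctp-i3c code):

#include <linux/i3c/device.h>

/*
 * Simplified sketch of the mctp-i3c style TX path, paraphrased for
 * illustration rather than copied from the upstream code: the transport
 * driver hands the framed packet to the I3C core, which dispatches to
 * whatever controller driver sits underneath.
 */
static int mctp_i3c_style_tx(struct i3c_device *i3cdev, void *buf, u16 len)
{
	struct i3c_priv_xfer xfer = {
		.rnw = false,		/* write (TX) direction */
		.len = len,
		.data.out = buf,
	};

	/* the I3C core routes this to the controller driver's xfer
	 * implementation, so the hardware specifics live below this call */
	return i3c_device_do_priv_xfers(i3cdev, &xfer, 1);
}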

In our case, there is no standard kernel interface for PCIe VDM comparable to the I2C/I3C device APIs. The proposed driver therefore introduces a unified interface, defined in mctp-pcie-vdm.h, so that hardware driver authors can focus on the hardware-specific parts without duplicating or reworking the transport binding logic each time.
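
To make this concrete, the shape of the interface is roughly as follows; the names below are illustrative for discussion, not quoted from the header in the patch:

/*
 * Illustrative outline only; the names below are not quoted from the
 * actual mctp-pcie-vdm.h in the patch. The idea is that a hardware driver
 * fills in an ops structure and the shared layer takes care of the MCTP
 * netdev binding and packet framing.
 */
struct device;
struct sk_buff;
struct mctp_pcie_vdm_dev;

struct mctp_pcie_vdm_ops {
	/* transmit one framed VDM; called by the shared transport layer */
	int (*tx)(struct mctp_pcie_vdm_dev *vdev, struct sk_buff *skb);
	/* enable/disable reception of routed VDMs on the endpoint */
	int (*enable)(struct mctp_pcie_vdm_dev *vdev);
	void (*disable)(struct mctp_pcie_vdm_dev *vdev);
};

struct mctp_pcie_vdm_dev *
mctp_pcie_vdm_register(struct device *parent,
		       const struct mctp_pcie_vdm_ops *ops);
void mctp_pcie_vdm_unregister(struct mctp_pcie_vdm_dev *vdev);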

Could you kindly share your thoughts or guidance on how the abstraction model used in our PCIe VDM driver compares to the existing abstractions used in I2C/I3C transport implementations?

>In the case of I2C and I3C, we cannot transmit directly from the
>->ndo_start_xmit() op, because i2c/i3c operations are done in a
>sleepable context. Hence the requirement for the defer for those.
>
Understood, thanks for the clarification!
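
Just to confirm my understanding of that pattern, it is roughly the following; a simplified sketch, not the exact mctp-i2c code, and the mctp_xxx names are placeholders:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/workqueue.h>

/* placeholder private state; the mctp_xxx names are stand-ins */
struct mctp_xxx_dev {
	struct net_device	*ndev;
	struct sk_buff_head	tx_queue;
	struct work_struct	tx_work;
};

/* placeholder for the blocking bus transfer (e.g. an i2c/i3c xfer) */
static int mctp_xxx_hw_tx(struct mctp_xxx_dev *mdev, void *buf,
			  unsigned int len)
{
	return 0;
}

static netdev_tx_t mctp_xxx_start_xmit(struct sk_buff *skb,
				       struct net_device *ndev)
{
	struct mctp_xxx_dev *mdev = netdev_priv(ndev);

	/* ndo_start_xmit runs in atomic context, so the sleeping bus
	 * transfer cannot happen here; queue the skb and defer */
	skb_queue_tail(&mdev->tx_queue, skb);
	schedule_work(&mdev->tx_work);

	return NETDEV_TX_OK;
}

static void mctp_xxx_tx_work(struct work_struct *work)
{
	struct mctp_xxx_dev *mdev = container_of(work, struct mctp_xxx_dev,
						 tx_work);
	struct sk_buff *skb;

	while ((skb = skb_dequeue(&mdev->tx_queue))) {
		/* process context here, so a blocking transfer is fine */
		mctp_xxx_hw_tx(mdev, skb->data, skb->len);
		dev_consume_skb_any(skb);
	}
}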

>However, I would be surprised if transmitting via your platform PCIe VDM
>interface would require blocking operations. From what I can see, it can all be
>atomic for your driver. As you say:
>
>> That said, I believe it's reasonable to remove the kernel thread and
>> instead send the packet directly downward after we remove the route
>> table part.
>>
>> Could you kindly help to share your thoughts on which approach might
>> be preferable?
>
>The direct approach would definitely be preferable, if possible.
>
Got it. Then we'll remove the kernel thread and do TX directly.
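
We expect the TX path to reduce to roughly the following, assuming the VDM TX registers can be programmed without sleeping; this is a sketch only, and the aspeed_* names are placeholders rather than code from the current patch:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* placeholder private state for the sketch */
struct aspeed_mctp_vdm {
	struct net_device *ndev;
};

/* placeholder for programming the VDM TX path; assumed non-blocking */
static int aspeed_mctp_vdm_hw_tx(struct aspeed_mctp_vdm *priv,
				 void *buf, unsigned int len)
{
	return 0;
}

static netdev_tx_t aspeed_mctp_vdm_start_xmit(struct sk_buff *skb,
					      struct net_device *ndev)
{
	struct aspeed_mctp_vdm *priv = netdev_priv(ndev);
	int rc;

	/* write the packet straight into the hardware TX path; no kernel
	 * thread and no routing-table lookup in between */
	rc = aspeed_mctp_vdm_hw_tx(priv, skb->data, skb->len);
	if (rc) {
		ndev->stats.tx_dropped++;
		dev_kfree_skb_any(skb);
		return NETDEV_TX_OK;
	}

	ndev->stats.tx_packets++;
	ndev->stats.tx_bytes += skb->len;
	dev_consume_skb_any(skb);

	return NETDEV_TX_OK;
}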

>Now, given:
>
> 1) you don't need the routing table
> 2) you probably don't need a workqueue
> 3) there is only one driver using the proposed abstraction
>
>- then it sounds like you don't need an abstraction layer at all. Just implement
>your hardware driver to use the netdev operations directly, as a self-contained
>drivers/net/mctp/mctp-pcie-aspeed.c.
>
>> Would you recommend submitting both drivers together in the same patch
>> series for review, or is it preferable to keep them separate?
>
>I would recommend removing the abstraction altogether.
>
>If, for some reason, we do end up needing it, I would prefer that they be
>submitted together. This allows us to review against an actual use-case.
>
Okay. If we do end up keeping the abstraction, we'll submit both drivers together in the next patch series.
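
For the self-contained layout you suggest, the probe path would look roughly like this; again a sketch, with placeholder names rather than code from the current patch:

#include <linux/if_arp.h>
#include <linux/netdevice.h>
#include <linux/platform_device.h>
#include <net/mctpdevice.h>

/* placeholder private state; register mappings, IRQs, etc. would live here */
struct aspeed_mctp_vdm {
	struct net_device *ndev;
};

/* no transport-specific MCTP ops are strictly needed for this sketch */
static const struct mctp_netdev_ops aspeed_mctp_vdm_mctp_ops = { };

static void aspeed_mctp_vdm_setup(struct net_device *ndev)
{
	ndev->type = ARPHRD_MCTP;
	ndev->flags = IFF_NOARP;
	/* netdev_ops, MTU limits, addr_len, etc. would be set here */
}

static int aspeed_mctp_vdm_probe(struct platform_device *pdev)
{
	struct aspeed_mctp_vdm *priv;
	struct net_device *ndev;
	int rc;

	ndev = alloc_netdev(sizeof(*priv), "mctppcie%d", NET_NAME_ENUM,
			    aspeed_mctp_vdm_setup);
	if (!ndev)
		return -ENOMEM;

	priv = netdev_priv(ndev);
	priv->ndev = ndev;
	/* map registers, request IRQs, set up descriptor rings here */

	/* bind directly to the MCTP core, no intermediate layer */
	rc = mctp_register_netdev(ndev, &aspeed_mctp_vdm_mctp_ops);
	if (rc)
		goto err_free;

	platform_set_drvdata(pdev, priv);
	return 0;

err_free:
	free_netdev(ndev);
	return rc;
}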

>> > OK, so we'd include the routing type in the lladdr data then.
>> >
>> Could you share if there's any preliminary prototype or idea for the
>> format of the lladdr that core plans to implement, particularly
>> regarding how the route type should be encoded or parsed?
>
>Excellent question! I suspect we would want a four-byte representation,
>being:
>
> [0]: routing type (bits 0:2, others reserved)
> [1]: segment (or 0 for non-flit mode)
> [2]: bus
> [3]: device / function
>
>which assumes there is some value in combining formats between flit- and
>non-flit modes. I am happy to adjust if there are better ideas.
>
This looks good to me—thanks for sharing!
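
To confirm we read the layout the same way, something along these lines (sketch only, helper name is a placeholder):

#include <linux/types.h>

/*
 * Sketch of the proposed 4-byte PCIe VDM lladdr layout, for discussion;
 * the helper name is a placeholder:
 *   [0] routing type in bits 0:2 (other bits reserved)
 *   [1] segment (0 for non-flit mode)
 *   [2] bus
 *   [3] device / function
 */
#define MCTP_PCIE_VDM_LLADDR_LEN	4
#define MCTP_PCIE_VDM_ROUTING_MASK	0x07

static void mctp_pcie_vdm_build_lladdr(u8 lladdr[MCTP_PCIE_VDM_LLADDR_LEN],
				       u8 routing, u8 segment, u8 bus,
				       u8 devfn)
{
	lladdr[0] = routing & MCTP_PCIE_VDM_ROUTING_MASK;
	lladdr[1] = segment;	/* 0 when not in flit mode */
	lladdr[2] = bus;
	lladdr[3] = devfn;
}
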
>Khang: any inputs from your side there?
>
>Cheers,
>
>
>Jeremy
