Message-ID:
 <SEZPR06MB57635C8B59C4B0C6053BC1C99054A@SEZPR06MB5763.apcprd06.prod.outlook.com>
Date: Mon, 14 Jul 2025 12:37:03 +0000
From: YH Chung <yh_chung@...eedtech.com>
To: Jeremy Kerr <jk@...econstruct.com.au>, "matt@...econstruct.com.au"
	<matt@...econstruct.com.au>, "andrew+netdev@...n.ch" <andrew+netdev@...n.ch>,
	"davem@...emloft.net" <davem@...emloft.net>, "edumazet@...gle.com"
	<edumazet@...gle.com>, "kuba@...nel.org" <kuba@...nel.org>,
	"pabeni@...hat.com" <pabeni@...hat.com>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "netdev@...r.kernel.org"
	<netdev@...r.kernel.org>, BMC-SW <BMC-SW@...eedtech.com>
CC: Khang D Nguyen <khangng@...eremail.onmicrosoft.com>
Subject: RE: [PATCH] net: mctp: Add MCTP PCIe VDM transport driver

Hi Jeremy,
Thanks for the feedback! Please see my replies inline below.

>Hi YH,
>
>[+CC Khang]
>
>I have some overall questions before we get into a full review of the series,
>inline below.
>
>> Add an implementation for DMTF DSP0238 MCTP PCIe VDM transport spec.
>>
>> Introduce struct mctp_pcie_vdm_dev to represent each PCIe VDM
>> interface and its send/receive operations.  Register a net_device with
>> the MCTP core so packets traverse the standard networking socket API.
>>
>> Because there is no generic PCIe VDM bus framework in-tree, this
>> driver provides a transport interface for lower layers to implement
>> vendor-specific read/write callbacks.
>
>Do we really need an abstraction for MCTP VDM drivers? How many are you
>expecting? Can you point us to a client of the VDM abstraction?
>
>There is some value in keeping consistency for the MCTP lladdr formats across
>PCIe transports, but I'm not convinced we need a whole abstraction layer for
>this.
>
We plan to follow the existing upstream MCTP transports (I²C, I³C, and USB) by abstracting the hardware-specific details behind a common interface, keeping this patch focused on the transport binding protocol. The driver has been tested with our AST2600 and AST2700 MCTP drivers.

>> TX path uses a dedicated kernel thread and ptr_ring: skbs queued by
>> the MCTP stack are enqueued on the ring and processed in-thread context.
>
>Is this somehow more suitable than the existing netdev queues?
>
Our current implementation has two operations that take time:
1) Configuring the PCIe VDM routing type as DSP0238 requires when sending certain control message command codes, such as a Discovery Notify request or an Endpoint Discovery response.
2) Updating the BDF/EID routing table.

Since the netdev TX queue call path must execute quickly, we offload these operations to a dedicated kernel thread and use a ptr_ring to process packets in batches.

>> +struct mctp_pcie_vdm_route_info {
>> +       u8 eid;
>> +       u8 dirty;
>> +       u16 bdf_addr;
>> +       struct hlist_node hnode;
>> +};
>
>Why are you keeping your own routing table in the transport driver? We already
>have the route and neighbour tables in the MCTP core code.
>
>Your assumption that you can intercept MCTP control messages to keep a
>separate routing table will not work.
>
We maintain a routing table in the transport driver to record the mapping between BDFs and EIDs, because the BDF is only present in the PCIe VDM header of a received Endpoint Discovery Response; it is not forwarded to the MCTP core in the MCTP payload. We update the table with this mapping before forwarding the MCTP message to the core.

Additionally, if the MCTP bus owner operates in an Endpoint (EP) role on the PCIe bus, it cannot obtain the physical addresses of other devices from the PCIe bus itself. Recording the physical address in the transport driver therefore seems to us a practical way around this limitation.

>> +static void mctp_pcie_vdm_net_setup(struct net_device *ndev) {
>> +       ndev->type = ARPHRD_MCTP;
>> +
>> +       ndev->mtu = MCTP_PCIE_VDM_MIN_MTU;
>> +       ndev->min_mtu = MCTP_PCIE_VDM_MIN_MTU;
>> +       ndev->max_mtu = MCTP_PCIE_VDM_MAX_MTU;
>> +       ndev->tx_queue_len =
>MCTP_PCIE_VDM_NET_DEV_TX_QUEUE_LEN;
>> +       ndev->addr_len = 2; //PCIe bdf is 2 bytes
>
>One of the critical things to get right is the lladdr format for PCIe VDM devices,
>as it will be visible to userspace. While the PCIe b/d/fn data is indeed two bytes,
>the MCTP addressing format for VDM data is not entirely representable in two
>bytes: we also have the routing type to encode. DSP0238 requires us to use
>Route to Root Complex and Broadcast from Root Complex for certain messages,
>so we need some way to represent that in your proposed lladdr format.
>
>For this reason, you may want to encode this as [type, bdfn] data.
>
>Also, in flit mode, we also have the segment byte to encode.
>
Agreed. In our implementation, we always fill in the "Route by ID" type when the core asks us to create the header, since we do not know the correct type at that point; we then update the routing type based on the control message code at TX time. It would be nice to have a unified address format so the actual routing type could be derived from the passed-in lladdr when creating the header.

>Cheers,
>
>
>Jeremy
