Message-ID: <6aaffdd81feeb1cf6ff374a534f916410c106cf6.camel@codeconstruct.com.au>
Date: Mon, 26 Feb 2024 22:37:28 +0800
From: Jeremy Kerr <jk@...econstruct.com.au>
To: "Ramaiah, DharmaBhushan" <Dharma.Ramaiah@...l.com>,
"netdev@...r.kernel.org"
<netdev@...r.kernel.org>, "matt@...econstruct.com.au"
<matt@...econstruct.com.au>
Cc: "Rahiman, Shinose" <Shinose.Rahiman@...l.com>
Subject: Re: MCTP - Socket Queue Behavior
Hi Dharma,
> Basically, interleaving of the messages completely depends on the I2C
> driver.
To clarify: it completely depends on the MCTP-over-i2c transport driver
(as opposed to the i2c controller hardware driver). This is the entity
that is acquiring/releasing the i2c bus lock during MCTP packet
transmission.
> If the lock in the transport driver is designed to block I2C
> communication until the existing transaction is complete, then
> messages shall be serialized. If the transport driver does the locking
> in the way I have mentioned, does this in any way affect the kernel
> socket implementation (time out)?
This approach would need some fundamental changes to the core kernel
MCTP support.
The transport drivers do not work on a message granularity; like other
network interface drivers, they deal with individual packets from the
protocol core. The driver has access to hints around which "flows" may
be active (which the i2c driver uses to acquire/release the bus lock),
but in order to implement your proposed message serialisation, the
driver would need to re-order packets back into messages; otherwise, we
would be prone to deadlocks.
... then, given MCTP cannot accommodate reordered packets, this would
also be prone to packet loss in all but the simplest of message
transfers.
That is why the MCTP-over-i2c transport's use of the i2c bus lock is
nestable: we need to prevent loss of MUX state, but need to ensure we
can make forward progress given multiple packet streams, and their
expected responses.
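A toy model (userspace Python, not the driver itself) of the nestable bus-lock behaviour described above: the first active flow takes the underlying bus lock, further concurrent flows just bump a refcount, and the bus is only unlocked when the last flow finishes. This keeps mux state stable for the duration of each flow without deadlocking concurrent packet streams:

```python
import threading


class NestableBusLock:
    """Refcounted wrapper over a bus lock: nested flow_start() calls
    share one underlying acquisition rather than blocking each other."""

    def __init__(self):
        self._mutex = threading.Lock()  # protects the refcount
        self._bus = threading.Lock()    # stands in for the real i2c bus lock
        self._count = 0

    def flow_start(self):
        with self._mutex:
            if self._count == 0:
                self._bus.acquire()     # first flow locks the bus
            self._count += 1

    def flow_end(self):
        with self._mutex:
            self._count -= 1
            if self._count == 0:
                self._bus.release()     # last flow releases it


lock = NestableBusLock()
lock.flow_start()               # flow A: bus now locked
lock.flow_start()               # flow B nests; no deadlock waiting on A
assert lock._bus.locked()
lock.flow_end()                 # flow A done; bus stays locked for B
assert lock._bus.locked()
lock.flow_end()                 # flow B done; bus released
assert not lock._bus.locked()
```

A strictly exclusive (non-nestable) lock held for a whole message would instead make flow B's packets wait on flow A's responses, which may themselves be queued behind flow B - the deadlock mentioned earlier.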
> > So, if this is manageable in userspace (particularly: you don't
> > need to manage
> > concurency across multiple upper-layer protocols), the sockets API
> > is already
> > well suited to single-request / single-response interactions.
> >
> If we can manage concurrency in the kernel, this would provide more
> design options in userspace.
Sure, a kernel-based approach would probably make for a cleaner overall
design, and would cater for multiple applications attempting to
communicate without requiring their own means of managing concurrency.
> > Why do you need to prevent interactions with *other* devices on the
> > bus?
> >
> When the bus is shared with multiple devices and requests are
> interleaved, responses are dropped by the endpoints due to a bus-busy
> condition.
So this specific endpoint implementation needs a *completely* quiesced
bus over a request/response transaction? This is a little surprising, as
it restricts a lot of bus behaviours:
- it requires a completely serialised command/response stream
- it can only operate as a responder
- it cannot correctly implement any upper layer protocols that do not
have a strict request-then-response model (there are components in
all of NVMe-MI, PLDM and SPDM that may violate this requirement!)
- it cannot be present on a bus with other endpoints which may
asynchronously send their own MCTP packets, and/or participate in
SMBus ARP (and so are not serialisable by the kernel's MCTP
behaviour)
Given these limitations, it may be more effective to improve that
endpoint's MCTP/SMBus support, rather than add workarounds to the kernel
MCTP implementation. Of course, I understand this may not always be
feasible, but it may make things easier for you in the long term.
Cheers,
Jeremy