Date: Tue, 20 Feb 2024 14:31:12 +0800
From: Jeremy Kerr <jk@...econstruct.com.au>
To: "Ramaiah, DharmaBhushan" <Dharma.Ramaiah@...l.com>, 
	"netdev@...r.kernel.org"
	 <netdev@...r.kernel.org>, "matt@...econstruct.com.au"
	 <matt@...econstruct.com.au>
Cc: "Rahiman, Shinose" <Shinose.Rahiman@...l.com>
Subject: Re: MCTP - Socket Queue Behavior

Hi Dharma,

> Thanks for the reply. I have a few additional queries.

Sure, answers inline.

> > We have no control over reply ordering. It's entirely possible that
> > replies are
> > sent out of sequence by the remote endpoint:
> > 
> >   local application          remote endpoint
> > 
> >   sendmsg(message 1)
> >   sendmsg(message 2)
> >                              receives message 1
> >                              receives message 2
> >                              sends a reply 2 to message 2
> >                              sends a reply 1 to message 1
> >   recvmsg() -> reply 2
> >   recvmsg() -> reply 1
> > 
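
As a consequence, userspace must not assume recvmsg() returns replies
in send order. Here's a minimal sketch of two concurrent requests over
one AF_MCTP socket; the destination EID of 8 and message type of 1 are
arbitrary example values, and error handling is mostly omitted:

  #include <unistd.h>
  #include <sys/socket.h>
  #include <linux/mctp.h>

  int main(void)
  {
          struct sockaddr_mctp addr = { 0 }, raddr;
          socklen_t rlen;
          unsigned char rsp[64];
          int sd;

          sd = socket(AF_MCTP, SOCK_DGRAM, 0);
          if (sd < 0)
                  return 1;

          addr.smctp_family = AF_MCTP;
          addr.smctp_network = MCTP_NET_ANY;
          addr.smctp_addr.s_addr = 8;      /* example destination EID */
          addr.smctp_type = 1;             /* example message type */
          addr.smctp_tag = MCTP_TAG_OWNER; /* kernel allocates each tag */

          /* two requests outstanding on the same socket */
          sendto(sd, "\x01", 1, 0, (struct sockaddr *)&addr, sizeof(addr));
          sendto(sd, "\x02", 1, 0, (struct sockaddr *)&addr, sizeof(addr));

          /* replies may arrive in either order: raddr.smctp_tag
           * identifies the exchange, and protocol-level fields in the
           * payload map a reply back to its request */
          rlen = sizeof(raddr);
          recvfrom(sd, rsp, sizeof(rsp), 0, (struct sockaddr *)&raddr, &rlen);
          rlen = sizeof(raddr);
          recvfrom(sd, rsp, sizeof(rsp), 0, (struct sockaddr *)&raddr, &rlen);

          close(sd);
          return 0;
  }

The kernel only guarantees that each reply is delivered to the socket
owning its tag; distinguishing which of that socket's requests a reply
answers is up to the upper-layer protocol (or explicit tag management
via the SIOCMCTPALLOCTAG ioctl, where available).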
> 
> Based on the above explanation, I understand that sendto() allocates
> the skb (based on the blocking/nonblocking mode), and
> mctp_i2c_tx_thread dequeues the skb and transmits the message. Also,
> sendto() can interleave messages on the wire with different message
> tags. My query here is regarding the bus lock.
> 
> 1. Is the bus lock taken for the entire duration of sendto() and
> recvfrom() (as indicated in one of the previous threads)?

To be more precise: the i2c bus lock is not held for that entire
duration. It is held from when the first packet of the message is
transmitted by the i2c transport driver (which may be after sendmsg()
has returned) until the reply is received (which may be before
recvmsg() is called).


> Assume a case where we have two EPs (x and y) on I2C bus #1 and
> these EPs are on different segments.

I assume that by "different segments" you mean that they are on
different downstream channels of an i2c multiplexer. Let me know if not.

> In this case, shouldn't the bus be locked for the entire duration
> till we receive the reply, or else the remote EP might drop the
> packet as the MUX is switched?

Yes, that's what is implemented.

However, I don't think "locking the bus" reflects what you're intending
there: further packets can still be sent, provided that they are on
that same multiplexer channel; the current use of the bus lock does not
prevent that (it's how fragmented messages are possible; we need to be
able to transmit the second and subsequent packets).

To oversimplify it a little: holding the bus lock just prevents i2c
accesses that may change the multiplexer state.
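
In kernel terms, that looks roughly like the following. This is a
simplified sketch rather than the actual mctp-i2c driver code (the
function names here are illustrative), but it uses the real in-kernel
i2c locking API:

  #include <linux/i2c.h>

  /*
   * First packet of an outgoing message: take the segment lock, so any
   * parent mux stays routed to this channel until the flow completes.
   * While we hold it, the driver can still send the second and
   * subsequent packets itself, using the unlocked __i2c_transfer()
   * variant; what is excluded is other activity that could switch a
   * parent mux away.
   */
  static void flow_start(struct i2c_adapter *adap)
  {
          i2c_lock_bus(adap, I2C_LOCK_SEGMENT);
  }

  /* reply received, or the flow has expired: release the segment */
  static void flow_stop(struct i2c_adapter *adap)
  {
          i2c_unlock_bus(adap, I2C_LOCK_SEGMENT);
  }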

From your diagram:

>  Local application                                  remote endpoint
>  Userspace                           Kernel Space
> 
> sendmsg(msg1)<epX, i2cbus-1, seg1>
> sendmsg(msg2)<epY, i2cbus-1, seg2>

Note that "i2cbus-1, seg1" / "i2cbus-1, seg2" is not how Linux
represents those. You would have something like the following devices in
Linux:

 [bus: i2c1]: the hardware i2c controller
  |
  `-[dev: 1-00xx] i2c mux
     |
     |-[bus: i2c2]: mux downstream channel 1
     |  |
     |  `- endpoint x
     |
     `-[bus: i2c3]: mux downstream channel 2
        |
        `- endpoint y

Then, the MCTP interfaces are attached to one individual bus, so you'd
have the following MCTP interfaces, each corresponding to one of those
Linux i2c devices:

  mctpi2c2: connectivity to endpoint X, via i2c2 (then through i2c1)
  mctpi2c3: connectivity to endpoint Y, via i2c3 (then through i2c1)

- where each of those mctpi2cX interfaces holds its own lock on the bus
when waiting on a reply from a device on that segment.

(you could also have a mctpi2c1, if you have MCTP devices directly
connected to i2c1)

> Also today, MCTP provides no mechanism to advertise if the remote EP
> can handle more than one request at a time. Ability to handle
> multiple messages is purely based on the device capability. In these
> cases, shouldn't the kernel provide a way to lock the bus till the
> response is obtained?

Not via that mechanism, no. I think you might be unnecessarily
conflating MCTP message concurrency with i2c bus concurrency.

An implementation where we attempt to serialise messages to one
particular endpoint would depend on what actual requirements we have on
that endpoint. For example:

 - is it unable to handle multiple messages of a specific type?
 - is it unable to handle multiple messages of *any* type?
 - is it unable to handle incoming responses when a request is pending?
 
So we'd need a pretty solid use-case to design a solution here; we have
not needed this with any endpoint so far. In your case, I would take a
guess that you could implement this just by limiting the outstanding
messages in userspace.
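
For illustration, a minimal sketch of that userspace limit; mctp_txn()
is a hypothetical helper, and a real version would also want a timeout
(via poll() or SO_RCVTIMEO) and sanity checks on the reply:

  #include <sys/socket.h>
  #include <linux/mctp.h>

  /* issue one request and block for its reply; callers that funnel
   * all traffic for an endpoint through this have at most one message
   * outstanding to that endpoint at any time */
  static ssize_t mctp_txn(int sd, const struct sockaddr_mctp *dest,
                          const void *req, size_t reqlen,
                          void *rsp, size_t rsplen)
  {
          struct sockaddr_mctp raddr;
          socklen_t rlen = sizeof(raddr);
          ssize_t rc;

          rc = sendto(sd, req, reqlen, 0,
                      (const struct sockaddr *)dest, sizeof(*dest));
          if (rc < 0)
                  return rc;

          return recvfrom(sd, rsp, rsplen, 0,
                          (struct sockaddr *)&raddr, &rlen);
  }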

Further, using the i2c bus lock is the wrong mechanism for serialisation
here; we would want this at the MCTP core, likely as part of the tag
allocation process. That would allow serialisation of messages without
dependence on the specifics of the transport implementation (obviously,
the serial and i3c MCTP transport drivers do not have i2c bus locking!)

Cheers,


Jeremy
