Open Source and information security mailing list archives
 
Message-ID: 
 <SJ0PR19MB4415097526AFE55EC0EE2714875A2@SJ0PR19MB4415.namprd19.prod.outlook.com>
Date: Mon, 26 Feb 2024 10:27:40 +0000
From: "Ramaiah, DharmaBhushan" <Dharma.Ramaiah@...l.com>
To: Jeremy Kerr <jk@...econstruct.com.au>,
        "netdev@...r.kernel.org"
	<netdev@...r.kernel.org>,
        "matt@...econstruct.com.au"
	<matt@...econstruct.com.au>
CC: "Rahiman, Shinose" <Shinose.Rahiman@...l.com>
Subject: RE: MCTP - Socket Queue Behavior

Hello Jeremy,

Please find my response below.


Internal Use - Confidential
> -----Original Message-----
> From: Jeremy Kerr <jk@...econstruct.com.au>
> Sent: Wednesday, February 21, 2024 5:23 AM
> To: Ramaiah, DharmaBhushan <Dharma_Ramaiah@...l.com>;
> netdev@...r.kernel.org; matt@...econstruct.com.au
> Cc: Rahiman, Shinose <Shinose_Rahiman@...l.com>
> Subject: Re: MCTP - Socket Queue Behavior
>
>
> [EXTERNAL EMAIL]
>
> Hi Dharma,
>
> > > To be more precise: the i2c bus lock is not held for that entire
> > > duration. The lock will be acquired when the first packet of the
> > > message is transmitted by the i2c transport driver (which may be
> > > after the
> > > sendmsg() has returned) until its reply is received (which may be
> > > before
> > > recvmsg() is called).
> > >
> > From what I understand from the above, the bus is locked from the
> > point the request is picked up for transmission from the SKB until
> > the response to the packet is received.
>
> That's mostly correct, but:
>
> > If this is the case, then messages shall not be interleaved even if
> > multiple applications call multiple sends.
>
> "locking the bus" doesn't do what you're assuming it does there.
>
> When an instance of a transport driver needs to hold the bus over a
> request/response, it does acquire the i2c bus lock. This prevents the mux state
> changes we have been discussing.
>
> However, that same transport driver can still transmit other packets with that
> lock held. This is necessary to allow:
>
>  - transmitting subsequent packets of a multiple-packet message
>  - transmitting packets of other messages to the same endpoint; possibly
>    interleaved with the first message
>  - transmitting packets of other messages to other endpoints that are on
>    the same segment

Basically, interleaving of messages depends entirely on the I2C driver. If the lock in the transport driver is designed to block I2C communication until the existing transaction is complete, then messages will be serialized. If the transport driver does the locking in the way I have described, does this in any way affect the kernel socket implementation (e.g. timeouts)?
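To make Jeremy's earlier point concrete: the i2c bus lock prevents mux reconfiguration, but it does not by itself serialize messages, because the transport driver can keep draining its TX queue while holding the lock. A minimal userspace sketch (not kernel code; all names here are illustrative) of that behaviour:

```python
# Models a transport driver's TX queue while the bus lock is held:
# packets of several in-flight messages are still transmitted, so two
# messages to the same segment can interleave on the wire.

from collections import deque

def transmit_round_robin(messages):
    """Interleave packets of several multi-packet messages, as a
    driver's TX path may do while it holds the i2c bus lock."""
    queues = [deque(pkts) for pkts in messages]
    wire = []  # order in which packets actually hit the (locked) bus
    while any(queues):
        for q in queues:
            if q:
                wire.append(q.popleft())
    return wire

# Two 3-packet messages to the same endpoint:
msg_a = ["A1", "A2", "A3"]
msg_b = ["B1", "B2", "B3"]
print(transmit_round_robin([msg_a, msg_b]))
# -> ['A1', 'B1', 'A2', 'B2', 'A3', 'B3']: interleaved despite the lock
```

Whether the real driver behaves this way is, as noted above, an implementation choice of the transport driver.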

>
> > Since the locking mechanism is implemented by the transport driver
> > (I2C Driver), topology aware I2C driver can lock the other
> > subsegments.  E.g. if a transaction is initiated on the EP X, I2C
> > driver can lock down stream channel 1. Please do correct me if the
> > understanding is correct.
>
> That is generally correct, yes. Typically the mux's parent busses will be locked
> too.
>
> The specific locking depends on the multiplexer implementation, but is
> intended to guarantee that we have the multiplexer configured to allow
> consistent communication on that one segment.
>
> > > An implementation where we attempt to serialise messages to one
> > > particular endpoint would depend on what actual requirements we have
> > > on that endpoint. For example:
> > >
> > >  - is it unable to handle multiple messages of a specific type?
> > >  - is it unable to handle multiple messages of *any* type?
> > >  - is it unable to handle incoming responses when a request is
> > > pending?
> > >
> > > So we'd need a pretty solid use-case to design a solution here; we
> > > have not needed this with any endpoint so far. In your case, I would
> > > take a guess that you could implement this just by limiting the
> > > outstanding messages in userspace.
> > >
> > We have seen a few devices which can handle only one request at a
> > time, and not sequencing the commands properly can throw the EP into
> > a bad state.  And yes, this can be controlled in userspace.
> > Currently we are exploring design options based on what is supported
> > in the Kernel.
>

> OK. There are some potential design options with the tag allocation
> mechanism, and marking specific neighbours with a limit on concurrency, but
> we'd need more details on requirements there. That's probably a separate
> thread, and a fair amount of work to implement.
>
>
> So, if this is manageable in userspace (particularly: you don't need to manage
> concurrency across multiple upper-layer protocols), the sockets API is already
> well suited to single-request / single-response interactions.
>
If we can manage concurrency in the kernel, this would provide more design options in userspace; we can discuss this in more detail.
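For the time being, the userspace-side serialization Jeremy suggests (limiting outstanding messages per endpoint) is straightforward. A sketch, where the transact callable is a stand-in for a sendmsg/recvmsg pair on an AF_MCTP socket (the class and its names are hypothetical, not a kernel or libc API):

```python
# Limit outstanding requests per endpoint so a device that handles only
# one request at a time is never given a second one concurrently.

import threading

class EndpointSerializer:
    def __init__(self, transact, max_outstanding=1):
        self._transact = transact   # callable: (eid, request) -> response
        self._sems = {}             # eid -> semaphore gating that endpoint
        self._guard = threading.Lock()
        self._max = max_outstanding

    def request(self, eid, payload):
        with self._guard:
            sem = self._sems.setdefault(
                eid, threading.BoundedSemaphore(self._max))
        with sem:                   # blocks until the endpoint is free
            return self._transact(eid, payload)

# Usage with a dummy transport:
def dummy_transact(eid, payload):
    return b"rsp:" + payload

ser = EndpointSerializer(dummy_transact)
print(ser.request(9, b"GetVersion"))  # b'rsp:GetVersion'
```

Requests to different EIDs proceed in parallel; requests to the same EID are serialized, which matches the one-request-at-a-time devices described above.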

> > > Further, using the i2c bus lock is the wrong mechanism for
> > > serialisation here; we would want this at the MCTP core, likely as
> > > part of the tag allocation process. That would allow serialisation
> > > of messages without dependence on the specifics of the transport
> > > implementation (obviously, the serial and i3c MCTP transport drivers
> > > do not have i2c bus locking!)
> > >
> >
> > Serialization at the MCTP core can handle multiple MCTP requests.
> > But if the same bus is shared with non-MCTP devices, the bus lock
> > must be held from the time the request is sent out until the reply
> > is received.
>
> Why do you need to prevent interactions with *other* devices on the bus?
>
When the bus is shared with multiple devices and requests are interleaved, responses are dropped by the endpoints due to a bus-busy condition.
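For reference, the serialized single-request/single-response pattern Jeremy says the sockets API already suits might look like this in userspace. This is a sketch that requires a Linux kernel with MCTP support; the constants are taken from linux/mctp.h as I understand them (verify against your headers), and raw libc calls are used via ctypes because Python's socket module may not marshal a sockaddr_mctp itself:

```python
# One request, one response, one message outstanding to the endpoint.
# Constants assumed from linux/mctp.h; transact() is a hypothetical helper.

import ctypes
import os
import struct

AF_MCTP = 45           # address family (Linux)
SOCK_DGRAM = 2
MCTP_TAG_OWNER = 0x08  # kernel allocates the tag for this request

def pack_sockaddr_mctp(network, eid, msg_type, tag):
    """struct sockaddr_mctp: family, pad, network, eid, type, tag, pad
    (12 bytes on Linux, assuming the layout in linux/mctp.h)."""
    return struct.pack("=HHIBBBB", AF_MCTP, 0, network, eid,
                       msg_type, tag, 0)

def transact(eid, msg_type, request, network=0):
    """Send one request and block for its response, keeping a single
    message outstanding to the endpoint."""
    libc = ctypes.CDLL(None, use_errno=True)
    addr = pack_sockaddr_mctp(network, eid, msg_type, MCTP_TAG_OWNER)
    fd = libc.socket(AF_MCTP, SOCK_DGRAM, 0)
    if fd < 0:
        raise OSError(ctypes.get_errno(), "socket(AF_MCTP) failed")
    try:
        if libc.sendto(fd, request, len(request), 0, addr, len(addr)) < 0:
            raise OSError(ctypes.get_errno(), "sendto failed")
        buf = ctypes.create_string_buffer(1024)
        n = libc.recvfrom(fd, buf, len(buf), 0, None, None)
        if n < 0:
            raise OSError(ctypes.get_errno(), "recvfrom failed")
        return buf.raw[:n]
    finally:
        os.close(fd)
```

Combined with a per-endpoint gate as above, this keeps the bus free of a second request to the same EP until its response has been collected.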

> Cheers,
>
>
> Jeremy
